# [AMD] NEW HORIZON - Zen preview on 12/13 at 3 PM CST



## SystemTech

Until it's in users' hands, we will never know its true performance.
This will be another AMD event: going through cherry-picked numbers, with people who are supposed to be neutral (but are being paid to be there) maybe saying a few things about Zen...

So we wait for Q1...


----------



## maarten12100

eSports is probably the worst way to showcase a supposedly powerful new architecture.


----------



## TopicClocker

Awesome! Really excited to get a look at Zen!


----------



## Lex Luger

Until we know how high it clocks, what its IPC is, and how much power it consumes, we know nothing.

My original prediction, well over a year ago, was Sandy Bridge IPC and clocks in the low 4 GHz range. Hopefully it can at least do that without consuming over 150 watts.


----------



## LongtimeLurker

They did the same thing with Bulldozer.

They told everyone "Look! See how great it is?! It's running DIRT3 awesomely!!" But didn't allow anyone to get too close...

NOTE: I am not saying Zen will be another bulldozer, only pointing out a fact from a prior CPU architecture launch:

http://www.kitguru.net/components/cpu/carl/amd-fx-8-core-spotted-at-gamescom-running-dirt-3/





Now, if by "putting it through its paces" they mean installing and running Cinebench and/or PassMark, etc., then that would be awesome. If they don't... draw your own conclusions...


----------



## ZealotKi11er

Dota 2 will show the IPC differences from Intel easily.


----------



## Ashura

Quote:


> Originally Posted by *LongtimeLurker*
> 
> They did the same thing with Bulldozer.
> 
> They told everyone "Look! See how great it is?! It's running DIRT3 awesomely!!" But didn't allow anyone to get too close...
> 
> NOTE: I am not saying Zen will be another bulldozer, only pointing out a fact from a prior CPU architecture launch:


When will people stop comparing everything AMD does to Bulldozer?


----------



## Ultracarpet

Quote:


> Originally Posted by *Ashura*
> 
> When will people stop comparing everything AMD does to Bulldozer?


Because that was the last time they unveiled a new x86 architecture.


----------



## kd5151

Put Zen through its paces? Please be more specific. Ugh.


----------



## PostalTwinkie

Quote:


> Originally Posted by *SystemTech*
> 
> Until it's in users' hands, we will never know its true performance.
> This will be another AMD event: going through cherry-picked numbers, with people who are supposed to be neutral (but are being paid to be there) maybe saying a few things about Zen...
> 
> So we wait for Q1...


/thread


----------



## Lex Luger

People will stop talking about Bulldozer when AMD releases a new desktop CPU that isn't hot garbage like their current lineup. Everything I've seen so far leads me to believe that Zen will be a success, especially if they are selling an unlocked 8-core processor at $350.


----------



## budgetgamer120

Quote:


> Originally Posted by *SystemTech*
> 
> Until it's in users' hands, we will never know its true performance.
> This will be another AMD event: going through cherry-picked numbers, with people who are supposed to be neutral (but are being paid to be there) maybe saying a few things about Zen...
> 
> So we wait for Q1...


Cherry-picked doesn't mean untrue. If you know what they test with and what settings they use, we can do the same on our own machines and make a comparison.
Quote:


> Originally Posted by *LongtimeLurker*
> 
> They did the same thing with Bulldozer.
> 
> They told everyone "Look! See how great it is?! It's running DIRT3 awesomely!!" But didn't allow anyone to get too close...
> 
> NOTE: I am not saying Zen will be another bulldozer, only pointing out a fact from a prior CPU architecture launch:
> 
> http://www.kitguru.net/components/cpu/carl/amd-fx-8-core-spotted-at-gamescom-running-dirt-3/
> 
> 
> 
> 
> 
> Now, if by "putting it through its paces" they mean installing and running Cinebench and/or PassMark, etc., then that would be awesome. If they don't... draw your own conclusions...


It ran DiRT fine, didn't it?

I hope they test the 16-thread variant.


----------



## The Robot

Will they run Forza Horizon on it?


----------



## Pantsu

These previews aren't for people looking for proper benchmark numbers. They're just marketing fluff, eSports pro gamers fulfilling their sponsorship obligations telling us how cool the product is.

If they're going to show something substantial, don't let it be another game that's 99% GPU bottlenecked, where CPU performance has zero effect on the frame rate. The enthusiast gaming crowd is potentially a big market for Zen, but only if it can actually compete with Intel in CPU-heavy titles, not just where CPU performance doesn't matter.


----------



## Particle

Quote:


> Originally Posted by *maarten12100*
> 
> eSports is probably the worst way to showcase a supposedly powerful new architecture.


Games typical of eSports, like StarCraft 2 and Dota 2, tend to be CPU limited. I'd have to disagree with your assertion.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Particle*
> 
> Games typical of eSports like StarCraft 2 and DOTA2 tend to be CPU limited. I'd have to disagree with your assertion.


Yup, but at the same time they both can run at 8 Billion FPS on a toaster, so their overall relevance is fading.


----------



## Chaython

https://www.overclock3d.net/news/cpu_mainboard/overclocker_delids_amd_am4_cpu/1


Spoiler: Reportedly it has been delidded, and it is soldered :D






Best for cooling, worst for RMA


----------



## 7850K

Quote:


> Originally Posted by *Chaython*
> 
> https://www.overclock3d.net/news/cpu_mainboard/overclocker_delids_amd_am4_cpu/1
> 
> 
> Spoiler: Reportedly it has been delidded, and it is soldered :D
> 
> 
> 
> 
> 
> 
> Best for cooling, worst for RMA


we already have a thread for that http://www.overclock.net/t/1617200/pcgh-amd-am4-first-pictures-of-a-delidded-summit-ridge-zen-or-bristol-ridge-from-south-korea

*"That's an Excavator-based Bristol Ridge, not a Summit Ridge"*
~The Stilt


----------



## andydabeast

Quote:


> Originally Posted by *Chaython*
> 
> https://www.overclock3d.net/news/cpu_mainboard/overclocker_delids_amd_am4_cpu/1
> 
> 
> Spoiler: Reportedly it has been delidded, and it is soldered :D
> 
> 
> 
> 
> 
> 
> Best for cooling, worst for RMA


"The CPU pictured is likely an AMD Bristol Ridge APU, rather than an AMD Zen-based Summit Ridge CPU. "


----------



## aberrero

Quote:


> Originally Posted by *SystemTech*
> 
> Until it's in users' hands, we will never know its true performance.
> This will be another AMD event: going through cherry-picked numbers, with people who are supposed to be neutral (but are being paid to be there) maybe saying a few things about Zen...
> 
> So we wait for Q1...


ASHES OF THE SINGULARITY
ALL DAY EVERY DAY


----------



## ZealotKi11er

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Yup, but at the same time they both can run at 8 Billion FPS on a toaster, so their overall relevance is fading.


Dota 2 yes but SC2 not really.


----------



## Chaython

Quote:


> Originally Posted by *7850K*
> 
> we already have a thread for that http://www.overclock.net/t/1617200/pcgh-amd-am4-first-pictures-of-a-delidded-summit-ridge-zen-or-bristol-ridge-from-south-korea
> 
> *"That's an Excavator-based Bristol Ridge, not a Summit Ridge"*
> ~The Stilt


Be mad at Tiny Tom Logan for reporting it, not me :s
AMD died in like 2010; IDK anything about it besides that it performs poorly, even worse year over year.


----------



## AlphaC

Run it on Ashes of the Singularity, Folding@home, SuperPi/other single-threaded OC benches, & SPECwpc (workstation benchmark)

esports you can do on a laptop...


----------



## Tobiman

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Dota 2 yes but SC2 not really.


I'm pretty sure StarCraft 2 is very CPU intensive.


----------



## Marios145

Breaking News:
+40% Hype per month, confirmed for Zen!!


----------



## Redwoodz

Quote:


> Originally Posted by *Marios145*
> 
> Breaking News:
> +40% Hype per month, confirmed for Zen!!


I suppose that to make you happy, AMD should just do a quiet launch.

OK, I am noticing a few things here not normally involved in an AMD launch.

First, they are paying some gaming celeb to be at the launch.
Second, they are going to let the public test drive it.
And finally, let's all remember that this lines up with the original Zen timeline... no delays at all.


----------



## Kinaesthetic

Quote:


> Originally Posted by *Redwoodz*
> 
> I suppose to make you happy AMD should just do a quiet launch.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ok I am noticing a few things here not normally involved in an AMD launch.
> 
> First they are paying some gaming celeb to be at the launch.
> Second they are going to let the public test drive.
> And finally let's all remember that this lines up with the original Zen timeline....no delays at all.


They should do a "quiet" launch. You do realize that hyping things up to Mars and beyond, and failing to deliver on that hype, is the entire reason the company is where it is right now? They never seem to learn.


----------



## mothergoose729

Quote:


> Originally Posted by *Kinaesthetic*
> 
> They should do a "quiet" launch. You do realize that hyping things up to Mars and beyond, and failing to deliver on hype, is the entire reason that company is where it is at right now? They never seem to learn.


The only people who care about the hype are us. We don't matter. Tiny niche market and all that.

All AMD is trying to do is get consumers to think about AMD and esports in the same sentence. Brand association. That is the point - so uninitiated buyers might pick a computer with an AMD processor next time they buy. Zen is just a pretense. The performance is irrelevant so long as the average consumer walks away thinking it is good for gaming because some esports bloke said so.

There is really no news here. It is a marketing event. Know what to expect, and adjust expectations accordingly.


----------



## jezzer

Quote:


> Originally Posted by *SystemTech*
> 
> Until its in users hands, we will never know its true performance.
> This will be another AMD event, going through cherry picked numbers with people who are supposed to be neutral (but are being paid to be there) maybe saying a few things about Zen...
> 
> So we wait for Q1...


Haha, yeah, just like Nvidia's "just a random card from the pallet" lies.

You can't fake performance unless you hustle like Nvidia, showing temps and clocks at 5% load and stuff like that, so it really depends on what they are going to show and how.


----------



## LancerVI

Quote:


> Originally Posted by *mothergoose729*
> 
> The only people who care about the hype is us. We don't matter. Tiny niche market and all that.
> 
> All AMD is trying to do is get consumers to think about AMD and esports in the same sentence. Brand association. That is the point - so uninitiated buyers might pick a computer with an AMD processor next time they buy. Zen is just a pretense. The performance is irrelevant so long as the average consumer walks away thinking it is good for gaming because some esports bloke said so.
> 
> There is really no news here. It is a marketing event. Know what to expect, and adjust expectations accordingly.


This exactly.

Companies don't exist to make you happy because they love you. They exist to make money. Doing a so-called "quiet launch" and expecting them to say nothing about a new product coming out is beyond ridiculous. I'd be irritated as a stockholder if they weren't doing this type of stuff. It's important.

"For what?" you may ask......TO MAKE MONEY.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Tobiman*
> 
> I'm pretty sure StarCraft 2 is very CPU intensive.


He said both games get a lot of FPS. I said Dota 2 does, while SC2 does not, as it's very CPU limited.


----------



## Marios145

Quote:


> Originally Posted by *Redwoodz*
> 
> I suppose to make you happy AMD should just do a quiet launch.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ok I am noticing a few things here not normally involved in an AMD launch.
> 
> First they are paying some gaming celeb to be at the launch.
> Second they are going to let the public test drive.
> And finally let's all remember that this lines up with the original Zen timeline....no delays at all.


Oh, I'm not talking about AMD hype; they will probably be pretty clear with their claims on Zen because they want a serious image in the CPU industry.
People on teh interwebz, though...


----------



## Redwoodz

Quote:


> Originally Posted by *Marios145*
> 
> Oh, i'm not talking about amd hype, they will probably be pretty clear on their claims with zen because they want a serious image for the cpu industry.
> People on teh interwebz though....


Hype is food for the enthusiast PC scene. Half the people that argue relentlessly never even use the product. It's just the way it is.


----------



## assaulth3ro911

I'm buying into Zen no matter what since I'm due for an upgrade and I know that at WORST it will perform at 'average'. So that's more than enough for me to support a less-corrupt company than the competitor.


----------



## hotrod717

Starting to get a bit excited now. I have very fond memories of the 1090T, 1100T, and 960T.


----------



## SoloCamo

Quote:


> Originally Posted by *mothergoose729*
> 
> The only people who care about the hype is us. We don't matter. Tiny niche market and all that.
> 
> All AMD is trying to do is get consumers to think about AMD and esports in the same sentence. Brand association. That is the point - so uninitiated buyers might pick a computer with an AMD processor next time they buy. Zen is just a pretense. The performance is irrelevant so long as the average consumer walks away thinking it is good for gaming because some esports bloke said so.
> 
> There is really no news here. It is a marketing event. Know what to expect, and adjust expectations accordingly.


Quote:


> Originally Posted by *LancerVI*
> 
> This exactly.
> 
> Companies don't exist to make you happy because they love you. They exist to make money. Doing a so called "quiet launch" and expecting them to say nothing about a new product coming out is beyond ridiculous. I'd be irritated as a stock holder if they weren't doing this type of stuff. It's important.
> 
> "For what?" you may ask......TO MAKE MONEY.


I'm relieved to see I'm not the only one that sees this for what it is...


----------



## TheLAWNOOB

Quote:


> Originally Posted by *budgetgamer120*
> 
> Quote:
> 
> 
> 
> Originally Posted by *SystemTech*
> 
> Until it's in users' hands, we will never know its true performance.
> This will be another AMD event: going through cherry-picked numbers, with people who are supposed to be neutral (but are being paid to be there) maybe saying a few things about Zen...
> 
> So we wait for Q1...
> 
> 
> 
> Cherry-picked doesn't mean untrue. If you know what they test with and what settings they use, we can do the same on our own machines and make a comparison.
> Quote:
> 
> 
> 
> Originally Posted by *LongtimeLurker*
> 
> They did the same thing with Bulldozer.
> 
> They told everyone "Look! See how great it is?! It's running DIRT3 awesomely!!" But didn't allow anyone to get too close...
> 
> NOTE: I am not saying Zen will be another bulldozer, only pointing out a fact from a prior CPU architecture launch:
> 
> http://www.kitguru.net/components/cpu/carl/amd-fx-8-core-spotted-at-gamescom-running-dirt-3/
> 
> 
> 
> 
> 
> Now, if by "putting it through its paces" they mean installing and running Cinebench and/or PassMark, etc., then that would be awesome. If they don't... draw your own conclusions...
> 
> 
> It ran DiRT fine, didn't it?
> 
> I hope they test the 16-thread variant.

lol defending cherry picked benchmarks, good one.


----------



## budgetgamer120

Quote:


> Originally Posted by *TheLAWNOOB*
> 
> lol defending cherry picked benchmarks, good one.


Cherry picked or not, there was no lie told. It ran DiRT 3. That's my point.


----------



## TheLAWNOOB

Quote:


> Originally Posted by *budgetgamer120*
> 
> Quote:
> 
> 
> 
> Originally Posted by *TheLAWNOOB*
> 
> lol defending cherry picked benchmarks, good one.
> 
> 
> 
> Cherry picked or not, there was no lie told. It ran DiRT 3. That's my point.

It would be a shame if it couldn't run DiRT 3.


----------



## mouacyk

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Yup, but at the same time they both can run at 8 Billion FPS on a toaster, so their overall relevance is fading.


Right, I wouldn't dote on it.


----------



## SuperZan

Quote:


> Originally Posted by *mothergoose729*
> 
> The only people who care about the hype is us. We don't matter. Tiny niche market and all that.
> 
> All AMD is trying to do is get consumers to think about AMD and esports in the same sentence. Brand association. That is the point - so uninitiated buyers might pick a computer with an AMD processor next time they buy. Zen is just a pretense. The performance is irrelevant so long as the average consumer walks away thinking it is good for gaming because some esports bloke said so.
> 
> There is really no news here. It is a marketing event. Know what to expect, and adjust expectations accordingly.


Thank you. I think that AMD has been about as quiet on Zen as is feasible for a new product in a market duopoly. Any 'hype' has come from forumites and tech journalists. To try to call AMD on the carpet for, paraphrasing here, 'hyping Zen to the moon', is just absolutely ridiculous. It's indicative of the bizarre psychic wound people allow themselves to suffer from by becoming too emotionally invested in hardware launches and/or feeling an intense compulsion to be 'right' about hardware launches on the internet.

This event is almost literally the minimum they could do without upsetting shareholders. Saying that they are in fact launching a product and that its launch is on time, and oh, here's some evidence of our functioning product, is not hype. The only way it becomes hype is if *we* hype it up. Take it for what it is: evidence that Zen exists, will be launched on time, and that it functions.


----------



## tristan3214

Quote:


> Originally Posted by *assaulth3ro911*
> 
> I'm buying into Zen no matter what since I'm due for an upgrade and I know that at WORST it will perform at 'average'. So that's more than enough for me to support a less-corrupt company than the competitor.


I am in the same boat; still have a 1090T acquired in 2012. I would rather not pay for a pricey Intel that may only be slightly better, when I can get more cores for less on an architecture that is brand new. Although, I must admit, I was worried they might try to price them high.


----------



## Olivon

Quote:


> Geoff Keighley ...
> 
> If you're serious about gaming, this is an event you do not want to miss.


Haha, good one.


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> Cherry-picked doesn't mean untrue. If you know what they test with and what settings they use, we can do the same on our own machines and make a comparison.


Cherry picked does imply that it's not representative of a wider whole.

They wouldn't be called cherry picked if they were fabrications, they would be called lies. A cherry picked benchmark, by definition of the idiom, is accurate, but misleads by only showing part of the truth.
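To make the idiom concrete, here's a toy numeric sketch (every figure below is hypothetical, invented purely for illustration): the single best result is perfectly accurate, yet tells a different story than the full set.

```python
# Toy illustration of cherry-picking: all numbers are made up.
# Hypothetical per-game performance of a new CPU relative to a rival (1.0 = parity).
scores = {
    "Game A": 1.10,  # the one flattering title a vendor might choose to show
    "Game B": 0.85,
    "Game C": 0.90,
    "Game D": 0.88,
}

cherry_picked = max(scores.values())          # accurate, but unrepresentative
average = sum(scores.values()) / len(scores)  # the wider picture

print(f"Cherry-picked result: {cherry_picked:.2f}x")
print(f"Average across games: {average:.2f}x")
```

The 1.10x figure is no lie; it just isn't the 0.93x average a buyer would actually experience.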


----------



## DarkBlade6

Quote:


> Originally Posted by *Kinaesthetic*
> 
> They should do a "quiet" launch. You do realize that hyping things up to Mars and beyond, and failing to deliver on hype, is the entire reason that company is where it is at right now? They never seem to learn.


It has nothing to do with "hype". Their current products are just inferior to their competitor's; the three big markets (mobile, server, OEM) are just not interested in AMD CPUs and GPUs, except the companies that make low-end garbage laptops with 720p TFT screens. They would be in the same situation even if they hadn't hyped anything.


----------



## flippin_waffles

Quote:


> Originally Posted by *SuperZan*
> 
> Thank you. I think that AMD has been about as quiet on Zen as is feasible for a new product in a market duopoly. Any 'hype' has come from forumites and tech journalists. To try to call AMD on the carpet for, paraphrasing here, 'hyping Zen to the moon', is just absolutely ridiculous. It's indicative of the bizarre psychic wound people allow themselves to suffer from by becoming too emotionally invested in hardware launches and/or feeling an intense compulsion to be 'right' about hardware launches on the internet.
> 
> This event is almost literally the minimum they could do without upsetting shareholders. Saying that they are in fact launching a product and that its launch is on time, and oh, here's some evidence of our functioning product, is _not_ hype. The only way it becomes hype is if _*we*_ hype it up. Take it for what it is: evidence that Zen exists, will be launched on time, and that it functions.


Yeah, it's embarrassingly obvious. This is how pathetic the tech community has become thanks to the viral marketing that started with NV's focus-group trolls. They didn't just disappear after NV claimed to have closed down its viral marketing scheme. All you have to do is read the forums and it kinda sticks out like a sore thumb. Now NV and Intel have a common enemy, and two online teams of trolls on the same side can turn the narrative whichever way they please. It literally wouldn't matter what AMD revealed; the tone would be spun negative.
In the end it won't matter. If Summit Ridge is good, people won't care what the trolls say. It might be a must-have chip for gaming, who knows. One would imagine that, considering they have a world-class graphics division, a new CPU designed from scratch would be built with gaming in mind. AMD has released some world-class products lately, even on the software side, with a ton of new stuff: GPUOpen, HSA/ROCm, HIP... on and on. This platform AMD is building seems pretty cool, with a lot of performance potential for HPC, gaming, data center, and entertainment. Their engineers are doing amazing work. R&D efficiency, budget for budget, must be heavily in AMD's favor, considering they are designing and building cutting-edge CPUs, GPUs, APUs, software, semi-custom chips, and chipsets.

Looking forward to Summit Ridge and a GPU for the platform.


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> Cherry picked does imply that it's not representative of a wider whole.
> 
> They wouldn't be called cherry picked if they were fabrications, they would be called lies. A cherry picked benchmark, by definition of the idiom, is accurate, but misleads by only showing part of the truth.


I don't disagree with that.


----------



## Paladin Goo

Regardless of how it turns out: if the hype is real, or if it's even close to the performance of the Broadwell-E chips, I'll be buying this for my next build. No questions. This is the first time in nearly a decade I'm saying I'll go AMD. I haven't gone AMD for my main rig since dual core was the big thing (I had an Athlon X2 5600+).


----------



## ZealotKi11er

Quote:


> Originally Posted by *TheLAWNOOB*
> 
> TL;DR
> 
> What is AMD Red Team Plus?
> 
> Do yourself a favor and stop selectively ignoring facts that are inconvenient.


I'm in a bunch of VLANs with them, and never have I seen them fanboying over AMD or hating on Nvidia. They are a bunch of tech enthusiasts.


----------



## Pyrotagonist

I don't think NV Focus Group/Red Team plus is the problem. Members are a tiny minority of users - most people don't need any extra convincing to become hardline fans of a particular side. Plus I doubt the more rabid members of either group would be any different if the group didn't exist. Just seems like people want a "boogeyman". We're all the boogeyman.


----------



## lombardsoup

Quote:


> Originally Posted by *Pyrotagonist*
> 
> I don't think NV Focus Group/Red Team plus is the problem. Members are a tiny minority of users - most people don't need any extra convincing to become hardline fans of a particular side. Plus I doubt the more rabid members of either group would be any different if the group didn't exist. Just seems like people want a "boogeyman". We're all the boogeyman.


The only thing I want out of this is for AMD to compete again in the enthusiast segment, which will hopefully lead to lower prices at retail, benefiting all of us.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Pyrotagonist*
> 
> I don't think NV Focus Group/Red Team plus is the problem. Members are a tiny minority of users - most people don't need any extra convincing to become hardline fans of a particular side. Plus I doubt the more rabid members of either group would be any different if the group didn't exist. Just seems like people want a "boogeyman". We're all the boogeyman.


The way I see things in the GPU market, Nvidia is like the default installation option, and you have to go out of your way to select AMD.


----------



## AlphaC

Quote:


> Originally Posted by *tristan3214*
> 
> I am in the same boat; still have a 1090T acquired in 2012. I would rather not pay for a pricey Intel that may only be slightly better, when I can get more cores for less on an architecture that is brand new. Although, I must admit, I was worried they might try to price them high.


Intel is not that pricey if you have a Microcenter... plus the Sandy Bridge CPUs are still relevant today.
Quote:


> Originally Posted by *ZealotKi11er*
> 
> The way I see things in the GPU market, Nvidia is like the default installation option, and you have to go out of your way to select AMD.


It's more like Nvidia GPUs are reference points for AMD products due to the market share in discrete non-console GPUs.


----------



## ZealotKi11er

Quote:


> Originally Posted by *AlphaC*
> 
> Intel is not that pricey if you have a Microcenter... plus the Sandy bridge CPUs are still relevant today.
> It's more like Nvidia GPUs are reference points for AMD products due to the market share in discrete non-console GPUs.


Even if AMD has the product you still have to convince people to buy their products. We are talking about the average gamer here.


----------



## sepiashimmer

At any rate, we can be sure it won't be as bad as their current lineup, and at best it'll be at least 40% better than Excavator. I'd definitely want to upgrade to SR3 if its price is right; I just want a CPU that can fully utilize my GPU and record without problems.


----------



## The Robot

Quote:


> Originally Posted by *DarkBlade6*
> 
> It has nothing to do with "hype". Their current products are just inferior to their competitor's; the three big markets (mobile, server, OEM) are just not interested in AMD CPUs and GPUs, except the companies that make low-end garbage laptops with 720p TFT screens. They would be in the same situation even if they hadn't hyped anything.


You're forgetting consoles.


----------



## Defoler

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Dota 2 will show the IPC differences from Intel easily.


Hehe that is cute.


----------



## Theelichtje

"And here we have Zen running Ashes of the Singularity"


----------



## formula m

Quote:


> Originally Posted by *AlphaC*
> 
> Run it on Ashes of the Singularity, Folding@home, SuperPi/other single-threaded OC benches, & SPECwpc (workstation benchmark)
> 
> esports you can do on a laptop...


Don't know anyone who plays Folding@home, SuperPi, etc.

But I do know people who play eSports and they do not use laptops. They have $500 CPUs & $800 video cards.

Oddly, you can run Folding@home, SuperPi, etc. on a laptop... if watching numbers crunch is your thing.


----------



## guttheslayer

When you see how Kaby Lake turned out to be such a disappointment, you'd better hope Zen is good.

If not, we all lose out, not just AMD.


----------



## EightDee8D

The guy who always performs above all failed, so now, for some weird reason, we are going to expect something big from the guy who always fails.

_OCN logic_ /-


----------



## Blameless

Quote:


> Originally Posted by *formula m*
> 
> Oddly, you can run Folding@home, SuperPi, etc. on a laptop... if watching numbers crunch is your thing.


The purpose of Folding@home, or any other distributed computing project, isn't to watch numbers crunch; it's to return quality work to the project. How much work one can return depends directly on the performance of the hardware used to run it and the time it can be run.
Quote:


> Originally Posted by *EightDee8D*
> 
> The guy who always performs above all, failed. so now for some weird reason we are going to expect something big from the guy who always fails.
> 
> _OCN logic_ /-


Kaby Lake is neither a failure nor can one claim that AMD always fails without ignoring quite a few successes.


----------



## Particle

Quote:


> Originally Posted by *Blameless*
> 
> Kaby Lake is neither a failure nor can one claim that AMD always fails without ignoring quite a few successes.


Agreed. The Phenom II is not a distant memory, and it was well regarded in its time even if it didn't hold the overall performance crown. Various APU products since then have been successes in their segment. AMD has also had a number of GPU products that were either brief performance leaders or at least very strong but for good prices. The problem is how jaded people seem to want to be as if it makes them look cool or above anyone who dares think otherwise. Hating on AMD has been the cool kid thing to do for quite some time, silly and uninformed as it makes a person look.


----------



## Olivon

Quote:


> Originally Posted by *guttheslayer*
> 
> When you see how Kaby Lake turned out to be such a disappointment.


And what did you expect, exactly? KBL is exactly where we expected it:
just a clock bump on an enhanced 14nm process. And Tom's Hardware's preview was done on a Z170 board with a beta BIOS.
People expect way too much when it isn't warranted, and should stop believing everything they read on WCCFTurd.
If Intel can approach a 5 GHz overclock on most retail CPUs, KBL will be a bigger success than SKL.


----------



## Redwoodz

Quote:


> Originally Posted by *Olivon*
> 
> And what did you expect, exactly? KBL is exactly where we expected it:
> just a clock bump on an enhanced 14nm process. And Tom's Hardware's preview was done on a Z170 board with a beta BIOS.
> People expect way too much when it isn't warranted, and should stop believing everything they read on WCCFTurd.
> If Intel can approach a 5 GHz overclock on most retail CPUs, KBL will be a bigger success than SKL.


Intel's brand-new CPU architecture releases in Jan 2017, on 14nm.
AMD's brand-new CPU architecture releases in Jan 2017, on 14nm.

FINALLY AMD gets a CPU out on a similar node at the same time as Intel.
Zen will be WAY bigger than Kaby Lake, even IF they don't sell as many chips.


----------



## ebduncan

Quote:


> Originally Posted by *maarten12100*
> 
> eSports is probably the worst way to showcase a supposedly powerful new architecture.


Actually, it's one of the best ways. Esports is currently one of the biggest drivers in the PC market. Besides, a lot of esports guys stream to Twitch and other services, which is an additional hit to performance.
Quote:


> Originally Posted by *ZealotKi11er*
> 
> Dota 2 will show the IPC differences from Intel easily.


To a certain degree, yes, but only when testing above 60 Hz. Dota 2 got much harder to run recently with the engine updates that added better shadows.

Quote:


> Originally Posted by *Particle*
> 
> Games typical of eSports like StarCraft 2 and DOTA2 tend to be CPU limited. I'd have to disagree with your assertion.


this.

Everyone always says these games are easy to run. However, they completely forget that most of these games are streamed online to various services. It actually takes a good bit of CPU/GPU power to push even a simple 1080p stream out on top of gaming at 60FPS+.

I've been waiting to upgrade for a long time now for this reason. The mainstream i7 quad cores + HT are not fast enough for my needs, and the i7 eight cores + HT cost too dang much.

Hopefully Zen brings the price of the PCMR platform down and we go back to the days of price wars. I still cannot believe I've rocked the same CPU for nearly 4 years now.


----------



## Paladin Goo

Quote:


> Originally Posted by *EightDee8D*
> 
> The guy who always performs above all, failed. so now for some weird reason we are going to expect something big from the guy who always fails.
> 
> _OCN logic_ /-


"Always fail"? wat?

They don't meet the benchmark performance of their competition, big whoop. I'm an Intel guy usually, but that doesn't mean AMD is no good.

For a mid-range rig or home theater PC, I'd buy AMD, no question. Their APUs are amazing for such a use. They have the low-power market cornered, which gave them the console market. The entire console market. Yes, they're "failing".


----------



## GamerusMaximus

Quote:


> Originally Posted by *The Robot*
> 
> You're forgetting consoles.


Console margins are razor thin compared to servers or even consumer desktop parts. You don't see Nvidia fighting for the console space for a reason: there isn't enough money there to justify the fight.

AMD controls all 3 consoles, yet they were losing money even with the PS4 selling big. It's only with the 400 series launch that things are looking a little better, although it's still not good for them. If consoles were worth good money, AMD would have been in the black for 2014. Instead they lost tons of cash from Maxwell destroying the 200 series in sales, and the 300 series being delayed (and then eventually coming out as little more than rebrands with newer firmware, except the 380x).

TLDR: consoles are not worth enough cash to be a consideration next to servers, desktops, laptops, etc. Consoles get the scraps of the market.
Quote:


> Originally Posted by *formula m*
> 
> Don't know anyone who plays Folding@home, SuperPi, etc.
> But I do know people who play eSports and they do not use laptops. They have $500 CPUs & $800 video cards.
> 
> Oddly, you can run Folding@home, SuperPi, etc. on a laptop... if watching numbers crunch is your thing.


Folding@home, SuperPi, etc. give us a good idea of general performance, and an easy way to compare to chips currently in existence.

Just because they are not games does not mean they are useless. They are good benchmarks, and an event like this needs consistent, respected benchmarks to show off their new silicon. Hiding behind a game that a toaster can run acceptably stinks of cherry-picking and misdirection, à la Bulldozer's launch.

If Zen is as good as AMD says it is, there is no reason not to allow some hard benchmark numbers to be shown. Not saying that showing games is bad by any means, but they need to show more than just esports games.


----------



## budgetgamer120

Quote:


> Originally Posted by *GamerusMaximus*
> 
> Console margins are razor thin compared to servers or even consumer desktop parts. You don't see Nvidia fighting for the console space for a reason: there isn't enough money there to justify the fight.
> 
> AMD controls all 3 consoles, yet they were losing money even with the PS4 selling big. It's only with the 400 series launch that things are looking a little better, although it's still not good for them. If consoles were worth good money, AMD would have been in the black for 2014. Instead they lost tons of cash from Maxwell destroying the 200 series in sales, and the 300 series being delayed (and then eventually coming out as little more than rebrands with newer firmware, except the 380x).
> 
> TLDR: consoles are not worth enough cash to be a consideration next to servers, desktops, laptops, etc. Consoles get the scraps of the market.


Do you have any proof of any of this? Sounds like assumptions.

That's like saying ARM doesn't make money because they sell cheap SoCs.


----------



## GamerusMaximus

Quote:


> Originally Posted by *budgetgamer120*
> 
> Do you have any proof of any of this? Sounds like assumptions.
> 
> That's like saying ARM doesn't makeb money because they sell cheap SoC.


"AMD has stated in the past that typical margins hover around the 35%-40% range, and that margins on parts similar to the ones used in the PS4 yield a lower number, hovering in the mid-teens. So, the normal range of 35%-40% certainly isn't $100 to begin with, and a "mid-teens" percentage would only be around $15"

http://www.geek.com/games/each-ps4-sale-makes-more-profit-for-amd-than-sony-but-how-much-1577855/

https://www.extremetech.com/gaming/150892-nvidia-gave-amd-ps4-because-console-margins-are-terrible

Meanwhile, Intel's server chips have ridiculously large margins. They had 50-60% margins on parts that sold for thousands apiece, as opposed to 15% margins on parts that sold for under $100. AMD would need to sell far more chips than Intel did to get the same gross profit, and would need to buy far more wafers than Intel to get to that point. Especially their APUs, which have large dies given they have CPUs and GPUs on them.

ARM makes their money entirely on royalties. AMD isn't licensing their stuff out to other CPU makers. Comparing apples to oranges here.
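The margin arithmetic above can be sketched in a few lines. All figures are the ballpark estimates from the post and the linked articles (hypothetical round numbers, not official AMD or Intel data):

```python
# Rough sketch of the margin math discussed above. Every number here is a
# hypothetical estimate taken from the post, not official financial data.
console_asp = 100.0      # assumed selling price of a console APU, USD
console_margin = 0.15    # the quoted "mid-teens" margin
server_asp = 2000.0      # illustrative high-end server chip price, USD
server_margin = 0.55     # middle of the quoted "50-60%" range

console_profit = console_asp * console_margin  # gross profit per console chip
server_profit = server_asp * server_margin     # gross profit per server chip

# How many console chips would it take to match one server chip's gross profit?
chips_needed = server_profit / console_profit

print(f"console chip profit: ${console_profit:.0f}")
print(f"server chip profit:  ${server_profit:.0f}")
print(f"console chips per server chip: {chips_needed:.0f}")
```

With these assumed inputs the console part nets around $15 per chip versus roughly $1100 for the server part, which is the post's point about why consoles are volume filler rather than a profit engine.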


----------



## magnek

Quote:


> Originally Posted by *mothergoose729*
> 
> The only people who care about the hype is us. We don't matter. Tiny niche market and all that.
> 
> All AMD is trying to do is get consumers to think about AMD and esports in the same sentence. Brand association. That is the point - so uninitiated buyers might pick a computer with an AMD processor next time they buy. Zen is just a pretense. The performance is irrelevant so long as the average consumer walks away thinking it is good for gaming because some esports bloke said so.
> 
> There is really no news here. It is a marketing event. Know what to expect, and adjust expectations accordingly.


One of the most sensible posts in this thread by far.


----------



## chir

Yeah no thanks. Didn't impress with Bulldozer, not going to impress now.


----------



## TheBloodEagle




----------



## Ultracarpet

Quote:


> Originally Posted by *chir*
> 
> Yeah no thanks. Didn't impress with Bulldozer, not going to impress now.


Met a guy, didn't like him. Done with humans.


----------



## magnek

Quote:


> Originally Posted by *Ultracarpet*
> 
> Met a guy, didn't like him. Done with humans. *guys*


FTFY


----------



## Ultracarpet

Quote:


> Originally Posted by *magnek*
> 
> FTFY


Wow, I totally botched the delivery


----------



## The Robot

Quote:


> Originally Posted by *GamerusMaximus*
> 
> Console margins are razor thin compared to servers or even consumer desktop parts. You don't see Nvidia fighting for the console space for a reason: there isn't enough money there to justify the fight.
> 
> AMD controls all 3 consoles, yet they were losing money even with the PS4 selling big. It's only with the 400 series launch that things are looking a little better, although it's still not good for them. If consoles were worth good money, AMD would have been in the black for 2014. Instead they lost tons of cash from Maxwell destroying the 200 series in sales, and the 300 series being delayed (and then eventually coming out as little more than rebrands with newer firmware, except the 380x).
> 
> TLDR: consoles are not worth enough cash to be a consideration next to servers, desktops, laptops, etc. Consoles get the scraps of the market.
> Folding@home, SuperPi, etc. give us a good idea of general performance, and an easy way to compare to chips currently in existence.
> 
> Just because they are not games does not mean they are useless. They are good benchmarks, and an event like this needs consistent, respected benchmarks to show off their new silicon. Hiding behind a game that a toaster can run acceptably stinks of cherry-picking and misdirection, à la Bulldozer's launch.
> 
> If Zen is as good as AMD says it is, there is no reason not to allow some hard benchmark numbers to be shown. Not saying that showing games is bad by any means, but they need to show more than just esports games.


Yet Nvidia will produce a custom SoC for the Nintendo Switch. Maybe Nintendo paid through the roof for it, though.


----------



## chir

Quote:


> Originally Posted by *Ultracarpet*
> 
> Met a guy, didn't like him. Done with humans.


I meant this teaser bs. Measurements will speak for themselves. Just seems pointless when everybody knows they're terrible at PR and can't be trusted to give any straight facts of significance.


----------



## Ultracarpet

Quote:


> Originally Posted by *chir*
> 
> I meant this teaser bs. Measurements will speak for themselves. Just seems pointless when everybody knows they're terrible at PR and can't be trusted to give any straight facts of significance.


Oh, well in that case, you had a lot more material to use than just bulldozer lol.


----------



## BulletBait

Frankly, both of his statements are a bit asinine. He must be on an ARM-based Chromebook, tablet, or phone only these days, since NetBurst was super unimpressive and nV's marketing isn't exactly squeaky clean.

Of course we wouldn't want to break the narrative here.


----------



## Cyro999

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Yup, but at the same time they both can run at 8 Billion FPS on a toaster, so their overall relevance is fading.


Not true at all. SC2 runs poorly on the fastest CPUs and extremely poorly on slow ones.

It doesn't help that the frame times are very uneven because of the engine style - for all frames to be faster than 1/60th of a second (what people call "60fps") you actually need the FPS meter to say 100 or so.
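As a hypothetical illustration of that point (made-up frame times, not actual SC2 measurements): an FPS counter that averages frame times can happily read 60 while individual frames blow well past the 16.7 ms budget.

```python
# Made-up frame times (ms) for an engine with uneven pacing - illustrative
# numbers only, not measured StarCraft 2 data.
frame_times_ms = [10, 12, 11, 30, 10, 28, 11, 12, 29, 13]

# The typical FPS counter reports the average over a window...
avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)

# ...but perceived smoothness depends on EVERY frame fitting the
# 1/60 s (about 16.7 ms) budget.
budget_ms = 1000 / 60
all_frames_under_budget = all(t <= budget_ms for t in frame_times_ms)

print(f"average fps: {avg_fps:.0f}")
print(f"every frame under 16.7 ms? {all_frames_under_budget}")
```

Here the counter averages out to about 60 fps, yet three frames take 28-30 ms, which is exactly the stutter the post describes: you need the meter to read well above 60 before the slowest frames actually fit the 60fps budget.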


----------



## TTheuns

If they can make high-powered, high core count CPUs available at decent pricing, maybe developers will start optimizing games for more cores.


----------



## flippin_waffles

Quote:


> Originally Posted by *TheLAWNOOB*
> 
> TL;DR
> 
> What is AMD Red Team Plus?
> 
> Do yourself a favor and stop selectively ignore facts that are inconvenient.


Yeah, no. Not even remotely the same; it's a false equivalency. I know how NV's viral marketing team likes to bring that up as a counterpoint, thinking there is some victory to claim by doing so.


----------



## Serios

Quote:


> Originally Posted by *The Robot*
> 
> Yet Nvidia will produce a custom SoC for Nintendo Switch. Maybe they paid through the roof for it though.


Nah, the way I heard it they are losing money on those chips.
Yeah, it nicely contradicts the assumption that Nvidia doesn't care about consoles.
Semi-custom, the division responsible for the GPUs found in the new consoles, is AMD's most profitable division right now, so I don't see the problem. Also, Nvidia doesn't have the most important thing MS and Sony wanted: an x86 CPU, so it's not like they could have competed with AMD for the console contracts.


----------



## budgetgamer120

Quote:


> Originally Posted by *GamerusMaximus*
> 
> "AMD has stated in the past that typical margins hover around the 35%-40% range, and that margins on parts similar to the ones used in the PS4 yield a lower number, hovering in the mid-teens. So, the normal range of 35%-40% certainly isn't $100 to begin with, and a "mid-teens" percentage would only be around $15"
> 
> http://www.geek.com/games/each-ps4-sale-makes-more-profit-for-amd-than-sony-but-how-much-1577855/
> 
> https://www.extremetech.com/gaming/150892-nvidia-gave-amd-ps4-because-console-margins-are-terrible
> 
> Meanwhile, Intel's server chips have ridiculously large margins. They had 50-60% margins on parts that sold for thousands apiece, as opposed to 15% margins on parts that sold for under $100. AMD would need to sell far more chips than Intel did to get the same gross profit, and would need to buy far more wafers than Intel to get to that point. Especially their APUs, which have large dies given they have CPUs and GPUs on them.
> 
> ARM makes their money entirely on royalties. AMD isn't licensing their stuff out to other CPU makers. Comparing apples to oranges here.


I totally overlooked that ARM does licensing. But AMD is making good money on the low-performance parts they sell to Microsoft and Sony for the consoles.


----------



## NuclearPeace

Quote:


> Originally Posted by *TTheuns*
> 
> If they can make high-powered, high core count CPUs available at decent pricing, maybe developers will start optimizing games for more cores.


Maybe in theory but in practice I doubt it.

We already have 8-core CPUs in consoles, and we have had Bulldozer and its derivative architectures for around half a decade. At this point I would say right about now is as good as it's going to get for a long time. Developers aren't going to make games more multithreaded than they have to. Game developers program to sell games, not benchmarks.


----------



## motoray

Quote:


> Originally Posted by *NuclearPeace*
> 
> Maybe in theory but in practice I doubt it.
> 
> We already have 8 core CPUs in consoles and we have had Bulldozer and its derivative architectures for around half a decade. At this point I would say right about now is as good as its going to get for a long time. *Developers aren't going to make games more multithreaded more than they have to*. Game developers program to sell games, not benchmarks.


So the point was that if Zen truly is good and the majority of people start moving to high core count CPUs due to the significantly cheaper price, devs will start to change. Yes, we have had high core count Bulldozer for a long time, but that does not mean a lot of people use it. So why would they code for the minority? And with the presumably lower power consumption, higher core count laptops will be coming, which will make using more cores for everyday programs more relevant.


----------



## NuclearPeace

Even if people buy Zen in droves, where is the incentive to spend a lot of resources making games more multithreaded if the current level of multithreading gives 99.9% of people satisfactory performance? Game studios have limited resources and can't spend all of their money on optimization.


----------



## SoloCamo

Quote:


> Originally Posted by *NuclearPeace*
> 
> Even if people buy Zen in droves, where is the incentive to spend a lot of resources making games more multithreaded if the level of multithreading is enough for 99.9% of people to get satisfactory performance? Game studios have limited resources and cant spend all of their money on optimization.


Consoles. And given that we are seeing the effect now, and that the next few years of consoles will still be many-core, weak-IPC CPUs, I'd say it's a safe bet that 8 cores will at least be taken advantage of.


----------



## Aussiejuggalo

So will this preview finally have full specs, or just show how it performs in games and crap while the specs wait for CES?

Also, when the hell are we going to see some motherboards?









----------



## Redwoodz

Quote:


> Originally Posted by *NuclearPeace*
> 
> Maybe in theory but in practice I doubt it.
> 
> We already have 8 core CPUs in consoles and we have had Bulldozer and its derivative architectures for around half a decade. At this point I would say right about now is as good as its going to get for a long time. Developers aren't going to make games more multithreaded more than they have to. Game developers program to sell games, not benchmarks.


Quote:


> Originally Posted by *NuclearPeace*
> 
> Even if people buy Zen in droves, where is the incentive to spend a lot of resources making games more multithreaded if the level of multithreading is enough for 99.9% of people to get satisfactory performance? Game studios have limited resources and cant spend all of their money on optimization.


Quote:


> Originally Posted by *SoloCamo*
> 
> Consoles. And being that we are seeing the effect now and the next few years ahead for consoles will still be many core, weak ipc cpu's I'd say it's a safe bet that 8 cores will at least be taken advantage of.


@ NuclearPeace
You are assuming gaming is a top priority in designing a chip, and that the current level of multi-threading will always be enough. It's not, and it won't be. Multi-core is superior where it matters, where the money is made.
Have you seen the core count of Intel's latest chips?

As far as the game devs go, AMD has over 60% marketshare of x86 GPUs in gaming, including consoles.
Multi-core is likely the future because we are reaching the point where single-threaded gains are largely a by-product of node reduction.


----------



## AlphaC

Quote:


> Originally Posted by *Redwoodz*
> 
> @ NuclearPeace
> You are assuming gaming is a top priority in designing a chip, and the current level of multi-threading will always be enough.It's not and It won't. Multi-core is superior where it matters, where the money is made.
> Have you seen the core count of Intel's latest chips?
> 
> As far as the game devs, AMD has over 60% marketshare of x86 gpu's in gaming, including consoles.
> Multi-core is likely the future because we are reaching the point where single-threaded gains are largely a by-product of node reduction.


Gaming and non-threaded applications have similar workloads

If you look at, say, AutoCAD or any other 3D CAD program, 4 cores clocked high do better than 8 cores clocked at half the speed.



----------



## Redwoodz

Quote:


> Originally Posted by *AlphaC*
> 
> Gaming and non-threaded applications have similar workloads
> 
> If you look at say Autocad or any other 3d CAD program, 4 cores clocked high do better than 8 cores clocked at half the speed
> 
> i.e.


True, that's my point. You don't develop a chip for gaming; you develop for industry, to a certain degree. Gaming performance will come as a result. With adjustable boost and TDP you tailor for the demand.


----------



## Nightbird

Time to sell AMD stock, if performance was as rumored they wouldn't do the reveal this way. It's non-optimal.


----------



## SuperZan

Quote:


> Originally Posted by *Nightbird*
> 
> Time to sell AMD stock, if performance was as rumored *they wouldn't do the reveal this way. It's non-optimal*.


Nice to meet you, my name is AMD.


----------



## Newwt

Quote:


> Originally Posted by *Nightbird*
> 
> Time to sell AMD stock, if performance was as rumored they wouldn't do the reveal this way. It's non-optimal.


Or... they're not revealing the final chip and are giving us a preview of what they have so far...


----------



## budgetgamer120

Quote:


> Originally Posted by *Nightbird*
> 
> Time to sell AMD stock, if performance was as rumored they wouldn't do the reveal this way. It's non-optimal.


They clearly don't have a product to sell but want to keep people's interest.


----------



## DMatthewStewart

Quote:


> Originally Posted by *maarten12100*
> 
> eSports is probably the worst way to showcase a supposedly powerful new architecture.


^^this

And add that "eSports legend" sounds so ridiculous. I guess my grandmother was a "crocheting legend" but we'd never refer to her that way


----------



## xzamples

Quote:


> Originally Posted by *maarten12100*
> 
> eSports is probably the worst way to showcase a supposedly powerful new architecture.


agreed, especially since esports games don't require great hardware to run and the pros usually set the settings low.


----------



## PostalTwinkie

Quote:


> Originally Posted by *maarten12100*
> 
> eSports is probably the worst way to showcase a supposedly powerful new architecture.


That is the perspective we would have as nerds. But the nerds in the crowd who would understand true benchmarking are an extreme minority.

Let me offer you a business perspective:

eSports has a massive and frothing fan-base willing to spend money! League of Legends by itself commands ~70,000,000+ active users; add DoTA 2, CS:GO, etc., etc., and a business wanting to launch a new product is looking at ~100,000,000 individuals as a target group. A group that is generally more technically 'savvy' than the average consumer, easily accessible online, and one that considers itself 'fans'.

The big one there is the word 'fan'. The NFL figured out how to get grown men to buy a plastic cheese hat and to dress up....



AMD can sell a few processors to some nerds.


----------



## SoloCamo

Quote:


> Originally Posted by *xzamples*
> 
> agreed, especially since esports games don't require great hardware to run and the pros usually set the settings low.


Sure, it doesn't take much, but maintaining a locked 300fps in CS still requires a fast CPU even today. Same with Dota 2: if you want to maintain a high locked framerate, the CPU is going to matter a lot.

If AMD showed harder-to-run games like BF1, you all would be asking why they showed such a GPU-bound game....

Esports is a huge community to hit, and it's a smart move. Brand perception is the bigger issue. OCN is a niche of a niche; we know what to look for already - we know what a CPU bottleneck vs a GPU bottleneck is (well, most of us do hopefully), etc.

Getting AMD's name into the public eye at all is huge. When was the last time you saw an AMD commercial on TV? I see Intel ones often enough.


----------



## dmasteR

Quote:


> Originally Posted by *PostalTwinkie*
> 
> The big one there is the word 'fan'. The NFL figured out how to get grown men to buy a plastic cheese hat and to dress up....


I own one of those









Anyone else get an email?



Thought I signed up already....


----------



## cssorkinman

Quote:


> Originally Posted by *dmasteR*
> 
> Quote:
> 
> 
> 
> Originally Posted by *PostalTwinkie*
> 
> The big one there is the word 'fan'. The NFL figured out how to get grown men to buy a plastic cheese hat and to dress up....
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I own one of those
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Anyone else get a email?
> 
> 
> 
> Thought I signed up already....

Yup I got the email


----------



## TheLAWNOOB

Mentions Zen but nothing for RX 490?


----------



## EightDee8D

Quote:


> Originally Posted by *TheLAWNOOB*
> 
> Mentions Zen but nothing for RX 490?


Because it's most likely not coming until Q2 2017, which is why they say 2017H1. But those idiots from WCCFTurd want those clicks for food, so they make stuff up anyway.

And this "ENTHUSIAST" community is hungry for any info on Vega, so they forget the history of WCCFTurd and ............ you know, the usual stuff.


----------



## budgetgamer120

I can't wait to see what Zen is capable of. This will tell me whether or not I should sell my 2011-3 stuff early to prepare for Zen.


----------



## Olivon

budgetgamer with a 2011-3 rig waiting for Zen. Yep, pure logic.


----------



## TheLAWNOOB

AMD GPUs work best with Intel CPUs, so that's why AMD and their employees usually use Intel CPUs for maximal GPU performance.


----------



## formula m

Quote:
Originally Posted by *GamerusMaximus* 



> Folding@home, SuperPi, etc. give us a good idea of general performance, and an easy way to compare to chips currently in existence.
> 
> Just because they are not games does not mean they are useless. They are good benchmarks, and an event like this needs consistent, respected benchmarks to show off their new silicon. Hiding behind a game that a toaster can run acceptably stinks of cherry-picking and misdirection, à la Bulldozer's launch.
> 
> If Zen is as good as AMD says it is, there is no reason not to allow some hard benchmark numbers to be shown. Not saying that showing games is bad by any means, but they need to show more than just esports games.


The point is, Zen is for gaming. And the vast majority of us are looking forward to what that might bring for future titles. So we are not choosing based on EXTREME use, like folding, but planning for an uber gaming rig.

IMO, Zen does not have to meet or exceed anything Intel has currently to be a success. Zen only has to offer us what we want (ie: 6/8 cores @ 4GHz), on a fresh platform that encompasses the latest tech for gamers and is price conscious. win/win/win

Those who do EXTREME stuff, like folding, bit mining, lightbending, etc., always have the option of $1k+ chips anyway. But I think AMD might see AMD fans leverage the server market to build some monster consumer rigs for people who do extreme stuff & ultra game.

I believe what makes Zen enticing is that it is on a new AMD platform. AM4 motherboards & chips may compete directly with Intel in every way, especially in the areas that count the most with consumers.

Don't know about any of you, but 75W gamer chips don't sound fun.


----------



## GamerusMaximus

Quote:


> Originally Posted by *formula m*
> 
> The point is, Zen is for gaming. And the vast majority of us are looking forward to what that might bring for future titles. So we are not choosing, based on EXTREME use, like folding, but planning for an uber gaming rig.
> 
> IMO, Zen does not have to meet or exceed anything Intel has currently, to be a success. Zen only has to offer us what we want (ie: 6/8 core @ 4Ghz). On a fresh platform that encompasses the latest tech for gamers and is price conscious. win/win/win
> 
> Those who do EXTREME stuff, like folding, bit mining, lightbending, etc.. always have the option of $1k+ chips anyways. But I think AMD might see AMD fans leverage the Server market to adopt some monster consumer rigs for people who do extreme stuff & ultra game.
> 
> I believe what makes Zen enticing is that it is on a new AMD platform. AM4 motherboards & chips may compete directly with Intel in everyway. Specially in areas where it counts the most with consumers.
> 
> Don't know about any of You, but 75W gamer chips don't sound fun.


75 watt gamer chips don't sound fun? You mean like i7s? Those are pretty fun, and the Ivys were 77 watt. The 65 watt i5s are more than enough for every game on the market ATM. You don't need 125 watt TDPs to have a good chip.

Also, what is this about folding being an EXTREME use case? Uber gaming isn't, but folding is? Where do you draw the line? And what does that have to do with benchmarks not being a legitimate way to gauge performance?

Zen is not "for gaming". Zen is the basis for everything AMD is building, from the consumer Zen chips that most gamers interested in AMD will buy, to APUs for all-in-one machines and small desktops, to mobile APUs, to the Opteron server chip lineup. So knowing how these chips perform in tasks other than gaming is, in fact, just as important as knowing their gaming performance.


----------



## amd-dude

PLEASE AMD prove a lot of these youngins wrong and make my name great again.


----------



## budgetgamer120

Quote:


> Originally Posted by *amd-dude*
> 
> PLEASE AMD prove a lot of these youngins wrong and make my name great again.


Lol make your name great again


----------



## mihai21ro

Will my GA-970A-UD3P support AMD Zen? And does anyone know the approximate price of the new CPUs?


----------



## EightDee8D

Quote:


> Originally Posted by *mihai21ro*
> 
> Will my GA-970A-UD3P support AMD Zen? And anyone knows the approximate price of the new cpu's?


No, you will need a new AM4 mobo and DDR4 sticks. It will be priced higher than the current 8-core FX for sure.


----------



## SoloCamo

Quote:


> Originally Posted by *mihai21ro*
> 
> Will my GA-970A-UD3P support AMD Zen? And anyone knows the approximate price of the new cpu's?


No, they aren't going to release this CPU on a chipset released in 2011. AMD is good about compatibility, but that's asking a bit too much.


----------



## JakdMan

Quote:


> Originally Posted by *formula m*
> 
> *The point is, Zen is for gaming*. And the vast majority of us are looking forward to what that might bring for future titles. So we are not choosing, based on EXTREME use, like folding, but planning for an uber gaming rig.
> 
> IMO, Zen does not have to meet or exceed anything Intel has currently, to be a success. Zen only has to offer us what we want (ie: 6/8 core @ 4Ghz). On a fresh platform that encompasses the latest tech for gamers and is price conscious. win/win/win
> 
> Those who do EXTREME stuff, like folding, bit mining, lightbending, etc.. always have the option of $1k+ chips anyways. But I think AMD might see AMD fans leverage the Server market to adopt some monster consumer rigs for people who do extreme stuff & ultra game.
> 
> I believe what makes Zen enticing is that it is on a new AMD platform. AM4 motherboards & chips may compete directly with Intel in everyway. Specially in areas where it counts the most with consumers.
> 
> Don't know about any of You, but 75W gamer chips don't sound fun.


The _grotesque_ short-sighted nonsense..................

I understand you are part of the 99.999999999999999999999999999999999999999999999999999999% of OCN users only interested in what hardware you can get your hands on for the primary purpose of gaming, but I can assure you that these _CPUs_ are meant for more than that, and we will all be better off if AMD can bring the heavy artillery with some multi-core beast CPUs.

Come on AMD! Show Intel what for


----------



## Aussiejuggalo

Does anyone else find it a little annoying that we haven't seen any motherboards yet?

Intel Z270 motherboards have been shown, but we haven't seen diddly-squat from AMD. I hope this preview shows motherboards too









----------



## SuperZan

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> Does anyone else find it a little annoying that we haven't seen any motherboards yet?
> 
> Intel Z270 motherboards have been shown but we haven't seen diddly-squat from AMD, hope this preview shows motherboards to
> 
> 
> 
> 
> 
> 
> 


I'd love to see some boards but on the bright side of things it's the polar opposite of the BD launch yet again. Hopefully we'll see them soon. I'd wager we'll know more about incoming boards as soon as we get the confirmed SKU range from AMD.


----------



## Aussiejuggalo

Yeah true.

AMD has kept pretty much everything concerning Zen quiet, so I suppose I shouldn't be surprised there's nothing on boards yet. I just hope we see some decent mATX X370 boards on launch day, because if Zen proves good at this preview and we have decent motherboards at launch, I think I'll be jumping ship to get that 8c/16t beast









----------



## mihai21ro

Quote:


> Originally Posted by *EightDee8D*
> 
> No, you will need new am4 mobo and ddr4 sticks. it will be priced more than current 8 core FX for sure.


Damn. Thanks.
Quote:


> Originally Posted by *SoloCamo*
> 
> No, they aren't going to release this cpu on a chipset released in 2011. AMD is good about compatibility but that's asking a bit too much.


You're right. I also heard they're releasing their cheaper CPUs in March, so that was a bit disappointing.


----------



## warr10r

I hope we'll see some physical motherboards and the finished silicon that we will buy in 2017 in this livestream preview. Also a reveal of each SKU with prices would be nice.
I need to start planning my Zen build


----------



## ZealotKi11er

I'm hoping for ITX motherboards for Zen. Got a PSU/case/HDD/SSD ready for a Zen + Vega build if they deliver.


----------



## formula m

Quote:


> Originally Posted by *GamerusMaximus*
> 
> 75 watt gamer chips don't sound fun? You mean like i7's? Those are pretty fun, and the Ivys were 77 watt. The 65 watt i5s are more than enough for every game on the market ATM. You don't need 125 watt TDPs to have a good chip.
> 
> Also, what is this about folding being an EXTREME use case? Uber gaming isn't, but folding is? Where do you draw the line? And what does that have to do with benchmarks not being a legitimate way to gauge performance?
> 
> Zen is not "for gaming". Zen is the basis for everything AMD is building, from the consumer Zen chips that most gamers interested in AMD will buy, to APUs for all-in-one machines and small desktops, to mobile APUs, to the Opteron server chip lineup. So knowing how these chips perform in tasks other than gaming is, in fact, just as important as knowing their gaming performance.


Your points don't rebut mine; they further verify them. You are taking a broad scope of ideas/points and interjecting them into the conversation, outside the scope of my comments. Post count?

*ZEN is aimed at mainstream acceptance.* Its uarch can be compartmentalized and pushed out to 32- or 64-core variants.. but again, none of that matters to the mainstream, nor to my comments. That is why I pointed out that EXTREME people are already being served by EXTREME measures. Zen's main goal is not to replace those EXTREME situations, where EXTREME techtards play. Those are niche markets, not mainstream.

Understand now..? Zen is NOT meant to compete with i7 EXTREME systems. Summit Ridge (Zen) is for gamers and "enthusiasts". That's the context of my response to him, which you quoted and replied to. I expect Zen to push Battlefield with less effort than my Devil's Canyon. I'll know first hand, because I will also have a fresh Kaby system too. I can compare 3 rigs all playing the same map, at the same time.

I think you're jumping the gun. Your points are not in contention with my posts, only the fact that you keep claiming I am saying "benchmarks are not a legitimate way to gauge performance". Performance and gaming performance are two different things. My posts are specifically about gaming performance.

So, GAMING is the best way to gauge gaming performance. Gamers do not need to know mathematically what their system benches, or the theoretical peaks it reaches... gamers only need to play with the hardware to feel/know if there are sustained hesitations, lows, stutters, hitches, or bobbles... based on how the SYSTEM works. (ie: suitable for their needs)

Ironically, my point about the 75w CPU went over your head. No gamer cares if his CPU uses 40 watts more or less than his neighbor's, or his friend's. The ***** is plugged into the wall; there are no batteries to worry about. That is mainstream gamer mentality.

I hope Zen uses energy... because I get more performance. In the end, the majority of the populace who look to "benches" do so to know if that "piece of kit" is worth owning/gaming on. But what if instead, they are given a forum in which to game on it for hours, before buying..?

Not a bad way to get mainstream acceptance and brand recognition (marketing) prior to the release of the chip. And frankly, nobody benchmarks power usage when gaming... it is a laughable metric. (ie: nobody cares, evAr)

Lastly, I DO agree with you that if a person is a bencher, folder, or general extreme techtard, then gaming smoothness isn't a measure of performance for them. There, static numbers & highest peaks are more a measurement of ePeen & money.


----------



## JakdMan

Quote:


> Originally Posted by *formula m*
> 
> *-SNIP-*


You realize Broadwell-E isn't exactly mainstream, right -_- You know, the chip they put the engineering sample against in the Blender render.......

If anything, it looks to me like they're very much going for every segment.


----------



## Aussiejuggalo

So TweakTown released a thing saying we'll see X370 boards at the preview... do we think it's true? (I haven't found anyone else saying we will).


----------



## Assimilator87

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> Does anyone else find it a little annoying that we haven't seen any motherboards yet?
> 
> Intel Z270 motherboards have been shown but we haven't seen diddly-squat from AMD; hope this preview shows motherboards too.


I'm assuming you meant "retail motherboards" when you said "any motherboards." Bristol Ridge officially launched on Sept. 5, with them appearing in desktops, mostly from HP, soon after. You can get your hands on an AM4 mobo right now if you really want to.


----------



## 7850K

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> So TweakTown released a thing saying we'll see X370 boards at the preview... do we think it's true? (I haven't found anyone else saying we will).


Here's a review of an A12-9800(Bristol Ridge) on an ASUS A320M-C
https://translate.google.com/translate?hl=en&sl=de&tl=en&u=http%3A%2F%2Fwww.planet3dnow.de%2Fvbulletin%2Fshowthread.php%3Ft%3D426871


----------



## Redwoodz

Quote:


> Originally Posted by *JakdMan*
> 
> You realize Broadwell-E isn't exactly mainstream, right -_- You know, the chip they put the engineering sample against in the Blender render.......
> 
> If anything, it looks to me like they're very much going for every segment.


It will be mainstream after the Zen launch. Intel will lower prices, but will release a new SKU for the "extreme" title. You have to realise Intel will have a response to Zen.


----------



## mumford

Does anyone know how many PCI-E lanes Zen has? The reason I ask is that I need at least 24 (16 for the GPU and 8 for a 10G NIC).


----------



## ZealotKi11er

Quote:


> Originally Posted by *mumford*
> 
> Does anyone know how many PCI-E lanes Zen has? The reason I ask is that I need at least 24 (16 for the GPU and 8 for a 10G NIC).


I would assume at least 32 Lanes.


----------



## Blameless

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I would assume at least 32 Lanes.


I'm betting fewer CPU attached lanes.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *Assimilator87*
> 
> I'm assuming you meant "retail motherboards" when you said "any motherboards." Bristol Ridge officially launched on Sept. 5, with them appearing in desktops, mostly from HP, soon after. You can get your hands on an AM4 mobo right now if you really want to.


Quote:


> Originally Posted by *7850K*
> 
> Here's a review of an A12-9800(Bristol Ridge) on an ASUS A320M-C
> https://translate.google.com/translate?hl=en&sl=de&tl=en&u=http%3A%2F%2Fwww.planet3dnow.de%2Fvbulletin%2Fshowthread.php%3Ft%3D426871


Retail Summit Ridge; couldn't give a rat's about Bristol Ridge.
Quote:


> Originally Posted by *mumford*
> 
> Does anyone know how many PCI-E lanes Zen has? The reason I ask is that I need at least 24 (16 for the GPU and 8 for a 10G NIC).


Quote:


> Originally Posted by *ZealotKi11er*
> 
> I would assume at least 32 Lanes.


If you believe Wiki, it'll have 24 lanes; some other rumours put it at 36.

Considering X370 is for CF/SLI, I'd guess something along the lines of 24 on-chip with the rest using PLX-type chips on the board? Unless the PCI-E lanes on the chip are locked for all boards under X370, but that would be stupid.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Blameless*
> 
> I'm betting fewer CPU attached lanes.


Previously AMD had them in the NB: 16x2 for the 990FX. I am sure they want 32 again for Zen.


----------



## Blameless

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Before for AMD it was in NB 16x2 for 990FX. I am sure they want 32 again for Zen.


I'm sure they want as many as is practicable.

I'm less sure they have the pins/traces to spare when ~1% of AM4 systems will be doing anything that would benefit from the increased I/O, and only a fraction of those will be doing so in a way where the addition of a PCI-E switch wouldn't be suitable.

Every PCI-E/UMI lane needs at least six CPU pins/traces. 1331 might be enough for 36 total lanes (32 PCI-E to the CPU and four as UMI to the chipset) and everything else that's required, or it might not. For them to dedicate ~220 pins to PCI-E I/O and simultaneously have more miscellaneous I/O than LGA-115x they'd likely have to have fewer VCC/VSS contacts, which doesn't seem likely (AM4 has tiny pins, which are already less efficient than LGA lands in this regard, and Zen likely has comparable if not greater power needs).
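For anyone wanting to sanity-check that pin arithmetic, here's a quick back-of-envelope script. The ~6 pins per lane figure is the rough estimate from the post above, not an official spec number:

```python
# Rough pin-budget check for PCI-E lanes on AM4, using the post's
# estimate of ~6 CPU pins/traces per PCI-E/UMI lane (unofficial figure).
PINS_PER_LANE = 6
AM4_TOTAL_PINS = 1331

def pcie_pin_cost(lanes: int) -> int:
    """Approximate CPU pins consumed by a given number of lanes."""
    return lanes * PINS_PER_LANE

for lanes in (16, 24, 36):
    cost = pcie_pin_cost(lanes)
    print(f"{lanes:2d} lanes -> ~{cost:3d} pins "
          f"({cost / AM4_TOTAL_PINS:.0%} of AM4's {AM4_TOTAL_PINS})")
```

36 lanes works out to ~216 pins, which is where the "~220 pins" figure in the post comes from.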

In addition, AM4 is a mainstream socket. The only reason Intel's HEDT sockets/platforms have the I/O they do is because they are also used for enterprise tasks that can really utilize the I/O they provide. 16 lanes to the CPU and four to the DMI is enough for Intel's mainstream platform, and will likely continue to be.

Again, I don't think it's necessarily impossible for the higher lane count on AM4, but I do think it more likely that they will reserve that sort of I/O for the SP3 platform.


----------



## lolerk52

Quote:


> Originally Posted by *Blameless*
> 
> I'm sure they want as many as is practicable.
> 
> I'm less sure they have the pins/traces to spare when ~1% of AM4 systems will be doing anything that would benefit from the increased I/O and only a fraction of those will be doing so in a way were the addition of a PCI-E switch wouldn't be suitable.
> 
> Every PCI-E/UMI lane needs at least six CPU pins/traces. 1331 might be enough for 36 total lanes (32 PCI-E to the CPU and four as UMI to the chipset) and everything else that's required, or it might not. For them to dedicate ~220 pins to PCI-E I/O and simultaneously have more miscellaneous I/O than LGA-115x they'd likely have to have fewer VCC/VSS contacts, which doesn't seem likely (AM4 has tiny pins, which are already less efficient than LGA lands in this regard, and Zen likely has comparable if not greater power needs).
> 
> In addition, AM4 is a mainstream socket. The only reason Intel's HEDT sockets/platforms have the I/O they do is because they are also used for enterprise tasks that can really utilize the I/O they provide. 16 lanes to the CPU and four to the DMI is enough for Intel's mainstream platform, and will likely continue to be.
> 
> Again, I don't think it's necessarily impossible for the higher lane count on AM4, but I do think it more likely that they will reserve that sort of I/O for the SP3 platform.


They have fewer pins, but they're also limited to dual channel on AM4, as opposed to Intel's quad channel on HEDT.
That's a LOT of pins saved.


----------



## Blameless

Quote:


> Originally Posted by *lolerk52*
> 
> They have less pins, but they're also limited to dual channel on AM4, as opposed to Intel's Quad on HEDT.
> That's a LOT of pins saved.


And that's not even the half of it: LGA-2011 takes into account 44 PCI-E lanes (four of those for DMI), two QPIs (to connect to other CPUs in up to a 4P setup), and power/ground lands sufficient for 160W TDP parts.

I'm still not convinced 1331 is enough for all the I/O AM4 seems to have plus 36 lanes (32 direct, 4 to the chipset) of PCI-E, or the need for so many lanes on a mainstream platform.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Blameless*
> 
> And that's not even the half of it, LGA-2011 takes into account 44 PCI-E lanes (four of those for DMI), two QPIs (to connect to other CPUs in upto a 4P setup), and power/ground lands sufficient for 160w TDP parts.
> 
> I'm still not convinced 1331 is enough for all the I/O AM4 seems to have plus 36 lanes (32 direct, 4 to the chipset) of PCI-E, or the need for so many lanes on a mainstream platform.


Well, it's more than Intel's mainstream sockets and the same as 1366.


----------



## AlphaC

Quote:


> Originally Posted by *Blameless*
> 
> And that's not even the half of it, LGA-2011 takes into account 44 PCI-E lanes (four of those for DMI), two QPIs (to connect to other CPUs in upto a 4P setup), and power/ground lands sufficient for 160w TDP parts.
> 
> I'm still not convinced 1331 is enough for all the I/O AM4 seems to have plus 36 lanes (32 direct, 4 to the chipset) of PCI-E, or the need for so many lanes on a mainstream platform.


Isn't AM4 95W?

Anyway, I think this is a "wait and see real-world performance" thing.


----------



## kd5151

Sassafras


----------



## Blameless

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Well it is more than Intel mainstream and same as 1366.


LGA-1366 isn't comparable to AM4 because LGA-1366 never had any PCI-E lanes attached to the CPU.

Everything in or out of an LGA-1366 processor went via the memory or the QPI. On single socket systems, QPI connected to the IOH which was mostly a QPI to 40 lane PCI-E bridge.

Interconnects like QPI and Hypertransport need far fewer pins for the same bandwidth compared to things like PCI-E.
Quote:


> Originally Posted by *AlphaC*
> 
> Isn't AM4 95W?


As far as I know.

Mainstream Intel parts are in the same ballpark, but they are in an LGA socket and for similar pin densities LGA's electrical properties are superior.

How many VCC/VSS pins are actually needed for a given current draw is something I'd have to research, but despite most sockets having a large degree of headroom worked in (probably ~3x what TDP would suggest on most platforms), on more than one occasion I have seen the socket be the point of failure.


----------



## epic1337

Why wouldn't 24+32 (X370 chipset) lanes be enough? That's 56 lanes in total, you know.

Plus, it's not like AM3+ had anything wrong with its PCI-E lanes, despite having zero on-die PCI-E lanes itself.
All of AM3+'s PCI-E connections go through the chipset's HyperTransport interconnect, which is limited to 6.4GT/s.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> why wouldn't 24+32 lanes be enough?


It would be plenty, for most uses.
Quote:


> Originally Posted by *epic1337*
> 
> that's 56 lanes in total, you know.


Can't (or at least shouldn't) use those chipset lanes for highly bandwidth- or latency-sensitive stuff.
Quote:


> Originally Posted by *epic1337*
> 
> plus its not like AM3+ had anything wrong with their PCI-E lanes, despite having zero on-die PCI-E lanes themselves.
> all of AM3+ PCI-E connections goes through its chipset's HyperTransport interconnect, which is limited to 6.4GT/s.


Well, they did have the added complexity, space, and power/cooling requirements of a second chip on the board.

6.4GT/s is 12.8GB/s, each direction, on a 16-bit HTT link, which was plenty. Most people in the consumer segment who are going to find value in many PCI-E lanes will do so via multi-GPU setups, and multi-GPU setups need a lot of device-to-device bandwidth and low device-to-device latency... the connection to the CPU is much less important because that traffic is much lighter.
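The HyperTransport figure is easy to verify (a minimal check, treating the link as 16 bits wide per direction at 6.4 GT/s, with each transfer moving one bit per wire pair):

```python
# HyperTransport bandwidth check: a 16-bit HTT link at 6.4 GT/s,
# per direction. Each transfer moves 16 bits across the link.
TRANSFERS_PER_S = 6.4e9
LINK_WIDTH_BITS = 16

bytes_per_s = TRANSFERS_PER_S * LINK_WIDTH_BITS / 8
print(f"{bytes_per_s / 1e9:.1f} GB/s per direction")  # -> 12.8 GB/s per direction
```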

This is part of why I think 24 CPU attached lanes is plenty generous and that ultra high-end multi-GPU oriented boards will simply include a switch/bridge chip.


----------



## epic1337

That's what I think as well: 24 CPU lanes is plenty for a couple of NVMe SSDs and a dGPU without factoring in the chipset's lanes.
And even when CF/SLI is factored in, the X370 chipset's 32 lanes are plenty for a full 16+16 setup.

From what I can tell, though, the probable reason they added that many CPU lanes is to allow the lower-end chipsets to scale smaller.
e.g. Zen + B350 would have 24+6 lanes in total, which would still be sufficient for most common setups while being much cheaper in general.


----------



## JakdMan

Stop! Stop! Enough is too much!!!!

That's literally all I'm hearing here. As if it weren't enough that Intel/AMD are already stagnating somewhat, we can expect more stagnation because we aren't demanding much. What we have is apparently "enough" or "plenty".

More goodies to the front, I say, that I can play with.


----------



## epic1337

*shrugs* Rather than getting dramatically faster, I'd like them to get dramatically cheaper instead.
Technically speaking, until all of our software gets more demanding it isn't "worth it" to get a more powerful rig, and no, I'm not talking about games.


----------



## Diffident

Quote:


> Originally Posted by *formula m*
> 
> Don't know anyone who plays Folding@home, SuperPi, etc.
> But I do know people who play eSports, and they do not use laptops. They have $500 CPUs & $800 video cards.
> 
> Oddly, you can run Folding@home, SuperPi, etc. on a laptop... if watching numbers crunch is your thing.


Folding@home, along with the BOINC projects, isn't watching numbers crunch; it's science and curing diseases. I currently have 44 threads running 24/7 helping to cure cancer and the Zika virus. With that, power consumption is a major concern.


----------



## formula m

Quote:


> Originally Posted by *Diffident*
> 
> Folding@home, along with the BOINC projects, isn't watching numbers crunch; it's science and curing diseases. I currently have 44 threads running 24/7 helping to cure cancer and the Zika virus. With that, power consumption is a major concern.


Perhaps you missed some of the sarcasm in my reply to him?

Actually, I have several "mission" rigs too. But again, we are only talking about Summit Ridge, which is NOT meant to compete with i7 EXTREME systems (X299). Summit Ridge (Zen) is for gamers and "enthusiasts". That's the context of my response to him.

I explain it further in the thread.


----------



## JakdMan

^^ I'm sorry, but so far I'm still only seeing your own internalized belief and logic, as opposed to "facts", that "Zen is for gamers".

More like *"I am personally looking to Zen for the purpose of a new gaming rig and want to see and spout it as nothing more than that, even though AMD THEMSELVES are very much demonstrating it, in small droplets mind you, as an all-around beast"*

With all their talk of creating scalable hardware for the future I think it's quite blatant, whether Intel has something else coming or not. And let's be honest, outside of chipset features it really doesn't look like Intel is remotely bringing any big gunz to the battle, due to their own issues.

ZEN IS NOT MEANT FOR JUST MAINSTREAM AND GAMERS

If you're on AMD's engineering team, you need to spill the actual beans. Otherwise, don't confuse your combobulation with their aims.


----------



## Silent Scone

Quote:


> Originally Posted by *Lex Luger*
> 
> People will stop talking about Bulldozer when AMD releases a new desktop CPU that isn't hot garbage like their current lineup. Everything I've seen so far leads me to believe that Zen will be a success, especially if they are selling an unlocked 8 core processor at 350 dollars.


Doesn't that scream exactly the opposite? lol


----------



## kd5151

Less than 24 hours to go!


----------



## kd5151

Quote:


> Originally Posted by *oxidized*
> 
> HYPE


----------



## formula m

Quote:


> Originally Posted by *JakdMan*
> 
> ^^ I'm sorry but so far I'm still only seeing your own internalized belief and logic as opposed to "facts" that "Zen is for gamers"
> 
> More like * "I am personally looking to Zen for the purpose of a new gaming rig and want to see and spout it as nothing more than that even though AMD THEMSELVES are very much so demonstrating ,in small droplets mind you, as being an all around beast" *
> 
> With all their talk of creating scaleable hardware for the future I think it's quite blatant, whether Intel has something else coming or not, and lets be honest, outside of chipset features it really doesn't look like Intel are remotely bringing any big gunz to the battle due to their own issues.
> 
> ZEN IS NOT MEANT FOR JUST MAINSTREAM AND GAMERS
> 
> If you're on AMDs engineering team you need to spill the actual beans. Otherwise don't confuse your combobulation with their aims


Sorry if you are misunderstanding. In the context of this thread, Zen *is* for gaming. And I have qualified that statement several times, saying we are discussing Summit Ridge.

Or just go to AMD's web site. I quoted them. (The combobulations are in your head..)

Ironically, and counter to your tirade, AMD has chosen to give the first look at Zen tomorrow at an eSports event..! (Nobody here is talking about 12- or 24-core Zen chips.. those are on the way.)


----------



## AmericanLoco

For those questioning PCI-E lanes vs pin count:

1 PCI-E "lane" only requires 4 connections: one differential receive pair and one differential transmit pair. Each _slot_ requires a few other supporting lines, like hot-plug detect, etc... but I'm unsure if those need to go to the CPU, or if they can go to the system instead. The clock lines are unique per slot and require two pins to the CPU as well.

So at the absolute minimum, for Zen to support 40 lanes of PCI-E, it would need 160 pins at the CPU + 2 pins/slot for clock information.

FM2 only has 906 pins, but supports dual channel memory and has 20 PCI-E lanes. AM4 bumps that up to 1331 pins, so I'm sure they can squeeze a few more lanes in there.
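Putting those figures into a tiny helper (the three-slot x16/x16/x8 split in the example is hypothetical, just to show the per-slot clock cost):

```python
# Minimum CPU pin count for PCI-E using the figures from the post:
# 4 pins per lane (one TX differential pair + one RX differential pair)
# plus 2 reference-clock pins per slot.
PINS_PER_LANE = 4
CLOCK_PINS_PER_SLOT = 2

def min_pcie_pins(lanes: int, slots: int) -> int:
    """Absolute-minimum CPU pins for the given lane and slot counts."""
    return lanes * PINS_PER_LANE + slots * CLOCK_PINS_PER_SLOT

# Hypothetical: 40 lanes split across three slots (e.g. x16/x16/x8).
print(min_pcie_pins(40, 3))  # -> 166
```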


----------



## Tobiman

http://videocardz.com/64741/ryzen Hmmm...


----------



## kd5151

Quote:


> Originally Posted by *Tobiman*
> 
> http://videocardz.com/64741/ryzen Hmmm...


----------



## JakdMan

Quote:


> Originally Posted by *formula m*
> 
> Sorry if you are misunderstanding. In the context of this thread... Zen *is* (for gaming). And I have qualified that statement several times, saying we are discussing Summit Ridge.
> 
> Or just go to AMD's web site. I quoted them. (The combobulations are in your head..)
> 
> Ironically, and counter to your tirade, AMD has chosen to give the first look of Zen, tomorrow at a eSport Event..! (Nobody here is talking about 12, 24 core Zen chips.. those are on the way.)


For this thread, sure. Overall, not even (hence my interest [not that I'm alone, but not to muddy things any further]).
Quote:


> Originally Posted by *Tobiman*
> 
> http://videocardz.com/64741/ryzen Hmmm...


Some of tomorrow's goodies I'm guessing


----------



## lolerk52

Anyone with a deeper understanding of Intel's CPUs know if Intel has something like this? Googling "Intel prefetch" didn't give me conclusive results.


----------



## Wishmaker

Quote:


> Lastly, we have Smart Prefetch which is the new Prefetcher. This isn't that big of a development as prefetchers have always been a focus of improvement in every new CPU. *Intel does have the industry lead on prefetching so it will be interesting to see what AMD has done.* Judging from the incoherence between the slides, I am more than certain that more will come, likely in just a few hours at New Horizons so stay tuned!


They should explain that it will need to be something completely superior to beat Intel.


----------



## TheBloodEagle

Quote:


> Originally Posted by *epic1337*
> 
> *shrugs* rather than getting dramatically faster, i'd like for them to get dramatically cheaper instead.
> technically speaking, until all of our softwares gets more demanding it isn't "worth it" to get more powerful rigs, and yes i'm not talking about games.


Are you building a new computer monthly?


----------



## ladcrooks

Let's moan about hype, let's moan about no hype, let's just moan - 'cos a lot of you like doing it.

I just hope AMD can shut some of you up.

No one forces you to buy their products.


----------



## sepiashimmer

Quote:


> Originally Posted by *epic1337*
> 
> *shrugs* rather than getting dramatically faster, i'd like for them to get dramatically cheaper instead.
> technically speaking, until all of our softwares gets more demanding it isn't "worth it" to get more powerful rigs, and yes i'm not talking about games.


Most of the games released these days are unoptimized and buggy, and we have to brute force them with more processing power to get them to run at a decent frame rate and quality.

Cheaper and faster are not mutually exclusive.


----------



## PostalTwinkie

Quote:


> Originally Posted by *sepiashimmer*
> 
> Most of the games released these days are unoptimized and buggy, and we have to brute force them with more processing power to get them to run at a decent frame rate and quality.


It really is getting disgusting. The last time we had this much crap in the industry was just before the crash in the 80s.


----------



## ToTheSun!

Quote:


> Originally Posted by *ladcrooks*
> 
> lets moan about hype, lets moan about no hype, lets just moan - cos a lot of you like doing it


Let's moan about moaning!


----------



## umeng2002

Quote:


> Originally Posted by *PostalTwinkie*
> 
> It really is getting disgusting. The last time we had this much crap in the industry was just before the crash in the 80s.


I don't think the PC gaming market has ever crashed.


----------



## ladcrooks

Quote:


> Originally Posted by *ToTheSun!*
> 
> Let's moan about moaning!


I'm getting like Victor Meldrew every day: moan about work, traffic, life in general. But I am sure I do not moan about things I have not bought, or things I have bought after reading the reviews.


----------



## SystemTech

Just over an hour to go.
So who's going to be watching?
I might be on the bus, but what a perfect time to watch


----------



## kd5151

I'm ready to watch.


----------



## sepiashimmer

Quote:


> Originally Posted by *SystemTech*
> 
> Just over an hour to go.
> So who's going to be watching?
> I might be on the bus, but what a perfect time to watch


I'm hoping to watch it, mainly to win some prizes; it starts at 2:30 AM here, clearly disrupting my sleep cycle.


----------



## cechk01

what prizes?


----------



## sepiashimmer

Quote:


> Originally Posted by *cechk01*
> 
> what prizes?


Don't know, they said there will be giveaways.


----------



## epic1337

Quote:


> Originally Posted by *sepiashimmer*
> 
> Most of the games released these days are unoptimized and buggy, and we have to brute force them with more processing power to get them to run at a decent frame rate and quality.
> 
> Cheaper and faster are not mutually exclusive.


You do realize I wasn't talking about games, right? Right?

Quote:


> Originally Posted by *TheBloodEagle*
> 
> Are you building a new computer monthly?


No, but I wouldn't pass up a bargain-deal rig though.


----------



## sepiashimmer

Quote:


> Originally Posted by *cechk01*
> 
> what prizes?


From their email with the live link:
Quote:


> Plus, we're running a contest for everyone watching. Show us how you're getting ready for "Zen" with photos, gifs, or videos. Our favorites will win swag from the event.


----------



## kd5151

less than 15 minutes to go.


----------



## cechk01

The stream is now up: https://www.youtube.com/watch?v=4DEfj2MRLtA


----------



## dmasteR

http://tieba.baidu.com/p/4898582118?pn=1

Someone got their hands on a Gigabyte board!


----------



## Aussiejuggalo

How long is this stream gonna be, anyone know?


----------



## kd5151

Quote:


> Originally Posted by *dmasteR*
> 
> http://tieba.baidu.com/p/4898582118?pn=1
> 
> Someone got their hands on a Gigabyte board!


sweet!


----------



## dmasteR

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> How long these stream gonna be, anyone know?


Expect an hour to an hour and a half, I would assume.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *dmasteR*
> 
> Expect hour - hour half I would assume.


Cool, probably gonna be crap talking for at least the first 20 mins or so, won't it?


----------



## cechk01

Quote:


> Originally Posted by *dmasteR*
> 
> http://tieba.baidu.com/p/4898582118?pn=1
> 
> Someone got their hands on a Gigabyte board!


Someone in China just lost their job.


----------



## keikei

I just want to see a release date and $.


----------



## doza

give some vega info please, i'm dying here


----------



## lombardsoup

lol this stream chat. "RIP INTEL"

I want competition, not a new monopoly


----------



## LancerVI

Quote:


> Originally Posted by *lombardsoup*
> 
> lol this stream chat. "RIP INTEL"
> 
> I want competition, not a new monopoly


LOL......

.....and what fancy techno music they're playing. Very EVE Online of them.


----------



## JakdMan

OMG 44 seconds and counting


----------



## doza

at least they got a spam robot in chat on youtube


----------



## kd5151

be back after the show


----------



## cechk01

IT'S HAPPENING!!!!


----------



## Lipos

Quote:


> Unfortunately that 6900K result screen was out of reach for me to video, so I just pointed my camera at the stuff that mattered, the Summit Ridge results. *Now here's the shocking news: the RYZEN engineering sample (Summit Ridge 8-core / 16-thread) processor did not have any turbos enabled just yet. So the product was running its cores at the default 3.4 GHz.*


http://www.guru3d.com/articles-pages/editorial-amd-zen-is-now-ryzen-processor,1.html


----------



## LancerVI

Yeah!!!! Thunderbird!!


----------



## JakdMan

History lesson!

And she read up on what we've said over time. Oh dear.


----------



## keikei

Imagine PC gamers first.


----------



## lombardsoup

Quote:


> Originally Posted by *keikei*
> 
> Imagine PC gamers first.


Long, silky neckbeards astride leather chairs of the gods

...lol this presentation


----------



## Aussiejuggalo

Lol "who here has an RX 480?" *cheers from the back*, me: "ah, the cheap seats".


----------



## Shau76434

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> Lol "who here has an RX 480?" *cheers from the back*, me "ah the cheap seats".


Cracked me up a bit when she said they managed to sell the RX 480 for under $200.


----------



## axiumone

OK, it's Ryzen; so far the leaks are on point. Pronounced Rye-zen.


----------



## JakdMan

It's official: RYZEN is its name.


----------



## Evil Penguin

Ah, so it is pronounced Ry-zen. Good.


----------



## Particle

Lisa pronounces it ry-zen instead of ri-zen. Thank goodness.


----------



## AlphaC

Quote:


> Originally Posted by *Barca130*
> 
> Cracked me up a bit when she said they managed to sell the RX 480 for under $200.


The 4GB is under $200; I don't know what you're going on about.

More on topic, they said they are beating their +40% IPC goal.

3.4GHz or higher base clocks; boost frequencies not disclosed.

Neural network prediction
Smart Prefetch
SenseMI

100s of sensors - Pure Power, Precision Boost = millisecond response, higher frequencies and lower power
* Extended Frequency Range "for gamers" - air/liquid/LN2 cooling; senses premium cooling = higher clocks when running cooler

---
*demo*
3.4GHz no boost vs the 8-core/16-thread i7-6900K @ stock 3.2GHz base, 3.7GHz boost
* Blender render workload --- matches the $1100 CPU
* 95W TDP vs 140W TDP for the i7
* Handbrake transcode --- 54 seconds vs 59 seconds for the i7

*VR demo* - mixed-reality building of a Ryzen PC

*4K demo* - Battlefield 1
Ryzen @ 3.4GHz w/ Pascal Titan X vs i7-6900K .... 60-70fps except during explosions; Smart Prefetch makes it a bit faster

*ZBrush* ... no lag with millions of polygons
*KeyShot* CPU render in real time, HDR ... 53 million polygons

*esports* - Twitch-streaming "PPD" playing Dota 2 maxed out
i7-6900K keeps up with Ryzen
i7-6700K @ 4.5GHz is slower than Ryzen --- drops a ton of frames

*product lineup*
desktop - AM4, 20MB L2+L3, 95W TDP
notebook
Q1 2017 launch

*AMD Gaming - 4K*
Ryzen + Vega GPU playing Star Wars Battlefront Rogue One in 4K at >60FPS

*SO MUCH AMD HYPE.*
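For what it's worth, the Handbrake times from the demo translate to roughly a 9% win for Ryzen:

```python
# Relative speed difference from the Handbrake transcode demo:
# Ryzen finished in 54 seconds, the i7-6900K in 59 seconds.
ryzen_seconds = 54
i7_6900k_seconds = 59

speedup = i7_6900k_seconds / ryzen_seconds - 1
print(f"Ryzen finished {speedup:.1%} faster than the i7-6900K")  # -> 9.3% faster
```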


----------



## axiumone

To be released next quarter.


----------



## Aussiejuggalo

She called us... GEEKS!!!!

Then again, it is 7am here and I'm watching a live stream about a CPU.


----------



## Gilles3000

Here's some performance already: https://www.youtube.com/watch?v=7yxSFmEOkrA

Very impressive.


----------



## JakdMan

Where is that clown talking about "just gaming"? Lisa is talking across the board here, from what I hear........


----------



## Shau76434

Quote:


> Originally Posted by *AlphaC*
> 
> The 4GB is under $200 , I don't know what you're going on about
> 
> More on topic they said they are beating their +40% IPC goals.
> 
> 3.4GHz or higher base clocks , boost frequencies not disclosed


They do drop to their actual prices these days, but not at release. No need to get hostile.


----------



## scorch062

It took me a whole day to realise that Ryzen is a combination of the words 'rise' and 'zen'. Before that I was associating it with cyborg stuff. Feel a bit dumb =/


----------



## lombardsoup

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> She called us... GEEKS!!!!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Than again it is 7am here and I'm watching a live stream on a CPU.


NEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEERDS

20MB cache intrigues me.


----------



## nakano2k1

Quote:


> Originally Posted by *lombardsoup*
> 
> NEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEERDS
> 
> 20MB cache intrigues me.


I almost lost it at 20MB of L2+L3 cache


----------



## JakdMan

Blender Yo!









And Handbrake!!


----------



## Butthurt Beluga

Another Blender render... where are the games?


----------



## lombardsoup

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> Another Blender render... where are the games?


"Guys this is a heavy, heavy CPU workload"

...lol


----------



## Shau76434

Hopefully we will get some game benchmarks as well.


----------



## Aussiejuggalo

Looks like when the guy started Blender he clicked one machine slightly earlier; could just be me though.


----------



## GameBoy

Handbrake interests me.


----------



## JakdMan

Quote:


> Originally Posted by *lombardsoup*
> 
> "Guys this is a heavy, heavy CPU workload"
> 
> ...lol


Then pull out your core2 Duo and get to rendering me that 4k Pixar movie NAOW


----------



## nakano2k1

OMG... I need Zen in my life ASAP


----------



## Blameless

Broadwell IPC in Blender and x264.

Will have to double check their demos on my own systems.


----------



## axiumone

So far: Zen@3.4 = 6900K@3.2 in Blender.


----------



## morbid_bean

Friendly reminder....

Let's not get hyped up. Big corporations can say whatever they want on screen. Until consumer reviews come, none of this means squat.


----------



## Ha-Nocri

so Zen's 95W matches Intel's 140W?! That's impressive


----------



## JakdMan

Here's the VR demos
Quote:


> Originally Posted by *morbid_bean*
> 
> Friendly reminder....
> Lets not get hyped up. Big corporations can say what ever they want on screen. Until Consumer reviews come, none of this means squat.


Then why are we even here?................
To you lame one. To you


----------



## keikei

With a name like Ryzen, the memes just create themselves. Impressive results given no boost used yet.


----------



## nakano2k1

Quote:


> Originally Posted by *axiumone*
> 
> So far: Zen@3.4 = 6900K@3.2 in Blender.


Zen is at 3.4 no boost. Intel has boost enabled

Plus intel = 140 watt, AMD = 95W


----------



## Roaches

OLD MEN, WARNING WARNING!


----------



## lombardsoup

Finally, game benchmarks


----------



## Lipos




----------



## LazarusIV

Quote:


> Originally Posted by *axiumone*
> 
> So far: Zen@3.4 = 6900K@3.2 in Blender.


She specifically mentions that Intel's Turbo Boost *is not* disabled... though I wonder how much it would boost at that load...


----------



## hawker-gb

AMD is back.


----------



## utnorris

Quote:


> Originally Posted by *axiumone*
> 
> So far: Zen@3.4 = 6900K@3.2 in Blender.


They did say that BW had a boost of 3.7Ghz.


----------



## Robenger

http://www.anandtech.com/bench/product/1729?vs=1289

To see how old AMD compared.


----------



## Aussiejuggalo

Hahah VR computer building.


----------



## Newwt

DDDDDDDDD GG my wallet


----------



## axiumone

Quote:


> Originally Posted by *utnorris*
> 
> They did say that BW had a boost of 3.7Ghz.


The 3.7GHz boost is the max boost for single-core applications. What do they boost to automatically with ALL 8 cores?


----------



## JakdMan

4K gaming!!!!!!!

I sees screen tearz. *** are those GPUs doing. Both CPUs FAIL XDDDDDDDDDDDDDDDDDDDDDDDDDD


----------



## Butthurt Beluga

Isn't 4K gameplay a bad comparison of CPU performance because it's more GPU intensive? Or am I wrong?
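It is a fair question; a toy model (all millisecond figures invented) shows why a GPU-bound 4K run hides CPU differences:

```python
# Toy model: per-frame time is gated by whichever of CPU or GPU takes longer,
# so at 4K (big GPU cost) two different CPUs can post identical framerates.
# Every millisecond figure below is invented purely for illustration.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

# 4K: GPU dominates, the two CPUs look identical
print(fps(6.0, 14.0), fps(8.0, 14.0))   # both ~71.4 fps
# Low resolution: GPU cost shrinks, the CPU gap appears
print(fps(6.0, 2.0), fps(8.0, 2.0))     # ~166.7 vs 125.0 fps
```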


----------



## Serandur

Announce price --> watch Intel's profit margins disintegrate


----------



## Aussiejuggalo

Shows gameplay, crops out FPS... really?


----------



## DizzlePro

didn't see the fps in the bf1 demo


----------



## lombardsoup

No fps? The heck?


----------



## Gilles3000

Quote:


> Originally Posted by *axiumone*
> 
> The 3.7 boost is max boost for single core applications. What do they boost do automatically with ALL 8 cores?


They still boost when running 8c/16t, but by how much depends on various factors. So it's most likely running higher than 3.2GHz.


----------



## rt123

Quote:


> Originally Posted by *axiumone*
> 
> The 3.7 boost is max boost for single core applications. What do they boost do automatically with ALL 8 cores?


3.4 or 3.5GHz, I think.

Edit: It's 3.5GHz.


----------



## JJBY

facepalm at lack of bf1 fps counter..............


----------



## Newwt

I'm on that hype train if the tests are true


----------



## JakdMan

Keyshot CPU Based rendering. More goodies for me


----------



## prznar1

Dat bf1 bench. XD


----------



## Pro3ootector

Yup, a DX12 game test.


----------



## lombardsoup

Quote:


> Originally Posted by *prznar1*
> 
> Dat bf1 bench. XD


Absolutely comical. Hypes new CPU, doesn't show numbers


----------



## h4rdcor3

my boy PPD


----------



## rick19011

RIP intel.


----------



## keikei

Remember when AMD had bad marketing?! Things have changed.


----------



## hawker-gb

Ryzen 95W beats Intel 140W:thumb:


----------



## fewness

6700k got murdered


----------



## Outcasst

I refuse to believe that the Intel stream is that laggy.


----------



## JakdMan

Quote:


> Originally Posted by *fewness*
> 
> 6700k got murdered


Darn right!


----------



## Xuper

OK, I see a new game (Dota 2?). What's with that lag?


----------



## Ha-Nocri

Too much emphasis on multi-threading... makes me worried


----------



## Butthurt Beluga

So glad I didn't get a 6700k to upgrade my 3770k...


----------



## luisxd

RIP 6700K


----------



## Gerbacio

I'm buying into the hype something fierce!!!


----------



## Blameless

Quote:


> Originally Posted by *utnorris*
> 
> They did say that BW had a boost of 3.7Ghz.


Quote:


> Originally Posted by *nakano2k1*
> 
> Zen is at 3.4 no boost. Intel has boost enabled


Since these are using every logical core and some degree of AVX, Broadwell-E isn't going to be using turbo much.
Quote:


> Originally Posted by *nakano2k1*
> 
> Plus intel = 140 watt, AMD = 95W


Don't confuse TDP with power consumption or efficiency.
Quote:


> Originally Posted by *LazarusIV*
> 
> She specifically mentions that Intel's Turbo Boost *is not* disabled... though I wonder how much it would boost at that load...


Probably 3.4 in Blender and 3.3 in x264, based on how many turbo multipliers my 6800Ks use at all-default settings.

It _can't_ go above 3.4 at stock with all cores loaded.
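For anyone unfamiliar with why all-core load caps turbo: Intel exposes a per-active-core multiplier table. A minimal sketch of that mechanism; the bin values here are hypothetical stand-ins, not the 6900K's actual fused limits:

```python
# Minimal sketch of a per-active-core turbo table. The bin values are
# hypothetical stand-ins, not Intel's actual fused limits for any SKU.

BASE_GHZ = 3.2
TURBO_BINS = {1: 3.7, 2: 3.6, 4: 3.5, 8: 3.5}  # max clock by active core count

def turbo_clock(active_cores: int) -> float:
    """Highest turbo bin whose core-count threshold covers the current load."""
    eligible = [clk for cores, clk in TURBO_BINS.items() if cores >= active_cores]
    return max(eligible, default=BASE_GHZ)

print(turbo_clock(1))  # lightly threaded: top bin, 3.7
print(turbo_clock(8))  # all cores loaded: all-core bin, 3.5
```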


----------



## S.M.

Well they just lied about the 6700K's streaming performance.


----------



## lombardsoup

Quote:


> Originally Posted by *S.M.*
> 
> Well they just lied about the 6700K's streaming performance.


And unless I'm blind, didn't see an FPS counter with that game either.


----------



## beatfried

oh god... she did the steve jobs.. -.-


----------



## fatmario

Quote:


> Originally Posted by *S.M.*
> 
> Well they just lied about the 6700K's streaming performance.


and they didn't show fps number for bf1 lol.


----------



## JakdMan

ONE MORE THING!!!


----------



## Pro3ootector

They showed how great Titan X is, weren't they making Radeons?


----------



## budgetgamer120

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Too much emphasis on multi-threading... makes me worried


What should be spoken about when talking about a 16 thread cpu?


----------



## wsfrazier

That 6700k test was not fair; the chip is not that bad at all. That thing was sabotaged.

These demos are useless; we need real benches. After seeing that, I don't trust any of these.


----------



## keikei

Vega?!


----------



## JakdMan

She said VEGA!!!!!!


----------



## mutatedknutz

VEGA IS HERE BOYS


----------



## nakano2k1

Battlefront?? C'mon....


----------



## Imouto

Quote:


> Originally Posted by *S.M.*
> 
> Well they just lied about the 6700K's streaming performance.


Depends on the x264 preset. Switching to a slower preset is a killer; my i7 4790K chokes on the "fast" preset at 1080p 30FPS.


----------



## fatmario

Hottest game of the year... Battlefront? What?


----------



## LazarusIV

Quote:


> Originally Posted by *nakano2k1*
> 
> Battlefront?? C'mon....


Kinda what I said... My phone can run that shizz


----------



## lombardsoup

Quote:


> Originally Posted by *fatmario*
> 
> Hottest game of year battlefront what ?


Game is literally dead

The heck are they doing?


----------



## LazarusIV

I would've preferred Titanfall 2 or some sick, 64 player BF1...


----------



## Blameless

Quote:


> Originally Posted by *S.M.*
> 
> Well they just lied about the 6700K's streaming performance.


I can certainly find streaming settings that a 4.5GHz 6700K cannot handle where a hex-core part, even at lower clock speeds, does fine.

AMD isn't going to lie, but they are going to use encoding settings that even a very fast quad core will choke on when they have an octo-core part in the demo.


----------



## Butthurt Beluga

Battlefront, I see FPS dips lol
People still play Battlefront?


----------



## JakdMan

The Rogue One DLC explains it. Only one other person is on lolz


----------



## nakano2k1

Please don't crash, please don't crash, please don't crash....


----------



## keikei

So Vega at least matches 1080 then.


----------



## mutatedknutz

New dlc I guess


----------



## lombardsoup

-Claims arbitrary fps
-No actual numbers shown

Um.


----------



## nakano2k1

Urgh... I'm kinda more interested in Ryzen than Vega at this moment.


----------



## luisxd

Quote:


> Originally Posted by *nakano2k1*
> 
> Please don't crash, please don't crash, please don't crash....


This


----------



## reqq

its Rogue ONE DLC boys


----------



## fewness

What's the message I'm supposed to take away from the last demo?


----------



## doza

2 games with no fps info; going into cryosleep till March....


----------



## Butthurt Beluga

yeah, not showing FPS during gameplay is a HUGE concern.
That means AMD is not confident in their own product.


----------



## Gilles3000

Quote:


> Originally Posted by *fewness*
> 
> What's the message I'm supposed to take away from the last demo?


That VEGA is focused at the high end and coming soon?


----------



## DaaQ

Quote:


> Originally Posted by *fewness*
> 
> What's the message I'm supposed to take away from the last demo?


Working card??


----------



## sugarhell

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> yeah, not showing FPS during gameplay is a HUGE concern.
> That means AMD is not confident in their own product.


Or they are using Vega and they don't want to show its performance yet.

Just wait for the reviews.


----------



## Cyro999

Quote:


> we are reaching the point where single-threaded gains are largely a by-product of node reduction


That's by design, Intel has been pushing to shrink cores and stack more of them rather than improving core performance for a long time
Quote:


> Originally Posted by *Blameless*
> 
> I can certainly find streaming settings that a 4.5GHz 6700K cannot handle where a hex core part, even at lower clocks speeds, does fine.
> 
> AMD isn't going to lie, but they are going to use encoding settings that even a very fast quad core will choke on when they have an octo-core part in the demo.


Yeah, you can cripple a system with settings that are slightly off. It's easy to make a profile that 4c/8t will choke on but 6c/12t will run perfectly.


----------



## Xuper

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> yeah, not showing FPS during gameplay is a HUGE concern.
> That means AMD is not confident in their own product.


Or AMD doesn't want to show it to NV. Did NV do the same thing?


----------



## LazarusIV

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> yeah, not showing FPS during gameplay is a HUGE concern.
> That means AMD is not confident in their own product.


I wouldn't read that much into it... I would say that we're nit-picky and want to see absolutely every detail. Most people watching this actually consider buying their PCs from Best Buy...


----------



## Blameless

Well, 30 minutes of video to get one useful (x264) bench isn't...that bad?


----------



## nakano2k1

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> yeah, not showing FPS during gameplay is a HUGE concern.
> That means AMD is not confident in their own product.


Either that or they're experiencing FPS spikes because of immature Vega drivers at this point in time.


----------



## Robenger

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> yeah, not showing FPS during gameplay is a HUGE concern.
> That means AMD is not confident in their own product.


LOL


----------



## verovdp

No FPS counter (except for the ZBrush stuff), no price announcement, no mention of different SKUs... what in the world is AMD doing!? Oh right, they have an incompetent marketing department


----------



## Shau76434

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> yeah, not showing FPS during gameplay is a HUGE concern.
> That means AMD is not confident in their own product.


I think that the BF1 demo was just a mistake when it came to not showing the FPS.


----------



## lombardsoup

Quote:


> Originally Posted by *verovdp*
> 
> No FPS counter (except for the ZBrush stuff), no price annoucement, no mention of different skus, *** is AMD doing!?
> 
> 
> 
> 
> 
> 
> 
> Oh right, they have an incompetent marketing department


RIP INTEL









A meme has been born, lads


----------



## Cyro999

Quote:


> Originally Posted by *Blameless*
> 
> Well, 30 minutes of video to get one useful (x264) bench isn't...that bad?


Glad to have that bench, it's vaguely skylake-ish IPC so clocking to 4.4ghz+ with an affordable 6c12t / 8c16t would be excellent.


----------



## hawker-gb

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> yeah, not showing FPS during gameplay is a HUGE concern.
> That means AMD is not confident in their own product.


Nope, of course they won't show fps on Vega.


----------



## SoloCamo

Quote:


> Originally Posted by *Blameless*
> 
> Well, 30 minutes of video to get one useful (x264) bench isn't...that bad?


At least the Fixer didn't show up


----------



## Robenger

Quote:


> Originally Posted by *verovdp*
> 
> No FPS counter (except for the ZBrush stuff), no price annoucement, no mention of different skus, *** is AMD doing!?
> 
> 
> 
> 
> 
> 
> 
> Oh right, they have an incompetent marketing department


If you paid any attention at all you would understand that this is a PREVIEW, not a launch.


----------



## nakano2k1

Quote:


> Originally Posted by *verovdp*
> 
> No FPS counter (except for the ZBrush stuff), no price annoucement, no mention of different skus, *** is AMD doing!?
> 
> 
> 
> 
> 
> 
> 
> Oh right, they have an incompetent marketing department


What you're talking about is stuff you do at a product LAUNCH. That's not what this is nor was it supposed to be.


----------



## fewness

Quote:


> Originally Posted by *DaaQ*
> 
> Working card??


Maybe...both Zen and Vega are real after all. What a relief!


----------



## LazarusIV

I'm very curious about that auto frequency adjustment based on cooling capability. If I could get a mild, daily-driver OC-ish experience without having to tweak / test / tweak / cry / test that would be awesome.
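That auto-adjustment (the "sense premium cooling = higher clocks" bit from the presentation) is essentially clock scaling with thermal headroom. A sketch of the idea; the linear curve and every number here are invented, since AMD disclosed no algorithm:

```python
# Illustrative sketch of the XFR idea: clock scales with thermal headroom.
# The linear curve and all numbers are invented; AMD's real algorithm
# was not disclosed at this event.

def xfr_clock(base_ghz: float, max_boost_ghz: float,
              temp_c: float, limit_c: float = 75.0) -> float:
    headroom = max(0.0, min(1.0, (limit_c - temp_c) / limit_c))
    return base_ghz + (max_boost_ghz - base_ghz) * headroom

print(xfr_clock(3.4, 4.0, temp_c=40.0))  # cool chip: partial boost (~3.68)
print(xfr_clock(3.4, 4.0, temp_c=75.0))  # at the limit: base clock only (3.4)
```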


----------



## SuperZan

Quote:


> Originally Posted by *Robenger*
> 
> If you paid any attention at all you would understand that this is a PREVIEW, not a launch.


Now, now, let's not rain on a perfectly good narrative. Gotta get those Scully-type posts in, can't have people thinking we're optimistic about AMD. It's just not done.


----------



## Slinkey123

The fact that they didn't announce the price is a bit worrying; I'm guessing it's going to be quite expensive, although cheaper than the Intel. They didn't show the FPS in the BF1 demo, but the presenter stated it was at 60-70fps on both machines.


----------



## JakdMan

Quote:


> Originally Posted by *verovdp*
> 
> No FPS counter (except for the ZBrush stuff), no price annoucement, no mention of different skus, *** is AMD doing!?
> 
> 
> 
> 
> 
> 
> 
> Oh right, they have an incompetent marketing department


This was blatantly said to be a sneak peek from the beginning, so they're doing exactly what was expected.

If you really want something to take away, Lisa did at least take a few minor jabs at Intel's $1000 CPU that Ryzen was on par with.


----------



## utnorris

It does look promising. Would have loved it if they had covered the chipset features, i.e. PCIe lanes.


----------



## nakano2k1

Quote:


> Originally Posted by *LazarusIV*
> 
> I'm very curious about that auto frequency adjustment based on cooling capability. If I could get a mild, daily-driver OC-ish experience without having to tweak / test / tweak / cry / test that would be awesome.


The announcement of new technologies and the fact that the performance per watt seems to be there and then some makes me want this even more. It seems like it's been worth the wait with my FX-8320e.


----------



## kd5151

The leaked info about Ryzen is real. This forum is blowing up. Too many posts to read!!!

Is it just me, or did Lisa Su sound like she kept saying Verizon instead of Ryzen? lol.


----------



## Outcasst

https://youtu.be/7yxSFmEOkrA?t=41


----------



## 113802

Ryzen looks impressive. Curious if Vega will be as power efficient as nVidia's 10 series. Will we see Vega desktop GPUs in laptops or will Vega be another power hungry card?

Find out next time on Dragon Ball Z


----------



## Peter Nixeus

Quote:


> Originally Posted by *wsfrazier*
> 
> That 6700k test was not pure, it is not that bad at all. That thing was sabotaged.
> 
> These demos are useless, need real benches. After seeing that, I don't trust any of these.


They were running it at higher graphics settings and a higher bitrate than you normally would when gaming and streaming on the same PC, enough to make an i7-6700K struggle. Otherwise you can stream and play DOTA 2 fine with an i5 on the same PC, with lower settings of course.
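A back-of-envelope way to see why preset and bitrate choice decide whether a stream is watchable; the capacity figures are invented, since real throughput depends on preset, resolution, and the CPU:

```python
# Back-of-envelope: a live stream drops frames once the encoder can no longer
# keep up with the target framerate. Capacity figures here are invented.

def dropped_frame_pct(encode_fps_capacity: float, target_fps: float) -> float:
    if encode_fps_capacity >= target_fps:
        return 0.0
    return (1.0 - encode_fps_capacity / target_fps) * 100.0

print(dropped_frame_pct(75.0, 60.0))  # headroom to spare: 0% dropped
print(dropped_frame_pct(45.0, 60.0))  # a choking preset: 25% dropped
```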


----------



## fewness

Quote:


> Originally Posted by *LazarusIV*
> 
> I'm very curious about that auto frequency adjustment based on cooling capability. If I could get a mild, daily-driver OC-ish experience without having to tweak / test / tweak / cry / test that would be awesome.


Doesn't it sound like Nvidia's GPU Boost 3.0, but on a CPU? I don't know if Intel's boost is clearly temperature dependent...


----------



## SuperZan

Quote:


> Originally Posted by *utnorris*
> 
> It does look promising. Would have loved it if they had covered the chipset features, i.e. PCIe lanes.


I think they'll save that for more appropriate fora. This was oriented towards gamerz with cursory knowledge of PC tech.


----------



## p4inkill3r

Quote:


> Originally Posted by *Robenger*
> 
> LOL


Scorching hot take, wasn't it?


----------



## Blameless

Quote:


> Originally Posted by *Cyro999*
> 
> Glad to have that bench, it's vaguely skylake-ish IPC so clocking to 4.4ghz+ with an affordable 6c12t / 8c16t would be excellent.


The x264 result was promising, but we are still back to pricing and clock being the two main factors we have next to no info on, and still being what we need.


----------



## VSG

Quote:


> Originally Posted by *Outcasst*
> 
> 
> 
> https://youtu.be/7yxSFmEOkrA?t=41


Did he forget how TDP works?


----------



## Aussiejuggalo

Bit of a shame they didn't give any hints as to the price, but I can live with that; also wanted to know how many PCI-E lanes, but again I'll live with that as well.

At the very least this CPU should be a good enough 8-core upgrade without spending (at least here in Aus) over $1500 (hopefully). I'm kinda happy but will wait for actual reviews.


----------



## verovdp

Quote:


> Originally Posted by *lombardsoup*
> 
> RIP INTEL
> 
> 
> 
> 
> 
> 
> 
> 
> 
> A meme has been born, lads


seriously?








Quote:


> Originally Posted by *Robenger*
> 
> If you paid any attention at all you would understand that this is a PREVIEW, not a launch.


Quote:


> Originally Posted by *nakano2k1*
> 
> What you're talking about is stuff you do at a product LAUNCH. That's not what this is nor was it supposed to be.


Is it unreasonable to expect them to mention the different CPU SKUs and such in a preview? I don't think that's so unreasonable.
Quote:


> Originally Posted by *JakdMan*
> 
> This was blatanly said to be a sneak peak from the beginning, so they're doing exactly what was expected.
> 
> If you really want something to take away Lisa did at least take a few minor jabs at Intel's $1000 cpu RYZEN was on par with


Believe me, my retired 1090T definitely wants to see AMD do well and bring competition back to the market


----------



## ciarlatano

That was really poorly done in many ways. They missed a huge opportunity to show off.


----------



## sugarhell

Quote:


> Originally Posted by *geggeg*
> 
> Did he forget how TDP works?


I told him but it seems that he forgot the basics.


----------



## luisxd

Quote:


> Originally Posted by *verovdp*
> 
> No FPS counter (except for the ZBrush stuff), no price announcement, no mention of different skus, what in the world *** is AMD doing!? Oh right, they have an incompetent marketing department


The hate is real.


----------



## fewness

Quote:


> Originally Posted by *Outcasst*
> 
> 
> 
> https://youtu.be/7yxSFmEOkrA?t=41


There was a GPU helping in the Zen machine? Ok... my bad, my bad, I was kidding...


----------



## Blameless

Quote:


> Originally Posted by *geggeg*
> 
> Did he forget how TDP works?


Most people do.

Ryan just doesn't have an excuse for it.


----------



## M3TAl

Lisa said something about these Blender and Handbrake files being available to test ourselves; where are these files?


----------



## JJBY

Stream Take Away:

1: No actual price announcement; just benchmarks showing performance equivalent to a $1000+ USD Intel part, and only in heavy multi-threaded tests.

2: For the actual gameplay shown, there is NO FPS COUNTER or summary graph for side-by-side comparison.

3: CPU cooling solutions and actual clock-speed boosts also not shown, with only the stock speed being disclosed. Temperatures also not disclosed, aside from the noticeably high fan rpm/noise on the system running Vega and Ryzen together.

4: Vega teased and shown running a game supposedly at 4K, although again no FPS counter or system performance disclosed.

5: Multi-threaded use again highlighted with streaming and gaming combined, although no streaming or game settings were disclosed to help us understand the Intel 6700K quad core's inability to produce a watchable stream.


----------



## Serios

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> yeah, not showing FPS during gameplay is a HUGE concern.
> That means AMD is not confident in their own product.


Lisa did mention that it's running at 4K max settings at higher than 60 fps.
Zen and Vega are not 100% ready, so not showing the exact numbers is understandable.


----------



## fewness

No after-hours price change for AMD or Intel... guess the analysts were not impressed either...


----------



## Blameless

Quote:


> Originally Posted by *fewness*
> 
> No after hour price change for Amd or intel...guess analysts were not impressed either...


Next to nothing new was revealed.


----------



## SoloCamo

Quote:


> Originally Posted by *Blameless*
> 
> Most people do.
> 
> Ryan just doesn't have an excuse for it.


The only reason for that immediate post from him was either personal bias, or bias due to a paycheck from a certain company. Can't think of any other reason why someone who has been doing this that long would post that...


----------



## JakdMan

A bunch of Techies had some exclusive stuff under NDA from some AMD gathering they were at a few days ago lol....

https://youtu.be/FNQjh47PPXQ

A little more on some of the tech
Quote:


> Originally Posted by *Blameless*
> 
> Next to nothing new was revealed.


Rumor confirmation, which seems to be a *HUGE* necessity for some of you prudes


----------



## JJBY

Quote:


> Originally Posted by *Blameless*
> 
> Next to nothing new was revealed.


exactly


----------



## MadGoat

Quote:


> Originally Posted by *M3TAl*
> 
> Lisa said something about these files for blender and handbrake being available to test ourselves, where are these files?


this is what I am looking for as well...


----------



## Ha-Nocri

Now watch it struggle to OC much past 4GHz....

It would be such a fail. A long 3 months in front of us


----------



## Dimaggio1103

Lots of butthurt I see in here. Prices are $499 at launch, I'm willing to bet anything; the leaked slides have already been verified as accurate for the benchmarks, so I see no reason why the price would be different. I'm stunned nobody is mentioning that AMD just stomped a $1k CPU and the new 6700K Intel CPU live... That's huge in my book, especially if it comes in at $500 or cheaper. Way to go AMD. Looking forward to a replacement for my 4790.
Quote:


> Originally Posted by *Ha-Nocri*
> 
> Now watch it struggles to OC much past 4GHz....
> 
> It would be such a fail. Long 3 months in front of us


Lol, it beat a $1k CPU without OC, but if it can't OC it would be a fail? Um... k?


----------



## Ultracarpet

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Now watch it struggles to OC much past 4GHz....
> 
> It would be such a fail. Long 3 months in front of us


That wouldn't be a fail at all.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Now watch it struggles to OC much past 4GHz....
> 
> It would be such a fail. Long 3 months in front of us


3 months? don't you mean a month?
Quote:


> Originally Posted by *Dimaggio1103*
> 
> Lots of butthurt i see in here. Prices are 499 at launch. I'm willing to bet anything, leaked slides already have now been verified as accurate for the benchmarks. I see no reason why price would be diff. im stunned nobody is mentioning AMD just stomped a 1k and the new 6700k Intel CPU live....That's huge in my book. Specially if it come in at 500 or cheaper. Way to go AMD. Looking forward to a replacement for my 4790.


The pricing rumours were pulled out of someone's ass and everyone's just gone with it... RedGamingTech did a video on it because they were contacted by the guy who made the prices up.


----------



## coolhandluke41

all this wait for this !?..get the ..


----------



## Ha-Nocri

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> 3 months? don't you mean a month?


She said next quarter, so I'm expecting March realistically


----------



## Aussiejuggalo

Quote:


> Originally Posted by *Ha-Nocri*
> 
> She said next quarter, so I'm expecting March realistically


We talking Zen or Vega? because she did say Zen was ready for launch somewhere in the middle of the stream.


----------



## SoloCamo

If the IPC is truly at/near Broadwell levels, that is more than I expected. Unless Intel drastically changes their pricing, and given that my next CPU will have no fewer than 8 cores, it looks like AMD might be my only option at a decent price.


----------



## Firann

The BF1 demo shown was running an Nvidia Titan X (and they kinda advertised it as the cutting-edge Pascal GPU -_- ), so I will assume all rigs except the Rogue One DLC one were running the Titan.

Going by that, how much performance boost does Zen provide in the FPS department for BF1? I would assume the Titan was doing all the work, especially if they were running almost identical fps like they mention. That would mean the demo was more GPU limited than CPU limited, no?


----------



## Newwt

If AMD keeps in line with their CPUs being unlocked, I'll take the $350 SR7 please


----------



## M3TAl

Quote:


> Originally Posted by *MadGoat*
> 
> this is what I am looking for as well...


Can't find anything so far. Want to see how my 4790K compares in handbrake.


----------



## Ha-Nocri

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> We talking Zen or Vega? because she did say Zen was ready for launch somewhere in the middle of the stream.


I'm pretty sure she said Ryzen is next quarter...


----------



## cechk01

also 95W tdp
Quote:


> Originally Posted by *Aussiejuggalo*
> 
> 3 months? don't you mean a month?


A month? Don't you mean 18 days?


----------



## Blameless

Quote:


> Originally Posted by *SoloCamo*
> 
> The only reason for that immediate post from him was either a personal bias, or a bias due to a paycheck from a certain company. Can't think of any other reason why someone who has been doing this that long would do that...


Maybe a knee-jerk reaction to Lisa emphasizing the TDP figure?


----------



## fewness

To bench a CPU in a game you need to go to a low resolution. But how can you say to the world, "please look at this beautiful 640x480 demo"?


----------



## Ghoxt

Quote:


> Compared towards an *8-core Intel Core i7-6900K* Processor on the demos we have seen, the RIZEN engineering sample is FASTER.




I know...


----------



## Coydog

Quote:


> Originally Posted by *Firann*
> 
> The BF1 demo showed was running a Titan XP Nvidial (and they kinda advertise it as the cutting edge gpu and pascal -_- ) so i will assume that al rigs except the Rogue One dlc was running the titan.
> 
> Goimg by that, how much performance boost does zen provide in the fps section for BF1? I would assume the titan was doing all the work, especially if they were running almost identical fps like they mention. That would mean the demo was more GPU limited than CPU no?


I would think so, but on the flip side you can say that at least the CPU isn't hurting performance either.


----------



## Omicron

The processor certainly does not look bad at all; it looks like it is on par with Haswell IPC, which is huge for AMD. This will definitely yield some competition on X99-like multicore platforms, especially at $499. From what I can tell, it isn't going to beat Intel's latest offerings, but it is optimal for small servers, workstations, and heavily multithreaded programs.

For general gaming and desktop usage, though, I still see the future i7-7700k as optimal for my purposes: stronger guaranteed single-threaded performance (with enough cores) and (slightly) better pricing. However, if more software gets multithreaded (games and such), Zen will likely have the edge.


----------



## Blameless

Quote:


> Originally Posted by *fewness*
> 
> To bench CPU in a game you need to go low resolution. But how can you say to the world please look at the beautiful 640x480 demo?


Build a time machine to go back to 1995 first?
Quote:


> Originally Posted by *Coydog*
> 
> I would think so, but on the flip side you can say that at least the CPU isn't hurting performance either.


Yep.


----------



## SoloCamo

Quote:


> Originally Posted by *Coydog*
> 
> I would think so, but on the flip side you can say that at least the CPU isn't hurting performance either.


That was my takeaway from it... Why would they risk running into GPU bottlenecks that would hold back both their new CPU and the competition's? They might as well say "look how well our CPU pushes this 750 Ti; it matches the best Intel offers at half the price!"


----------



## kd5151

Quote:


> Originally Posted by *M3TAl*
> 
> Can't find anything so far. Want to see how my 4790K compares in handbrake.


http://www.amd.com/en-us/innovations/new-horizon

Right below the video. You need Blender to test! Gonna try my i3 4170 HTPC, lol.
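If you want a repeatable timing, Blender can render the demo file headless from the command line. A quick sketch (it assumes `blender` is on your PATH and `RyzenGraphic_27.blend` is in the working directory; `-b`/`-f` are standard Blender CLI flags):

```python
# Sketch: build the headless Blender command for timing the demo file.
# Assumes blender is on PATH and the .blend file is in the current directory.
import shlex

def blender_benchmark_cmd(blend_file: str, frame: int = 1) -> list[str]:
    # -b runs Blender without the UI; -f renders a single frame,
    # which is what the on-stage demo times.
    return ["blender", "-b", blend_file, "-f", str(frame)]

cmd = blender_benchmark_cmd("RyzenGraphic_27.blend")
print(shlex.join(cmd))
```

Wrap the printed command in your OS's timer (`time` on Linux, `Measure-Command` in PowerShell) so everyone's numbers are measured the same way.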


----------



## M3TAl

Quote:


> Originally Posted by *kd5151*
> 
> http://www.amd.com/en-us/innovations/new-horizon
> 
> right below the video. you need blender to test! gonna try my i3 4170 htpc.lol


Swear that wasn't there 2 minutes ago haha. More interested in Handbrake though as I recently started using it to convert game recordings.


----------



## MadGoat

Quote:


> Originally Posted by *M3TAl*
> 
> Can't find anything so far. Want to see how my 4790K compares in handbrake.


Alright, they have the Blender test up here... but I can't find the Handbrake file. It's supposedly a 116MB, 16Mbps file.
_
BTW, it looks like the Blender test on Zen @ 3.4GHz takes about 36 seconds._


----------



## mouacyk

My god... for obvious reasons, they couldn't simply pull down the console in BF1 and type:
PerfOverlay.DrawGraph 1

Blender -- meh, we saw this before
Handbrake -- those of you on 6900K better verify that 5sec win by Zen

RyZen at a 95W TDP can't be good for overclocking. Anything that is designed too efficiently has little reason to leave any headroom for free; think laptop parts. Anything with "intelligent" overclocking will also be cumbersome.


----------



## mohiuddin

Quote:


> Originally Posted by *MadGoat*
> 
> this is what I am looking for as well...


Me too.....


----------



## kd5151

Quote:


> Originally Posted by *M3TAl*
> 
> Swear that wasn't there 2 minutes ago haha. More interested in Handbrake though as I recently started using it to convert game recordings.


maybe they're adding it as we speak!


----------



## lombardsoup

Quote:


> Originally Posted by *mouacyk*
> 
> My god... for obvious reasons, they couldn't simply pull down the console in BF1 and typed:
> PerfOverlay.DrawGraph 1
> 
> Blender -- meh, we saw this before
> Handbrake -- those of you on 6950K and 6900K better verify that 5sec win by Zen
> 
> RyZen at 95 TDP can't be good for overclocking. Anything that is designed too efficiently has little reason to leave any headroom for free, think laptop parts. Anything with "intelligent" overclocking will also be cumbersome.


A wasted opportunity. Time to start firing people in the marketing department!


----------



## SoloCamo

Quote:


> Originally Posted by *mouacyk*
> 
> RyZen at 95 TDP can't be good for overclocking. Anything that is designed too efficiently has little reason to leave any headroom for free, think laptop parts. Anything with "intelligent" overclocking will also be cumbersome.


It's amazing to me that people on this site can turn great things into negatives. Craziness. How is a feature, which can likely be enabled or disabled in the BIOS, cumbersome? Sounds like a great feature to me.

Do Intel's lower-TDP products OC well? I think we know the answer. AMD always got blasted for high TDP; now they lower it and people say it's a bad thing.


----------



## JakdMan

Quote:


> Originally Posted by *M3TAl*
> 
> Swear that wasn't there 2 minutes ago haha. More interested in Handbrake though as I recently started using it to convert game recordings.


Gonna see how the 4930K does lol

1:21.23 stock. Hmmm
Quote:


> Originally Posted by *Blameless*
> 
> Next to nothing new was revealed.


Honestly. Apparently AMD are simply incompetent bimbos incapable of doing anything right


----------



## sugarhell

The demo took 2:11.00 to render on my 3770K. But I am running some stuff in the background, so it should be a bit faster without that.


----------



## cechk01

Quote:


> Originally Posted by *mouacyk*
> 
> My god... for obvious reasons, they couldn't simply pull down the console in BF1 and typed:
> PerfOverlay.DrawGraph 1
> 
> Blender -- meh, we saw this before
> Handbrake -- those of you on 6900K better verify that 5sec win by Zen
> 
> RyZen at 95 TDP can't be good for overclocking. Anything that is designed too efficiently has little reason to leave any headroom for free, think laptop parts. Anything with "intelligent" overclocking will also be cumbersome.


assuming you can disable the intelligent OC function, i think they would be good for overclocking because they need less cooling


----------



## M3TAl

Quote:


> Originally Posted by *kd5151*
> 
> maybe they're adding it as we speak!


Hope so


----------



## Cyro999

Quote:


> Originally Posted by *Butthurt Beluga*
> 
> Isn't 4K gameplay a bad comparison of CPU performance because it's more GPU intensive? Or am I wrong?


Yeah, both of these CPUs can handle a lot more than 60-70fps in Battlefield 1. They're more than likely just testing the GPU, and any performance difference would be due to different conditions in the game. It wasn't a benchmark; it looked like one guy was driving the tanks with one hand per computer.


----------



## hawker-gb

So AMD beat the $1,100 6900K in a bench. I foresee that $1,100 price not lasting very long.


----------



## Roaches

Gonna have to check out that blender benchmark on my Xeon setup once I get home.


----------



## mouacyk

The gaming takeaway is that AMD can NOW do it too. We just don't know at what price yet.


----------



## Blameless

Quote:


> Originally Posted by *Dimaggio1103*
> 
> Lol it beat a 1k CPU without OC, but if it cant OC it would be a fail? um...k?


If it can't OC it won't beat a 5820K or 6800K (and yes, both my 5820K and 6800K will beat a stock 6900K in essentially everything), which are below $400.

I mean it will be a steal for the OEM markets and mainstream non-OCers, but for most of OCN it will be a big "meh" if it doesn't clock vaguely well.

I'll get one, if I think I can get 4GHz+ out of it.


----------



## Imouto

Quote:


> Originally Posted by *sugarhell*
> 
> Quote:
> 
> 
> 
> Originally Posted by *geggeg*
> 
> Did he forget how TDP works?
> 
> 
> 
> I told him but it seems that he forgot the basics.
Click to expand...

I've said this several times, but the people at PCPer don't know what they're doing and are, somehow, getting away with it. It is funny how often they're referenced over here when they don't know how to do basic math.
Quote:


> Originally Posted by *sugarhell*
> 
> The demo did 02:11:00 on my 3770k to render. But i am running some stuff on background so it should be a bit faster


1:01.91 on a stock i7 4790K. Maybe not comparable because I'm on Linux, and Blender is much faster here than on Windows.


----------



## kd5151

Quote:


> Originally Posted by *sugarhell*
> 
> The demo did 02:11:00 on my 3770k to render. But i am running some stuff on background so it should be a bit faster


My i3 4170 at stock 3.7GHz took 3:09.61 to complete with only one tab of Firefox open. So wait a second, how fast was Zen?


----------



## hawker-gb

Quote:


> Originally Posted by *Blameless*
> 
> If it can't OC it won't beat a 5820K or 6800K (and yes, both my 5820K and 6800K will beat a stock 6900K in essentially everything) which are below 400 dollars.
> 
> I mean it will be a steal for the OEM markets and mainstream non-OCers, but for most of OCN it will be a big "meh" if it doesn't clock vaguely well.
> 
> I'll get one, if I think I can get 4GHz+ out of it.


Can you bench that Handbrake file then?


----------



## sugarhell

Quote:


> Originally Posted by *Imouto*
> 
> I've said this several times but people at PCPer doesn't know what they're doing and they're, somehow, getting away with it. It is funny how they're referenced quite a few times over here when they don't know how to do basic math.
> 1:01.91 on a stock i7 4790K. Maybe not comparable because I'm on Linux and Blender is much faster here than Windows.


Hmm, I need to try it on my Linux system. Thanks for the input!


----------



## Serandur

Quote:


> Originally Posted by *hawker-gb*
> 
> So AMD beat 1100 dollars 6900K in bench. I forsee that 1100 dollars price will not last very long.


I sincerely hope it doesn't last long. Broadwell-E's pricing is freaking disgusting. They actually _raised_ the prices from Haswell-E to Broadwell-E across the board while providing literally no performance or features over Haswell-E.

It's about time some Intel octocores come down to the ~$600 range and that ridiculous 6950X come down from *$1750*. The consumer CPU world's gone crazy with Intel alone at the wheel.


----------



## Firann

That's one of the single-player levels where, at the start, you play the driver and pretty much just press W to move along a scripted path.

The Intel one, btw, seemed to have denser smoke, but I'll chalk that up to different timing.

The CPU felt alright and has me intrigued; I just need more info.

What did make me skeptical was the Vega demo. Why not BF1 instead of the empty space used in Rogue One?
Quote:


> Originally Posted by *Cyro999*
> 
> Yeah, both of these CPU's can handle a lot more than 60-70fps in battlefield 1. They're more than likely just testing the GPU and any performance difference would be due to different conditions in the game. It wasn't a benchmark, looked like a guy was driving tanks with one hand per computer


----------



## mouacyk

Quote:


> Originally Posted by *Blameless*
> 
> If it can't OC it won't beat a 5820K or 6800K (and yes, both my 5820K and 6800K will beat a stock 6900K in essentially everything) which are below 400 dollars.
> 
> I mean it will be a steal for the OEM markets and mainstream non-OCers, but for most of OCN it will be a big "meh" if it doesn't clock vaguely well.
> 
> I'll get one, if I think I can get 4GHz+ out of it.


I'll want 1.25x at least, so 4.25GHz. That's the other big thing missing: single-threaded performance was absent entirely, which is worrisome. Those of you who don't understand this worry, go look at any CPU-Z single-thread benchmark between Excavator and Haswell/Broadwell. Yes, single-thread performance is useful and important on an 8-core chip -- virtualized or multi-seat emulators.


----------



## Dimaggio1103

Quote:


> Originally Posted by *mouacyk*
> 
> My god... for obvious reasons, they couldn't simply pull down the console in BF1 and typed:
> PerfOverlay.DrawGraph 1
> 
> Blender -- meh, we saw this before
> Handbrake -- those of you on 6900K better verify that 5sec win by Zen
> 
> RyZen at 95 TDP can't be good for overclocking. Anything that is designed too efficiently has little reason to leave any headroom for free, think laptop parts. Anything with "intelligent" overclocking will also be cumbersome.


If it beats a $1k CPU at $500 or below, who freakin' cares about OCing at that point? If I wanna OC, I'll go play with my other CPUs. I care about performance.

People here are never happy. We just saw it stomp a 6700K in Dota 2 while streaming, and yes, I own a brand-new Devil's Canyon i7.


----------



## Coydog

I think it depends on how AMD prices Ryzen.

Some say they are looking at $499 for the CPU. I honestly think that is WAY too low, although since Intel had no competition in that segment, they could charge whatever the hell they wanted and consumers were forced to pay it.

Honestly, I think if they can keep it in the $499-649 range they would probably sell a ton, provided trusted benchmarks show what was shown in the preview to be accurate or in the same ballpark (figure within 5-10% of the Intel CPU). I know many will complain that AMD didn't just wipe the floor with Intel, but AMD was so far behind in performance that they had way too much ground to make up. The failure of Bulldozer really screwed them up and cost them valuable R&D money and time.


----------



## denman

Quote:


> Originally Posted by *Firann*
> 
> What did make me skeptical was the vega demo. Why not bf1 instead of the empty space used in rogue one


Because they probably have an agreement with EA that they needed to show some of the new Rogue One DLC.

You understand this wasn't a launch event right? This was them showing that everything they planned on is on track. Basically to get some hype for the launch in the coming months.


----------



## AmericanLoco

I can't believe the people here expecting them to reveal everything, including the complete performance, the full line up, pricing, etc... This is a preview, not a launch. The launch is in January.


----------



## kd5151

Quote:


> Originally Posted by *AmericanLoco*
> 
> I can't believe the people here expecting them to reveal everything, including the complete performance, the full line up, pricing, etc... This is a preview, not a launch. The launch is in January.


I'll wait.


----------



## Fierceleaf

Blender was 3:20 on my QX9650 4C/4T @ 4.0... it might be time to upgrade.

Ryzen looks promising; let's hope for decent overclocking.


----------



## Kuivamaa

Quote:


> Originally Posted by *sugarhell*
> 
> I told him but it seems that he forgot the basics.


You've got an even thicker beard than mine.

The 140W TDP probably applies to AVX2 workloads, which are notorious for stressing Intel CPUs. Anyway, I expect big Zen to be $599 at launch. Margins are fat enough, and it will compare very favorably vs the 6850K.


----------



## Shatun-Bear

Is that the name of the Zen CPUs? Ryzen?


----------



## Robenger

Quote:


> Originally Posted by *Shatun-Bear*
> 
> Is that the name of the Zen CPUs? Ryzen?


Yes.


----------



## Robenger

2 minutes 33 seconds on a 4690 for the Blender benchmark.


----------



## mouacyk

Quote:


> Originally Posted by *AmericanLoco*
> 
> I can't believe the people here expecting them to reveal everything, including the complete performance, the full line up, pricing, etc... This is a preview, not a launch. The launch is in January.


If you currently suck at a particular thing and don't quell that worry at the earliest opportunity, I don't see much reason to keep believing -- I'm talking about single-threaded performance specifically. It takes literally a minute to run the CPU-Z benchmark.


----------



## Firann

Quote:


> Originally Posted by *denman*
> 
> Because they probably have an agreement with EA that they needed to show some of the new Rogue One DLC.
> 
> You understand this wasn't a launch event right? This was them showing that everything they planned on is on track. Basically to get some hype for the launch in the coming months.


I'm not judging them on what they've shown, but to be fair I have more questions than answers now, so in a way their preview did work?


----------



## sugarhell

Quote:


> Originally Posted by *Kuivamaa*
> 
> Παχύτερο μούσι κι'απο το δικό μου έχεις.
> 
> 140W TDP probably applies to AVX2 workloads, which are notorious for stressing intel CPUs.


I had one; it's thinned out a bit now.

Yeah, probably.


----------



## Shatun-Bear

Thanks looks good to me so far. Very exciting times in the CPU space.


----------



## Cyrious

1:09.55 on my Xeon. What did the Ryzen chip get again?


----------



## AmericanLoco

Quote:


> Originally Posted by *mouacyk*
> 
> If you currently suck at a particular thing, and at the earliest opportunity, you don't quell that worry, I don't see much hope for continuing to believe: talking about single-threading performance specifically. It takes literally a minute to run the CPUz benchmark.


...and if they show their whole hand now, what do they have left for launch? Nothing.


----------



## BIGTom

In the Blender demo, my 5820K with a modest 4.3GHz OC can't hold a candle to the extra cores/threads that Ryzen and the 6900K offer: 54 seconds compared to the 36 in the presentation.


----------



## Robenger

Quote:


> Originally Posted by *Cyrious*
> 
> 1:09.55 on my Xeon. What did the Ryzen chip get again?


36 seconds I believe.


----------



## Roaches

Quote:


> Originally Posted by *Serandur*
> 
> I sincerely hope it doesn't last long. Broadwell-E's pricing is freaking disgusting. They actually _raised_ the prices from Haswell-E to Broadwell-E across the board while providing literally no performance or features over Haswell-E.
> 
> It's about time some Intel octocores come down to the ~$600 range and that ridiculous 6950X come down from *$1750*. The consumer CPU world's gone crazy with Intel alone at the wheel.


Agreed, after Sandy / Ivybridge times I see very little incentive to upgrade to newer hardware on my end. I'm holding high hopes until independent reviews and pricing are shown.


----------



## OcCam

Quote:


> Originally Posted by *Fierceleaf*
> 
> Blender was 3:20 on my qx9650 4C/4T @ 4.0...it might be time to upgrade.
> 
> Ryzen looks promising, lets hope for decent overclocking.


Stock i5-2500K @ 3.3 was 3:02.

She used to do 4.8 24/7, but my motherboard is dying a slow death.

I hope AMD comes through; I need a new motherboard and I don't want to pay through the nose for old tech.


----------



## MadGoat

Quote:


> Originally Posted by *MadGoat*
> 
> alright, they have the blender test up here... but I cant find the handbrake file. It's supposedly a 116mb, 16mbps file.
> _
> BTW, It looks like the blender test on Zen @ 3.4 takes about 36 seconds._


For reference, I ran the blender test on my rig:

~2:24


----------



## Cyrious

Quote:


> Originally Posted by *Robenger*
> 
> 36 seconds I believe.


Holy crap, that frickin flattens my Xeon.


----------



## JakdMan

Quote:


> Originally Posted by *SoloCamo*
> 
> It's amazing to me that people on this site can turn great things (AMD always got blasted for high TDP) into negatives. Crazyness. How is a feature, which is likely enabled or disabled in the bios, cumbersome?
> 
> Sounds like a great feature to me.


Quote:


> Originally Posted by *Robenger*
> 
> 36 seconds I believe.


Well, I know I'm due for a RED build regardless.

1:21.23 stock 4930K. Hmmm, I don't imagine I'll get too much better OC'd, but it's all fun and games (while not working, at least).


----------



## Robenger

Quote:


> Originally Posted by *Cyrious*
> 
> Holy crap, that frickin flattens my Xeon.


Quote:


> Originally Posted by *JakdMan*
> 
> Well I know I'm due fro a RED build regardless
> 
> 1:21:23 stock 4930K Hmmm I don't imagine I'll get too much better OC'd but it's all fun and game (while not working at least)


I just double checked its 35.something, I can't make out the rest of the numbers.


----------



## naz2

Any word on Win7 compatibility?


----------



## Cyrious

Quote:


> Originally Posted by *Robenger*
> 
> I just double checked its 35.something, I can't make out the rest of the numbers.


That still annihilates my Xeon on a thread-for-thread basis, and this chip, even though it's Sandy Bridge, is no slouch.


----------



## Cyro999

Quote:


> 140W TDP probably applies to AVX2 workloads, which are notorious for stressing intel CPUs


"TDP" doesn't mean "average power consumption" for Intel. It's a ceiling on sustained power draw before turbo boost will drop, etc.

Actual consumption varies quite dramatically by workload.


----------



## axiumone

So far it looks promising, but nothing mind-blowing was revealed. We still don't know anything about gaming performance, price, or the complete list of specs. All we've seen is Blender and Handbrake, plus the fact that it can, in fact, run some games at some frame rates.


----------



## Dimaggio1103

People, let's not forget the neural net learning implemented here. As a programmer, I find neural/deep learning programming fascinating.


----------



## DNMock

Correct me if I'm wrong, but haven't AMD chips always performed extremely well in Blender tests compared to Intel, yet gotten stomped in every other benchmark?


----------



## Timmaigh!

Quote:


> Originally Posted by *BIGTom*
> 
> In the Blender demo, my 5820k with modest 4.3Ghz OC can't hold a candle to the extra cores/threads that Ryzen and the 6900k offer. 54 seconds compared to the 36 in the presentation.


52.8 seconds here with a 6850K at 4.2GHz. These results are quite interesting and different compared to Cinebench, where not only is there about a 10 percent performance gap between HW-E and BW-E, but hexacores overclocked past 4GHz can match stock 8-cores (or almost can; a 6850K at 4.2 gets the same score as a stock 5960X). In Blender, it seems, it's all about core/thread counts, and higher clocks don't help that much.


----------



## Imouto

Quote:


> Originally Posted by *Robenger*
> 
> I just double checked its 35.something, I can't make out the rest of the numbers.


You should check again because it is 00:24.82

EDIT: My bad, I was looking at August numbers from another event.

Sorry Robenger!


----------



## Techenthused73

AMD

https://www.youtube.com/watch?v=tqaZeqSW89A


----------



## }SkOrPn--'

Any speculation on what AMD might price the 8-core Ryzen at? What happens to the PC industry if they price it at less than $500?


----------



## sugarhell

Quote:


> Originally Posted by *Imouto*
> 
> You should check again because it is 00:24.82


Wrong quote


----------



## Blameless

Quote:


> Originally Posted by *hawker-gb*
> 
> Can you bench that Handbrake file then?


Primary signature system (5820K @ 4.3GHz core, 4.1GHz uncore, DDR4-2666 12-11-12-27-T1) gets 52.39 seconds in Windows and Blender 2.78 x64 with default settings.

I'll run it in Linux in a bit.
Quote:


> Originally Posted by *BIGTom*
> 
> In the Blender demo, my 5820k with modest 4.3Ghz OC can't hold a candle to the extra cores/threads that Ryzen and the 6900k offer. 54 seconds compared to the 36 in the presentation.


Shouldn't be this large of a difference. Blender scales pretty linearly with clock speed _and_ core count. A 4.3GHz 5820k should almost exactly tie a stock 6900K.

I'm wondering if there are options not disclosed, if it was run with the 32-bit version of Blender, or if there is something else we are missing, but something is definitely off.

Need to find someone in the Broadwell-E thread with an actual 6900K....
Quote:


> Originally Posted by *DNMock*
> 
> Correct me if I'm wrong, but haven't AMD chips always performed extremely well on Blender tests as compared to Intel, but get stomped in every other benchmark?


The x264 results were even more in AMD's favor.

Both are extremely well threaded and things AMD has traditionally been good at, but this is still promising.
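The scaling claim above is easy to sanity-check with quick arithmetic. Treating Blender render time as inversely proportional to cores × clock (a rough sketch; the clocks below are assumed all-core speeds, not measured, and the 35.4s reference is the on-stage 6900K figure):

```python
# Rough sanity check: Blender render time ~ 1 / (cores * clock),
# normalized to a stock 6900K at an assumed 3.2GHz all-core clock.
def est_time(cores: int, ghz: float, ref_seconds: float = 35.4,
             ref_cores: int = 8, ref_ghz: float = 3.2) -> float:
    return ref_seconds * (ref_cores * ref_ghz) / (cores * ghz)

t_5820k = est_time(6, 4.3)   # 5820K overclocked to 4.3GHz
t_6900k = est_time(8, 3.2)   # stock 6900K (the reference point)
print(round(t_5820k, 1), round(t_6900k, 1))
```

Under that model a 4.3GHz hexacore lands within a second of the stock octocore, which is why a 54-second result on a 4.3GHz 5820K suggests the downloadable file and the demoed settings don't match.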


----------



## mohiuddin

Guys, I did some tests on my X5650 @ 3.5GHz with that same image and the same 2.78a Blender version.
On the live New Horizon stream, the stock 6900K and the 8C/16T Ryzen at 3.4GHz took around 35 seconds.
My Xeon X5650 @ 3.5GHz took 1 min 27 seconds.
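Times in this thread get posted in mixed formats ("35.38", "1:27", "1:08.21"), so here's a small sketch for normalizing them and comparing against the ~35s figure (that number is read off the stream, not independently verified; the labels below are just examples):

```python
# Normalize "m:ss.xx" or plain-seconds strings and compare against Ryzen's
# reported ~35s demo time (from the stream, not independently verified).
def to_seconds(t: str) -> float:
    secs = 0.0
    for part in t.split(":"):   # works for "35.38", "1:27", "1:08.21"
        secs = secs * 60 + float(part)
    return secs

RYZEN = 35.38  # as read off the presentation screen
for label, t in [("Xeon X5650 @ 3.5GHz", "1:27"), ("stock 5820K", "1:08.21")]:
    s = to_seconds(t)
    print(f"{label}: {s:.2f}s ({s / RYZEN:.2f}x slower)")
```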


----------



## Banko

So I did the Blender test on my work computer, which is a stock 5820K with 64GB of RAM, and while it still had background programs running, it rendered in 1:08.21 on Windows 10 64-bit.


----------



## axiumone

So, anyone with a 6900k? lol


----------



## pony-tail

I do not know about 6700K CPUs, nor do I care!
My 4790Ks are good enough.
But this shows that AMD is at least back in the game (glaring bugs aside), and having AMD producing competitive products is good for everybody.
Might even wake the spIntel engineers from their current slumber.
Will I buy one of the new AMD rigs? If the price is good enough and I can get a Linux distro working on it, I will get one to play with. (I am old, retired and bored.)
Anyway, a competitive AMD is good for all! And it looks like they are once again competitive!

NB: as usual, their demos can be taken with the appropriate grains of salt.


----------



## Insan1tyOne

This is absolutely insane; AMD actually pulled it off, boys: a _full-scale_ comeback into the CPU market. I seriously want to know what Intel is going to do to try to regain their monopoly.

If AMD really starts selling that Ryzen CPU they just demoed for $499, then the prices Intel is charging for comparable CPUs look absolutely criminal. Good on you, AMD! Welcome back!

- Insan1tyOne


----------



## Kuivamaa

Quote:


> Originally Posted by *DNMock*
> 
> Correct me if I'm wrong, but haven't AMD chips always performed extremely well on Blender tests as compared to Intel, but get stomped in every other benchmark?


No, they are very poor performers in Blender, at least on Windows.

http://media.bestofmicro.com/I/4/386860/original/blender.png

In typical multithreaded throughput tests the FX 8350 beats the 2600K, but it gets handily beaten here.


----------



## JJBY

I don't know if they will make this CPU as low a price point as you guys are anticipating.

If they have a low price, Intel will simply lower theirs to match and just not price-gouge as much.

From a marketing perspective this would make no sense, as they finally have a chip that can theoretically compete for high-end market share, and that market has already become accustomed to the huge price tags.

AMD will take profit over affordability any day, as they are a business.

The only reason you all think it will be cheap is because you have become accustomed to AMD's recent marketing strategy.
Until now, the only market play they have had was to appeal to the value side of the market.
The introduction of a competing product on the high end voids that strategy.

Expect top-end RyZen CPUs to be on par with Intel's pricing, guaranteed.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Insan1tyOne*
> 
> This is absolutely insane, AMD actually pulled it off boys, a _full-scale_ comeback into the CPU market.


I would argue that it was the hiring of Jim Keller that actually pulled it off, as he has done for them before.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *JJBY*
> 
> i don't know if they will make this cpu as low of a price point as you guys are anticipating.
> 
> if they have a low price, intel will simply lower theirs to match and just not price gouge as much.
> 
> from a marketing perspective this would make no sense as they finally have a chip that theoretically can compete in the high end market share and this market share has already become accustomed to the huge price tags.
> 
> AMD will take profit any day over affordability as they are a business.
> 
> the only reason why you all think it will be cheap is because you have become accustomed to AMD's marketing strategy as of late.
> For them the only market play they have had recently was to appeal to the value side of the market.
> The introduction of a competing product on the high end now voids this market strategy.
> 
> Expect top end RyZen cpus to be on par with intel's pricing garunteed


Yeah, I am expecting the same, but a $1,000 AMD CPU seems strange. I wouldn't drop $1,000 on any CPU made by AMD just because it seems far too weird, lol.


----------



## Boomer1990

5:13.09 on my A10-5800K overclocked to 4.3; I think it is long past time for an upgrade.


----------



## TokenBC

Something is off with those Blender results, it seems...


----------



## OcCam

Quote:


> Originally Posted by *OcCam*
> 
> Stock i5-2500k @3.3 was 3:02
> 
> She used to do 4.8 24/7 but my motherboard is dying a slow death
> 
> I hope AMD comes thru I need a new motherboard and I dont want to pay thru the nose for old tech.


And the i5-2500K @ 4.8 does the Blender test in 2:09.

Quite an improvement.

Edit: boosted the old girl up to 5.2 and got 1:57.

Shall I keep going......

SUICIDE BLENDER BENCHEZ!!!

Final result: i5-2500K @ 5.3GHz, 1:55.8.

5.4GHz won't complete the test before locking up.

God damn, I love Sandy Bridge!!


----------



## Robenger

Quote:


> Originally Posted by *TokenBC*
> 
> Something is off with those Blender results, it seems...


Why?


----------



## jprovido

Where do I download the Blender test? I want to compare it to my 5820K @ 4.7GHz.

edit:

Finally I can upgrade my CPU! I'm so excited.


----------



## Xuper

here

http://download.amd.com/demo/RyzenGraphic_27.blend


----------



## p4inkill3r

Quote:


> Originally Posted by *TokenBC*
> 
> Something is off with those Blender results, it seems...


Naturally.


----------



## JakdMan

Quote:


> Originally Posted by *jprovido*
> 
> where do I download the blender test? I want to compare to my 5820k @ 4.7ghz


The event page under the video

http://www.amd.com/en-us/innovations/new-horizon


----------



## jprovido

Quote:


> Originally Posted by *Xuper*
> 
> here
> 
> http://download.amd.com/demo/RyzenGraphic_27.blend


thanks +rep


----------



## Nick the Slick

Almost regretting picking up a 6700K, even though I got it dirt cheap from Retail Edge. Oh well, still far more processing power than I know what to do with. GG AMD, glad to see you back! I do still need a GPU upgrade, though, so hopefully Vega will do me good.


----------



## Dimaggio1103

Quote:


> Originally Posted by *p4inkill3r*
> 
> Naturally.


This forum suffers at times from confirmation bias. People refuse to believe anything against Intel or Nvidia at times. I own both a DC i7 and an Nvidia GPU, and I'm excited for AMD to be kickin' tires and lightin' fires.


----------



## TokenBC

Quote:


> Originally Posted by *p4inkill3r*
> 
> Naturally.


A lot of people are getting very different results compared to the machines at the event.

Not saying AMD is pulling anything fishy here. Maybe it comes down to certain settings, like Blameless said.


----------



## CDub07

Quote:


> Originally Posted by *CDub07*
> 
> You will have software provided by Intel or AMD that benchmarks the cpu and set the frequency accordingly. No vcore adjustments or bus clock increases. This cpu can run to that frequency and that's it. PERIOD!!! CPU lottery will have a whole new meaning.


I called it back in July. Overclocking is coming to an end.


----------



## Xuper

A big difference between Ryzen and the 6900K is that Ryzen has 16 "full hardware threads," while Intel has (8 hardware + 8 software) threads.


----------



## sugarhell

Quote:


> Originally Posted by *Xuper*
> 
> Big different between Ryzen and 6900K is that Ryzen does have 16 "Full hardware threads" while Intel ( 8 hardware + 8 software ) threads


----------



## Dimaggio1103

Quote:


> Originally Posted by *TokenBC*
> 
> A lot of people are getting very different results compared to the machines in the event.
> 
> Not saying AMD is pulling anything fishy here. Maybe it comes down to certain settings like Blameless said.


No... no they are not. I'm a firm believer in proof. Show it, or please refrain from making false comments. Respectfully, of course.


----------



## Imouto

Quote:


> Originally Posted by *sugarhell*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Imouto*
> 
> You should check again because it is 00:24.82
> 
> 
> 
> Wrong quote
Click to expand...

And wrong time. Looks like I was checking the time from the event in August.

Sorry Robenger, you're correct.


----------



## Insan1tyOne

Quote:


> Originally Posted by *JJBY*
> 
> i don't know if they will make this cpu as low of a price point as you guys are anticipating.
> 
> if they have a low price, intel will simply lower theirs to match and just not price gouge as much.
> 
> from a marketing perspective this would make no sense as they finally have a chip that theoretically can compete in the high end market share and this market share has already become accustomed to the huge price tags.
> 
> AMD will take profit any day over affordability as they are a business.
> 
> the only reason why you all think it will be cheap is because you have become accustomed to AMD's marketing strategy as of late.
> For them the only market play they have had recently was to appeal to the value side of the market.
> The introduction of a competing product on the high end now voids this market strategy.
> 
> *Expect top end RyZen cpus to be on par with intel's pricing garunteed*


I don't believe this at all, and here is why: AMD has next to _zero_ brand equity when it comes to CPUs, especially performance CPUs. They are basically starting from scratch with RyZen, both because they have been out of the game so long AND because they essentially lied about (grossly overstated) the performance of all the Bulldozer CPUs when they were released. I mean, think about it: AMD hasn't been competitive in the CPU market since the Phenom days (2009-2010), and 6 or 7 years is an absolutely massive amount of time when it comes to microprocessor technology.

AMD needs to price these new RyZen CPUs in an extremely disruptive (<$600) manner, far below Intel; otherwise they will not see anywhere near the user adoption they are banking on. Intel has the current user base and far greater brand equity in the CPU market right now. You know that, AMD knows that, everyone knows that. The pricing scheme of the RyZen CPUs will reflect this.

- Insan1tyOne


----------



## Blameless

Quote:


> Originally Posted by *Robenger*
> 
> Why?


Because the 6900K results are impossibly low for the default settings of the file up for download, run in Blender 2.78a x64 for Windows.
Quote:


> Originally Posted by *p4inkill3r*
> 
> Naturally.


He appears to be correct.
Quote:


> Originally Posted by *Dimaggio1103*
> 
> This forum suffers at times from confirmation bias. People refuse to believe anything against Intel at times, or Nvidia. I own both a DC i7 and an Nvidia GPU and I'm excited for AMD to be kickin' tires and lighting fires.


I think you've misunderstood.

No one is saying the Zen system shown isn't getting 35.38s vs. the 6900K system's 35.44s, but that the file up for download isn't the test they ran, or they haven't listed all the settings they were using.

http://www.overclock.net/t/1601679/broadwell-e-thread/4030#post_25710181

If that was the same benchmark AMD ran, he wouldn't have gotten 42 seconds, he would have gotten 28-30.


----------



## Mahigan

Looks great! So it is faster than a 6900K while consuming less power. Not bad... not bad at all.


----------



## NuclearPeace

What has been shown so far is pretty promising. It seems like Zen has good enough IPC and is a competitive architecture. That being said, we won't really know for sure until people independent of AMD test it.

Unless Zen is sold at fire-sale prices, I'm probably just going to upgrade my LGA 1150 system rather than buy into Zen. Zen will probably offer better value than Intel, but the cost to switch over is too high for me. I would have to buy new DDR4 RAM and a new motherboard.

I can get basically an i7 4790 and 8GB more of DDR3 for under $300. If I go Zen, I'm looking at something like $250 for probably the cheapest 6-core, $80 for the motherboard, and $70 for 16GB of DDR4.

I don't really need a lot of multithreaded performance, and honestly the 4-core "i7" is more than enough for me. A lot of my games depend heavily on single-threaded performance. I won't be overclocking on either platform (it adds too much to the cost), and if Zen is ~Haswell IPC I should stick with Haswell, since it has higher base clocks.


----------



## sammkv

If Zen is really delivering this kind of performance, there's no way this is $499!


----------



## JakdMan

Quote:


> Originally Posted by *Blameless*
> 
> Because the 6900K results are impossibly low for the default settings of the file up for download in Windows Blender 2.78a x64.
> He appears to be correct.
> I think you've misunderstood.
> 
> No one is saying the Zen system shown isn't getting 35.38s vs. the 6900K system's 35.44s, but that the file up for download isn't the test they ran, or they haven't listed all the settings they were using.
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4030#post_25710181
> 
> If that was the same benchmark AMD ran, he wouldn't have gotten 42 seconds, he would have gotten 28-30.


Unless they've changed some settings in the final Blender file they gave us, those settings don't change upon opening the file in Blender (unless, perhaps, some people missed what's right beside the file on the page and are using a grotesquely out-of-date version of Blender, or some crazy internal preferences on their machine that would alter their render).


----------



## DizzlePro

Are we looking at a January release date or just Q1?

What are they doing at CES 2017? Vega?


----------



## Mahigan

Quote:


> Originally Posted by *sammkv*
> 
> If Zen is really delivering this kind of performance no way this is $499!


Do you remember the $299 Radeon HD 4870, when NVIDIA was selling its GTX 280 series for around $800?

$499 is believable. AMD needs market share. They desperately need mind share and market share. So they're focusing on marketing and software lately, as you've no doubt noticed.


----------



## Pawelr98

It's already been over a year since I purchased my 5820K.

But it's nice to see that AMD is trying to do something about Intel's monopoly.

However, I'm shopping for a new GPU, so I'm interested in Vega.


----------



## Dimaggio1103

Quote:


> Originally Posted by *Blameless*
> 
> Because the 6900K results are impossibly low for the default settings of the file up for download in Windows Blender 2.78a x64.
> He appears to be correct.
> I think you've misunderstood.
> 
> No one is saying the Zen system shown isn't getting 35.38s vs. the 6900K system's 35.44s, but that the file up for download isn't the test they ran, or they haven't listed all the settings they were using.
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4030#post_25710181
> 
> If that was the same benchmark AMD ran, he wouldn't have gotten 42 seconds, he would have gotten 28-30.


Ah, OK, I see; good job with the correction. I too would love to see that detail hammered out. Some people in this thread (not all, or even most), however, are discounting it completely like it's nothing new. lol. Like I said, I'm really interested in the neural learning and predictive processing, and the effect users will see over time. AMD has come out swinging, and as most here will tell you, I consistently dog AMD. So for me to be interested, and defending them, says something.
Quote:


> Originally Posted by *Mahigan*
> 
> Do you remember the $299 Radeon HD 4870? when nVIDIA were selling their GTX 280 series for around $800?
> 
> $499 is believable. AMD need market share. They desperately need mind share and market share. So they're focusing on marketing and software lately as you've no doubt noticed.


Well said. A fact many miss. In business, profit margins in the short run don't matter if you fail to take back market share.


----------



## chantruong

Hopefully I can pick up a used, cheap(er) 6900K or 5960X with the release of Zen. If not, having a 5820K can't be too bad.


----------



## Mahigan

The only people interested are the enthusiasts. The average buyer is buying a GeForce and an Intel whatever because that's what he or she was trained to do when they joined the PC Gaming world and left console land.

This isn't the same tech community it once was. Most self described tech enthusiasts don't know and/or don't care what a branch predictor is. They're buying an Intel whatever because that's what they're used to doing.

Sad but true.


----------



## Nenkitsune

Quote:


> Originally Posted by *Mahigan*
> 
> The only people interested are the enthusiasts. The average buyer is buying a GeForce and an Intel whatever because that's what he or she was trained to do when they joined the PC Gaming world and left console land.
> 
> This isn't the same tech community it once was. Most self described tech enthusiasts don't know and/or don't care what a branch predictor is. They're buying an Intel whatever because that's what they're used to doing.
> 
> Sad but true.


Honestly, I wanna see how the midrange ryzen chips run. I wanna see a 4c/8t mid range chip that costs less than the 6600k and can match it (or nearly match it) clock for clock.

I like my 6600k, it overclocks like crazy and runs games just fine, but I really want AMD to give intel some real competition like back in the Athlon 64 days.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Mahigan*
> 
> Do you remember the $299 Radeon HD 4870? when nVIDIA were selling their GTX 280 series for around $800?
> 
> $499 is believable. AMD need market share. They desperately need mind share and market share. So they're focusing on marketing and software lately as you've no doubt noticed.


Yeah, that is how I feel: if they do not absolutely beat Intel at everything, they will NOT be seeing any of my money. It must match or beat Intel's performance and cost at least a third less; otherwise I am sticking with Intel mainstream. I doubt it will be $500 for the 8-core Zen, but that is the price it "should" be for them to have any hope of gaining real market share that truly matters.

My guess is $899, lol....


----------



## caswow

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yeah that is how I feel, if they do not absolutely beat Intel at everything they will NOT be seeing any of my money. It must match or beat Intel performance and cost at least a 3rd less otherwise I am sticking to Intel mainstream. I doubt it will be $500 for the 8 Core Zen, but that is the cost it "should" be in order for them to have any hope of gaining real market share that truly matters.
> 
> My guess is $899, lol....


wth why are you so negative towards amd?


----------



## NuclearPeace

I want to buy Zen but when I can just buy a LGA 1150 i7 on the cheap vs buying Zen CPU, motherboard, and RAM, it becomes a lot harder for me to justify a switch to AMD.


----------



## JJBY

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yeah that is how I feel, if they do not absolutely beat Intel at everything they will NOT be seeing any of my money. It must match or beat Intel performance and cost at least a 3rd less otherwise I am sticking to Intel mainstream. I doubt it will be $500 for the 8 Core Zen, but that is the cost it "should" be in order for them to have any hope of gaining real market share that truly matters.
> 
> My guess is $899, lol....


Yeah, I see it around that range too.

$500 is just too cheap; if I were an uninformed consumer, I would be afraid I was getting an inferior product.


----------



## Imouto

Switched to Windows 7 and ran the test.

i7 4790K stock:

Linux: 1:01.91
Windows 7 64-bit: 1:25.49


----------



## Evil Penguin

27.89s Blender result with 5960X @ 4.5 GHz (Arch Linux).

@ 3.4 GHz - 36.70s
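Those two numbers line up almost perfectly with linear clock scaling, which is a handy sanity check for any reported overclocked result. A quick check (the times and clocks are from the post above; the assumption that an all-core Blender render scales roughly inversely with core clock is mine):

```python
# Check how closely the two reported 5960X times track linear clock scaling.
# Assumption (mine): an all-core Blender render scales ~inversely with clock.

t_45 = 27.89   # seconds @ 4.5 GHz (reported above)
t_34 = 36.70   # seconds @ 3.4 GHz (reported above)

predicted_t_45 = t_34 * (3.4 / 4.5)               # scale the stock time by clock ratio
error_pct = abs(predicted_t_45 - t_45) / t_45 * 100

print(f"predicted @ 4.5 GHz: {predicted_t_45:.2f} s")  # ~27.73 s
print(f"error vs measured:   {error_pct:.1f} %")       # well under 1%
```

An error under 1% suggests the renderer is compute-bound and the two results are mutually consistent.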


----------



## jprovido

47.9 seconds: 5820K @ 4.7GHz, 32GB 3200MHz RAM


----------



## Nenkitsune

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Mahigan*
> 
> Do you remember the $299 Radeon HD 4870? when nVIDIA were selling their GTX 280 series for around $800?
> 
> $499 is believable. AMD need market share. They desperately need mind share and market share. So they're focusing on marketing and software lately as you've no doubt noticed.
> 
> 
> 
> Yeah that is how I feel, if they do not absolutely beat Intel at everything they will NOT be seeing any of my money. It must match or beat Intel performance and cost at least a 3rd less otherwise I am sticking to Intel mainstream. I doubt it will be $500 for the 8 Core Zen, but that is the cost it "should" be in order for them to have any hope of gaining real market share that truly matters.
> 
> My guess is $899, lol....
Click to expand...

I won't bother with AMD again if they can't match Intel. I really wanna see some overclocking stats for them once they come out. That might push me towards an early upgrade from my current system. If they can clock as well as Intel, match (or beat) them in performance, and do it at a price point that makes them a lot more enticing than Intel, then I might go back to AMD.

My last AMD system was a 940BE. Nothing from AMD really looked like a good enough upgrade from it. I ended up going Intel because the 6600K was cheap for the performance it provides.

I really do hope the price point is $499, but I'm not really expecting the 8c/16t chip to be that cheap.


----------



## Dimaggio1103

Quote:


> Originally Posted by *Mahigan*
> 
> The only people interested are the enthusiasts. The average buyer is buying a GeForce and an Intel whatever because that's what he or she was trained to do when they joined the PC Gaming world and left console land.
> 
> This isn't the same tech community it once was. Most self described tech enthusiasts don't know and/or don't care what a branch predictor is. They're buying an Intel whatever because that's what they're used to doing.
> 
> Sad but true.


Yes sir, I have seen the same thing. I taught my two brothers about PC gaming and overclocking. I initially told them Intel was better, as it was at the time (still is, for now), and that Nvidia had better frametimes. Most of that is moot now. When I announced to them that I was probably going AMD this time for my GPU, they proceeded to laugh at me (even though they were taught by me). smh... I had trouble even trying to reason with them about anything AMD. Boggles my mind. It's like people get a taste of being a PC enthusiast and then proceed to turn off their brains.


----------



## Benny89

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yeah that is how I feel, if they do not absolutely beat Intel at everything they will NOT be seeing any of my money. It must match or beat Intel performance and cost at least a 3rd less otherwise I am sticking to Intel mainstream. I doubt it will be $500 for the 8 Core Zen, but that is the cost it "should" be in order for them to have any hope of gaining real market share that truly matters.
> 
> My guess is $899, lol....


If an Intel CPU is $1000, the AMD CPU that matches it needs to be $800-850.
If an Intel CPU is $800, the AMD CPU that matches it needs to be $650-700.
If an Intel CPU is $350, the AMD CPU that matches it needs to be $200-250.

More or less, of course...

Only then do I see them making a HUGE comeback and getting great market share.

That might not be the best strategy for EARNING maximum profit, but it is best for their situation, where what they need first of all is a *comeback/market share/fans back >>>>> income*.

After they are back in the top game, they can price future products higher.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *caswow*
> 
> wth why are you so negative towards amd?


I'm actually an AMD fan, and I ran PlanetAMD64 for over 10 years (head admin); however, I was employed by Intel even longer. I just don't want to see any more price gouging. My X5650 runs just fine, and I will stick with it until there is something THREE times more powerful at or under $500. I also want to see AMD have an answer to Optane. So I am not negative towards AMD; I am just saying a CPU over $500 is not for me, period. $1000 for a CPU is just an evil rip-off.


----------



## caswow

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> I'm actually an AMD fan, and I ran PlanetAMD64 for over 10 years (head admin); however, I was employed by Intel even longer. I just don't want to see any more price gouging. My X5650 runs just fine, and I will stick with it until there is something THREE times more powerful at or under $500. I also want to see AMD have an answer to Optane. So I am not negative towards AMD; I am just saying a CPU over $500 is not for me, period. $1000 for a CPU is just an evil rip-off.


ok i might have misunderstood you


----------



## Pro3ootector

1:12.94 is the score of my Xeon E5-2670, 3.1GHz, 8C/16T.

Just remember, people, that AMD could be using its *"Smart Prefetch" technology and running this test multiple times, so the CPU learned the data it needs and got better performance.*


----------



## Blameless

Quote:


> Originally Posted by *JakdMan*
> 
> Unless they've changed some settings in the final blender file they gave us, those setting don't change upon opening files in blender. (perhaps unless some people missed what's right beside the file on the page and are using a grotesquely out of date version of blender or some crazy internal preferences on their machine that would alter their render)


The file supplied, run with Blender 2.78a x64 for Windows, does not produce the results AMD is getting on their 6900K. It's not even close.

I noticed this the instant I ran the test on my 5820K. Core scaling would need to be much better than 1.00 to get the results AMD is getting, if they used this file.

This was confirmed by someone who ran the test on a 4.2GHz 6900K (which is clocked 800MHz, or ~25%, higher than the one in AMD's demo) and got ~42 seconds, where the demo system clocked in at 35.44.
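Blameless's clock-scaling argument can be sanity-checked with a bit of arithmetic (the times and clocks are the figures quoted in this thread; the assumption of roughly linear scaling with core clock is mine):

```python
# Rough sanity check of the clock-scaling argument. Assumption (mine):
# Blender's all-core CPU render time scales ~inversely with core clock.

measured_4_2 = 42.0   # seconds, 6900K @ 4.2 GHz (user-reported, Windows file)
demo_time = 35.44     # seconds, 6900K @ 3.4 GHz in AMD's demo

# Predicted stock-clock (3.4 GHz) time, scaled from the 4.2 GHz measurement:
predicted_stock = measured_4_2 * (4.2 / 3.4)
print(f"predicted 3.4 GHz time: {predicted_stock:.1f} s")  # ~51.9 s

# The demo's 35.44 s is ~16 s faster than linear scaling allows, which is
# why the downloadable Windows file can't be the workload shown on stage.
discrepancy = predicted_stock - demo_time
print(f"discrepancy vs demo: {discrepancy:.1f} s")  # ~16.4 s
```

In other words, a stock 6900K running the public file should land around 52 seconds, not 35.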


----------



## IRobot23

Hopefully RYZEN/Summit Ridge processors will be priced very competitively.
Since they won't have an iGPU, AMD should be cheaper.

So I would guess (unlocked CPUs):
- 4C/8T at $250
- 6C/12T at $350
- 8C/16T at $450-500

For the lower-end parts, here might be the best budget CPU ever:
- 2C/4T might not exist
- 4C/4T at $150

I hope AM4 processors will be even cheaper; something like a 6C/12T for $250-300 with an unlocked multiplier would be a great deal.

Server parts might be much more expensive.


----------



## Imouto

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *JakdMan*
> 
> Unless they've changed some settings in the final blender file they gave us, those setting don't change upon opening files in blender. (perhaps unless some people missed what's right beside the file on the page and are using a grotesquely out of date version of blender or some crazy internal preferences on their machine that would alter their render)
> 
> 
> 
> The file supplied, run with Blender 2.78a x64 for Windows, does not produce the results AMD is getting on their 6900K. It's not even close.
> 
> I noticed this the instant I ran the test on my 5820K. Core scaling would need to be much better than 1.00 to get the results AMD is getting, if they used this file.
> 
> This was confirmed by someone who ran the test on a 4.2GHz 6900K (which is clocked 800MHz, or ~25%, higher than the one in AMD's demo) and got ~42 seconds, where the demo system clocked in at 35.44.
Click to expand...

Oh man, Blameless at full blast conspiracy mode again.

You're on Windows 7 if your signature is right.
That guy just told you his time and it is a confirmation? Seriously...


----------



## epic1337

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> I also want to see AMD have an answer to Optane.


Unless AMD expands towards manufacturing or designing their own NAND/RAM, Optane competition is better left to the other NAND/RAM manufacturers.

Optane is a NAND/RAM product and is not within AMD's field; just because Intel makes it doesn't mean AMD needs to make one as well. I mean, hey, Intel makes beautiful NIC controllers, the best in the field, and I don't see AMD having an answer to those either.


----------



## HITTI

I don't get this program.

How do I benchmark?


----------



## Evil Penguin

Quote:


> Originally Posted by *HITTI*
> 
> I dont get this program.
> 
> How do I benchmark?
> 
> 
> 
> Spoiler: Warning: Spoiler!


Press F12


----------



## Xuper

Download file from AMD website :

http://download.amd.com/demo/RyzenGraphic_27.blend

Then open the file with Blender and run it.
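For anyone who would rather time the render from a terminal instead of pressing F12, Blender can also render a .blend file headless with `-b`. Something like the following should work (the URL is the one from the post above; using `time` and rendering frame 1 via `-f 1` are my assumptions about how to reproduce the demo, not AMD's stated method):

```shell
# Fetch AMD's demo scene.
wget http://download.amd.com/demo/RyzenGraphic_27.blend

# Render frame 1 in background mode ("-b" skips the UI entirely, so the
# render isn't competing with the viewport). "time" reports wall-clock
# duration; Blender also prints its own per-frame "Time:" line when done.
time blender -b RyzenGraphic_27.blend -f 1
```

Make sure the `blender` on your PATH is 2.78a, since results across versions aren't comparable.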


----------



## Blameless

Quote:


> Originally Posted by *Imouto*
> 
> Oh man, Blameless at full blast conspiracy mode again.


Again? I can't recall the last time I claimed a conspiracy about anything, and I'm certainly not doing so now.

Can you please leave any personal disagreements you may have with me in the threads where they crop up? They're entirely off topic here.
Quote:


> Originally Posted by *Imouto*
> 
> You're on Windows 7 if your signature is right.


Irrelevant to Blender times. Windows XP x64 will give virtually the same time as Windows 10...and yes, I've tried it.
Quote:


> Originally Posted by *Imouto*
> 
> That guy just told you his time and it is a confirmation? Seriously...


Yes. It's evidence enough to corroborate what I had already discovered and ruled out some mistake on my part.


----------



## MadGoat

Quote:


> Originally Posted by *HITTI*
> 
> I dont get this program.
> 
> How do I benchmark?


Drag RyzenGraphic_27.blend into Blender, hit F11 then F12, and sit back...


----------



## HITTI

Quote:


> Originally Posted by *Evil Penguin*
> 
> Press F12


It does nothing.


----------



## }SkOrPn--'

Wow, my X5650 is 1 min 33 seconds. What were the Zen results again?


----------



## JJBY

It would be nice, though, if AMD were top dog again like in the old Athlon/Duron days gone by.









I would LOVE to support AMD if they are close in performance just to give them more market share


----------



## Evil Penguin

Quote:


> Originally Posted by *HITTI*
> 
> It does nothing.


Once you open the file from AMD in Blender, press F12.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *JJBY*
> 
> It would be nice, though, if AMD were top dog again like in the old Athlon/Duron days gone by
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I would LOVE to support AMD if they are close in performance just to give them more market share


I want to support them again too. But unless I find a reason for 8 cores, I will probably go with the 6 core version.


----------



## Coydog

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Wow, my X5650 is 1 min 33 seconds. What were the Zen results again?


I think like 52/54/56 seconds or something like that.


----------



## HITTI

Quote:


> Originally Posted by *MadGoat*
> 
> Drag RyzenGraphic_27.blend into blender, hit f11 then f12 and sit back...


Thank you.


----------



## Blameless

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Wow, my X5650 is 1 min 33 seconds. What were the Zen results again?


35.38, but it's starting to look like the AMD demo tests were run in Linux.


----------



## Evil Penguin

Quote:


> Originally Posted by *Blameless*
> 
> 35.38, but it's starting to look like the AMD demo tests were run in Linux.


If that was the case, yeah it edged out my 5960X (3.4 GHz - 36.70s).


----------



## atomicmew

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Wow, my X5650 is 1 min 33 seconds. What were the Zen results again?


54 seconds.


----------



## f1LL

My [email protected] on Win10 64bit took 2.23mins


----------



## kd5151

I just rewatched the stream and noticed a couple of things. Both AMD and Intel scored 35 seconds in Blender, which has been confirmed by you guys with even better eyes than mine.

When running Handbrake, they have Task Manager, Notepad, and Sticky Notes or something running in the background. In Task Manager it says "AMD Eng Sample", then "Summit Ridge", but Summit Ridge is in a different window and is blocking the full eng-sample name.

Lastly, there was an FPS counter in BF1 in the upper right-hand corner, but it got cut off/cropped in the live stream???


----------



## Imouto

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *}SkOrPn--'*
> 
> Wow, my X5650 is 1 min 33 seconds. What were the Zen results again?
> 
> 
> 
> 35.38, but it's starting to look like the AMD demo tests were run in Linux.
Click to expand...

Dude. Have you watched the stream? It is a Win 10 machine alright.


----------



## Coydog

Quote:


> Originally Posted by *Blameless*
> 
> 35.38, but it's starting to look like the AMD demo tests were run in Linux.


Well, if you go and look at the rebroadcast, fast-forward to 32:33 mark you will see the desktop of everything. You will also see they are running it under Windows 10.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Blameless*
> 
> 35.38, but it's starting to look like the AMD demo tests were run in Linux.


Yeah, OK, that makes more sense now. I was about to say, "Wow, Zen, you're an out-of-this-world beast," but it's looking more down-to-earth now. Still really good, though, and with decent pricing it puts them back in the game. If Zen can do 30 seconds at 4GHz, then call me impressed.

I might throw on Ubuntu real quick and do the tests now.


----------



## S.M.

Stocks went absolutely bananas today.


----------



## Coydog

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yeah OK that makes more sense now. I was about to say Wow Zen your a out of this world beast, but its looking more down to Earth now. Still really good though and with decent pricing puts them back into the game. If Zen can do 30 seconds at 4Ghz then call me impressed.
> 
> I might throw on Ubuntu real quick and do the tests now.


As I pointed out above, it's run in Windows 10.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Coydog*
> 
> Well, if you go and look at the rebroadcast, fast-forward to 32:33 mark you will see the desktop of everything. You will also see they are running it under Windows 10.


Hmmmm


----------



## Nenkitsune

So there's the real question: why did AMD's test go so fast when NO ONE can match them using comparable systems (that is, a 6900K at stock speeds)?


----------



## prznar1

Quote:


> Originally Posted by *Pro3ootector*
> 
> Yup, a DX12 game test.


It's not about that. A DX12 benchmark is good, even better than 11; the sooner DX11 dies, the better. But there was no FPS meter, and the shots were not the same. Same area, but not the same camera angle, etc. Still, there is lots for us to see. Good or bad, doesn't matter. Just lots.


----------



## Coydog

Quote:


> Originally Posted by *prznar1*
> 
> Its not about that. DX12 bench is good, even better than 11. The sooner dx11 dies the better. But there was no fps meeter
> 
> 
> 
> 
> 
> 
> 
> and shots were not the same. Same area, but not the same camera angle etc. Still, there is lots of us to see. Good or bad, doesnt matter. Just lots.


Well, it looked like it was one guy controlling both computers. Since it wasn't a benchmark and more of a live game, it's hard to get all the angles right.


----------



## Pro3ootector

Quote:


> Originally Posted by *prznar1*
> 
> Its not about that. DX12 bench is good, even better than 11. The sooner dx11 dies the better. But there was no fps meeter
> 
> 
> 
> 
> 
> 
> 
> and shots were not the same. Same area, but not the same camera angle etc. Still, there is lots of us to see. Good or bad, doesnt matter. Just lots.


Battlefield 1 in DX12 won't show full CPU potential; they say a Titan X gets around 60-70 fps while this API cripples its performance, whereas in DX11 it reaches around 85 fps.


----------



## prznar1

Quote:


> Originally Posted by *Coydog*
> 
> Well, looked like it was one guy controlling both computers. Since it wasn't a benchmark and more of a live game it's hard to get all the angles right.


oh that makes sense now. Thx for saying that.


----------



## }SkOrPn--'

Yeah, the event appears highly stage-managed for the layman who doesn't dig much deeper. Can't wait to see third parties get hold of the hardware; then we will start to see a clearer picture, of course.


----------



## Blameless

Quote:


> Originally Posted by *Imouto*
> 
> Dude. Have you watched the stream? It is a Win 10 machine alright.


I didn't say it wasn't a Windows 10 machine. I said it's looking like the benchmark was run in Linux, because the Windows times are impossible for the 6900K.

At least two people with eight-core Intels are confirming that the 6900K results are too low to be possible in Windows with the file presented.

I'm not sure why you are so skeptical.

Do you really think a 3.4GHz 6900K is nearly three times as fast as 4GHz+ quad core i7s of similar architecture? On what dimension does that make sense?
Quote:


> Originally Posted by *Coydog*
> 
> Well, if you go and look at the rebroadcast, fast-forward to 32:33 mark you will see the desktop of everything. You will also see they are running it under Windows 10.


I saw that before I made my post.

Either it's a replay of a Linux test, a window from a Linux VM, or not the test they have for download.


----------



## axiumone

Quote:


> Originally Posted by *S.M.*
> 
> Stocks went absolutely bananas today.


What stocks are you looking at? AMD hardly moved.


----------



## MadGoat

Quote:


> Originally Posted by *Coydog*
> 
> Well, if you go and look at the rebroadcast, fast-forward to 32:33 mark you will see the desktop of everything. You will also see they are running it under Windows 10.


If anything, it looks like the Intel machine could have been run in Linux? Some things look odd...

Why do they have the closeup of the Intel start bar cut off? And look at the top right corner; that doesn't look like a minimize bar... kinda looks like a bubble button?

I dunno, just throwing it out there...


----------



## Coydog

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yeah the event appears highly manipulated for the laymen who doesn't dig much deeper. Can't wait to see 3rd parties get hold of the hardware, then we will start to see a clearer picture of course.


Same here. I have high hopes, since I remember the days of $300 BUDGET CPUs from Intel. I really want to get into more enthusiast builds and such, and would love an upgrade for my i5-6500. The tech they have just fascinates me, and I'm curious whether the deep learning can affect gameplay.

Too many people want AMD to surpass Intel in one shot. The problem is, AMD was really hindered by Bulldozer. In baseball, it's sort of like being down 6 or 7 runs with 2 outs left in the inning and expecting/demanding the team take the lead that inning.


----------



## formula m

*This thread is forever old; where is the new one, with all the details..?*

Anyone..?

Where is all the information you people are discussing..? There are zero updates in the OP.


----------



## Coydog

Quote:


> Originally Posted by *MadGoat*
> 
> if anything it looks like the intel could have been run in linux? some things look odd...
> 
> 
> 
> 
> Why do they have the closeup of the intel start bar cutoff? and look top right corner, that doeant look like a minimize bar... kinda looks like a bubble button?
> 
> I donno, just throwing it out there....


OK, went back to 32:14 of the stream where they showed the 2 monitors. If you look at the far right one, though it is hard to make out, the taskbar area at the bottom of the desktop suggests the Intel CPU is also running Windows 10. One could very well be running x32 and the other x64 as far as I know, but both do appear to be Windows 10.


----------



## PostalTwinkie

Quote:


> Originally Posted by *MadGoat*
> 
> if anything it looks like the intel could have been run in linux? some things look odd...
> 
> 
> 
> 
> Why do they have the closeup of the intel start bar cutoff? and look top right corner, that doeant look like a minimize bar... kinda looks like a bubble button?
> 
> I donno, just throwing it out there....


The bar being pointed at with a big red "?" is the Blender bar at the bottom of Blender; the Blender window is just moved up a little and the actual taskbar is itself minimized. You can see the damn Blender icons right on the bar "in question".


----------



## Nenkitsune

Quote:


> Originally Posted by *MadGoat*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Coydog*
> 
> Well, if you go and look at the rebroadcast, fast-forward to 32:33 mark you will see the desktop of everything. You will also see they are running it under Windows 10.
> 
> 
> 
> 
> 
> if anything it looks like the intel could have been run in linux? some things look odd...
> 
> 
> 
> 
> Why do they have the closeup of the intel start bar cutoff? and look top right corner, that doeant look like a minimize bar... kinda looks like a bubble button?
> 
> I donno, just throwing it out there....
Click to expand...

That's exactly what I noticed too. Like, ***? why are they masking that area?

Something is fishy for sure.

also, it looks like the task bar isn't just minimized, as the blender bar on the bottom is actually cut off as well partially.


----------



## Kuivamaa

Is it visible or stated anywhere which Blender build AMD used here? It is quite possible we are comparing apples to oranges.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Nenkitsune*
> 
> That's exactly what I noticed too. Like, ***? why are they masking that area?
> 
> Something is fishy for sure.


EDIT:

It is so sad that the anti AMD echo-chamber has gotten so loud and full of crap that it sits here and pats itself on the back for finding something fishy. Yet can't stop long enough to really look at what they saw.


----------



## Nenkitsune

Quote:


> Originally Posted by *Kuivamaa*
> 
> Is it visible or stated anywhere which Blender build AMD used here? It is quite possible we are comparing apples to oranges.


if you go to their website they tell you what version of blender you need.


----------



## oxidized

I don't understand why everybody is doing such an incredible job of trying to prove something from screenshots of a YouTube video. There's nothing to prove here, guys; we just need to wait for the real thing to come out and see how it performs. There surely are some fishy things in the showcase, but not enough to prove anything in the end. AMD has already proved to be not really trustworthy in terms of graphs and slides, so...


----------



## formula m

Quote:


> Originally Posted by *Coydog*
> 
> Ok, went back to 32:14 of the stream where they showed the 2 monitors. If you look a the far right one, though it is hard to make out, desktop taskbar area at the bottom looks like the Intel CPU is also running Windows 10. One could very well be running x32 and the other x64 as far as I know. But both do appear to be Windows 10.


Where is the link to the STREAM...?


----------



## MadGoat

Quote:


> Originally Posted by *PostalTwinkie*
> 
> The bar that is being pointed at in with a big red "?" is the Blender bar at the bottom of Blender, the Blender window is just moved up a little and the actual task bar is minimized itself. You can see the damn Blender icons right on the bar "in question".


yeah, I agree it's Blender... but the whole view is "zoomed in" slightly to cover the start bar (which is there in the video footage of the actual monitor), and zoomed in on the right to hide the window controls (close, maximize, and minimize), but whatever you can see does not look like a minimize bar...

like I said, this is just observation... I couldn't care less either way.


----------



## JakdMan

Quote:


> Originally Posted by *MadGoat*
> 
> if anything it looks like the intel could have been run in linux? some things look odd...
> 
> 
> 
> 
> Why do they have the closeup of the intel start bar cut off? And look at the top right corner, that doesn't look like a minimize bar... kinda looks like a bubble button?
> 
> I donno, just throwing it out there....


Well if that is the case, and as people say, Blender runs better on Linux, then RYZEN may be doing a smidgen better than they sold themselves, then.


_If_, of course


----------



## PostalTwinkie

Quote:


> Originally Posted by *MadGoat*
> 
> yeah i agree its blender... but the whole view is "zoomed in" slightly to cover the start bar (which is there in the video footage of the actual monitor). and zoomed in on the right to hide the window controls (close maximize and minimize) but can see whatever you can see does not look like a minimize bar...
> 
> like i said, this is just observation... I could care less either way.


Nothing is zoomed in or being hidden. Blender is being run in a window and sits slightly higher on the display on the Intel system, while the Start Bar auto-hid itself.

There is nothing fishy going on.

People are getting excited over some mysterious bar at the bottom of the screen......and it is the Blender bar!!


----------



## Nenkitsune

Quote:


> Originally Posted by *oxidized*
> 
> I don't understand why everybody is doing such an incredible job of trying to prove something from screenshots of a YouTube video? There's nothing to prove here, guys; we just need to wait for the real thing to come out and see how it performs. There surely are some fishy things in the showcase, but not enough to prove anything in the end. AMD has already proved to be not really trustworthy in terms of graphs and slides, so...


Well, the reason is because they're showing benchmark results that aren't possible and we want to know how they did it.


----------



## sugarhell

It seems fine?

http://www.overclock.net/t/1601679/broadwell-e-thread/4040#post_25710337

Wrong, I forgot that the 6950X is a 10-core. My bad.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Nenkitsune*
> 
> Well, the reason is because they're showing benchmark results that aren't possible and we want to know how they did it.


Not possible?!?! Based off what?!


----------



## Nenkitsune

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Quote:
> 
> 
> 
> Originally Posted by *MadGoat*
> 
> yeah i agree its blender... but the whole view is "zoomed in" slightly to cover the start bar (which is there in the video footage of the actual monitor). and zoomed in on the right to hide the window controls (close maximize and minimize) but can see whatever you can see does not look like a minimize bar...
> 
> like i said, this is just observation... I could care less either way.
> 
> Nothing is zoomed in or being hidden. Blender is being ran in a Window and is slightly higher on the display on the Intel system, while the Start Bar auto-hid itself.
> 
> There is nothing fishy going on.
> 
> People are getting excited over some mysterious bar at the bottom of the screen......and it is the Blender bar!!
Click to expand...



I don't know about you, but that blender bar looked like it's cut in half. Auto hiding the task bar wouldn't cut it in half.


----------



## Nenkitsune

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> Well, the reason is because they're showing benchmark results that aren't possible and we want to know how they did it.
> 
> Not possible?!?! Based off what?!
Click to expand...

Using THEIR PROVIDED FILE it's not possible to obtain their scores in windows 10 (that is, using a 6900k at its stock clock speeds)


----------



## Blameless

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Not possible?!?! Based off what?!


All the scores from the test they have for download not matching what the 6900K in the demo is getting.


----------



## SoloCamo

Just did the test provided...

Stock 4790k (turbo locked to 4.4ghz) pulled a 1:18. Sigh.
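For anyone else wanting to run the same test, the scene can be rendered headless from a terminal and timed. A sketch, with the caveat that `ryzen_demo.blend` is a placeholder name (use whatever the file in AMD's download is actually called) and that you should use the Blender version AMD specifies:

```shell
# Time a headless render of frame 1 of the demo scene.
# "ryzen_demo.blend" is a placeholder file name, not the real one.
time blender --background ryzen_demo.blend --render-frame 1
```

Running in background mode avoids the viewport overhead, so it's a slightly fairer comparison than rendering from the GUI.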


----------



## JakdMan

Quote:


> Originally Posted by *Nenkitsune*
> 
> I don't know about you, but that blender bar looked like it's cut in half. Auto hiding the task bar wouldn't cut it in half.


..... If the window was crudely resized (i.e. dragged by the mouse as opposed to pressing the full-screen button) things DO get cut off.....

_Edit:_ I can't believe I'm even humoring this silly detective game


----------



## S.M.

Quote:


> Originally Posted by *axiumone*
> 
> What stocks are you looking at? Amd hardly moved.


There were 3 HUGE shorts and about a dozen huge buys. There was an order for 1.5M shares and another for 950K.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *JakdMan*
> 
> ..... If the window was crudely resized (i.e. dragged by the mouse as opposed to pressing the full-screen button) things DO get cut off.....


Precisely this, that cut off bar means nothing.


----------



## oxidized

Quote:


> Originally Posted by *Nenkitsune*
> 
> Well, the reason is because they're showing benchmark results that aren't possible and we want to know how they did it.


Whatever, but it's starting to sound a bit stupid; showcases mean nothing, and what they showed today was really nothing new.


----------



## Nenkitsune

Quote:


> Originally Posted by *SoloCamo*
> 
> Just did the test provided...
> 
> Stock 4790k (turbo locked to 4.4ghz) pulled a 1:18. Sigh.


my 6600K clocked at 4.7GHz did it in 1:39.35 lol. Apparently I'm over a minute slower than the benchmarks they showed.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Nenkitsune*
> 
> Using THEIR PROVIDED FILE it's not possible to obtain their scores in windows 10 (that is, using a 6900k at its stock clock speeds)


You can use their provided file, but you don't have their exact system with its exact installation; software, services, and all. In other words, you are attempting a benchmark in a dirty OS environment.

Anyone worth their salt, with any time in the industry, knows there can be significant (you are seeing minor) performance differences depending on how clean a system's OS environment is.
Quote:


> Originally Posted by *Blameless*
> 
> All the scores from the test they have for download not matching what the 6900K in the demo is getting.


See the above, you should damn well know it.
Quote:


> Originally Posted by *JakdMan*
> 
> ..... If the window was crudely resized (i.e. dragged by the mouse as opposed to pressing the full-screen button) things DO get cut off.....


Get out of here with your logic!


----------



## }SkOrPn--'

Quote:


> Originally Posted by *oxidized*
> 
> Whatever, but it's starting to sound a bit stupid; showcases mean nothing, and what they showed today was really nothing new.


Well if the results are even remotely close, it sure is new for AMD, lol.... No ifs, ands, or buts about that...


----------



## Newwt

Look at the armchair investigators. Intel fanboys must be sweating bullets.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Newwt*
> 
> Look at the arm chair investigators. Intel fan boys must be sweating bullets


They are pooping bricks.

Instead of people being excited with logical skepticism, they go to the complete other end and start making fantastic conspiracy theories.

Quote:


> Originally Posted by *S.M.*
> 
> There were 3 HUGE shorts and about a dozen huge buys. There was an order for 1.5M shares and another for 950K.


Jim Keller cashed his AMD paycheck and is topping off the Portfolio.


----------



## axiumone

Quote:


> Originally Posted by *S.M.*
> 
> There were 3 HUGE shorts and about a dozen huge buys. There was an order for 1.5M shares and another for 950K.


Oh wow, that's interesting stuff. I was just looking at the price. Why didn't the price have a larger swing if the volume was so massive?


----------



## Nenkitsune

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> Using THEIR PROVIDED FILE it's not possible to obtain their scores in windows 10 (that is, using a 6900k at its stock clock speeds)
> 
> 
> 
> You can use their provided file, but you don't have their exact system with exact installation; software, services, and all. In other words, you are attempting a benchmark test in a dirty OS environment.
> 
> Anyone worth their salt, and any time in the industry, knows there can be significant ( you are seeing minor) performance differences depending on how clean an OS environment is on a system.
Click to expand...

so you're saying a 10 second difference in completion times using a comparable system is normal? cause that doesn't sound normal to me. that's something like a 15% difference with the times they were clocking.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Newwt*
> 
> Look at the arm chair investigators. Intel fan boys must be sweating bullets
> 
> 
> 
> They are pooping bricks.
> 
> Instead of people being excited with logical skepticism, they go to the complete other end and start making fantastic conspiracy theories.
Click to expand...

Honestly I just wanna know why there's such a big difference in the bench times users are seeing vs what they obtained. a 15% difference in the completion time is pretty significant in my opinion and doesn't seem to make sense as far as it being a clean vs dirty environment.


----------



## AmericanLoco

Why would AMD purposely make the Intel system faster than it would normally be?


----------



## Removed1

Maybe it is the same test as the one you download, but the length or the difficulty is not set the same.
You don't have their test settings.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Nenkitsune*
> 
> so you're saying a 10 second difference in completion times using a comparable system is normal? cause that doesn't sound normal to me. that's something like a 15% difference with the times they were clocking.


No, a 10 second difference isn't surprising or crazy on a "comparable" system, because your system really isn't comparable.

You are entirely underestimating how differently a system can run based on what boxes are checked on or off, what software is installed, what services are running, etc.

By the way, your i5 6600K is not even close to being comparable to RyZen's flagship. So your 10 seconds is even more understandable.


----------



## S.M.

Quote:


> Originally Posted by *axiumone*
> 
> Oh wow, that's interesting stuff. I was just looking at the price. Why didn't the price have a larger swing if the volume was so massive?


There's plenty of people selling.


----------



## Blameless

Quote:


> Originally Posted by *SoloCamo*
> 
> Just did the test provided...
> 
> Stock 4790k (turbo locked to 4.4ghz) pulled a 1:18. Sigh.


Does it make sense to you that double your core count at a third slower clocks is coming in at under half your render time?
Quote:


> Originally Posted by *oxidized*
> 
> Whatever, but it's starting to sound a bit stupid; showcases mean nothing, and what they showed today was really nothing new.


I still want to be able to compare scores and I cannot do that with the file provided unless they were using custom test settings they haven't revealed.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> You can use their provided file, but you don't have their exact system with exact installation; software, services, and all. In other words, you are attempting a benchmark test in a dirty OS environment.
> 
> Anyone worth their salt, and any time in the industry, knows there can be significant ( you are seeing minor) performance differences depending on how clean an OS environment is on a system.
> See the above, you should damn well know it.


Postal, you are simply incorrect here. The test environment (with similar OS and Blender version) shouldn't be responsible for a 30%+ benchmark differential.


----------



## oxidized

The only thing I see here is that some people are really easily fooled by what they see in showcases like this one.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Blameless*
> 
> Does it make sense to you that double your core count at a third slower clocks is coming in at under half your render time?
> I still want to be able to compare scores and I cannot do that with the file provided unless they were using custom test settings they haven't revealed.
> Postal, you are simply incorrect here. The test environment (with similar OS and Blender version) shouldn't be responsible for a 30%+ benchmark differential.


I am wrong that an 8 core 16 thread 14nm BRAND SPANKING NEW processor wouldn't be 30% faster than a quad core i5 from Intel?


----------



## Imouto

Quote:


> Originally Posted by *Nenkitsune*
> 
> Quote:
> 
> 
> 
> Originally Posted by *SoloCamo*
> 
> Just did the test provided...
> 
> Stock 4790k (turbo locked to 4.4ghz) pulled a 1:18. Sigh.
> 
> 
> 
> my 6600k clocked at 4.7ghz did it in 1:39.35 lol. apparently I'm over a minute slower than in the benchmarks they showed
Click to expand...

My stock i7 4790K did it in 1:25 on Windows.

You did something wrong, as is probably the case with the guy with the i7 6900K.


----------



## Nenkitsune

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> so you're saying a 10 second difference in completion times using a comparable system is normal? cause that doesn't sound normal to me. that's something like a 15% difference with the times they were clocking.
> 
> 
> 
> No, a 10 second difference isn't surprising or crazy on a "comparable" system, because your system really isn't comparable.
> 
> You are entirely underestimating how different a system can run based off what boxes are checked on or off, what software is installed, what services are running, etc.
> 
> By the way, your i5 6600K is not even close to being comparable to RyZen's flagship. So your 10 seconds is even more understandable.
Click to expand...

not me. Him
http://www.overclock.net/t/1601679/broadwell-e-thread/4040_20#post_25710292

my system at 4.7ghz does it in like, 1:39.
Quote:


> Originally Posted by *Imouto*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> Quote:
> 
> 
> 
> Originally Posted by *SoloCamo*
> 
> Just did the test provided...
> 
> Stock 4790k (turbo locked to 4.4ghz) pulled a 1:18. Sigh.
> 
> 
> 
> my 6600k clocked at 4.7ghz did it in 1:39.35 lol. apparently I'm over a minute slower than in the benchmarks they showed
> 
> Click to expand...
> 
> My stock i7 4790K did it in 1:25 on Windows.
> 
> You did something wrong as is probably the case with the guy with the i7 6900K
Click to expand...

you have 8 threads. I have 4. my time seems pretty ok i would think.


----------



## SoloCamo

Quote:


> Originally Posted by *Imouto*
> 
> My stock i7 4790K did it in 1:25 on Windows.
> 
> You did something wrong as is probably the case with the guy with the i7 6900K


Interesting. My stock 4790k did it in 1:18....

A 7 second difference is pretty big.

Does ram speed make a big difference in this bench? Could come down to that if AMD was using a higher mem frequency for both?

I'm running 2400MHz CAS10 DDR3


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Imouto*
> 
> My stock i7 4790K did it in 1:25 on Windows.
> 
> You did something wrong as is probably the case with the guy with the i7 6900K


I did it in 1 min 33 sec and my hexa-core is so damn old it's not funny (5 years maybe?), and that is at 160x20, putting this CPU at only 3.2GHz. The only thing I did was shut down everything in the background as best as I could.

Intel Xeon X5650 at 3.2GHz


----------



## JakdMan

Blender output settings (let alone any of the project settings) don't change between openings of the file. It's either a different file from what they ran on stage, or everyone's systems are simply wildly different.

And considering the output settings weren't anything special to begin with I find it highly unlikely they went through the extra "trouble" to change final settings on what they gave us.

(not only did I say this already, but those of you who also use Blender should know this)


----------



## PostalTwinkie

Quote:


> Originally Posted by *Blameless*
> 
> Does it make sense to you that double your core count at a third slower clocks is coming in at under half your render time?
> I still want to be able to compare scores and I cannot do that with the file provided unless they were using custom test settings they haven't revealed.
> Postal, you are simply incorrect here. The test environment (with similar OS and Blender version) shouldn't be responsible for a 30%+ benchmark differential.


Quote:


> Originally Posted by *Nenkitsune*
> 
> not me. Him
> http://www.overclock.net/t/1601679/broadwell-e-thread/4040_20#post_25710292
> 
> my system at 4.7ghz does it in like, 1:39.
> you have 8 threads. I have 4. my time seems pretty ok i would think.


Using that link, and the sample size of ONE that was provided in it......

About 9 seconds of difference very easily could have been caused by an issue in that one user's system. To counter your sample of one, I invite you to look just below it at my sample of one: a person using a 6950X posted a ~32 second score... which is about what's expected.

So, we now have two Intel products performing with parity, and your one outlying performer......

But AMD must totally be hiding something extremely fishy....


----------



## }SkOrPn--'

If AMD prices this 8C under $600, I will purposely video myself falling out of my chair. If they try $500, I will try to find a way to have my jaw also fall to the floor...


----------



## Blameless

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I am wrong that an 8 core 16 thread 14nm BRAND SPANKING NEW processor wouldn't be 30% faster than a quad core i5 from Intel?


That's not the discrepancy being pointed out.

The *6900K in the AMD demo is scoring 35.44 seconds at 3.4GHz*.

So far, one 6900K clocked at 4.2GHz using the file is scoring 42 seconds. Same CPU, same OS, AMD specified Blender version, AMD test...25% higher clock speed, 15% worse performance.

This also corresponds exactly to what I'm seeing on my 5820K and 6800K. I've been using Blender for ten years; it scales almost perfectly linearly with core clock and core count, but neither it, nor anything else, scales much better than linearly.

A 6900K has the same architecture as my 6800K but 33% more cores. With a 33% OC on my 6800K, I should score nearly the same as a stock 6900K; with that OC, my 6800K scores 51-52 seconds. The same goes for my 5820K: same clock, almost the same score, because it's almost the same architecture.

*Anyone with a stock 6900K who uses the file AMD has for download will get ~52 seconds. Not the ~36 seconds AMD's demo 6900K is showing.*
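A quick sketch of the scaling arithmetic in this post, assuming the near-linear cores-times-clock scaling described above, ignoring IPC differences between generations, and taking the forum-reported times as rough inputs (real runs will deviate somewhat):

```python
def predicted_time(ref_time, ref_cores, ref_ghz, cores, ghz):
    """Estimate render time assuming throughput scales with cores * clock."""
    return ref_time * (ref_cores * ref_ghz) / (cores * ghz)

# Reference 1: 6800K (6 cores) at a 33% OC (~4.5 GHz) scoring ~52 s.
est_from_6800k = predicted_time(52.0, 6, 4.5, 8, 3.4)

# Reference 2: stock 4790K (4 cores, ~4.4 GHz all-core) scoring ~78 s (1:18),
# ignoring the Haswell-vs-Broadwell IPC gap.
est_from_4790k = predicted_time(78.0, 4, 4.4, 8, 3.4)

print(round(est_from_6800k, 1))  # ~51.6 s
print(round(est_from_4790k, 1))  # ~50.5 s
# Both land near the ~52 s estimate above, not the 35.44 s from the demo.
```

Two independent reference systems predicting the same ballpark for a stock 6900K (8 cores, 3.4 GHz all-core turbo) is what makes the demo's 35.44 s figure stand out.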


----------



## Kpjoslee

I will just wait for benchmarks next year. Nothing really new here.


----------



## Imouto

Quote:


> Originally Posted by *SoloCamo*
> 
> Interesting. My stock 4790k did it in 1:18....
> 
> 7 second difference is pretty big
> 
> Does ram speed make a big difference in this bench? Could come down to that if AMD was using a higher mem frequency for both?
> 
> I'm running 2400mhz cas10 DDR3


I'm running Windows 7 x64 Ultimate 1800 Mhz DDR3

And 7 seconds isn't that big.
Quote:


> Originally Posted by *JakdMan*
> 
> (not only did I say this already but yall among us who also use blender should know this)


It is known.


----------



## S.M.

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> If AMD prices this 8C under $600, I will purposefully video me falling out of my chair. If they try at $500, I will try to find a way to have my jaw also fall to the floor...


It's quite likely. Regaining market share is more profitable than per-unit-yield at this point in the game.


----------



## SoloCamo

Quote:


> Originally Posted by *Blameless*
> 
> Does it make sense to you that double your core count at a third slower clocks is coming in at under half your render time?


Now I'm no mathematician, but if it scales as well as many say, no, no it doesn't.
Quote:


> Originally Posted by *Imouto*
> 
> I'm running Windows 7 x64 Ultimate 1800 Mhz DDR3
> 
> And 7 seconds isn't that big.


Think you meant to quote me... Anyways, 7 seconds seems pretty big considering it's the same exact CPU at the same clocks? Then again, I don't regularly use Blender, so....


----------



## Blameless

Quote:


> Originally Posted by *PostalTwinkie*
> 
> So, we now have two Intel products performing with parity, and your one outlying performer......


No.

Every Haswell, Haswell-E, Broadwell-E, and Skylake result in this very thread reinforces what I've been saying.

Find any such benchmark done in Windows, any one, and compare it to the AMD demo 6900K and you'll see some crazy nonsense like twice linear per-core performance scaling.
Quote:


> Originally Posted by *Imouto*
> 
> And 7 seconds isn't that big.


7 seconds is huge in a 36 second test, and orders of magnitude beyond the run-to-run margin of error.


----------



## Newwt

Quote:


> Originally Posted by *Blameless*
> 
> That's not the discrepancy being pointed out.
> 
> The *6900K in the AMD demo is scoring 35.44 seconds at 3.4GHz*.
> 
> So far, one 6900K clocked at 4.2GHz using the file is scoring 42 seconds. Same CPU, same OS, AMD specified Blender version, AMD test...25% higher clock speed, 15% worse performance.
> 
> This also corresponds exactly to what I'm seeing on my 5820K and 6800K. I've been using Blender for ten years, it scales almost perfectly linearly with core clocks and number of cores, but not it, and not anything, scales much better than linearly.
> 
> A 6900K has the same architecture as my 6800K but has 33% more cores. With a 33% OC on my 6800K, I should score nearly the same as a stock 6900K. With a 33% OC, my 6800K scores 51-52 seconds. Same goes for my 5820K, same clock, almost the same score because almost the same architecture.
> 
> *Anyone with a stock 6900K who uses the file AMD has for download will get ~52 seconds. Not the ~36 seconds AMD's demo 6900K is showing.*


I thought the AMD was locked to 3.4 and the 6900K was boosted.


----------



## Nenkitsune

Quote:


> Originally Posted by *SoloCamo*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Imouto*
> 
> I'm running Windows 7 x64 Ultimate 1800 Mhz DDR3
> 
> And 7 seconds isn't that big.
> 
> 
> 
> Quoted the wrong guy. 7 seconds seems pretty big considering it's the same exact cpu at the same clocks? Then again, I don't regularly use blender soo....
Click to expand...

they're running different RAM speeds, but I don't think that would affect it much.

I think the 7 seconds could come from one of them having background processes slowing things down. I saw a 2 second difference just from browsing OCN while running Blender lol


----------



## }SkOrPn--'

Quote:


> Originally Posted by *S.M.*
> 
> It's quite likely. Regaining market share is more profitable than per-unit-yield at this point in the game.


Hmm, I might have a reason to finally use my GoPro then... I better go buy a nice soft area rug first...


----------



## Nenkitsune

Quote:


> Originally Posted by *Newwt*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Blameless*
> 
> That's not the discrepancy being pointed out.
> 
> The *6900K in the AMD demo is scoring 35.44 seconds at 3.4GHz*.
> 
> So far, one 6900K clocked at 4.2GHz using the file is scoring 42 seconds. Same CPU, same OS, AMD specified Blender version, AMD test...25% higher clock speed, 15% worse performance.
> 
> This also corresponds exactly to what I'm seeing on my 5820K and 6800K. I've been using Blender for ten years, it scales almost perfectly linearly with core clocks and number of cores, but not it, and not anything, scales much better than linearly.
> 
> A 6900K has the same architecture as my 6800K but has 33% more cores. With a 33% OC on my 6800K, I should score nearly the same as a stock 6900K. With a 33% OC, my 6800K scores 51-52 seconds. Same goes for my 5820K, same clock, almost the same score because almost the same architecture.
> 
> *Anyone with a stock 6900K who uses the file AMD has for download will get ~52 seconds. Not the ~36 seconds AMD's demo 6900K is showing.*
> 
> 
> 
> I thought the amd was locked to 3.4 and the 6900k was boosted
Click to expand...

the 6900k was using its stock settings of 3.2ghz base 3.7ghz boost


----------



## }SkOrPn--'

Quote:


> Originally Posted by *S.M.*
> 
> It's quite likely. Regaining market share is more profitable than per-unit-yield at this point in the game.


Lisa went out of her way to focus on the fact that the Intel CPU is $1100. She would only do that if they plan on making a pricing statement, right? Otherwise why focus on cost if you plan on matching it, lol...


----------



## Shau76434

Quote:


> Originally Posted by *S.M.*
> 
> It's quite likely. Regaining market share is more profitable than per-unit-yield at this point in the game.


Do you think AMD's stock price will have a rise similar to 2016's (from 2 to 10 dollars) if Zen sells really well? Or is it hard to predict?


----------



## Blameless

Quote:


> Originally Posted by *Newwt*
> 
> I thought the amd was locked to 3.4 and the 6900k was boosted


Quote:


> Originally Posted by *Nenkitsune*
> 
> the 6900k was using its stock settings of 3.2ghz base 3.7ghz boost


You aren't running at 3.7GHz in Blender at stock. The max all core turbo of a stock 6900K is 3.4GHz.


----------



## Banko

As an update, the rig in my sig, a 3770K overclocked to 4.7GHz, did it in 1:34.48; this is in Windows 10 64-bit.


----------



## CULLEN

Quote:


> Originally Posted by *Blameless*
> 
> *Anyone with a stock 6900K who uses the file AMD has for download will get ~52 seconds. Not the ~36 seconds AMD's demo 6900K is showing.*


Wait, I'm so confused here. I thought lower time was better; did AMD's test show the 6900K getting better results than it's actually capable of?


----------



## Nenkitsune

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Newwt*
> 
> I thought the amd was locked to 3.4 and the 6900k was boosted
> 
> 
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> the 6900k was using its stock settings of 3.2ghz base 3.7ghz boost
> 
> Click to expand...
> 
> You aren't running at 3.7GHz in Blender at stock. The max all core turbo of a stock 6900K is 3.4GHz.
Click to expand...

Ah, I wasn't aware of that. first thing I did with my 6600k when I got it was disable turbo boost and overclocked it with all C-states disabled, so I never experienced how the boost function worked.
Quote:


> Originally Posted by *CULLEN*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Blameless*
> 
> *Anyone with a stock 6900K who uses the file AMD has for download will get ~52 seconds. Not the ~36 seconds AMD's demo 6900K is showing.*
> 
> 
> 
> Wait, I'm so confused here, I though lower time was better, did AMD test show 6900K getting better results than it's actually capable of doing?
Click to expand...

Basically. The 36s score they got doesn't line up with what makes sense using the file they provided, so it seems possible that the file they used and the file they uploaded are different. It kind of sucks, because if we had the right file we could compare our numbers to what they got in the live stream.


----------



## Mad Pistol

About what I was expecting from this event, nothing Earth-shattering. I am cautiously excited for what this means for AMD's future.

As a side note, one of the people in a demo section (I won't say who) was one of my TaeKwonDo instructors when I was in middle/high school... I think that was the biggest shock of all for me.


Hint: It wasn't Lisa Su.


----------



## Blameless

Quote:


> Originally Posted by *CULLEN*
> 
> Wait, I'm so confused here, I though lower time was better, did AMD test show 6900K getting better results than it's actually capable of doing?


Lower time is better. The point is that the file we are supposed to be able to use to reproduce the test doesn't allow us to reproduce the test.

AMD probably uploaded the wrong file, or possibly used some custom settings that weren't listed.


----------



## JakdMan

Quote:


> Originally Posted by *CULLEN*
> 
> Wait, I'm so confused here, I though lower time was better, did AMD test show 6900K getting better results than it's actually capable of doing?


I believe it's a delta being shown, demonstrating that the silicon lottery is real, plus non-parity between systems, even ones with identical parts.


----------



## }SkOrPn--'

Does anyone know if Zen was running XFR or not? I can't remember if they mentioned that or not.


----------



## Nenkitsune

Quote:


> Originally Posted by *JakdMan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *CULLEN*
> 
> Wait, I'm so confused here, I thought lower time was better, did AMD test show 6900K getting better results than it's actually capable of doing?
> 
> 
> 
> I believe it's a delta being shown. Showing that the silicon lottery is real + non parity between systems, even of equal parts.

The silicon lottery only affects a chip's overclocking capabilities. A chip guaranteed to clock at 5GHz vs one that only clocks at 4.5GHz will still get the same scores if both are run at the same speed. I've never seen any sort of clock-for-clock performance difference between higher-binned and lower-binned chips.


----------



## CULLEN

Quote:


> Originally Posted by *Blameless*
> 
> Lower time is better. The point is that the file we are supposed to be able to use to reproduce the test doesn't allow us to reproduce the test.
> 
> AMD probably uploaded the wrong file, or possibly used some custom settings that weren't listed.


Ah, that makes sense! Rep for the direct explanation without mockery (which seems to be the norm).


----------



## Fyrwulf

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Does anyone know if Zen was running XFR or not? I can't remember if they mentioned that or not.


They explicitly said boost was disabled on Zen.

As for the time discrepancies, has anyone thought of the fact that on the Intel side this was new hardware on a clean OEM install of Windows with only the latest drivers and the benchmark software installed? So the registry is clean and there aren't a bunch of background processes running.


----------



## Newwt

Well, as far as pricing goes, I can see the SR7 being released between $350 and $400, going off their previous product releases.


----------



## epic1337

I have the feeling that AMD tuned their settings to get Zen to perform this well, but I'm not saying they cheated.

But hey, if you wanna have a great Blender score, then you have to use one of these!


----------



## HAL900

Zen's base clock is 3.4 GHz.
It's low.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Fyrwulf*
> 
> They explicitly said boost was disabled on Zen.
> 
> As for the time discrepancies, has anyone thought of the fact that on the Intel side this was new hardware on a clean OEM install of Windows with only the latest drivers and the benchmark software installed? So the registries are clean and there aren't a bunch of background processes running.


Thanks

Yeah several have tried using that argument already. That easily makes sense to me as well, but to some it doesn't explain anything. I just can't wait to see 3rd parties get a hold of the hardware. Going to be a really interesting Q1 for sure....


----------



## SuperZan

Quote:


> Originally Posted by *HAL900*
> 
> Zen's base clock is 3.4 GHz.
> It's low.


For an 8c/16t? Hardly.


----------



## JakdMan

Quote:


> Originally Posted by *Nenkitsune*
> 
> silicon lottery only affects the chips overclocking capabilities. a chip guaranteed to clock at 5ghz vs one that only clocks at 4.5ghz will still have the same scores if clocked at the same speed. I've never seen any sort of clock for clock performance difference between higher binned chips vs lower binned.










I know. Just wanted to toss in my own botched nonsense since we're playing conspiracy theory time


----------



## Coydog

Quote:


> Originally Posted by *oxidized*
> 
> I don't understand why everybody is doing such an incredible job trying to prove something with screenshots from a youtube video? There's nothing to prove here guys, we just need to wait for the real thing to come out and see how it performs. There surely are some fishy things in the showcase, but not enough to prove anything in the end, and AMD has already proved to be not really trustworthy in terms of graphs and slides, so...


Well, they stated something as fact, and if you actually paid attention to the presentation, you would see their 'fact' is incorrect.

Remember the adage: repeat a lie long enough and you will believe it.


----------



## Blameless

Quote:


> Originally Posted by *JakdMan*
> 
> I believe it's a delta being shown. Showing that the silicon lottery is real + non parity between systems, even of equal parts.


This isn't the case and looking through this thread will show that.

Similar settings and hardware = similar score, as long as you are in either Windows or Linux in both cases.
Quote:


> Originally Posted by *Fyrwulf*
> 
> has anyone thought of the fact


Yes. That's not what's responsible for the discrepancy.

My 5820K setup is leaner than 99.99% of setups out there and kept in immaculate OS/environment condition. My 6800K system was built two days ago. They get the same scores at the same settings with the same CPUs (or even different CPUs of the same aggregate performance) in Blender as any other Windows system that has even cursory precautions taken to minimize background load.

Again, the AMD Demo is showing results that are ~40% off what their file is producing.
Quote:


> Originally Posted by *JakdMan*
> 
> 
> 
> 
> 
> 
> 
> 
> I know. Just wanted to toss in my own botched nonsense since we're playing conspiracy theory time


There is no conspiracy. Someone screwed up the organization between the demonstrators and the website.

Never assume malice when incompetence will suffice.


----------



## HAL900

Quote:


> Originally Posted by *SuperZan*
> 
> For an 8c/16t? Hardly.


The i7-7700K is 4.2 GHz, and new games use at most 6 cores + 60% IPC.


----------



## oxidized

Quote:


> Originally Posted by *Coydog*
> 
> Well, they stated something as fact and if you actually paid attention to the presentation, you would see their 'fact' as incorrect.
> 
> Remember the adage, repeat a lie long enough you will believe it.


That I know, but still nothing is proven 100%; that's why I'm saying it'd be wise to wait for the actual release, and for neutral tests.


----------



## Newwt

If we're going to throw out random speculation about how AMD got a good score: it probably has something to do with the neural net prediction, and they ran the test a few times before the preview.


----------



## epic1337

Quote:


> Originally Posted by *HAL900*
> 
> Base clock zen is 3.4 GHZ .
> Iits low


You're kidding, right? It's an 8-core chip; I'm quite surprised they even managed to hit 3.4GHz within a 95W TDP restriction.


----------



## JakdMan

Quote:


> Originally Posted by *Blameless*
> 
> This isn't the case and looking through this thread will show that.
> 
> Similar settings and hardware = similar score, as long as you are in either Windows or Linux in both cases.
> Yes. That's not what's responsible for the discrepancy.
> 
> My 5820K setup is leaner than 99.99% of setups out there and kept in immaculate OS/environment condition. My 6800K system was built two days ago. They get the same scores at the same settings with the same CPUs (or even different CPUs of the same aggregate performance) in Blender as any other Windows system that has even cursory precautions taken to minimize background load.
> 
> Again, the AMD Demo is showing results that are ~40% off what their file is producing.
> _There is no conspiracy. Someone screwed up the organization between the demonstrators and the website.
> _
> Never assume malice when incompetence will suffice.


You and I must be reading two different threads, because the hunt seems real with some of these posts.

Might I suggest one of y'all with absolute interest in picking apart this Blender thing play with the render settings till you match the on-stage times already









I would do it myself, but I'm in the midst of creating a wallpaper with the lovely file they supplied us. Upped the samples and moved a few things around.


----------



## HAL900

Quote:


> Originally Posted by *SuperZan*
> 
> For an 8c/16t? Hardly.


Quote:


> Originally Posted by *epic1337*
> 
> you're kidding right? its an 8core chip, i'm quite surprised they even managed to hit 3.4Ghz with a 95W TDP restriction.


I don't know the TDP, but high IPC means a low clock.

And at 14nm, 3.4 GHz is not impressive.


----------



## LancerVI

Reading this thread, I'm starting to wonder why there's such vitriolic hatred toward wccf????


----------



## Blameless

Quote:


> Originally Posted by *JakdMan*
> 
> You and I must be reading two different threads because the hunt seems real with some of these posts
> 
> Might I suggest one of yall with absolute interest in picking apart this blender thing play with the render setting till you match the on stage times already


Well that would leave considerable doubt unless I was sure they used the same settings.
Quote:


> Originally Posted by *JakdMan*
> 
> I would do it myself But I'm in the midst of creating me a wallpaper with their lovely file they supplied us. Upped the samples and moved a few things around


What time do you get if you use default settings?


----------



## done12many2

Quote:


> Originally Posted by *Blameless*
> 
> That's not the discrepancy being pointed out.
> 
> The *6900K in the AMD demo is scoring 35.44 seconds at 3.4GHz*.
> 
> So far, one 6900K clocked at 4.2GHz using the file is scoring 42 seconds. Same CPU, same OS, AMD specified Blender version, AMD test...25% higher clock speed, 15% worse performance.
> 
> This also corresponds exactly to what I'm seeing on my 5820K and 6800K. I've been using Blender for ten years; it scales almost perfectly linearly with core clocks and number of cores, but neither it, nor anything else, scales much better than linearly.
> 
> A 6900K has the same architecture as my 6800K but has 33% more cores. With a 33% OC on my 6800K, I should score nearly the same as a stock 6900K. With a 33% OC, my 6800K scores 51-52 seconds. Same goes for my 5820K, same clock, almost the same score because almost the same architecture.
> 
> *Anyone with a stock 6900K who uses the file AMD has for download will get ~52 seconds. Not the ~36 seconds AMD's demo 6900K is showing.*


I ran and reran the test several times with my 5960x @ 4.8 GHz, cache at 4.6 GHz and memory at 3200 14-14-14-32-CR1, and the best I could do was consistently repeat 35.x seconds. I found it very odd that a 3.4 GHz Ryzen chip and a 3.2 GHz base/3.7 turbo 6900k was matching my overclocked times in the same render. I don't necessarily want to add to the conspiracy theories, but I would like to know what kind of magic is at play.
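For what it's worth, the linear cores × clock scaling rule invoked throughout this thread can be sanity-checked with some quick arithmetic. This is a rough sketch, not AMD's methodology; the ~52 s stock-6900K baseline is the figure quoted earlier in the thread, and perfect linear scaling is an idealization.

```python
# Rough sanity check of the "Blender scales ~linearly with cores and
# clocks" claim made in this thread. Purely illustrative arithmetic.

def scaled_time(base_time, base_cores, base_clock, cores, clock):
    """Estimate a render time after changing core count and/or clock,
    assuming throughput scales linearly with cores * clock."""
    return base_time * (base_cores * base_clock) / (cores * clock)

# Stock 6900K (8c @ 3.2 GHz) reportedly takes ~52 s on AMD's file.
# A 5960X pushed to 4.8 GHz should then land around:
t_5960x_oc = scaled_time(52.0, base_cores=8, base_clock=3.2,
                         cores=8, clock=4.8)
print(round(t_5960x_oc, 1))  # ~34.7 s, near the ~35 s reported above
```

Under that idealization, an overclocked 5960X matching the demo's ~35 s is exactly what you'd expect if the stock-6900K baseline really is ~52 s — which is the discrepancy being argued.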


----------



## lombardsoup

Quote:


> Originally Posted by *LancerVI*
> 
> Reading this thread, I'm starting to wonder why the vitriolic hatred toward wccf????


There's one or two infamous individuals there in particular who come to mind.


----------



## AmericanLoco

Quote:


> Originally Posted by *HAL900*
> 
> I don't know the TDP, but high IPC means a low clock.
> 
> And at 14nm, 3.4 GHz is not impressive.


So if 3.4 GHz at 14nm isn't impressive for an 8-core, Intel's 3.2 GHz 14nm 8-core must be downright disappointing.


----------



## JakdMan

Quote:


> Originally Posted by *Blameless*
> 
> Well that would leave considerable doubt unless I was sure they used the same settings.
> What time do you get if you use default settings?


1:21:23 for me on this stock 4930K


----------



## CULLEN

Quote:


> Originally Posted by *done12many2*
> 
> I ran and reran the test several times with my 5960x @ 4.8 GHz, Cache at 4.6 GHz and memory at 3200 14-14-14-32-CR1 and the best I could do was consistently repeating 35.x seconds.


This will be asked, got any proof?


----------



## HAL900

Quote:


> Originally Posted by *AmericanLoco*
> 
> So if 3.4 GHz at 14nm isn't impressive for an 8 core, Intels 3.2 GHz 14nm 8 core must be downright disappointing.


Name the CPU?


----------



## oxidized

Quote:


> Originally Posted by *done12many2*
> 
> I would like to know what kind of magic is at play.


AMD's sliders magic


----------



## CULLEN

Quote:


> Originally Posted by *HAL900*
> 
> Name the CPU?


I think he was referring to the i7-6900K..


----------



## Blameless

Quote:


> Originally Posted by *JakdMan*
> 
> 1:21:23 for me on this stock 4930K


And specifically what settings were you using to get your ~36 second result?
Quote:


> Originally Posted by *done12many2*
> 
> I ran and reran the test several times with my 5960x @ 4.8 GHz, Cache at 4.6 GHz and memory at 3200 14-14-14-32-CR1 and the best I could do was consistently repeating 35.x seconds. I found it very odd that a 3.4 GHz Ryzen chip and a 3.2 GHz base/3.7 Turbo 6900k was matching my overlcocked times in the same render.


Obviously it's not the same render, or AMD wasn't using the default Blender settings.
Quote:


> Originally Posted by *done12many2*
> 
> I don't necessarily want to add to the conspiracy theories, but I would like to know what kind of magic is at play.


I'm starting to get really irked about people calling a legitimate concern that could easily have resulted from a simple mistake on AMD's part a conspiracy theory and dismissing it out of hand.


----------



## done12many2

Quote:


> Originally Posted by *CULLEN*
> 
> This will be asked, got any proof?


Sure do, bud. I happen to be recording it all as I was going to post my findings on another forum. Can't be making claims without proof, right?







I'm still working on beating what they were able to do at stock clock speeds with the Ryzen. Give me a little bit and I'll upload the video.


----------



## Coydog

Got benchmarks wrong. was thinking of the result of another.


----------



## Blameless

Quote:


> Originally Posted by *done12many2*
> 
> Can't be making claims without proof, huh.


Not even of the blatantly obvious and easily demonstrable.

Next time someone asks me what color the sky is, I'll be sure to have notarized forms, in triplicate, that say "blue, but only on a clear day".


----------



## done12many2

Quote:


> Originally Posted by *Coydog*
> 
> You do realize AMD's Blender score is in the 54 second range, NOT the 36, right?


It was 35.x seconds.


----------



## AmericanLoco

Quote:


> Originally Posted by *HAL900*
> 
> Name the CPU?


i7-6900.


----------



## HAL900

Quote:


> Originally Posted by *CULLEN*
> 
> I think he was referring to the i7-6900K..


Zen has 40% higher IPC than FX. That is not Broadwell IPC.


----------



## Arturo.Zise

Ran the test on my 5820k @4ghz = 58 seconds.


----------



## CULLEN

Quote:


> Originally Posted by *done12many2*
> 
> Sure do bud. I happen to be recording it all as I was going to post my findings on another forum. Can't be making claims without proof, right.
> 
> 
> 
> 
> 
> 
> 
> I'm still working on beating what they were able to do at stock clock speeds with the Ryzen. Give me a little bit and I'll upload the video.


Yeah or just a screenshot of the time and settings used will be sufficient for now.


----------



## Blameless

Quote:


> Originally Posted by *Coydog*
> 
> You do realize AMD's Blender score is in the 54 second range, NOT the 36, right?


AMD Blender demo shows their Zen getting 35.38 seconds and their 6900K getting 35.44. These may well be perfectly accurate for the test they ran, but the Blender file they have provided does not give anywhere near this figure on a stock 6900K.

You are talking about Handbrake/x264.


----------



## done12many2

Quote:


> Originally Posted by *Blameless*
> 
> Not even of the blatantly obvious and easily demonstrable.
> 
> Next time I someone asks me what color the sky is I'll be sure to have notarized forms, in triplicate, that say "blue, but only on a clear day".


So sad and very true.


----------



## xartic1

Quote:


> Originally Posted by *HAL900*
> 
> Zen has 40% higher IPC than FX. That is not Broadwell IPC.


The point is that you said 3.4GHz is low, but Intel's i7-6900K is only 3.2GHz. Both are 8 cores, and AMD has the higher clock speed.

Now where in your mind did you come up with the idea that 3.4GHz is low for an 8-core? I would say it's disappointing if Intel had a 16-core at 3.5GHz, but that isn't the case.

In fact, why don't you name a CPU that is up to your standard in clock speeds for 14nm?


----------



## LancerVI

So I tested it on my Talino (5820k @ stock, 64GB DDR4 @ 2400, Win10 64) and got 1:04.98.

I emptied out my system tray and ran the AMD file straight up.


----------



## dagget3450

This thread is mega entertainment for sure... *Gets more popcorn*


----------



## HAL900

Quote:


> Originally Posted by *xartic1*
> 
> The point is that you said 3.4ghz is low, but intel has a i7 6900k only at 3.2ghz. Both are 8 cores and AMD has a higher clock speed.
> 
> Now where in your mind did you begin to create a situation where 3.4ghz is low for an 8 core? I would say it's disappointing if intel had a 16 core at 3.5ghz but that isn't the case.
> 
> In fact, why don't you name a CPU that is up to your standard in clock speeds for 14nm?


?


----------



## Blameless

Quote:


> Originally Posted by *LancerVI*
> 
> So I tested it on my Talino (5820k @ stock, 64GB DDR4 @ 2400, Win10 64) and got 1:04.98.
> 
> I emptied out my system tray and ran the AMD file straight up.


That's about what I get at stock.

Now, do you think a 6900K is actually 90% faster than a 5820K?


----------



## JakdMan

Quote:


> Originally Posted by *Blameless*
> 
> *And specifically what settings were you using to get your ~36 second result?*
> Obviously it's not the same render, or AMD wasn't using the default Blender settings.
> I'm starting to get really irked about people calling a legitimate concern that could easily have resulted from a simple mistake on AMD's part a conspiracy theory and dismissing it out of hand.


Literally the untouched settings in the file as downloaded from their site....


----------



## LancerVI

Quote:


> Originally Posted by *Blameless*
> 
> That's about what I get at stock.
> 
> Now, do you think a 6900K is actually 90% faster than a 5820K?


Yeah, probably not.

They must've provided the wrong file.


----------



## Coydog

Quote:


> Originally Posted by *done12many2*
> 
> It was 35.x seconds.


Ok, my bad. I was thinking of Handbrake.


----------



## Coydog

Quote:


> Originally Posted by *Blameless*
> 
> AMD Blender demo shows their Zen getting 35.38 seconds and their 6900K getting 35.44. These may well be perfectly accurate for the test they ran, but the Blender file they have provided does not give anywhere near this figure on a stock 6900K.
> 
> You are talking about Handbrake/x264.


I was wrong and thinking of handbrake. Fixed my original post.


----------



## Blameless

Quote:


> Originally Posted by *JakdMan*
> 
> literally the untouched setting in the file as downloaded from their site....


Ok, now I'm confused.

How are you getting 1:21:23 and ~36 seconds?


----------



## AmericanLoco

Quote:


> Originally Posted by *HAL900*
> 
> ?


Find me an Intel 8-Core CPU with a base clock higher than 3.4GHz.


----------



## comagnum

Quote:


> Originally Posted by *AmericanLoco*
> 
> Find me an Intel 8-Core CPU with a base clock higher than 3.4GHz.


Spoiler alert!

You can't.


----------



## wreckless

HAL900 makes his own processors


----------



## HAL900

Quote:


> Originally Posted by *AmericanLoco*
> 
> Find me an Intel 8-Core CPU with a base clock higher than 3.4GHz.


You find one. I wrote what I think about it: the IPC is low and the clock is low.
Broadwell has higher IPC and a lower clock.
Skylake has lower IPC and a higher clock. Both are on 14nm. It's not hard xD

END of STORY


----------



## JakdMan

Quote:


> Originally Posted by *Blameless*
> 
> Ok, now I'm confused.
> 
> How are you getting 1:21:23 and ~36 seconds?


Here's my rerun test, 3 times, with screenshots, of the untouched Blender file as supplied by AMD on the event page. The render settings are seen in the far-right panel in the first 2 images. Final times are at the top of the output window. Chrome with my many tabs was closed for this test.

Final render times now are 1:20.90, 1:20.75, and 1:25.45.

It is outputting an 800x800 8-bit RGB PNG, at full resolution, using 32x32 tiles and 200 samples, auto-detecting the threads to use, with the final output file having 90% compression.


----------



## HAL900

oxidized
Little knowledge.

I'll return to the subject when it's out; arguing about it now is like fighting over free Mondays. 40% more IPC at that clock is very little.


----------



## Maelthras

Wow that was crazy, faster than a 6700k @ 4.5, now that is some serious performance.


----------



## Banko

So since AMD is basically getting double the performance, even with the 6900K they used, the only way I am able to achieve close to double performance is if you change the number of samples for rendering from 200 to 100.

If anyone has a 6900K you can try clicking on the camera icon on the right side, expanding Sampling, and changing the amount of render samples to 100 instead of 200.

On my 3770K at 4.7 GHz I went to 47 seconds from 1:37. I don't have access to my work machine now since I am at home, but reducing the amount of samples should scale it linearly.

So I would suggest someone with a 6800K or a 6900K change it to 100 and see if you are more in line with AMD's benchmarks.

It's possible they supplied us the wrong blend file, or maybe that setting doesn't get saved in the .blend file and it's only saved on the machine.
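The halving experiment above assumes render time is linear in the Cycles sample count. As a minimal sketch of that assumption (the numbers are the ones reported in the post, and linearity is a first-order approximation, not a guarantee):

```python
# First-order model: Cycles path-tracing time is proportional to the
# number of samples. Numbers below are from the post above.

def time_at_samples(measured_time, measured_samples, target_samples):
    """Rescale a measured render time to a different sample count,
    assuming time scales linearly with samples."""
    return measured_time * target_samples / measured_samples

# 3770K @ 4.7 GHz: 1:37 (97 s) at 200 samples -> estimate for 100:
print(time_at_samples(97.0, 200, 100))  # 48.5 s, close to the observed 47 s
```

The observed 47 s vs the predicted 48.5 s is within a few percent, which is about as linear as a real render gets once fixed overhead (scene sync, tile startup) is included.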


----------



## budgetgamer120

AMD is back









Well my xeon system will be replaced shortly I guess


----------



## Roaches

Finally got home and gave it a shot, scored 36.94 seconds on dual Xeon E5-2670s at stock.



Impressive for a single chip to score 36 seconds.


----------



## Master__Shake

My dual X5670s won't run it.

I don't have a GPU in there...


----------



## EniGma1987

Quote:


> Originally Posted by *Blameless*
> 
> No one is saying the Zen system shown isn't getting 35.38s vs. the 6900K system which got 35.44s, but that the file up for download isn't the test they ran, or they haven't listed all the settings they are using.
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4030#post_25710181
> 
> If that was the same benchmark AMD ran, he wouldn't have gotten 42 seconds, he would have gotten 28-30.


Not necessarily. He probably has a much older installation of Windows with a lot more going on in the background taking CPU time. He also has pretty loose timings on that memory. That can easily account for a 7-second difference. And are you sure the person is running Windows 10 and not still Windows 7? From other people's testing in this thread, Win7 makes the test run much slower.


----------



## Jpmboy

Quote:


> Originally Posted by *Roaches*
> 
> Finally got home and gave it a shot, scored 36.94 , dual Xeon E5-2670 stock.
> 
> 
> 
> Impressive for a single chip to score 36 seconds.


amazing.. this "benchmark" really responds to cores....
6950X stock

6950X 4.4, cache 3.7, ram 3400c13


changing to 100 from 200, it renders in 16sec with the same 4.4 settings


----------



## Nenkitsune

Quote:


> Originally Posted by *Banko*
> 
> So since AMD is basically getting double the performance, even with the 6900K they used, the only way I am able to achieve close to double performance is if you change the number of samples for rendering from 200 to 100.
> 
> If anyone has a 6900K you can try clicking on the camera icon on the right side, expanding Sampling, and changing the amount of render samples to 100 instead of 200.
> 
> On my 3770K at 4.7 GHz I went to 47 seconds from 1:37. I don't have access to my work machine now since I am at home, but reducing the amount of samples should scale it linearly.
> 
> So I would suggest someone with a 6800K or a 6900K change it to 100 and see if you are more in line with AMD's benchmarks.
> 
> It's possible they supplied us the wrong blend file, or maybe that setting doesn't get saved in the .blend file and it's only saved on the machine.


that seems like a possibility. I went from 1:39 to 49.9 on my i5 when I did the same thing.

also, that's crazy how close our times are considering you have twice the threads (my 6600k also runs at 4.7ghz)


----------



## axiumone

Quote:


> Originally Posted by *Banko*
> 
> So since AMD is basically getting double the performance, even with the 6900K they used, the only way I am able to achieve close to double performance is if you change the number of samples for rendering from 200 to 100.
> 
> If anyone has a 6900K you can try clicking on the camera icon on the right side, expanding Sampling, and changing the amount of render samples to 100 instead of 200.
> 
> On my 3770K at 4.7 GHz I went to 47 seconds from 1:37. I don't have access to my work machine now since I am at home, but reducing the amount of samples should scale it linearly.
> 
> So I would suggest someone with a 6800K or a 6900K change it to 100 and see if you are more in line with AMD's benchmarks.
> 
> It's possible they supplied us the wrong blend file, or maybe that setting doesn't get saved in the .blend file and it's only saved on the machine.


That can't be right either. On a 6700K @ 4.8GHz I'm getting 1:04 at 200 samples and 32.71 at 100.


----------



## Nenkitsune

Quote:


> Originally Posted by *axiumone*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Banko*
> 
> So since AMD is basically getting double the performance, even with the 6900K they used, the only way I am able to achieve close to double performance is if you change the number of samples for rendering from 200 to 100.
> 
> If anyone has a 6900K you can try clicking on the camera icon on the right side, expanding Sampling, and changing the amount of render samples to 100 instead of 200.
> 
> On my 3770K at 4.7 GHz I went to 47 seconds from 1:37. I don't have access to my work machine now since I am at home, but reducing the amount of samples should scale it linearly.
> 
> So I would suggest someone with a 6800K or a 6900K change it to 100 and see if you are more in line with AMD's benchmarks.
> 
> It's possible they supplied us the wrong blend file, or maybe that setting doesn't get saved in the .blend file and it's only saved on the machine.
> 
> 
> 
> That can't be right either. On a 6700K @ 4.8GHz I'm getting 1:04 at 200 samples and 32.71 at 100.

Clock your 6700K to 3.4GHz and see what happens then.


----------



## Banko

Quote:


> Originally Posted by *Nenkitsune*
> 
> that seems like a possibility. I went from 1:39 to 49.9 on my i5 when I did the same thing.
> 
> also, that's crazy how close our times are considering you have twice the threads (my 6600k also runs at 4.7ghz)


Yeah, I think for rendering like this Hyper-Threading doesn't really help as much, or the IPC jump from Ivy Bridge to Skylake makes up for the lack of Hyper-Threading.


----------



## Roaches

Quote:


> Originally Posted by *Jpmboy*
> 
> amazing.. this "benchmark" really responds to cores....
> 6950X stock
> 
> 6950X 4.4, cache 3.7, ram 3400c13


Heh pretty good gains there. Saved 8 seconds.


----------



## Banko

Quote:


> Originally Posted by *axiumone*
> 
> That can't be right either. On a 6700K @ 4.8GHz I'm getting 1:04 at 200 samples and 32.71 at 100.


Yeah, rendering like this is very much a linear process. You have a 41% overclock; if you clocked it to 3.4 GHz, my guess is you would end up scoring around 46.17 seconds. Add double the cores, remove the IPC jump from Haswell to Skylake, and I think this might be it. Or maybe they had it at 150 samples, but this is the only setting I can see that could be responsible for such discrepancies.


----------



## Jpmboy

Quote:


> Originally Posted by *Roaches*
> 
> Heh pretty good gains there. Saved 8 seconds.


with the 12 core, you rendered with a 200% setting - yes?


----------



## Roaches

200 samples in the render settings? If that's what you mean, I just opened the file and pressed render prior to posting my result. 16 cores, btw.

Let me know if you want me to run the renderer again with special parameters.


----------



## Blameless

Quote:


> Originally Posted by *JakdMan*
> 
> Final render times now are 1:20.90, 1:20.75, and 1:25.45


Ok, those are the results I'd expect from that CPU with the default settings.
Quote:


> Originally Posted by *Banko*
> 
> So since AMD is basically getting double the performance, even with the 6900K they used, the only way I am able to achieve close to double performance is if you change the number of samples for rendering from 200 to 100.
> 
> If anyone has a 6900K you can try clicking on the camera icon on the right side, expanding Sampling, and changing the amount of render samples to 100 instead of 200.
> 
> On my 3770K at 4.7 GHz I went to 47 seconds from 1:37. I don't have access to my work machine now since I am at home, but reducing the amount of samples should scale it linearly.
> 
> So I would suggest someone with a 6800K or a 6900K change it to 100 and see if you are more in line with AMD's benchmarks.
> 
> It's possible they supplied us the wrong blend file, or maybe that setting doesn't get saved in the .blend file and it's only saved on the machine.


If I set 100 samples, I get under 24 seconds in Blender x86 with this file, so that can't be the specific discrepancy.
Quote:


> Originally Posted by *EniGma1987*
> 
> Not necessarily. He probably has a much older installation of Windows with a lot more going on in the background taking CPU time. He also has pretty loose timings on that memory. That can easily account for a 7 second difference. And are you sure the person is running Windows 10 and not Windows 7 still? From other people testing in this thread Win7 makes a huge difference in the test going slower.


None of these factors you mention can account for anywhere near that discrepancy.

Blender is not exceptionally memory-sensitive, and as long as the version of Windows you run it on recognizes the CPU and supports its instructions, the OS doesn't have much influence on performance.


----------



## inedenimadam

15 minutes before the stream. I got home at just the right time.


----------



## MadGoat

200 samples:

2:24



100 samples:

1:13


----------



## Nenkitsune

Quote:


> Originally Posted by *inedenimadam*
> 
> 15 minutes before the stream. I got home at just the right time.


What? AMD New Horizons stream was hours ago.


----------



## done12many2

Quote:


> Originally Posted by *CULLEN*
> 
> This will be asked, got any proof?


So I tried several more times to beat AMD's phenomenal feat, but the best I could do was match their 3.4 GHz Ryzen with a 4.8 GHz 5960x. Weird; AMD's new Ryzen chip and the 6900k they used as a comparison must be some FREAK chips, or there was definitely some funny business going on with this comparison test.

5960x @ 4.8 GHz / cache at 4.6 GHz / 64 GB DDR4 3200 14-14-14-32-CR1

Render time of AMD Ryzen and 6900k comparison verified in video supplied.

Render file straight from AMD was used without modification. I used the same timing on the 5960x at the above mentioned overclock.

Cinebench R15 run to demonstrate that the 5960x was performing as expected for proper overclock scaling.

5960x had minimal additional overhead from monitoring and nVidia ShadowPlay video capture while running tests.

I'm no expert, but something doesn't seem right with AMD's results.





@Blameless


----------



## oxidized

Quote:


> Originally Posted by *Nenkitsune*
> 
> What? AMD New Horizons stream was hours ago.


----------



## luisxd

Quote:


> Originally Posted by *Nenkitsune*
> 
> What? AMD New Horizons stream was hours ago.


I guess he got home hours ago, before the stream had even started.


----------



## Nenkitsune

Quote:


> Originally Posted by *luisxd*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> What? AMD New Horizons stream was hours ago.
> 
> 
> 
> I guess he got home hours ago, when the stream wasn't even started.

I just thought it was funny that he just now posted that he was glad he got home 15min before the stream, HOURS after the stream ended.


----------



## cssorkinman

Quote:


> Originally Posted by *MadGoat*
> 
> 200 samples:
> 
> 2:24
> 
> 
> 
> 100 samples:
> 
> 1:13


Very close to what I get with my Fx 8 cores.

Trying to estimate the performance gain over Piledriver. Does Zen have instruction set advantages used by blender that Piledriver doesn't have?


----------



## Roaches

Quote:


> Originally Posted by *Jpmboy*
> 
> with the 12 core, you rendered with a 200% setting - yes?


Quote:


> Originally Posted by *Roaches*
> 
> 200 samples in the render setting, if that's what you mean? I just opened the file and pressed render before posting my result. 16 cores, btw.
> 
> Let me know if you want me to run renderer again with special parameters.




100 samples....


----------



## Jpmboy

Quote:


> Originally Posted by *Roaches*
> 
> 200 samples in the render setting, if that's what you mean? I just opened the file and pressed render before posting my result. 16 cores, btw.
> 
> Let me know if you want me to run renderer again with special parameters.


Thanks - but not necessary. AMD basically fudged the bench. Best to wait for independent comparos. Typical manufacturer "show".


----------



## Roaches

Quote:


> Originally Posted by *Jpmboy*
> 
> thanks - but not necessary. AMD basically fudged the bench. Best to wait for independent comparos. Typical manufacturer "show".


Yeah, me too. Gonna wait this out; hopefully what was on stage today was real.


----------



## JakdMan

Quote:


> Originally Posted by *Jpmboy*
> 
> thanks - but not necessary. AMD basically fudged the bench. Best to wait for independent comparos. Typical manufacturer "show".


To be fair both chips on stage had some magical properties according to all this digging we're doing so............








intel x amd


----------



## Banko

Quote:


> Originally Posted by *Roaches*
> 
> 
> 
> 100 samples....


You ran that with 16 cores, right? If you double it, as if you were running it with 8, that would bring it to 36 seconds, which is much closer to AMD's bench.


----------



## Jpmboy

Quote:


> Originally Posted by *JakdMan*
> 
> To be fair both chips on stage had some magical properties according to all this digging we're doing so............
> 
> 
> 
> 
> 
> 
> 
> 
> intel x amd


No magic involved - they skewed the test by changing the settings, so users are not able to compare with any previous-gen chip, even AMD's own.
I DO hope AMD can launch a competitive product; I'd like to see 12 or 14 cores at an AMD price point.








Quote:


> Originally Posted by *Banko*
> 
> You ran that with 16 cores, right? If you double it, as if you were running it with 8, that would bring it to 36 seconds, which is much closer to AMD's bench.


He has 12c/24T per CPU.
nvm


----------



## Roaches

Quote:


> Originally Posted by *Banko*
> 
> You ran that with 16 cores, right? If you double it, as if you were running it with 8, that would bring it to 36 seconds, which is much closer to AMD's bench.


My first run at 200 samples gave me 36.94 seconds, 16 cores 32 threads, running full blast during the rendering according to task manager thread monitor graphs.


----------



## xzamples

So it's just going to be called AMD Ryzen, not FX branded, no numbers, nothing like that?

So on Newegg you'll just see under processors "AMD Ryzen"

Right?


----------



## Blameless

Quote:


> Originally Posted by *JakdMan*
> 
> To be fair both chips on stage had some magical properties according to all this digging we're doing so............
> 
> 
> 
> 
> 
> 
> 
> 
> intel x amd


This doesn't have anything to do with Intel vs. AMD. I'd be astounded if they didn't use the same test on both of _their_ systems; they'd have to be suicidal to cheat as that would be blatantly obvious as soon as reviews hit. However, they most certainly have not provided us with the test they used.

This is an AMD vs. where-the-hell-is-my-benchmark? thing.


----------



## Dhoulmagus

Anything like a white paper or datasheet out for this new architecture yet, or is it still all NDA hush-hush? I want to see the new gear topless









Benchmarks look good though, may the AMD vs Intel flames reignite among the enthusiasts and may it then rain cheap high end hardware for meeeeeeeee!!!!


----------



## Blameless

Quote:


> Originally Posted by *Serious_Don*
> 
> Anything like a white paper or datasheet out for this new architecture yet, or is it still all NDA hush-hush? I want to see the new gear topless


They released essentially nothing new of note, except x264 results.


----------



## Dhoulmagus

Quote:


> Originally Posted by *Blameless*
> 
> They released essentially nothing new of note, except x264 results.


Well, that's boring! Nothing on socket AM4 either, or any of the boards coming for this?

I've had a copy of the Intel 'Core' gen 7 datasheet for months now, which was easily found on their website. Oh well.


----------



## bavarianblessed

Quote:


> Originally Posted by *HAL900*
> 
> oxidized
> little knowledge
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I return to the subject as he will be. Because I fight like about free Mondays. 40% of the IPC and the clock is very little home


What the..are you a broken chat bot? That makes zero sense


----------



## inedenimadam

Quote:


> Originally Posted by *Nenkitsune*
> 
> Quote:
> 
> 
> 
> Originally Posted by *luisxd*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> What? AMD New Horizons stream was hours ago.
> 
> 
> 
> I guess he got home hours ago, when the stream wasn't even started.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I just thought it was funny that he just now posted that he was glad he got home 15min before the stream, HOURS after the stream ended.

When I clicked the link in the OP, it took me to what I thought was a live stream. Now I feel stupid for watching patiently through the 15-minute countdown.


----------



## kd5151




----------



## Jpmboy

Quote:


> Originally Posted by *bavarianblessed*
> 
> What the..are you a broken chat bot? That makes zero sense


Quote:


> Originally Posted by *inedenimadam*
> 
> when I clicked the link in the OP, it took me to what I thought was a live stream. Now I feel stupid for watching patiently through the 15 minute countdown.


thanks guys - gotta grin at these posts.


----------



## done12many2

Quote:


> Originally Posted by *Jpmboy*
> 
> no magic involved - they skewed the test by changing the settings - so users are not able to compare with any previous gen chip, even AMD's own.


The screwed up part of this is that they basically challenged people to run the "same" test on their current rigs, which would obviously give folks the perception that their rigs were a great deal slower than the new Ryzen chip. It's one thing to skew results, but it's another to mention that you can run the same test while knowing that the outcome would be misleading.


----------



## Blameless

Quote:


> Originally Posted by *done12many2*
> 
> The screwed up part of this is that they basically challenged people to run the "same" test on their current rigs, which would obviously give folks the perception that their rigs were a great deal slower than the new Ryzen chip. It's one thing to skew results, but it's another to mention that you can run the same test while knowing that the outcome would be misleading.


I'm still willing to give AMD the benefit of the doubt, for now. Again, I'll assume incompetence before malice.

However, that may change if they don't correct this by providing the correct benchmark file/parameters.

Also, if anyone still thinks there is no discrepancy or that it's just a result of margin of error or test environment, I challenge you to explain results like these:

http://www.overclock.net/t/1601679/broadwell-e-thread/4040#post_25710264

http://www.overclock.net/t/1601679/broadwell-e-thread/4040#post_25710337

http://www.overclock.net/t/1617227/amd-new-horizon-zen-preview-on-12-13-at-3-pm-cst/610#post_25710606

http://www.overclock.net/t/1617227/amd-new-horizon-zen-preview-on-12-13-at-3-pm-cst/630#post_25710683

http://www.overclock.net/t/1617227/amd-new-horizon-zen-preview-on-12-13-at-3-pm-cst/640#post_25710730

http://www.overclock.net/t/1601679/broadwell-e-thread/4050#post_25710774

Thanks to all who contributed their benches.


----------



## Jpmboy

Quote:


> Originally Posted by *Blameless*
> 
> I'm still willing to give AMD the benefit of the doubt, for now. Again, I'll assume incompetence before malice.
> 
> However, that may change if they don't correct this by providing the correct benchmark file/parameters.


lol - I tend to go with malice until proven otherwise.


----------



## luisxd

Quote:


> Originally Posted by *Nenkitsune*
> 
> I just thought it was funny that he just now posted that he was glad he got home 15min before the stream, HOURS after the stream ended.


Guess I need to turn on my sarcasm detector


----------



## Fierceleaf

Quote:


> Originally Posted by *done12many2*
> 
> The screwed up part of this is that they basically challenged people to run the "same" test on their current rigs, which would obviously give folks the perception that their rigs were a great deal slower than the new Ryzen chip. It's one thing to skew results, but it's another to mention that you can run the same test while knowing that the outcome would be misleading.


This right here is the truth. I completely agree with you.

Nothing that we were shown today allowed anyone to actually compare numbers. They are not allowing an apples to apples comparison.
But it got lots of people to stare at the Ryzen Logo for a couple minutes while they ran the benchmark. Oh how simple and devious they are.


----------



## Nenkitsune

Quote:


> Originally Posted by *Fierceleaf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *done12many2*
> 
> The screwed up part of this is that they basically challenged people to run the "same" test on their current rigs, which would obviously give folks the perception that their rigs were a great deal slower than the new Ryzen chip. It's one thing to skew results, but it's another to mention that you can run the same test while knowing that the outcome would be misleading.
> 
> 
> 
> This right here is the truth.
> 
> Nothing that we were shown today allowed anyone to actually compare numbers. They are not allowing an apples to apples comparison.
> But it got lots of people to stare at the Ryzen Logo for a couple minutes while they ran the benchmark. Oh how simple and devious they are.

Yep, that's the problem all of us are having. I don't even care about Ryzen's time; I just wanted to see ANYONE get a time comparable to the 6900K's with a similar setup, and no one has been able to. It takes either tons of clock speed or more cores than their test setup to get a comparable score, and that's not cool. If nobody actually tries to compare their score against the 6900K's and spots that something funny is going on, it makes Ryzen look way more incredible than it really is. And if they fudged the benchmark that much, who's to say both systems weren't running different settings altogether to make them look closer?

I really want Ryzen to compete toe to toe with Intel, but AMD's got to make sure they aren't fudging numbers. I really hope it was just an accident with uploading the wrong file setup.


----------



## steve210

I just did a Blender bench with my 3570K: 2:07 at 4.6 GHz. Seems about right to me.


----------



## JakdMan

Quote:


> Originally Posted by *Blameless*
> 
> This doesn't have anything to do with Intel vs. AMD. I'd be astounded if they didn't use the same test on both of _their_ systems; they'd have to be suicidal to cheat as that would be blatantly obvious as soon as reviews hit. However, they most certainly have not provided us with the test they used.
> 
> This is an AMD vs. where-the-hell-is-my-benchmark? thing.


Not what I meant, though I guess that's what I get for using the "x" there. I was more like: you guys have better chips that you aren't giving US?? Or where's the magic wand, lol


----------



## jincuteguy

Quote:


> Originally Posted by *Nenkitsune*
> 
> Yep, that's the problem all of us are having. I don't even care about the Ryzen's time, I just wanted to see ANYONE get a time comparable to the 6900k with a similar setup, and no one has been able to. It either takes tons of clock speed or more cores over their test to get a comparable score, and that's not cool. Without anyone actually trying to compare the score against the 6900k and not realizing something funny is going on, it makes Ryzen look way more incredible than it really is. And if they fudged the benchmark that much, who's to say both systems weren't running different settings all together to make them look closer?
> 
> I really want Ryzen to compete toe to toe with intel, but AMD's got to make sure they aren't fudging numbers. I really hope it was just an accident with uploading the wrong file setup.


The reason they did this is that their Ryzen number is not that fast, so they had to make up a number for the 6900K to compare against. That way people can't really see the real performance yet; it only promotes their new chip by showing it can be as fast as a 6900K.


----------



## Nenkitsune

Quote:


> Originally Posted by *jincuteguy*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> Yep, that's the problem all of us are having. I don't even care about the Ryzen's time, I just wanted to see ANYONE get a time comparable to the 6900k with a similar setup, and no one has been able to. It either takes tons of clock speed or more cores over their test to get a comparable score, and that's not cool. Without anyone actually trying to compare the score against the 6900k and not realizing something funny is going on, it makes Ryzen look way more incredible than it really is. And if they fudged the benchmark that much, who's to say both systems weren't running different settings all together to make them look closer?
> 
> I really want Ryzen to compete toe to toe with intel, but AMD's got to make sure they aren't fudging numbers. I really hope it was just an accident with uploading the wrong file setup.
> 
> 
> 
> The reason why they did this is because their Ryzen number is not that fast so they have to make up a number from the 6900 to compare. So then ppl can't really see the real performance yet, only to promote and showing their new chip that it can be as fast as a 6900K.

Assuming they just changed the sample count from 200 to a lower number and all settings between the Intel and AMD test setups were the same, it is good to see them so close together. I'm just annoyed that there's no way to compare similar systems to their test numbers, simply because the supplied file must be a different version than what they used on the live stream.


----------



## Blameless

I sent AMD an email to inform them of the discrepancy.


----------



## formula m

Quote:


> Originally Posted by *inedenimadam*
> 
> when I clicked the link in the OP, it took me to what I thought was a live stream. Now I feel stupid for watching patiently through the 15 minute countdown.


Dec 13th 3pm CST.


----------



## morbid_bean

Quote:


> Originally Posted by *Blameless*
> 
> I sent AMD an email to inform them of the discrepancy.


discrepancy..

Woah, what happened?!


----------



## Nenkitsune

Quote:


> Originally Posted by *morbid_bean*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Blameless*
> 
> I sent AMD an email to inform them of the discrepancy.
> 
> 
> 
> discrepancy..
> 
> Woah, what happened?!

They were showing a Blender test where their stock-clocked 6900K rendered their file in 36 s. Users are finding that comparable systems (even other 6900K systems) aren't anywhere near the render times AMD got, using the very file AMD uploaded to their website so people could run it and compare. We're thinking some settings were changed compared to the file they used in the live stream.


----------



## Evil Penguin

Turns out they were running a Windows 10 themed Kubuntu.








Performance results then make sense.


----------



## morbid_bean

Yikes...

*Just sayin'*, this kind of activity has happened before. Such a bummer


----------



## ozlay

I get 36 secs with my SR-2 lol


----------



## budgetgamer120

AMD seems to have made the doubters go quiet, lol


----------



## FattysGoneWild

Quote:


> Originally Posted by *budgetgamer120*
> 
> AMD seemed to have made the doubters quiet lol


Not for me. I say FUD until we see actual in-the-wild results, not "demo controlled" environments. If AMD blows this, it's tits-up time for them.


----------



## Insan1tyOne

Quote:


> Originally Posted by *Evil Penguin*
> 
> *Turns out they were running a Windows 10 themed Kubuntu.*
> 
> 
> 
> 
> 
> 
> 
> 
> Performance results then make sense.


Where is the hard proof that they were running Kubuntu?

--

There is so much baseless hatred, conspiracy, and general malice in this thread that it makes me seriously question whether any of us "enthusiasts" actually want competition against Intel in the performance CPU market at all. Save the conspiracy theories and warmongering for _after_ 3rd parties benchmark RyZen and confirm that AMD did (or did not) lie about its performance metrics. Until then, all of these claims mean nothing.

Just be patient and relax everyone. AMD at least deserves the benefit of the doubt for choosing to enter back into direct competition with Intel in the first place. That is not an easy feat.

- Insan1tyOne


----------



## Nenkitsune

Quote:


> Originally Posted by *FattysGoneWild*
> 
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> AMD seemed to have made the doubters quiet lol
> 
> 
> 
> Not for me. I say FUD until we see actual in-the-wild results, not "demo controlled" environments. If AMD blows this, it's tits-up time for them.

I'm really rootin' for AMD to put major pressure on Intel like they did back in the day. We need more competition in the CPU market. Look at the GPU market: the competition between ATI/AMD and Nvidia has always led to each side pushing for better graphics with each gen.

In any case, I'm gonna wait till I see some reviewers get chips in their hands to do independent testing.


----------



## budgetgamer120

Quote:


> Originally Posted by *FattysGoneWild*
> 
> Not for me. I say FUD until we see actual in-the-wild results, not "demo controlled" environments. If AMD blows this, it's tits-up time for them.


I think the benches are plausible. They are benches we can reproduce, instead of just some graphs.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *ozlay*
> 
> I get 36 secs with my SR-2 lol


I think the point they are trying to make is that you can't do what they did at the same energy consumption levels. Sure, your system is just as fast, but how much more energy did it consume doing it?

Two beautiful new Lamborghinis side by side doing 200 MPH, one consuming fuel at 9 MPG, the other at 15 MPG: which car looks far more interesting to you? In this day and age of energy-efficiency standards, climate change, and air pollution, people want power and performance at a lower overall cost.

I don't trust a damn thing I saw today. We need CES 2017 now, and some 3rd-party folks coming forward with their initial reviews.


----------



## Evil Penguin

Quote:


> Originally Posted by *Insan1tyOne*
> 
> Where is the hard proof that they were running Kubuntu?
> 
> --
> 
> There is so much baseless hatred, conspiracy, and general malice in this thread that it makes me seriously question if any of us "enthusiasts" actually want competition against Intel in the performance CPU market or not? Save the conspiracy theories and warmongering for _after_ 3rd parties benchmark RyZen and confirm that AMD did (or did not) lie about its performance metrics. Until then, all of these claims mean nothing.
> 
> Just be patient and relax everyone. AMD at least deserves the benefit of the doubt for choosing to enter back into direct competition with Intel in the first place. That is not an easy feat.
> 
> - Insan1tyOne


I was joking, that's obviously not true.


----------



## Forceman

Quote:


> Originally Posted by *budgetgamer120*
> 
> I think the benches are plausible. They are benches we can reproduce. Instead of some graphs


Except no one can reproduce them with the file they provided.


----------



## sepiashimmer

Quote:


> Originally Posted by *Blameless*
> 
> If it can't OC it won't beat a 5820K or 6800K (and yes, both my 5820K and 6800K will beat a stock 6900K in essentially everything) which are below 400 dollars.
> 
> I mean it will be a steal for the OEM markets and mainstream non-OCers, but for most of OCN it will be a big "meh" if it doesn't clock vaguely well.
> 
> I'll get one, if I think I can get 4GHz+ out of it.


As you usually say, OCN and enthusiasts don't even make up 1% of the computer market, so why pander to them?

They are mostly targeting casual users with those benchmarks. Casual users don't care about the details of the benchmarks; they just care what beat what. They mostly use their computers for word processing, web browsing, movie watching, and music listening, and for those things even a Piledriver is enough.


----------



## budgetgamer120

Quote:


> Originally Posted by *Forceman*
> 
> Except no one can reproduce them with the file they provided.


How come?
Quote:


> Originally Posted by *Blameless*
> 
> If it can't OC it won't beat a 5820K or 6800K (and yes, both my 5820K and 6800K will beat a stock 6900K in essentially everything) which are below 400 dollars.
> 
> I mean it will be a steal for the OEM markets and mainstream non-OCers, but for most of OCN it will be a big "meh" if it doesn't clock vaguely well.
> 
> I'll get one, if I think I can get 4GHz+ out of it.


It will be great for me; I don't overclock. What would lead you to even think Zen won't be able to overclock?

Before today I read here that it wouldn't even do 3 GHz. Now we should expect overclocking.


----------



## headd

Tried the Blender test with a 6700K @ 4.5 GHz: 1:07.
Zen is pretty fast.


----------



## LancerVI

Quote:


> Originally Posted by *Banko*
> 
> So since AMD is basically getting double the performance even with the 6900K they used, the only way I am able to achieve close to double the performance is if you change the number of render samples from 200 to 100.
> 
> If anyone has a 6900K, you can try clicking on the camera icon on the right side, expanding Sampling, and changing the number of render samples to 100 instead of 200.
> 
> On my 3770K at 4.7 GHz I went from 1:37 to 47 seconds. I don't have access to my work machine now since I am at home, but reducing the number of samples should scale it roughly linearly.
> 
> So I would suggest someone with a 6800K or a 6900K change it to 100 and see if you are more in line with AMD's benchmarks.
> 
> It's possible they supplied us the wrong blend file, or maybe that setting doesn't get saved in the .blend file and is only stored on the machine.


That's an interesting thought.

I just tested my home system, which is also a 5820K @ 4.2 GHz with 16 GB DDR4 @ 2400, and got this with the 200- and 100-sample settings:

200 samples: ~59 seconds

100 samples: ~28 seconds

You may be onto something there. 59 secs versus 28.
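Banko's samples theory is easy to sanity-check: a Cycles render's time should scale roughly linearly with sample count. A quick sketch, using only the numbers reported in this thread (the 5820K @ 4.2 GHz times above), nothing official:

```python
# Sanity check of the samples theory: Cycles render time scales roughly
# linearly with sample count. The figures below are the ones reported in
# this thread (5820K @ 4.2 GHz), not official AMD/Intel numbers.

def predicted_time(known_samples, known_seconds, target_samples):
    """Predict render time assuming time is proportional to sample count."""
    return known_seconds * target_samples / known_samples

measured_200 = 59.0   # ~59 s at 200 samples
measured_100 = 28.0   # ~28 s at 100 samples

estimate_100 = predicted_time(200, measured_200, 100)   # 29.5 s
error = abs(estimate_100 - measured_100) / measured_100
print(f"predicted {estimate_100:.1f} s, measured {measured_100:.1f} s "
      f"({error:.0%} off)")
```

The prediction lands within ~5% of the measured 100-sample time, which is ordinary run-to-run noise, so these numbers are at least consistent with the uploaded file carrying a different sample count than the on-stage one.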


----------



## Nenkitsune

Quote:


> Originally Posted by *budgetgamer120*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Forceman*
> 
> Except no one can reproduce them with the file they provided.
> 
> 
> 
> How comes?
> Quote:
> 
> 
> 
> Originally Posted by *Blameless*
> 
> If it can't OC it won't beat a 5820K or 6800K (and yes, both my 5820K and 6800K will beat a stock 6900K in essentially everything) which are below 400 dollars.
> 
> I mean it will be a steal for the OEM markets and mainstream non-OCers, but for most of OCN it will be a big "meh" if it doesn't clock vaguely well.
> 
> I'll get one, if I think I can get 4GHz+ out of it.
> 
> 
> It will be great for me. I don't overclock. What would lead you to even think about Zen not being able to overclock?
> 
> Before today I read here that it won't be 3ghz. Now we should expected overclocking.

We can't reproduce them because we're seeing a large disparity in the benchmark times of comparable 6900K systems. AMD ran theirs at stock clocks and got a 36 s render time; a user here has a 6900K overclocked to 4.2 GHz and got a render time of 42 seconds. Does that seem normal to you? It seems possible that AMD accidentally uploaded a file to their website with different settings than what they used in the live stream.

As for overclocking, if they can clock as high as Skylake with close enough IPC, I think they'll pull a good number of people back to AMD. The reason most people here at OCN won't care if it doesn't clock well is that a lot of users here would only buy one if they could clock it high enough to beat their current overclocked systems. No one would buy a CPU knowing it won't overclock high enough to be a noticeable performance gain over what they're already running. It's why I never bothered with any AMD chips after my 940BE came out: I had it clocked at 3.5 GHz, and every time a new AMD chip came out, the performance gain wasn't enough to make me want one. Intel, on the other hand, had massive gains over AMD, so I got a 6600k this time.


----------



## budgetgamer120

Quote:


> Originally Posted by *Nenkitsune*
> 
> we can't reproduce them because we're seeing a large disparity in the benchmark times of a comparable 6900k system. AMD ran theirs at stock clocks and got a 36s render time. a user here has a 6900k overclocked to 4.2ghz and had a render time of 42 seconds. Does that seem normal to you? It seems possible that AMD accidentally uploaded the wrong file to their website that has different settings than what they had in the live stream.
> 
> as for overclocking, if they can clock as high as skylake with a close enough IPC I think they'll be able to pull a good amount of people back to AMD. The reason why most people here at OCN won't care if it doesn't clock well is because a lot of users here probably would only buy one if they could clock it high enough to be better than their current overclocked systems. No one would buy a CPU knowing that it won't overclock high enough to be a noticeable performance gain over what they're already running. It's why I never bothered with any AMD chips after my 940BE came out. I had it clocked at 3.5ghz, and every time a new AMD chip came out, the performance gain wasn't enough to make me want one. Intel on the other hand, has massive gains over AMD, so I got a 6600k this time.


The 6900K AMD was using was at 4.544 GHz, no?


----------



## maltamonk

So how many comparable systems (not other cpus, not other ram, not other variables) have tried to replicate it here?


----------



## LancerVI

Quote:


> Originally Posted by *maltamonk*
> 
> So how many comparable systems (not other cpus, not other ram, not other variables) have tried to replicate it here?


Nearly an impossible task, I would think, not knowing the details of the systems AMD used. However, I think @Banko may be onto something in his post here.


----------



## Forceman

Quote:


> Originally Posted by *budgetgamer120*
> 
> The 6900k amd was using was at 4.544ghz. No?


If only it was that simple.


----------



## maltamonk

Quote:


> Originally Posted by *LancerVI*
> 
> Nearly an impossible task I would think; not knowing the details of the systems AMD used, however I think @Banko may be onto something on his post here.


Well there ya go. That right there means all the conclusions being made about those comparisons mean absolutely jack all. That's why we have controls in testing.


----------



## CrazyElf

Major props to Blameless for noting this discrepancy.

I'm at a loss to explain how they got their results. In the original AMD Blender demo, their Zen got 35.38 seconds and, in comparison, the 6900K got 35.44 seconds. For that to happen, I'd expect the 6900K to have been running at around 4.5 GHz.

Anyway, here are my results with a 5960X @ 4.5 GHz; the fifth result is also visible in the Blender window. RAM was quad-channel 2666 @ 13-13-13-32.

Let me know what you guys think. I haven't had the time to aggressively push the overclock.

Run 1: 39.74
Run 2: 38.63
Run 3: 38.89
Run 4: 38.76
Run 5: 38.63

I was not able to get AMD's results.

Edit:

I believe that both of the demos were at 3.4 GHz.

Edit 2:
It may very well be the 100 vs 200 samples. Will have to run tomorrow.

I wish that AMD had disclosed the full parameters behind this. I find it somewhat misleading. A well-deserved rep to Blameless.
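Another way to frame the discrepancy is to project these 4.5 GHz runs down to stock clocks. A rough sketch, assuming render time scales inversely with clock speed and that the 5960X and 6900K (both 8C/16T parts) have broadly similar per-clock throughput; all figures come from this thread, not from AMD:

```python
# Back-of-the-envelope check, assuming render time scales inversely with
# clock speed and the 5960X and 6900K have broadly similar per-clock
# throughput. All figures come from this thread, not official data.

runs_at_4p5 = [39.74, 38.63, 38.89, 38.76, 38.63]   # 5960X @ 4.5 GHz
mean_time = sum(runs_at_4p5) / len(runs_at_4p5)      # ~38.9 s

# Projected time at the 6900K's ~3.4 GHz all-core turbo:
projected_stock = mean_time * (4.5 / 3.4)            # ~51.5 s

amd_claimed = 35.44  # 6900K time shown on stage
print(f"projected ~{projected_stock:.1f} s at stock vs "
      f"{amd_claimed} s claimed")
```

A stock 6900K finishing roughly 16 seconds ahead of that projection is hard to explain by silicon variance alone, which is why the 100-vs-200-sample theory looks plausible.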


----------



## LancerVI

Quote:


> Originally Posted by *maltamonk*
> 
> Well there ya go. That right there means all the conclusions being made about those comparisons mean absolutely jack all. That's why we have controls in testing.


Well, this is OCN. We can't help ourselves.

Truly, isn't this why we are here? I understand where you're coming from, but what an exciting time yes?


----------



## maltamonk

Quote:


> Originally Posted by *LancerVI*
> 
> Well, this is OCN. We can't help ourselves.
> 
> Truly, isn't this why we are here? I understand where you're coming from, but what an exciting time yes?


Good point. I'll grab my popcorn


----------



## LancerVI

Quote:


> Originally Posted by *CrazyElf*
> 
> Major props to Blameless for noting this discrepancy.
> 
> I'm at a loss to explain how they got their results. In the original, the AMD Blender demo shows their Zen getting 35.38 seconds and then in comparison, 6900K got 35.44 seconds. For that to happen, I'd expect the 6900K would have to be at around 4.5 GHz.
> 
> Anyways, my results with a 5960X @ 4.5 GHz. The fifth result is visible as well on the Blender. RAM was quad channel 2666 @ 13-13-13-32.
> 
> 
> 
> Let me know what you guys think. I haven't had the time to aggressively push the overclock.
> 
> Run 1: 39.74
> Run 2: 38.63
> Run 3: 38.89
> Run 4: 38.76
> Run 5: 38.63
> 
> I was not able to get AMD's results.


Again, I think you should read @Banko's post here. Because according to AMD, both rigs were at stock clocks. (if I remember correctly)


----------



## budgetgamer120

Quote:


> Originally Posted by *CrazyElf*
> 
> Major props to Blameless for noting this discrepancy.
> 
> I'm at a loss to explain how they got their results. In the original, the AMD Blender demo shows their Zen getting 35.38 seconds and then in comparison, 6900K got 35.44 seconds. For that to happen, I'd expect the 6900K would have to be at around 4.5 GHz.
> 
> Anyways, my results with a 5960X @ 4.5 GHz. The fifth result is visible as well on the Blender. RAM was quad channel 2666 @ 13-13-13-32.
> 
> 
> 
> Let me know what you guys think. I haven't had the time to aggressively push the overclock.
> 
> Run 1: 39.74
> Run 2: 38.63
> Run 3: 38.89
> Run 4: 38.76
> Run 5: 38.63
> 
> I was not able to get AMD's results.


AMD said the 6900k was at 4.5ghz.


----------



## Forceman

Quote:


> Originally Posted by *budgetgamer120*
> 
> AMD said the 6900k was at 4.5ghz.


No. They said it was at stock.


----------



## Nenkitsune

Quote:


> Originally Posted by *budgetgamer120*
> 
> AMD said the 6900k was at 4.5ghz.


AMD said it was at its stock frequency.

https://youtu.be/4DEfj2MRLtA?t=31m52s

"3.2ghz base, 3.7ghz boost, no adjustments, just straight out of the box"


----------



## LancerVI

Quote:


> Originally Posted by *Forceman*
> 
> No. They said it was at stock.


Yeah, I could have sworn she said that Ryzen was at stock with boost disabled and the 6900K was at stock with boost enabled. Right?


----------



## Forceman

Quote:


> Originally Posted by *LancerVI*
> 
> Yeah, I could have sworn she said that Ryzen was at stock with boost disabled and the 6900K was at stock with boost enabled. Right?


Yes, which in theory means they were both running at 3.4 (pretty sure that's what a fully loaded stock 6900K turbos to).


----------



## budgetgamer120

Hmm I must have heard wrong


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> Yes, which in theory means they were both running at 3.4 (pretty sure that's what a fully loaded stock 6900K turbos to).


They boost to 3.7, I believe.


----------



## Nenkitsune

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Forceman*
> 
> Yes, which in theory means they were both running at 3.4 (pretty sure that's what a fully loaded stock 6900K turbos to).
> 
> 
> 
> They boost to 3.7, I believe.
Click to expand...

When all 8 cores are loaded they only boost to 3.4GHz. I think they only hit 3.7 when just one or two cores are loaded.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Nenkitsune*
> 
> When all 8 cores are loaded they only boost to 3.4GHz. I think they only hit 3.7 when just one or two cores are loaded.


Yeah, okay. It seems the cores hover around 3.5 in something like Blender, and do 3.7-4.0 in lightly threaded workloads.


----------



## Pesmerrga

1 min 08 seconds with an E5-1660 @ 4.7GHz.


----------



## Fierceleaf

ENHANCE! Clean that up a bit... Yeah, this doesn't work like TV. Is that 200 or 100 render samples? I can't tell from the pixels, LOL.











We need the original photo, before it was compressed for the web...


----------



## SuperZan

Undercover NSA chappie, please reveal yourself so we can get a clean enhance on that shot.


----------



## Nick the Slick

Quote:


> Originally Posted by *Fierceleaf*
> 
> ENHANCE! Clean that up a bit... Yeah, this doesn't work like TV. Is that 200 or 100 render samples? I can't tell from the pixels, LOL.


It actually looks like 116 to me. I haven't used Blender, is that even an option?


----------



## Aussiejuggalo

Was looking on Reddit and saw a thread about the FPS counters; apparently if you skip to 39:17 in the stream you can see them on the big screens.

I'm surprised there have been no photos posted online from anyone there. A bit strange, isn't it? I would have expected some close-ups of the computers, all the slides, etc.


----------



## Firann

Quote:


> Originally Posted by *Nick the Slick*
> 
> It actually looks like 116 to me. I haven't used Blender, is that even an option?


It does indeed look like the first two numbers are the same and the third is a more "rounded" number. It looks like 11x, where x could be any fat round number. I don't know how the sampling works in Blender, though.


----------



## lolerk52

Mystery solved. It's 100 samples.


----------



## lombardsoup

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> Was looking on Reddit and saw a thread about the FPS counters; apparently if you skip to 39:17 in the stream you can see them on the big screens.
> 
> I'm surprised there have been no photos posted online from anyone there. A bit strange, isn't it? I would have expected some close-ups of the computers, all the slides, etc.


Will have to go back and look; didn't see diddly the first time.


----------



## Nenkitsune

Quote:


> Originally Posted by *lolerk52*
> 
> 
> 
> Mystery solved. It's 100 samples.


Wow, were you really able to zoom in that well? The middle number doesn't seem to be quite a 0 compared to the 0 on the right. It's definitely 1x0, though.

Could it be 150?


----------



## Roaches

Popcorn time I guess.


----------



## lolerk52

Quote:


> Originally Posted by *Nenkitsune*
> 
> Wow, were you really able to zoom in that well? The middle number doesn't seem to be quite a 0 compared to the 0 on the right. It's definitely 1x0, though.
> 
> Could it be 150?


I did my best to enhance it, but it really does look like 100 samples to me even before the enhancement.


----------



## n4p0l3onic

What would be the point of using a convoluted number like 11x instead of a straight 100?


----------



## Aussiejuggalo

Quote:


> Originally Posted by *lombardsoup*
> 
> Will have to go back and look; didn't see diddly the first time.


Yeah, I didn't see it either. Then again, I wasn't watching it all the time.


----------



## lombardsoup

Quote:


> Originally Posted by *Roaches*
> 
> Popcorn time I guess.


There was a troll meltdown on wccftech earlier; this guy was attacking anyone who wants to buy Zen. Interesting times...


----------



## Nenkitsune

Whatever the sample count is, I don't think it was their intention to mislead us. I wonder if they halved it from 200 samples for time's sake, so the bench tests wouldn't take too long, and to make it look more exciting to see it render faster. It's not particularly exciting to watch little squares render slowly, versus a flurry of squares filling up the whole screen.


----------



## lolerk52

Quote:


> Originally Posted by *Nenkitsune*
> 
> Whatever the sample count is, I don't think it was their intention to mislead us. I wonder if they halved it from 200 samples for time's sake, so the bench tests wouldn't take too long, and to make it look more exciting to see it render faster. It's not particularly exciting to watch little squares render slowly, versus a flurry of squares filling up the whole screen.


That's exactly what I thought was going on there. It's very logical to do that; it cuts nearly half a minute of basically doing nothing.

Just as double confirmation from the Anandtech forums:


----------



## Nenkitsune

And there we have it! Now we know what they changed to get the benchmark scores they were hitting. Now we can actually compare our results to theirs.

I'll bet they uploaded the file before the presentation, and the dude running the computers felt the need to halve the sampling size to cut time and make it look more exciting for those watching.


----------



## lombardsoup

Quote:


> Originally Posted by *Nenkitsune*
> 
> And there we have it! Now we know what they changed to get the benchmark scores they were hitting. Now we can actually compare our results to theirs.
> 
> I'll bet they uploaded the file before the presentation, and the dude running the computers felt the need to halve the sampling size to cut time and make it look more exciting for those watching.


Announcing a price estimate would have been more effective at generating hype imo, but this works too. I have never seen shares go up this fast before.


----------



## Nenkitsune

Quote:


> Originally Posted by *lombardsoup*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> And there we have it! Now we know what they changed to get the benchmark scores they were hitting. Now we can actually compare our results to theirs.
> 
> I'll bet they uploaded the file before the presentation, and the dude running the computers felt the need to halve the sampling size to cut time and make it look more exciting for those watching.
> 
> 
> 
> Announcing a price estimate would have been more effective at generating hype imo, but this works too. I have never seen shares go up this fast before.
Click to expand...

When you want to drum up excitement, rather than telling people how much they'll be expected to pay, it's better to imply that your product will be cheaper (by quoting the competitor's price in a manner that makes it seem astronomically expensive) and to show how well it performs against the competitor in a way that makes it look like a very high-end product (in this case, by making the render test go by much faster).

That's just my opinion though. Whether or not it's true, I have no idea.


----------



## lolerk52

Now that we know the exact settings, I tested my Core i5 6600K at 4.4GHz.

One test was done with the exact build AMD linked, which resulted in 54 seconds. The second used an experimental build compiled by The Stilt on the Anandtech forums enabling 256-bit AVX instructions, which resulted in a 32-second render time.


----------



## tashcz

Could any of you estimate how the clock-for-clock comparison with, let's say, Skylake would look? A 5% difference?

I really expected a bit more from this presentation. They basically didn't show us anything. Some of you rendering guys were probably happy, but what about gaming and other stuff? I feel like one game can't represent the chip. BF1 favors AMD anyway; even FX chips can edge out Skylake in BF1, mostly online.


----------



## Roaches

Quote:


> Originally Posted by *lombardsoup*
> 
> There was a troll meltdown on wccftech earlier; this guy was attacking anyone who wants to buy Zen. Interesting times...


I don't pay attention to comments on that site; you can tell most of the user names are just a bunch of throwaways looking to start a circlejerk, or corporate-sponsored users doing their usual thing.

What I'm most interested out of AMD is their server chip lineup. Big sockets make me drool, hope they have something competitive for the workstation and HEDT market for scientific and content creators alike. So far I like what I'm seeing.


----------



## lombardsoup

Quote:


> Originally Posted by *Roaches*
> 
> I don't pay attention to comments on that site; you can tell most of the user names are just a bunch of throwaways looking to start a circlejerk, or corporate-sponsored users doing their usual thing.
> 
> What I'm most interested out of AMD is their server chip lineup. Big sockets make me drool, hope they have something competitive for the workstation and HEDT market for scientific and content creators alike. So far I like what I'm seeing.


On the gaming front I'm waiting for third party benchmarks, but everything else is thus far looking fantastic. This is a very big deal for anyone who uses their PC for work.


----------



## airfathaaaaa

I wonder if AMD used their new extension for Blender (the one that someone "leaked" two days ago and that's still in development).


----------



## lolerk52

Quote:


> Originally Posted by *airfathaaaaa*
> 
> I wonder if AMD used their new extension for Blender (the one that someone "leaked" two days ago and that's still in development).


Hmmm? What are you talking about?


----------



## MadGoat

Quote:


> Originally Posted by *lolerk52*
> 
> That's exactly what I thought was going on there. It's very logical to do that; it cuts nearly half a minute of basically doing nothing.
> 
> Just as double confirmation from the Anandtech forums:


explains it...


----------



## airfathaaaaa

Quote:


> Originally Posted by *lolerk52*
> 
> Hmmm? What are you talking about?


I can't really find the article at the moment because Google is bloated with yesterday's show, but I saw an article saying that AMD has developed an "extension" for Blender, along with some other mumbo jumbo.


----------



## Nick the Slick

Quote:


> Originally Posted by *lolerk52*
> 
> 
> 
> Mystery solved. It's 100 samples.


Not quite as impressive, then. Would like to see what it would do with boost enabled or overclocked. My 4.6 GHz 4770k does the test in 37.78s with it set to 100.


----------



## aberrero

If anyone wants to try Stilt's build, it's here: https://1drv.ms/u/s!Ag6oE4SOsCmDhFAm03vWlB3s_qeD

pw: ryzen

This is the blender file: http://download.amd.com/demo/RyzenGraphic_27.blend

Just go to Render -> Render Image and check your time.

I got 1:13 on my 4.2GHz 4690k. Could certainly use an upgrade.

Edit: 1:13 was at 200 samples. At 100 samples it is 36.47 seconds.
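Assuming Cycles render time scales roughly linearly with sample count (which the 200-vs-100 numbers above suggest), you can sanity-check results across sample settings with a one-liner. A minimal sketch; the helper name is my own:

```python
def convert_render_time(time_s, samples_from, samples_to):
    """Estimate a Cycles render time at a different sample count,
    assuming render time scales roughly linearly with samples."""
    return time_s * samples_to / samples_from

# 73 s at 200 samples -> expected ~36.5 s at 100 samples,
# which lines up with the 36.47 s measured above.
print(convert_render_time(73.0, 200, 100))  # prints 36.5
```

Real renders have a small fixed overhead (scene sync, BVH build), so short renders will deviate a little from the linear estimate.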


----------



## lolerk52

Quote:


> Originally Posted by *Nick the Slick*
> 
> Not quite as impressive then. Would like to see what it would do with boost enabled or overclocked. My 4.6 GHz 4770k does the test in 37.78s with it set to 100.


Well, of course. This test scales nearly perfectly with clocks and cores, so a core advantage gets somewhat negated by clock speeds.
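If the test really does scale almost linearly with cores and clocks, a back-of-the-envelope model makes these comparisons quick. A minimal sketch, assuming perfect scaling and ignoring SMT and memory effects (the function and the numbers are hypothetical, not anyone's measurements):

```python
def est_render_time(ref_time_s, ref_cores, ref_ghz, cores, ghz, ipc_ratio=1.0):
    """Scale a reference render time to a different CPU, assuming near-perfect
    scaling with core count and clock speed (roughly true for Cycles).
    ipc_ratio is the target chip's IPC relative to the reference
    (an assumption, not a measurement)."""
    speedup = (cores / ref_cores) * (ghz / ref_ghz) * ipc_ratio
    return ref_time_s / speedup

# Hypothetical: an 8-core 3.4 GHz chip renders in 35 s; a 4-core 4.6 GHz
# chip of similar IPC has 18.4 core-GHz vs 27.2, so roughly 52 s.
print(round(est_render_time(35.0, 8, 3.4, 4, 4.6), 1))
```

This is exactly why a high clock on a quad can get surprisingly close to a stock octo-core in this benchmark: core count and clock trade off almost one-for-one.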


----------



## Nenkitsune

I did some testing for fun. Managed to suicide my CPU at 4.9GHz (it would hit up to 84-85C and could only do one pass; it would crash trying to do a second run even after letting it cool off).

3.4ghz


4.4ghz


4.7ghz (my 24/7 setting)


4.9ghz (there are some odd things happening in this image. I think my UI scaling was disabled due to the overclock being so unstable)


----------



## fs123

Quote:


> Originally Posted by *Nick the Slick*
> 
> Not quite as impressive then. Would like to see what it would do with boost enabled or overclocked. My 4.6 GHz 4770k does the test in 37.78s with it set to 100.


The fact that RYZEN is beating a 6900K (Broadwell-E) by about 10% is fantastic. The 6900K is probably not really boosting and is running at 3.2GHz on all cores when fully loaded. That means RYZEN (after taking the 200MHz advantage into account) is probably around 6% faster than the 6900K clock for clock.

This puts it right up there with the latest Intel chips like Skylake, when we all expected it to barely reach Sandy Bridge levels. A big surprise, and great for consumers.
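The clock-for-clock comparison above can be written as a tiny normalization: divide out the clock difference from the measured speedup. A sketch under the same assumptions (linear clock scaling, equal core/thread counts); the times and clocks below are hypothetical placeholders, not AMD's figures:

```python
def ipc_ratio(time_a_s, clock_a_ghz, time_b_s, clock_b_ghz):
    """Clock-for-clock (IPC) ratio of CPU A over CPU B from two render
    times, assuming linear clock scaling and equal core/thread counts."""
    # Lower time means faster; normalize each result by its clock speed.
    return (time_b_s * clock_b_ghz) / (time_a_s * clock_a_ghz)

# Hypothetical numbers: A does 32 s at 3.4 GHz, B does 35 s at 3.2 GHz
# -> A comes out roughly 3% faster per clock.
print(round(ipc_ratio(32.0, 3.4, 35.0, 3.2), 3))
```

Note how sensitive the result is to the assumed 6900K clock (3.2 vs 3.4GHz all-core turbo changes the answer by several percent), which is why the stock-clock question matters so much in this thread.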


----------



## Nenkitsune

If ryzen can overclock as well as intel chips then it'll probably be a big hit as long as it's also competitive vs intel's counterparts.


----------



## Marios145

Somebody should run this on their FX-8350, see how much amd has improved.


----------



## Nenkitsune

I would run it on my 940BE and clock it to 3.4ghz but I don't think the PSU it's currently running on would take too kindly to that sort of abuse.


----------



## Nick the Slick

Quote:


> Originally Posted by *fs123*
> 
> The fact that RYZEN is beating a 6900K (Broadwell-E) by about 10% is fantastic. The 6900K is probably not really boosting and is running at 3.2GHz on all cores when fully loaded. That means RYZEN (after taking the 200MHz advantage into account) is probably around 6% faster than the 6900K clock for clock.
> 
> This puts it right up there with the latest Intel chips like Skylake, when we all expected it to barely reach Sandy Bridge levels. A big surprise, and great for consumers.


Oh no, don't get me wrong, I'm not trying to take away from what happened here. I think it's amazing in the grand scheme of things and I'm really excited. It was just a bit misleading in that I thought this was completely crushing my particular chip at my overclock settings, but with the settings sorted out I can almost match that chip's (stock) score, so it's not _as_ impressive, but impressive nonetheless (considering what you stated, plus it's only one benchmark, plus it could potentially be very cheap, relatively speaking). Lots of unknowns. I won't be getting one regardless, since my 4770k is already far more than I know what to do with, and on an impulse buy I just got a 6700k from the Retail Edge program (just couldn't pass that price up), but I'm still excited!


----------



## Marios145

Quote:


> Originally Posted by *lolerk52*
> 
> That's exactly what I thought was going on there. It's very logical to do that; it cuts nearly half a minute of basically doing nothing.
> 
> Just as double confirmation from the Anandtech forums:


They're running version 2.77?


----------



## Xuper

I'm confused. Is this pronounced "risin"?

https://youtu.be/aNczvNJ9KEo?t=30

This guy said "risin".


----------



## fleetfeather

What's the source of The Stilt's build? Like, where did it come from, and why?


----------



## Kuivamaa

Quote:


> Originally Posted by *fs123*
> 
> This puts it right up there with the latest Intel chips like Skylake, when we all expected it to barely reach Sandy Bridge levels. A big surprise, and great for consumers.


No big surprise. Sandy levels were only brought into the discussion because that's where 40% over Excavator would land. Otherwise, many of us pointed out that the core as described in early 2015 was at least "Haswell class". Sandy Bridge-level performance would have meant that Zen's designers were incompetent. Samsung's current 14nm FinFET process is light years ahead of Intel's 32nm from 2010 used in Sandy. AMD had a much bigger transistor budget per core to work with, and it shows. This core is much more powerful, exactly as we expected it to be. As for Skylake, there is a caveat: AVX2 should still be considerably faster on current Intel chips.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *Xuper*
> 
> I'm confused. Is this pronounced "risin"?
> 
> https://youtu.be/aNczvNJ9KEo?t=30
> 
> This guy said "risin".


Lisa Su said it like "Rye-Zen", so yeah, he said it right.


----------



## coolhandluke41

my 4930k @ 4.4


Spoiler: $)


----------



## Marios145

My FX-6300 results(100 samples):

CPU: 3.4GHz
02:12.31

CPU: 4.4GHz (daily)
01:41.5

Btw, these results are with CPUNB @2.4GHz and CL9-1600 ram


----------



## airfathaaaaa

So where does the actual IPC increase land? 50-55%?


----------



## Blameless

Quote:


> Originally Posted by *Insan1tyOne*
> 
> Where is the hard proof that they were running Kubuntu?


Pretty sure that was a joke.

Anyway, one of my initial hypotheses was that they ran the test in Linux and either played back a pre-recorded demo, or were using a Linux VM (from within Windows 10) to run Blender in the faster/more neutral environment. This seemed plausible because of how enormous the benchmark discrepancy is.

After more testing (I ran the 32-bit and 64-bit benchmark in Windows 7, Windows 10, and Lubuntu 16.10) and discovering that even the large advantage Linux provides was not enough to make up for the difference in reported time, I've all but dismissed this hypothesis.

The only sensible conclusion remaining is that the test file AMD provided did not contain all the settings actually used in the demonstration.
Quote:


> Originally Posted by *budgetgamer120*
> 
> I think the benches are plausible. They are benches we can reproduce. Instead of some graphs


The Zen vs. 6900K results are plausible, but it's still obvious that the file they provided for comparison is not what was used in their tests.
Quote:


> Originally Posted by *CrazyElf*
> 
> Major props to Blameless for noting this discrepancy.


Thanks.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> They boost to 3.7, I believe.


Only if you are loading just two cores, or if you have "multi-core enhancement" (which is technically an overclock) enabled.
Quote:


> Originally Posted by *lolerk52*
> 
> 
> 
> Mystery solved. It's 100 samples.


100 samples is producing lower times than I would expect, given the 6900K results in the test.

My primary system, which should match a 6900K in this test, gets ~52.5 seconds at 200 samples and 26.33 seconds at 100 samples. This CPU should not be anywhere near 10 seconds faster than a stock 6900K in any 30-40 second Blender render.

The Anandtech image seems to imply you are correct, but the scores still don't add up.
Quote:


> Originally Posted by *n4p0l3onic*
> 
> What would be the point of using a convoluted number like 11x instead of a straight 100?


What's the point in AMD using 100 when the file defaults to 200?


----------



## budgetgamer120

Quote:


> Originally Posted by *Nick the Slick*
> 
> Not quite as impressive then. Would like to see what it would do with boost enabled or overclocked. My 4.6 GHz 4770k does the test in 37.78s with it set to 100.


Quote:


> Originally Posted by *Nick the Slick*
> 
> Oh no, don't get me wrong, I'm not trying to take away from what happened here. I think it's amazing in the grand scheme of things and I'm really excited. It was just a bit misleading in that I thought this was completely crushing my particular chip at my overclock settings, but with the settings sorted out I can almost match that chip's (stock) score, so it's not _as_ impressive, but impressive nonetheless (considering what you stated, plus it's only one benchmark, plus it could potentially be very cheap, relatively speaking). Lots of unknowns. I won't be getting one regardless, since my 4770k is already far more than I know what to do with, and on an impulse buy I just got a 6700k from the Retail Edge program (just couldn't pass that price up), but I'm still excited!


Your massively overclocked chip can't match Zen, and you say it is not that impressive.


----------



## Marios145

Quote:


> Originally Posted by *Blameless*
> 
> Pretty sure that was a joke.
> 
> Anyway, one of my initial hypotheses was that they ran the test in Linux and either played back a pre-recorded demo, or were using a Linux VM (from within Windows 10) to run Blender in the faster/more neutral environment. This seemed plausible because of how enormous the benchmark discrepancy is.
> 
> After more testing (I ran the 32-bit and 64-bit benchmark in Windows 7, Windows 10, and Lubuntu 16.10) and discovering that even the large advantage Linux provides was not enough to make up for the difference in reported time, I've all but dismissed this hypothesis.
> 
> The only sensible conclusion remaining is that the test file AMD provided did not contain all the settings actually used in the demonstration.
> The Zen vs. 6900K results are plausible, but it's still obvious that the file they provided for comparison is not what was used in their tests.
> Thanks.
> Only if you are only loading two cores, or if you have "multi-core enhancement" (which is technically an overclock) enabled.
> 100 samples is producing lower times than I would expect, given the 6900K results in the test.
> 
> My primary system, which should match a 6900K in this test, gets ~52.5 seconds at 200 samples and 26.33 seconds at 100 samples. This CPU should not be anywhere near 10 seconds faster than a stock 6900K in any 30-40 second Blender render.
> 
> The Anandtech image seems to imply you are correct, but the scores still don't add up.
> What's the point in AMD using 100 when the file defaults to 200?


If you look at the Anandtech photo, they seem to be running version 2.77(a)? Could you try checking whether that's slower than 2.78a on the 6900K? Also, try using dual channel instead of quad.


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> Your massively overclocked chip can't match Zen, and you say it is not that impressive.


A 3.4GHz octo-core only matching a 4.6GHz Ivy quad-core in a highly threaded test, when the former is supposed to have more IPC than the latter, is not impressive. If it were accurate, it would be hugely disappointing.

Blender scales almost perfectly with cores and clock speed. It's as CPU-dependent and as parallel as it gets. You should need a ~7GHz hyperthreaded Ivy quad to match a stock 6900K or a 3.4GHz Zen.

However, I do not think it's accurate, because the 100 sample size used is still not producing comparable scores to AMD's demo.


----------



## hawker-gb

Well done AMD.

Xperia Z5 via Tapatalk


----------



## Blameless

Quote:


> Originally Posted by *Marios145*
> 
> If you look at the Anandtech photo, they seem to be running version 2.77(a)? Could you try checking whether that's slower than 2.78a on the 6900K? Also, try using dual channel instead of quad.


I'll try.


----------



## Blameless

Quote:


> Originally Posted by *Marios145*
> 
> If you look at the Anandtech photo, they seem to be running version 2.77(a)? Could you try checking whether that's slower than 2.78a on the 6900K? Also, try using dual channel instead of quad.


2.77a is producing virtually identical results to 2.78a on my systems, maybe ever so slightly faster.

Edit: Best of three runs with 2.78a x64 on my primary system is 52.3 seconds (default 200 sample size). Best of three runs with 2.77a x64 is 51.9 seconds.

100 render samples knocks both to ~26 seconds.


----------



## aberrero

Quote:


> Originally Posted by *fleetfeather*
> 
> What's the source of The Stilt's build? Like, where did it come from, and why?


He's a guy on the Anandtech forums. He recompiled Blender to support AVX extensions.


----------



## Shatun-Bear

I hope no one in here is falling for the AMD marketing. *There is no chance in hell Ryzen has better IPC or is generally faster than a 6900K.* The tests have been set up, cherry-picked, etc., but you can be sure they're designed to be extremely flattering.

That said, actual Ryzen performance should be competitive.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Shatun-Bear*
> 
> I hope no one in here is falling for the AMD marketing. *There is no chance in hell Ryzen has better IPC or is generally faster than a 6900K.* The tests have been set up, cherry-picked, etc., but you can be sure they're designed to be extremely flattering.
> 
> That said, actual Ryzen performance should be competitive.


I guess you can provide facts about that, yeah?


----------



## Olivon

Quote:


> Originally Posted by *Mahigan*
> 
> Do you remember the $299 Radeon HD 4870? *when nVIDIA were selling their GTX 280 series for around $800?*


What are you talking about? Aren't you fed up with lying to and misinforming people?
Quote:


> The GeForce GTX 280 will retail for $650 with availability planned for June 17th.


http://www.anandtech.com/show/2549
Quote:


> The price point for the GTX 285 is $400, but newegg has parts for $380 right now and overclocked variants for not too much more.


http://www.anandtech.com/show/2711


----------



## fs123

Quote:


> Originally Posted by *Blameless*
> 
> My primary system, which should match a 6900K in this test, gets ~52.5 seconds at 200 samples and 26.33 seconds at 100 samples. This CPU should not be anywhere near 10 seconds faster than a stock 6900K in any 30-40 second Blender render.
> 
> The Anandtech image seems to imply you are correct, but the scores still don't add up.
> 
> What's the point in AMD using 100 when the file defaults to 200?


Maybe your overclock to 4.3GHz makes a big difference, but ideally we need someone with a 6900K setup to do the test and then clock it to 3.4GHz like the RYZEN. That would give a good idea of IPC.

As for using 100 instead of 200, it makes sense to use a quicker render rather than making everyone wait for over a minute. The render time is essentially halved and the test is reproducible anyway.


----------



## Blameless

Quote:


> Originally Posted by *fs123*
> 
> Maybe your overclock to 4.3GHz makes a big difference


It makes almost exactly a 30% difference, which is the percentage of the OC.
Quote:


> Originally Posted by *fs123*
> 
> but ideally we need someone with a 6900K setup to do the test and then clock it to 3.4GHz like the RYZEN.


Been done, several times; I'm hardly the only person on OCN.

Problem is that we still don't know for sure what the exact test parameters were. If render scale is all they changed, we need to know exactly what number they changed it to in order to make a valid comparison.

100 is too small, 200 is too large, 150 is a little off as well. Maybe they used 133, or 141, or it might not even be a render scale change that's the issue at all.

Without more info from AMD, we are in a rut.
Quote:


> Originally Posted by *fs123*
> 
> Would give a good idea of IPC.


A stock 6900K would have been running at 3.4GHz in AMD's demo (3.7GHz is only reached with two cores loaded), so if AMD used the same test on each system, and I believe they did, Zen's Blender IPC is one of the things we are most sure of.

However, this issue isn't so much about how a 3.4GHz 8c/16t Zen compares to a 3.4GHz 8c/16t BW-E in Blender...it's about us not being able to do our own comparisons as we were promised.
Quote:


> Originally Posted by *fs123*
> 
> As for using 100 instead of 200, it makes sense to use a quicker render rather than making everyone wait for over a minute. The render time is essentially halved and the test is reproducible anyway.


Yes, that would make sense, but the test isn't reproducible, because we aren't sure what they did.


----------



## Nick the Slick

Quote:


> Originally Posted by *budgetgamer120*
> 
> Your massively overclocked chip can't match Zen, and you say it is not that impressive.


Whoa, put your noose away there, bud, it's all good. I said not _*as*_ impressive. As in, not as impressive as I initially thought. Still impressive? Yes, of course. But when my impressions go from "my god, this thing is an absolute monster, and for potentially only $500? Even though I don't need it, I might think about getting it" to "well, it's not all that much better than my OC'd chip, which is already more than I know what to do with", then yes, it's not _as_ impressive. But again, as I said, there are still a lot of unknowns such as pricing, clocking capabilities, other features, etc., so my opinion can still change. Can't be foolish enough to make final assessments off a single benchmark now, lol.


----------



## Nenkitsune

They used 100. We can clearly see that in the picture that was taken during the event.
Quote:


> Originally Posted by *lolerk52*
> 
> That's exactly what I thought was going on there. It's very logical to do that; it cuts nearly half a minute of basically doing nothing.
> 
> Just as double confirmation from the Anandtech forums:


----------



## Blameless

Quote:


> Originally Posted by *Nenkitsune*
> 
> They used 100. We can clearly see that in the picture that was taken during the event.


Except that 100 gives me 26 seconds with a CPU set up to match 6900K Blender performance.


----------



## Nenkitsune

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> They used 100. We can clearly see that in the picture that was taken during the event.
> 
> 
> 
> Except that 100 gives me 26 seconds with a CPU set up to match 6900K Blender performance.
Click to expand...

What do you get if you clock it to 3.4GHz?

Someone in another thread got 29 seconds with a 6900K clocked at 4.3GHz.


----------



## Blameless

Quote:


> Originally Posted by *Nenkitsune*
> 
> What do you get if you clock it to 3.4GHz?


The 5820K I'm sitting on now gets 34 seconds at 3.4GHz with 100 render samples. This is two seconds faster than the 6900K result in the AMD test, which is impossible, because a 6900K has a tiny IPC advantage and 33% more cores.
Quote:


> Originally Posted by *Nenkitsune*
> 
> Someone in another thread got 29 seconds with a 6900K clocked at 4.3GHz.


Maybe at 200, but that CPU at that clock speed should pull sub-20 seconds at 100 render samples.


----------



## fs123

Quote:


> Originally Posted by *Blameless*
> 
> Without more info from AMD, we are in a rut.
> A stock 6900K would have been running at 3.4GHz in AMDs demo (3.7GHz is only reached with two cores loaded), so if AMD used the same test on each system, and I believe they did, Zen Blender IPC is one of the things we are most sure of.
> 
> However, this issue isn't so much about how a 3.4GHz 8c/16t Zen compares to a 3.4GHz 8c/16t BW-E in Blender...it's about us not being able to do our own comparisons as we were promised.
> Yes, that would make sense, but the test isn't reproducible, because we aren't sure what they did.


The stock speed of the 6900K is 3.2GHz afaik.

This is why I'm saying that if someone runs a 6900K at 3.2GHz and tries to match the demo results, then clocks it to the Ryzen speed of 3.4GHz, we can compare IPC better.


----------



## Blameless

Quote:


> Originally Posted by *fs123*
> 
> The stock speed of the 6900K is 3.2GHz afaik.


The base clock speed is 3.2GHz, but it can all-core turbo to 3.4GHz if within its TDP limit.

It won't exceed TDP limit in a short Blender run.


----------



## Nenkitsune

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> what do you get if you clock it to 3.4ghz?
> 
> 
> 
> The 5820K I'm sitting on now gets 34 seconds at 3.4GHz with 100 render samples. This is two seconds faster than the 6900K result in the AMD test, which is impossible, because a 6900K has a tiny IPC advantage and 33% more cores.
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> someone in another thread got 29 seconds with a 6900k clocked at 4.3ghz
> 
> 
> Maybe at 200, but that CPU at that clock speed should pull sub-20 seconds at 100 render samples.

Quote:


> Originally Posted by *Nizzen*
> 
> 29seconds with 6900k @ 4300mhz and mem 3400 cl15


that was with 100 samples btw (unless he ran it at 150 like he did earlier)


----------



## lolerk52

Hey guys, something we completely missed.
The press settings were different from the livestream demo!

https://youtu.be/7yxSFmEOkrA?t=1m15s

The press demo completes in about 25 seconds, which matches up with what our 6900K's do with 100 samples. So the settings for the live demo are still a mystery.


----------



## Blameless

If Nizzen's result was done at 100, then it's an outlier.

The next bench in the thread is showing a faster figure with a 6*8*00K at only slightly higher clock speeds.
Quote:


> Originally Posted by *lolerk52*
> 
> Hey guys, something we completely missed.
> The press settings were different from the livestream demo!
> 
> https://youtu.be/7yxSFmEOkrA?t=1m15s
> 
> The press demo completes in about 25 seconds, which matches up with what our 6900K's do with 100 samples. So the settings for the live demo are still a mystery.


Thanks for the catch. It's as I suspected then.


----------



## HITTI

Quote:


> Originally Posted by *HITTI*
> 
> Thank you.


Above quote is 200 samples.
New results at 100 samples. Lol... Man, I wouldn't pay $1000+ for a CPU that is 15 seconds faster in rendering, etc... My i7-3770K is just fine...


----------



## Nenkitsune

Quote:


> Originally Posted by *Blameless*
> 
> If Nizzen's result was done at 100, then it's an outlier.
> 
> The next bench in the thread is showing a faster figure with a 6*8*00K at only slightly higher clock speeds.
> Quote:
> 
> 
> 
> Originally Posted by *lolerk52*
> 
> Hey guys, something we completely missed.
> The press settings were different from the livestream demo!
> 
> https://youtu.be/7yxSFmEOkrA?t=1m15s
> 
> The press demo completes in about 25 seconds, which matches up with what our 6900K's do with 100 samples. So the settings for the live demo are still a mystery.
> 
> 
> 
> Thanks for the catch. It's as I suspected then.

Yeah I think he did it at 150 and hasn't done it at 100 yet.

however, it looks like they must have dropped it further than what they had on the live stream. Now that we can see the new results, seems like they're getting around 25s at 100 samples.

Quote:


> Originally Posted by *HITTI*
> 
> New results at 100 samples. Lol... Man, I wouldn't pay $1000+ for a CPU that is 15 seconds faster in rendering, etc... My i7-3770K is just fine...


Based on the new info we have from the press demo, they actually got something like 25s with 100 samples. It's 20 seconds faster than yours, and only running at 3.4GHz. Imagine how fast it would render if they had it clocked above 4GHz.


----------



## Blameless

Assuming render samples is the only difference, ~140 is what I suspect they used. That's just a ballpark figure though, and even if I'm only off by five in either direction, it's not very useful.
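
That ballpark comes from assuming Cycles render time grows roughly linearly with sample count on a fixed scene. A throwaway back-of-envelope in Python (the ~26 s / 100-sample reference and the ~36 s livestream time are figures from this thread; fixed per-run overhead like scene sync is ignored, so treat the result as approximate):

```python
# Back-estimate a sample count by simple proportion, assuming time ~ samples.

def estimate_samples(target_time_s: float, ref_time_s: float, ref_samples: int) -> float:
    """Proportional estimate; ignores fixed per-run overhead, so only approximate."""
    return ref_samples * target_time_s / ref_time_s

# ~26 s at 100 samples (press demo) vs the ~36 s shown in the livestream video:
print(round(estimate_samples(36.0, 26.0, 100)))  # ~138
```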


----------



## lolerk52

Quote:


> Originally Posted by *Blameless*
> 
> Assuming render samples is the only difference, ~140 is what I suspect they used. That's just a ballpark figure though, and even if I'm only off by five in either direction, it's not very useful.


Yeah, that isn't particularly helpful because we can't be sure that's the only thing they changed. We can obviously play with settings until they match up, but that won't prove anything until we know what settings were actually used.

Edit: At 140 samples, my 6600K @ 4.4GHz did it in 1:15 minutes.


----------



## HITTI

Quote:


> Originally Posted by *Nenkitsune*
> 
> Yeah I think he did it at 150 and hasn't done it at 100 yet.
> 
> however, it looks like they must have dropped it further than what they had on the live stream. Now that we can see the new results, seems like they're getting around 25s at 100 samples.
> Based on the new info we have from the press demo, they actually got something like 25s with 100 renders. It's 20 seconds faster than yours, and only running at 3.4ghz. Imagine how fast it would render it if they had it clocked above 4ghz.


Yeah, my bad, but I was referring to that Intel K-series chip. My bad for not mentioning it in my post.


----------



## Cyrious

Quote:


> Originally Posted by *Nenkitsune*
> 
> I would run it on my 940BE and clock it to 3.4ghz but I don't think the PSU it's currently running on would take too kindly to that sort of abuse.


Heh, I'd be willing to offer up my own 940BE, but the last time I had it up that high I nearly set the board on fire (srsly).

Oh, and I did another run with the 100-sample setting, got just a hair over 36 seconds. Ryzen is still faster, although the discrepancy ain't anywhere near as big.


----------



## Nenkitsune

Quote:


> Originally Posted by *Cyrious*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> I would run it on my 940BE and clock it to 3.4ghz but I don't think the PSU it's currently running on would take too kindly to that sort of abuse.
> 
> 
> 
> Heh, I'd be willing to offer up my own 940BE, but the last time I had it up that high I nearly set the board on fire (srsly).
> 
> Oh, and I did another run with the 100-sample setting, got just a hair over 36 seconds. Ryzen is still faster, although the discrepancy ain't anywhere near as big.

Someone posted the video from the actual press conference, not the live stream. Turns out at 100 samples it was hitting 25s.

ah, ok I see what you mean. yeah, 10s faster


----------



## guttheslayer

All I want is to see Intel get mopped by Zen, forcing them to deliver something way better than 5% in their next-gen CPU.

A good wake-up call for them, and a reason to stop their iGPU performance-increase nonsense.


----------



## lolerk52

If Zen is indeed matching up with Broadwell-E, this is wonderful news for Zen+ too, considering they expect yet another fairly large uplift. Should put it ahead of Skylake


----------



## guttheslayer

Quote:


> Originally Posted by *lolerk52*
> 
> If Zen is indeed matching up with Broadwell-E, this is wonderful news for Zen+ too, considering they expect yet another fairly large uplift. Should put it ahead of Skylake


If Zen+ really is +15% better than Zen, as rumored, then Intel will need 3 generations to catch up.

It will be AMD's turn to play the waiting game. lol.


----------



## oxidized

Quote:


> Originally Posted by *guttheslayer*
> 
> If Zen+ really is +15% better than Zen, as rumored, then Intel will need 3 generations to catch up.
> 
> It will be AMD's turn to play the waiting game. lol.


Still believing in AMD graphs? Seriously?


----------



## Cyrious

Quote:


> Originally Posted by *Nenkitsune*
> 
> Someone posted the video from the actual press conference, not the live stream. Turns out at 100 samples it was hitting 25s.
> 
> ah, ok I see what you mean. yeah, 10s faster


That's still what, 20-30% faster than my SB-E Xeon?


----------



## Nenkitsune

That would be great if AMD finally came back out on top. How many years has it been since they fell from the top?


----------



## fromthewatt

post the actual file they used please?
i will run blender on my 6900k tonight


----------



## oxidized

Quote:


> Originally Posted by *fromthewatt*
> 
> post the actual file they used please?
> i will run blender on my 6900k tonight


They have to post it, i don't think anyone here can do that


----------



## guttheslayer

Quote:


> Originally Posted by *oxidized*
> 
> Still believing in AMD graphs? Seriously?


If they match BW-E in IPC, they have probably gained more than 40% in IPC.


----------



## oxidized

Quote:


> Originally Posted by *guttheslayer*
> 
> If they match BW-E in IPC, they have probably gained more than 40% in IPC.


Whatever they match remains to be seen, since once again what they showed was very fishy and not really trustworthy. Again, my suggestion is don't fool yourselves too much; everyone should stay cautiously optimistic.


----------



## Nenkitsune

Quote:


> Originally Posted by *oxidized*
> 
> Quote:
> 
> 
> 
> Originally Posted by *fromthewatt*
> 
> post the actual file they used please?
> i will run blender on my 6900k tonight
> 
> 
> 
> They have to post it, i don't think anyone here can do that

http://download.amd.com/demo/RyzenGraphic_27.blend

change render samples to 100



based on the press conference that's how they got their result. It was a 25ish second render time.
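
For anyone who would rather script the run than edit the sample count by hand, something like the following should work. This is a sketch: it assumes Blender's standard `--background` / `--python-expr` / `--render-frame` CLI flags and a Cycles scene (`bpy.context.scene.cycles.samples`); argument order matters, since the expression has to run after the .blend is loaded.

```python
import shlex

def blender_render_cmd(blend_file: str, samples: int = 100, frame: int = 1) -> list:
    """Build a headless Blender invocation that overrides the Cycles sample count.
    The --python-expr must come after the .blend so it edits the loaded scene."""
    expr = f"import bpy; bpy.context.scene.cycles.samples = {samples}"
    return ["blender", "--background", blend_file,
            "--python-expr", expr,
            "--render-frame", str(frame)]

# Print a shell-safe command line for the AMD demo file:
print(" ".join(shlex.quote(arg) for arg in blender_render_cmd("RyzenGraphic_27.blend")))
```

Timing the resulting command (e.g. with `time` on Linux) avoids any GUI overhead, though note AMD's demo was run from the GUI, so times won't be exactly comparable.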


----------



## oxidized

Quote:


> Originally Posted by *Nenkitsune*
> 
> http://download.amd.com/demo/RyzenGraphic_27.blend
> 
> change render samples to 100
> 
> 
> 
> based on the press conference that's how they got their result. It was a 25ish second render time.


So now we're sure it's the only thing that has been changed? And we're also sure it's 100?


----------



## Kuivamaa

Quote:


> Originally Posted by *guttheslayer*
> 
> If AMD Zen+ is rumored to be +15% better than Zen, den Intel will need 3 generation to catch up.
> 
> It will be AMD turn to play the waiting game. lol.


Well Zen is a fresh design, and Zen+ (or Ryzen plus whatever) should see a nice boost since there is gonna be low hanging fruit. After that things should slow down though.


----------



## Nenkitsune

Quote:


> Originally Posted by *oxidized*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> http://download.amd.com/demo/RyzenGraphic_27.blend
> 
> change render samples to 100
> 
> 
> 
> based on the press conference that's how they got their result. It was a 25ish second render time.
> 
> 
> 
> So now we're sure it's the only thing that has been changed? And we're also sure it's 100?

Not 100%, but we will be once we have people running 6900Ks at stock clocks to see if the times are relatively similar.


----------



## lolerk52

Quote:


> Originally Posted by *oxidized*
> 
> So now we're sure it's the only thing that has been changed? And we're also sure it's 100?


We're sure they used 100 samples, but as for if they changed anything else we don't know.
I looked over the visible settings and everything looks identical.


----------



## oxidized

Quote:


> Originally Posted by *Nenkitsune*
> 
> Not 100%, but we will be once we have people running 6900Ks at stock clocks to see if the times are relatively similar.


Quote:


> Originally Posted by *lolerk52*
> 
> We're sure they used 100 samples, but as for if they changed anything else we don't know.
> I looked over the visible settings and everything looks identical.


Alright, I'm more of a 100%-sure type of guy, honestly.


----------



## doza

Quote:


> Originally Posted by *Nenkitsune*
> 
> http://download.amd.com/demo/RyzenGraphic_27.blend
> 
> change render samples to 100
> 
> 
> 
> based on the press conference that's how they got their result. It was a 25ish second render time.


Rendered at 100....

31 sec on a 5820K at 3.8GHz; I'm OK for now. Going for a minimum of 12 cores on my next upgrade.


----------



## naz2

the fact that they couldn't give it to us straight and we have to play these guessing games feels like old AMD gimmicks. marketing fail as always


----------



## fromthewatt

So from what I read and understand: Zen gets 25 seconds in Blender, and the 6900K gets the same 25 seconds, both at 100 samples, as per the AMD demo?


----------



## Blameless

Quote:


> Originally Posted by *Nenkitsune*
> 
> How many years has it been since they fell from the top?


10.5 years.
Quote:


> Originally Posted by *oxidized*
> 
> So now we're sure it's the only thing that has been changed? And we're also sure it's 100?


No and no.

They did run a test at the event with 100, but this was not part of the video and did not produce the same figures shown in the demonstration. So we aren't exactly sure what they did, and we cannot compare until we are.
Quote:


> Originally Posted by *naz2*
> 
> the fact that they couldn't give it to us straight and we have to play these guessing games feels like old AMD gimmicks. marketing fail as always


Came all the way out here to the AMD variety show and all I got was this lousy T-shirt!
Quote:


> Originally Posted by *fromthewatt*
> 
> so from what i read and understand : zen gets 25seconds in blender. 6900k gets same 25 seconds in blender both at 100samples ? as per the amd demo


Roughly 25 seconds in Windows Blender 2.77a x64, apparently.


----------



## kd5151

According to WCCFTech...

Custom Blender scene, aka 100 samples, as seen clearly in the pictures:

AMD: 25.57 s
Intel: 26.01 s

Someone was looking for this???


----------



## formula m

Except the press was there, & they had workshops explaining stuff.

The press is not relying on captured screen shots, or still frames from a video. They were there.

Unless you believe that AMD didn't have identical tests on both computers, the settings really don't matter, nor does re-creating the exact results; nobody here is getting two rigs to match either. The point was... same software, same settings... AMD wins that bench.

Meaning that Ryzen is as powerful in Blender as Intel's $$ chip.


----------



## lolerk52

Alright, I attempted to discover what the default sampling rate is, and it appears to be 128.


----------



## kd5151

Quote:


> Originally Posted by *lolerk52*
> 
> Alright, I attempted to discover what the default sampling rate is, and it appears to be 128.


two thumbs up


----------



## oxidized

Quote:


> Originally Posted by *formula m*
> 
> Except the press was there, & they had workshops explaining stuff.
> The press is not relying on captured screen shots, or still frames from a video. They were there.
> 
> Unless, you believe that AMD didn't have identical tests for both computers. Really doesn't matter the settings, or re-creating the exact results, nobody here is getting two rigs to match either. The point was... same software, same setting... AMD wins that bench.
> 
> Meaning, that Ryzen is as powerful in blender, as Intel's $$ chip.


Meaning nothing, given the weird things and the discrepancies we've seen.


----------



## Nickyvida

So. Zen might be another faildozer? Hyping and a test with unknown settings means they're not confident that it can match up to the Intel chip.


----------



## Blameless

Quote:


> Originally Posted by *kd5151*
> 
> According to WCCFTech...
> 
> Custom Blender scene, aka 100 samples, as seen clearly in the pictures:
> 
> AMD: 25.57 s
> Intel: 26.01 s
> 
> Someone was looking for this???


The irony of having to resort to WCCFTech to make sense of AMD's own demo presentation.
Quote:


> Originally Posted by *formula m*
> 
> Except the press was there, & they had workshops explaining stuff.
> The press is not relying on captured screen shots, or still frames from a video. They were there.
> 
> Unless, you believe that AMD didn't have identical tests for both computers. Really doesn't matter the settings, or re-creating the exact results, nobody here is getting two rigs to match either. The point was... same software, same setting... AMD wins that bench.
> 
> Meaning, that Ryzen is as powerful in blender, as Intel's $$ chip.


None of that was really in dispute.

The issue that the last 20 pages of this thread have focused on is that AMD provided a render file for public comparison...then didn't tell anyone what settings they used, with the default settings producing results far slower than what AMD is getting.

This is giving people a false impression of the performance of older parts relative to the newer ones.


----------



## lolerk52

Quote:


> Originally Posted by *Nickyvida*
> 
> So. Zen might be another faildozer? Hyping and a test with unknown settings means they're not confident that it can match up to the Intel chip.


Looks more like an honest mistake than anything else. Otherwise they wouldn't have provided the file and said "Go ahead and test it yourself".
Somebody screwed up with the settings.


----------



## Ding Chavez

Wow this thread is going long.

My 2c: this is a presentation by AMD, so Zen will obviously be shown in a positive light; I'd take it with a grain of salt. As usual, wait and see for the real reviews, then we can really make a good thread...


----------



## oxidized

Quote:


> Originally Posted by *Nickyvida*
> 
> So. Zen might be another faildozer? Hyping and a test with unknown settings means they're not confident that it can match up to the Intel chip.


Not at all; it looks like a very good chip, just not as good as AMD pictures it... But again, let's wait for proper independent tests.


----------



## Nenkitsune

Quote:


> Originally Posted by *lolerk52*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nickyvida*
> 
> So. Zen might be another faildozer? Hyping and a test with unknown settings means they're not confident that it can match up to the Intel chip.
> 
> 
> 
> Looks more like an honest mistake than anything else. Otherwise they wouldn't provide the file and say "Go ahead and test it yourself".
> Somebody screwed up with the settings.

Between the live stream and the press conference the times dropped considerably, meaning they changed the settings (most likely just the render sample settings).

I don't think they did it to mislead people. I think it's so the test goes quicker and is less boring to watch.


----------



## formula m

Quote:


> Originally Posted by *Blameless*
> 
> The irony of having to resort to WCCFTech to make sense of AMD's own demo presentation.
> None of that was really in dispute.
> 
> The issue that the last 20 pages of this thread has focused on is that AMD has provided a render file for public comparison...then not told anyone what settings they used, with the default settings producing results far slower than what AMD is getting.
> 
> This is giving people a false impression of the performance of older parts relative to the newer ones.


I was responding to the conspiracy theorists in this thread, and explaining to them that their points mean zero. Because:

The problem we are having is not knowing the actual settings they used in the public render. She asked us to download it and try ourselves. I don't think they were trying to be coy or funny. The settings they used will be released once AMD realizes we don't have the proper render settings.

What is NOT in question is that, running the same tests side by side, AMD's Ryzen chip is as fast as the Intel chip.


----------



## Nenkitsune

I guess in a way, the exact settings aren't 100% important since we do know that it does perform on par with a 6900k at stock clocks. This means any test we do could be compared to a stock 6900k running the file they provided.


----------



## GorillaSceptre

Quote:


> Originally Posted by *oxidized*
> 
> Not at all, looks like a very good chip, just not as good as AMD pictures it...


Okay, so you're saying they gimped the Intel chip somehow? Which would mean that AMD was stupid enough to lie in front of the press and enthusiasts watching, and then dumb enough to ask enthusiasts to check it themselves straight after the stream.. The more likely scenario is they messed up the settings for enthusiasts to try. Which would also mean that Ryzen went head to head with Intel's $1000 chip and matched or exceeded it, whatever the real settings might have been.

I seriously doubt they could possibly be that stupid.. Not saying they haven't used that bench to paint Ryzen in the best light they can, just that I don't think they sabotaged/twisted the results of the 6900K. I'm still waiting for real reviews and not falling for hype; who knows, all their fancy prediction features like SenseMI, etc., might make this a best-case scenario for them.


----------



## xzamples

Intel users in full-throttle damage-control mode in this thread. Great presentation, AMD ?


----------



## formula m

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Okay, so you're saying they gimped the Intel chip somehow? Which would mean that AMD was stupid enough to lie in front of the press and enthusiasts watching, and then dumb enough to ask enthusiasts to check it themselves straight after the stream.. The more likely scenario is they messed up the settings for enthusiasts to try. Which would also mean that Ryzen went head to head with Intel's $1000 chip and matched or exceeded it, whatever the real settings might have been.
> 
> I seriously doubt they could possibly be that stupid.. Not saying they haven't used that bench to paint Ryzen in the best light they can, just that I don't think they sabotaged/twisted the results of the 6900K. I'm still waiting for real reviews and not falling for hype; who knows, all their fancy prediction features like SenseMI, etc., might make this a best-case scenario for them.


Yes, that is what he is implying. But he won't come out and say it...


----------



## Blameless

Quote:


> Originally Posted by *Nenkitsune*
> 
> I guess in a way, the exact settings aren't 100% important since we do know that it does perform on par with a 6900k at stock clocks. This means any test we do could be compared to a stock 6900k running the file they provided.


Yes.

However, there are still quite a few people who aren't as diligent when it comes to comparing results who now have a false impression of Zen's (and BW-E's) performance.

The only info on AMD's site for the last 15 hours has been their video, which shows 36 seconds, and the default test, which will produce results 40% worse on the same hardware.

Several of the comments in this very thread feature people who now think their soon to be Zen (or BW-E) upgrade will give them a much larger boost than it really will. No doubt their hopes will be dashed before they can purchase Zen, but those hopes should never have been inflated in the first place.

This whole situation was just sloppy.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Blameless*
> 
> Yes.
> 
> However, there are still quite a few people who aren't as diligent when it comes to comparing results who now have a false impression of Zen's (and BW-E's) performance.
> 
> The only info on AMD's site for the last 15 hours has been their video, which shows 36 seconds, and the default test, which will produce results 40% worse on the same hardware.
> 
> Several of the comments in this very thread feature people who now think their soon to be Zen (or BW-E) upgrade will give them a much larger boost than it really will. No doubt their hopes will be dashed before they can purchase Zen, but those hopes should never have been inflated in the first place.
> 
> This whole situation was just sloppy.


Don't you think you're going just a bit overboard here? I get what you're saying... but think about the types of people who would have watched that stream, an AMD stream no less.. They are enthusiasts, not gullible, innocent consumers strolling into a store. Out of all the enthusiasts that watched the stream, how many of them would have actually gone through the effort of testing those settings? Most of the people that watched saw Ryzen and the 6900K compete neck and neck, with Ryzen coming off slightly better.

I'd also wager that the people who bothered to run the test are also more likely to be the ones on forums, discussing the inconsistencies right now. Obviously I wish they would have posted the accurate damn settings (just like you obviously do), but I doubt there is something intentionally nefarious going on. Hopefully they correct this situation soon.


----------



## Blameless

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Don't you think you're going just a bit overboard here? I get what you're saying... but think about the types of people who would have watched that stream, an AMD stream no less.. They are enthusiasts, not gullible, innocent consumers strolling into a store. Out of all the enthusiasts that watched the stream, how many of them would have actually gone through the effort of testing those settings? Most of the people that watched saw Ryzen and the 6900K compete neck and neck, with Ryzen coming off slightly better.


Again, have you seen the comments in this very thread?

Some of the doubt and vitriol directed at me, here, for even pointing out this error, is proof enough that even knowledgeable enthusiasts can allow their hopes and biases to cloud their better judgement.

Twenty five pages ago, I would have had the exact same views as you're expressing now...not any more.

Edit: should have said 40 pages, this thread grew fast.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> i doubt there is something intentionally nefarious going on. Hopefully they correct this situation soon.


I don't think it's anything deliberate either...just sloppy, as I said.

And I have the same hope.


----------



## Gumbi

Has anyone been able to figure out what they had the sampling rate set to? The default for me is 200, and with that I scored 1 minute 9 seconds. This seems a bit slow considering I have a heavily overclocked 4790k (4.9ghz). By the way, this was within a second or two of a friend's 4.5ghz 6700k.

According to a poster AMD had the sampling set to 100, I tried that, and got 35.2 seconds...which is too fast. Maybe the sampling rate was set to 128???


----------



## Blameless

Quote:


> Originally Posted by *Gumbi*
> 
> Has anyone been able to figure out what they had the sampling rate set to? The default for me is 200, and with that I scored 1 minute 9 seconds. This seems a bit slow considering I have a heavily overclocked 4790k (4.9ghz). By the way, this was within a second or two of a friend's 4.5ghz 6700k.
> 
> According to a poster AMD had the sampling set to 100, I tried that, and got 35.2 seconds...which is too fast. Maybe the sampling rate was set to 128???


For the test in the video, the sample size is unknown.

There was another test done, that wasn't on the video, which used 100 samples, where both parts were getting near 26 seconds, with the Zen edging out the 6900K by about half a second. This test also used 2.77a (Win64) which is about 1% faster than 2.78a on my setups.


----------



## Gumbi

Quote:


> Originally Posted by *Blameless*
> 
> For the test in the video, the sample size is unknown.
> 
> There was another test done, that wasn't on the video, which used 100 samples, where both parts were getting near 26 seconds, with the Zen edging out the 6900K by about half a second. This test also used 2.77a (Win64) which is about 1% faster than 2.78a on my setups.


OK, thanks. So we can't really do direct comparisons until we know for sure of their settings for that test. I'm using 2.78a myself, but <1% margin of error is fine for my purposes at the moment, I'm just trying to get a reference point for how powerful they say this chip is.


----------



## Nenkitsune

Quote:


> Originally Posted by *Gumbi*
> 
> Has anyone been able to figure out what they had the sampling rate set to? The default for me is 200, and with that I scored 1 minute 9 seconds. This seems a bit slow considering I have a heavily overclocked 4790k (4.9ghz). By the way, this was within a second or two of a friend's 4.5ghz 6700k.
> 
> According to a poster AMD had the sampling set to 100, I tried that, and got 35.2 seconds...which is too fast. Maybe the sampling rate was set to 128???


They changed it to 100 during the press event (separate from the live stream) and they did the renders in 25ish seconds.

I found the video of it. Looks like the 6900k finished in about 24.62s


here's the video

https://www.youtube.com/watch?v=h_863m7vFxw&ab_channel=TechARP

Also, in the video you can see it say 100 samples as well.


----------



## oxidized

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Okay, so you're saying they gimped the Intel chip somehow? Which would mean that AMD would be stupid enough to lie in front of the press and enthusiasts watching, and then be dumb enough to ask enthusiasts to check it themselves straight after the stream.. The more likely scenario is they messed up the settings for enthusiasts to try. Which would also mean that Ryzen went head to head with Intel's $1000 chip and matched or exceeded it, whatever the real settings might of been.
> 
> I seriously doubt they could possibly be that stupid.. Not saying they haven't used that bench to paint Ryzen in the best light they can, just that i don't think they sabotaged/twisted the results of the 6900k. I'm still waiting for real reviews and not falling for hype, who knows, all their fancy prediction like senseMI, etc., might make this a best case scenario for them.


Lied? They have already lied in public multiple times in the past, and not that long ago either; a bunch of months.
I'm not saying they gimped anything, just that they did something to show Ryzen being faster than or as fast as Skylake chips. I don't know what they did, but there is plenty of evidence in this very thread. Also the Dota 2 playing + streaming thing is still very odd; a 6700K doesn't run nearly that badly. They showed a stream going at like 5 fps and dropping more than half of the frames, which isn't anywhere close to the truth. So basically what they did there wasn't gimping Intel, it was making AMD look better than Intel in certain scenarios.


----------



## kd5151

Was shadow play running in the backround?


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> Lied? they already lied in public multiple times in the past, and not so far ago, bunch of months.
> I'm not saying they gimped anything, just saying, they did something to show that Ryzen is faster or as fast as skylake chips, i don't know what they did but there are many proof in this very thread. Also the Dota 2 playing + streaming thing is still very odd, a 6700K doesn't run nearly that bad, they showed a streaming going like 5 fps dropping more than half of the frames, something that isn't nearly close to truth, so basically what they did there, wasn't gimp intel, was just making AMD look better than intel in certain scenarios


Can you please show us where AMD lied to the public, multiple times... plz?


----------



## Kuivamaa

Quote:


> Originally Posted by *kd5151*
> 
> Was shadow play running in the backround?


Well those computers were equipped with RX480s, so no.


----------



## mohiuddin

E5-2670 @ 3GHz all-core:

Sample size 100: ~37 sec
Sample size 128: ~47 sec
Sample size 140: ~52 sec
Sample size 200: ~1 min 14 sec
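
Those timings line up almost perfectly with linear scaling in sample count, which supports the proportional sample-count estimates being thrown around earlier in the thread. A quick Python check (times transcribed from the post above, so approximate):

```python
# E5-2670 timings vs a pure linear model, with seconds-per-sample derived
# from the 100-sample run.
times = {100: 37, 128: 47, 140: 52, 200: 74}  # samples -> seconds (approx.)
sec_per_sample = times[100] / 100

for samples, measured in times.items():
    predicted = sec_per_sample * samples
    print(f"{samples:>3} samples: measured {measured}s, linear model {predicted:.1f}s")
```

Every measured point lands within about half a second of the linear prediction, so on this CPU at least, render time really is close to proportional to sample count.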


----------



## Gumbi

Quote:


> Originally Posted by *Nenkitsune*
> 
> They changed it to 100 during the press event (separate from the live stream) and they did the renders in 25ish seconds.
> 
> I found the video of it. Looks like the 6900k finished in about 24.62s


It's the time I'm unsure about: are you sure the render for that run was ~25 seconds? I'm getting 35.2 seconds with a 4.9GHz 4790K. Sure, it's a much higher clock speed, but half the cores; I shouldn't be that close to the 6900K/Ryzen in that case, unless we're off about the 100.


----------



## SoloCamo

Quote:


> Originally Posted by *oxidized*
> 
> Lied? they already lied in public multiple times in the past, and not so far ago, bunch of months.
> I'm not saying they gimped anything, just saying, they did something to show that Ryzen is faster or as fast as skylake chips, i don't know what they did but there are many proof in this very thread. Also the Dota 2 playing + streaming thing is still very odd, a 6700K doesn't run nearly that bad, they showed a streaming going like 5 fps dropping more than half of the frames, something that isn't nearly close to truth, so basically what they did there, wasn't gimp intel, was just making AMD look better than intel in certain scenarios


Get out of here with that nonsense. Seriously... just stirring the pot with no evidence. Either be productive and actually research this as others have, or don't fill the thread up with baseless slander.

That said, at 100 samples my 4790K (again, turbo locked to 4.4GHz) pulled a 39.05. This seems far more in line with the roughly 26 seconds for the 6900K/Zen, and with the fact that another 4790K with a 500MHz clock advantage did it in 35 seconds.


----------



## Gumbi

Quote:


> Originally Posted by *oxidized*
> 
> Lied? They already lied in public multiple times in the past, and not that long ago either, a matter of months.
> I'm not saying they gimped anything, just that they did something to show Ryzen is faster than or as fast as Skylake chips. I don't know what they did, but there's plenty of evidence in this very thread. Also, the Dota 2 playing + streaming thing is still very odd; a 6700K doesn't run nearly that badly. They showed a stream running at something like 5 fps and dropping more than half of its frames, which isn't remotely close to the truth. So basically what they did there wasn't gimping Intel, it was making AMD look better than Intel in certain scenarios


This is false. It's very easy to bring a CPU to its knees with certain streaming settings, to the point that the difference between a playable and a completely unplayable stream can come down to having double the core count. By setting the stream settings a certain way, they have the 4c/8t i7 choking while the 8c/16t i7/Ryzen ticks over nicely.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Blameless*
> 
> Again, have you seen the comments in this very thread?
> 
> Some of the doubt and vitriol directed at me, here, for even pointing out this error, is proof enough that even knowledgeable enthusiasts can allow their hopes and biases to cloud their better judgement.
> 
> Twenty five pages ago, I would have had the exact same views as you're expressing now...not any more.
> 
> Edit: should have said 40 pages, this thread grew fast.
> I don't think it's anything deliberate either...just sloppy, as I said.
> 
> And I have the same hope.


Fair enough, I haven't been following the whole thread.

Quote:


> Originally Posted by *oxidized*
> 
> Lied? They already lied in public multiple times in the past, and not that long ago either, a matter of months.
> I'm not saying they gimped anything, just that they did something to show Ryzen is faster than or as fast as Skylake chips. I don't know what they did, but there's plenty of evidence in this very thread. Also, the Dota 2 playing + streaming thing is still very odd; a 6700K doesn't run nearly that badly. They showed a stream running at something like 5 fps and dropping more than half of its frames, which isn't remotely close to the truth. So basically what they did there wasn't gimping Intel, it was making AMD look better than Intel in certain scenarios


Rather than posting so much conjecture, give some proof to go along with such blanket statements. I also don't see any proof of them lying in this thread. A few are saying they're lying, but most are just wondering what the correct settings are.


----------



## Nenkitsune

Quote:


> Originally Posted by *Gumbi*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> They changed it to 100 during the press event (separate from the live stream) and they did the renders in 25ish seconds.
> 
> I found the video of it. Looks like the 6900k finished in about 24.62s
> 
> 
> 
> 
> It's not the time I'm worried about, but are you sure the render took ~25 seconds? I'm getting 35.2 seconds with a 4.9GHz 4790K. Sure, that's a much higher clock speed, but it's half the cores. I shouldn't be that close to the 6900K/Ryzen in that case, unless we're wrong about the 100.
Click to expand...

24.xx for both chips, within maybe a few tenths of each other. You're clocked at 4.9GHz; that's 1.5GHz higher than either of those chips.


----------



## Gumbi

Quote:


> Originally Posted by *SoloCamo*
> 
> Get out of here with that nonsense. Seriously... just stirring the pot with no evidence. Either be productive and actually research this as others have, or don't fill the thread up with baseless slander.
> 
> That said, at 100 samples my 4790K (again, turbo locked to 4.4GHz) pulled a 39.05. This seems far more in line with the roughly 26 seconds for the 6900K/Zen, and with the fact that another 4790K with a 500MHz clock advantage did it in 35 seconds.


This number falls in line with mine: overclocked to 4.9GHz, I get 35.2 seconds.

Their number still seems off, though. If they really used 100 samples they should be getting lower than that; they literally have twice the cores of our chips.


----------



## Gumbi

Quote:


> Originally Posted by *Nenkitsune*
> 
> 24.xx for both chips, within maybe a few tenths of each other. You're clocked at 4.9GHz; that's 1.5GHz higher than either of those chips.


So? They have double the core count. Does Blender not scale in a linear fashion? (I don't know, I'm asking, not being impolite).


----------



## oxidized

Quote:


> Originally Posted by *formula m*
> 
> Can you please show us where AMD lied to the public, multiple times... plz?


You can see for yourself by looking at the Polaris announcement and earlier stuff.
Quote:


> Originally Posted by *SoloCamo*
> 
> Get out of here with that nonsense. Seriously... Just stirring the pot with no evidence.


You should really stop telling people to "get out of here"; you're starting to sound very annoying. The evidence is there, go find it yourself; it has been talked about again and again, months ago.
Quote:


> Originally Posted by *Gumbi*
> 
> This is false. It's very easy to bring a CPU to its knees with certain streaming settings, to the point that the difference between a playable and a completely unplayable stream can come down to having double the core count. By setting the stream settings a certain way, they have the 4c/8t i7 choking while the 8c/16t i7/Ryzen ticks over nicely.


Yeah, so basically what I said: why use certain streaming settings just to show allegedly better performance? A 6700K doesn't run even remotely that badly at usual settings, which are what pretty much everyone uses to stream.


----------



## Nenkitsune

Quote:


> Originally Posted by *Gumbi*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> 24.xx for both chips, within maybe a few tenths of each other. You're clocked at 4.9GHz; that's 1.5GHz higher than either of those chips.
> 
> 
> 
> So? They have double the core count. Does Blender not scale in a linear fashion? (I don't know, I'm asking, not being impolite).
Click to expand...

Clock yours to 3.4GHz and run Blender with 150 samples; a 6900K does that in 34.59 seconds.

At 4.3GHz, it appears the 6900K does it in 29 seconds.


----------



## Gumbi

Quote:


> Originally Posted by *oxidized*
> 
> You can see for yourself by looking at the Polaris announcement and earlier stuff.
> Yeah, so basically what I said: why use certain streaming settings just to show allegedly better performance? A 6700K doesn't run even remotely that badly at usual settings, which are what pretty much everyone uses to stream.


This is what corporate BS announcements are always like: full of superficial fluff. It doesn't mean they're lying.

They were using the same settings across every system and showing how a 6700K can choke where a 6900K/Ryzen won't.


----------



## Gumbi

Quote:


> Originally Posted by *Nenkitsune*
> 
> Clock yours to 3.4GHz and run Blender with 150 samples; a 6900K does that in 34.59 seconds.


I'll give that a shot. Does Blender scale fully with more threads? Can I assume a 4790K at 3.4GHz should score roughly half as much as a 6900K? (A stock 6900K, like they used, actually has some turbo involved, but we'll ignore that for the moment.)


----------



## Nenkitsune

Quote:


> Originally Posted by *Gumbi*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> Clock yours to 3.4GHz and run Blender with 150 samples; a 6900K does that in 34.59 seconds.
> 
> 
> 
> I'll give that a shot. Does Blender scale fully with more threads? Can I assume a 4790K at 3.4GHz should score roughly half as much as a 6900K? (A stock 6900K, like they used, actually has some turbo involved, but we'll ignore that for the moment.)
Click to expand...

A 6900K turbos up to a maximum of 3.4GHz across all cores, I think. It can clock higher, but only in lightly threaded programs.

I don't know exactly how well it scales; at 3.4GHz my i5 clocked in at 68 seconds, so I don't think it's a perfect case of twice the cores, half the time.

Also, my 6600K clocked at 4.9GHz wound up with a time of 47.90 seconds.


----------



## oxidized

Quote:


> Originally Posted by *Gumbi*
> 
> This is what corporate BS announcements are always like: full of superficial fluff. It doesn't mean they're lying.
> 
> They were using the same settings across every system and showing how a 6700K can choke where a 6900K/Ryzen won't.


Yes, I know. I'm not saying AMD does this while NVIDIA or Intel don't; everyone does it, showing results in a favorable environment with particular settings and tests. To me that's lying, or at least not telling the whole truth, if you want to put it a different way.

Again, I'll wait for independent tests and benchmarks. My PC is 6 years old and I'll need an upgrade; I really don't care where my money goes, I just care about performance and durability.


----------



## Blameless

Quote:


> Originally Posted by *Gumbi*
> 
> It's not the time I'm worried about, but are you sure the render took ~25 seconds? I'm getting 35.2 seconds with a 4.9GHz 4790K. Sure, that's a much higher clock speed, but it's half the cores. I shouldn't be that close to the 6900K/Ryzen in that case, unless we're wrong about the 100.


Blender is a great benchmark partially because it's so easy to extrapolate results when you need to. It scales nearly perfectly linearly with both cores and clock speed.

35.2 seconds is right where you should be because the maximum aggregate performance of a 4.9GHz quad core Haswell is just about 72% that of a 3.4GHz BW-E.

26 (from http://www.overclock.net/t/1617227/amd-new-horizon-zen-preview-on-12-13-at-3-pm-cst/800#post_25711431) / 0.72 = 36.1 ... not perfect, but pretty damn close to what you're getting. The small discrepancy could be explained by higher uncore clock, faster memory, or any number of cumulative factors. Regardless, it's quite close and your part is right where I'd expect it to be.

Give me any clock speed and core count and I can tell you how Blender will perform with any similar architecture at any other clock speed and any other core count, within a ~5% margin of error by multiplying twice and dividing once...that's how perfectly the program scales. Only applies within the same version of Blender, of course.
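Blameless's rule of thumb (scale a known time by the ratio of cores × clock) can be sketched in a few lines. This is just an illustration of the extrapolation described above, under the same near-linear-scaling assumption; `estimate_time` is a made-up helper, and the numbers are the ones quoted in the thread:

```python
def estimate_time(ref_time_s, ref_cores, ref_ghz, cores, ghz):
    """Scale a known render time to another core count and clock,
    assuming near-linear scaling (similar architecture and the same
    Blender version, as noted above)."""
    # Aggregate throughput ~ cores * GHz; more throughput, less time.
    return ref_time_s * (ref_cores * ref_ghz) / (cores * ghz)

# Reference: a 6900K (8 cores @ 3.4GHz) rendering the demo in ~26 s.
# Target: a 4.9GHz quad-core 4790K.
print(round(estimate_time(26.0, 8, 3.4, 4, 4.9), 1))  # ~36.1 s
```

That ~36.1 s estimate lands within a second of the 35.2 s Gumbi reported, with the small gap plausibly down to uncore clock, RAM, or process priority.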


----------



## Gumbi

Quote:


> Originally Posted by *oxidized*
> 
> Yes, I know. I'm not saying AMD does this while NVIDIA or Intel don't; everyone does it, showing results in a favorable environment with particular settings and tests. To me that's lying, or at least not telling the whole truth, if you want to put it a different way.
> 
> Again, I'll wait for independent tests and benchmarks. My PC is 6 years old and I'll need an upgrade; I really don't care where my money goes, I just care about performance and durability.


Well, objectively you're wrong. It's misleading to an ignorant consumer (and yes, I realise there are a lot of them out there), but it's not lying.

They tested with the same settings across all platforms; the 6900K/Ryzen could deal with it, the 6700K couldn't. Nothing about that is lying. Misleading to an ignorant consumer who may want to stream and now thinks Ryzen/6900K is the only option and a 6700K will choke? Sure. Lying? No.


----------



## formula m

People.. to me (visually), the live demo looks like it was run at 110, or 116.


----------



## Nenkitsune

Quote:


> Originally Posted by *formula m*
> 
> People.. to me (visually), the live demo looks like it was run at 110, or 116.


you mean this live demo?


----------



## Imouto

Quote:


> Originally Posted by *lolerk52*
> 
> That's exactly what I thought is going on there. It's very logical to do that, cuts nearly half a minute of basically doing nothing.
> 
> Just as a double confirmation from the anandtech forums:


That image is from the August event.

I made the same mistake with the times.

That time they got it done in 24 seconds.


----------



## Gumbi

Quote:


> Originally Posted by *Blameless*
> 
> Blender is a great benchmark partially because it's so easy to extrapolate results when you need to. It scales nearly perfectly linearly with both cores and clock speed.
> 
> 35.2 seconds is right where you should be because the maximum aggregate performance of a 4.9GHz quad core Haswell is just about 72% that of a 3.4GHz BW-E.
> 
> 26 (from http://www.overclock.net/t/1617227/amd-new-horizon-zen-preview-on-12-13-at-3-pm-cst/800#post_25711431) / 0.72 = 36.1 ... not perfect, but pretty damn close to what you're getting. The small discrepancy could be explained by higher uncore clock, faster memory, or any number of cumulative factors. Regardless, it's quite close and your part is right where I'd expect it to be.
> 
> Give me any clock speed and core count and I can tell you how Blender will perform with any similar architecture at any other clock speed and any other core count, within a ~5% margin of error by multiplying twice and dividing once...that's how perfectly the program scales. Only applies within the same version of Blender, of course.


Thanks, that helps a lot. My extra second can probably be explained by fast RAM, and maybe the fact that I ran it at real-time priority.


----------



## Nenkitsune

Quote:


> Originally Posted by *Imouto*
> 
> Quote:
> 
> 
> 
> Originally Posted by *lolerk52*
> 
> That's exactly what I thought is going on there. It's very logical to do that, cuts nearly half a minute of basically doing nothing.
> 
> Just as a double confirmation from the anandtech forums:
> 
> 
> 
> 
> That image is from the August event.
> 
> I made the same mistake with the times.
> 
> That time they got it done in 24 seconds.
Click to expand...

Regardless, the performance is still relevant. It's the same benchmark (a slightly older version, but the performance doesn't seem to differ).

At least in this one we can see they're using a render sample setting of 100, and we know they're getting around 24 s.


----------



## Blameless

Quote:


> Originally Posted by *Gumbi*
> 
> My extra second can probably be explained by fast RAM and also maybe the fact I ran it in real time priority.


Yep, uncore clock too.

BW-E defaults to a measly 2.8GHz uncore.


----------



## oxidized

Quote:


> Originally Posted by *Gumbi*
> 
> Well, objectively you're wrong. It's misleading to an ignorant consumer (and yes, I realise there are a lot of them out there), but it's not lying.
> 
> They tested with the same settings across all platforms; the 6900K/Ryzen could deal with it, the 6700K couldn't. Nothing about that is lying. Misleading to an ignorant consumer who may want to stream and now thinks Ryzen/6900K is the only option and a 6700K will choke? Sure. Lying? No.


So basically they chose specific settings, in whatever they're using to stream, in order to make a 6700K look horrible and let Ryzen compete with the 6900K, and that's not lying? Well, to me it is; what do you want me to say...


----------



## formula m

Quote:


> Originally Posted by *Nenkitsune*
> 
> you mean this live demo?


No. The other live event.


----------



## Shatun-Bear

First review thread of Ryzen is going to be like a 3000 post monster.


----------



## Gumbi

Quote:


> Originally Posted by *oxidized*
> 
> So basically they chose specific settings, in whatever they're using to stream, in order to make a 6700K look horrible and let Ryzen compete with the 6900K, and that's not lying? Well, to me it is; what do you want me to say...


It's a clear demonstration of the superiority of BOTH the 6900K AND Ryzen over the 6700K in some situations. How is that lying? The settings were the same across all systems.


----------



## Nenkitsune

Quote:


> Originally Posted by *formula m*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> you mean this live demo?
> 
> 
> 
> 
> 
> 
> No. The other live event.
Click to expand...

The live stream they did yesterday seems to have been run at around 150 samples to achieve the 35-ish second times.


----------



## SoloCamo

Quote:


> Originally Posted by *oxidized*
> 
> You can see for yourself by looking at the Polaris announcement and earlier stuff.
> You should really stop telling people to "get out of here"; you're starting to sound very annoying. The evidence is there, go find it yourself; it has been talked about again and again, months ago.
> Yeah, so basically what I said: why use certain streaming settings just to show allegedly better performance? A 6700K doesn't run even remotely that badly at usual settings, which are what pretty much everyone uses to stream.


I'm glad it's getting annoying to you; in fact, I'm really happy it is, because quite frankly it annoys me when people come into a thread like this and just post slander that has nothing to do with the discussion at hand. And second, if I claim the sky is yellow when it has been known to be blue up until now, I don't expect others to go research it themselves; it is, and should be, expected that I offer the evidence from the start.
Quote:


> Originally Posted by *oxidized*
> 
> So basically they chose specific settings, in whatever they're using to stream, in order to make a 6700K look horrible and let Ryzen compete with the 6900K, and that's not lying? Well, to me it is; what do you want me to say...


How is that lying? There are certainly tons of settings where a 6700K would choke and a higher-core-count CPU would be fine. This isn't lying; it's showing the advantage of the chip.

There are plenty of settings I wish I could run on my 4c/8t 4790K, but it simply can't handle them, whereas a chip like the 6900K or Zen would.

___

Now off to run this on my A8-6410 15W TDP laptop for laughs.


----------



## oxidized

Quote:


> Originally Posted by *SoloCamo*
> 
> I'm glad it's getting annoying to you; in fact, I'm really happy it is, because quite frankly it annoys me when people come into a thread like this and just post slander that has nothing to do with the discussion at hand. And second, if I claim the sky is yellow when it has been known to be blue up until now, I don't expect others to go research it themselves; it is, and should be, expected that I offer the evidence from the start.


You've also used that "sky is yellow" line once or twice already; you're starting to sound repetitive and annoying, and probably not only to me...
How does what I'm talking about have nothing to do with the thread? I'm talking about what AMD showed, nothing more.


----------



## ocvn

6950X with 2 cores disabled, all DF, clock locked to 3.4GHz, so it should match a 6900K.

Sample 100: 17 s

Sample 200: 34 s
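Taken at face value, ocvn's two runs also show the near-linear scaling with sample count that has been discussed for cores and clocks. A quick sanity check using only the numbers posted here (whatever explains the unusually fast absolute times, the per-sample rate should stay constant on a fixed rig):

```python
# Sanity check: Cycles render time should scale ~linearly with the
# sample count on a fixed rig. Times are the ones ocvn posted.
runs = {100: 17.0, 200: 34.0}  # samples -> seconds

rate_100 = runs[100] / 100  # seconds per sample
rate_200 = runs[200] / 200  # seconds per sample
assert abs(rate_100 - rate_200) < 1e-9  # both ~0.17 s/sample

# Extrapolate to the 150-sample setting discussed for the live stream
print(round(150 * rate_100, 1))  # ~25.5 s on this particular setup
```

The constant per-sample rate says the scaling holds within this rig; it doesn't explain why the absolute times beat other 6900K-class results in the thread.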


----------



## Gumbi

Quote:


> Originally Posted by *oxidized*
> 
> Yeah, like they care to show the 6900K is superior to the 6700K. In some situations such as? Streaming? A 6700K can do exceptionally well with normal settings, and again, I know it was the same settings across all the systems, there's no need to keep repeating that; it's still a cherry-picked setting chosen to show that difference. In normal conditions they would probably be on par (and it's an astonishing result from AMD, if these graphs and benchmarks are confirmed by independent testers).


Sometimes you HAVE to use cherry-picked data to _emphasise_ the difference between parts. It doesn't mean their claim is that the average Joe MUST get a 6900K/Ryzen to stream. The only claim that should reasonably be extrapolated from that demo is that in some situations a 6700K will choke whereas a 6900K/Ryzen won't.


----------



## Blameless

Regarding the streaming segment, I don't have a huge problem with it, or even consider it especially misleading.

I stream often enough that it's a priority when selecting parts, and I use x264 CPU encoding because to this day it gives superior quality to virtually any hardware encoder at a given bit rate, especially at lower bit rates.

The settings I use to stream are extremely demanding and will peg my OCed hex-core HW-E or BW-E at 80-95% overall cycle utilization, even in games that are themselves light on the CPU. Essentially no quad core will provide an acceptable experience with my stream settings, not even Skylake/Kaby Lake parts running at higher clock speeds with more IPC... aggregate CPU performance still falls a bit short.

If I were running an 8c/16t part, you can damn well bet I'd jack up my settings even further to make my streams look better for the bandwidth I have available, and a comparison between my daily settings and the fastest 6700K or 7700K you could get without sub-ambient cooling would look just like AMD's demo.
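For a concrete sense of what "demanding x264 settings" can mean, here is an illustrative ffmpeg invocation of the general sort described above; it is a config sketch only. The input file, bitrates, keyframe interval, and RTMP URL are all placeholders, not anyone's actual settings from this thread. The CPU-load knob is the preset: slower presets spend far more cycles per frame for better quality at the same bit rate.

```shell
# Hypothetical CPU-bound x264 streaming encode (all values are
# placeholders). "-preset slower" makes x264 spend far more CPU per
# frame than the "veryfast"-style presets most streamers default to.
ffmpeg -re -i game_capture.mkv \
  -c:v libx264 -preset slower \
  -b:v 6000k -maxrate 6000k -bufsize 12000k \
  -g 120 \
  -c:a aac -b:a 160k \
  -f flv rtmp://live.example.invalid/app/streamkey
```

Dropping the preset toward `veryfast` is how a quad core typically keeps up; the trade-off is quality per bit, which is exactly the gap an 8c/16t chip lets you close.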


----------



## Jpmboy

Quote:


> Originally Posted by *Blameless*
> 
> Except that 100 gives me 26 seconds with a CPU set up to match 6900K Blender performance.


Trying to match numbers to the degree you are is probably pointless; W10 has too many background tasks running. The only way to have identical OS environments is to launch in diagnostic mode.
Quote:


> Originally Posted by *formula m*
> 
> I was responding to the conspiracy theorists in this thread, and was explaining to them their points mean zero. Because..
> 
> The problem we are having, is not knowing the actual setting they used in the public render. She asked us to download and try ourselves. I don't think they were trying to be coy, or funny. What setting they used, will be released once AMD realizes we don't have the proper render settings.
> 
> What is NOT in question, is that running the same tests (side by side), that AMD's Ryzen chip is as fast as the Intel chip.


LOL - except AMD's equivalent chip comes out a year later (an eternity in this market space). The very good news is that AMD has now cut the deficit to a year.








(Seriously, I am looking forward to the launch of this CPU and to seeing whether it can overclock. I really do not care about performance at stock. If Zen can't do a 50% OC... well, shame on AMD.)
Quote:


> Originally Posted by *Nenkitsune*
> 
> I guess in a way, the exact settings aren't 100% important since we do know that it does perform on par with a 6900k at stock clocks. This means any test we do could be compared to a stock 6900k running the file they provided.


Again - WGAS about stock. It's the OC headroom that matters.


----------



## kd5151

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *ocvn*
> 
> 6950X with 2 cores disabled, all DF, clock locked to 3.4GHz, so it should match a 6900K.
> 
> Sample 100: 17 s
> 
> Sample 200: 34 s


----------



## formula m

Quote:


> Originally Posted by *kd5151*


11X <-- taken during that event. Clean up the picture with software and we might have an answer.


----------



## kd5151

lol


----------



## Blameless

Quote:


> Originally Posted by *ocvn*
> 
> 6950X with 2 cores disabled, all DF, clock locked to 3.4GHz, so it should match a 6900K.
> 
> Sample 200: 34 s


The saga continues then.

http://www.overclock.net/t/1601679/broadwell-e-thread/4050#post_25710774

http://www.overclock.net/t/1601679/broadwell-e-thread/4040#post_25710589

I'm starting to think that AMD may have silently updated the test.

Redownloading.


----------



## oxidized

Quote:


> Originally Posted by *Gumbi*
> 
> Sometimes you HAVE to use cherry-picked data to _emphasise_ the difference between parts. It doesn't mean their claim is that the average Joe MUST get a 6900K/Ryzen to stream. The only claim that should reasonably be extrapolated from that demo is that in some situations a 6700K will choke whereas a 6900K/Ryzen won't.


Are you sure it's "Emphasize" and not "Create"? Because I highly doubt either a Ryzen chip or a 6900K could do that much better than a 6700K in normal conditions (playing + streaming Dota 2 with the usual settings).

More than half the audience that attended and watched the event won't realize how those settings differ from the usual ones, because that's all they were shown, and they probably won't go looking for more data about the settings, the test, or the hardware. So to them, a 6700K running Dota 2 while streaming will look and run like poo.


----------



## Gumbi

Quote:


> Originally Posted by *oxidized*
> 
> Are you sure it's "Emphasize" and not "Create"? Because I highly doubt either a Ryzen chip or a 6900K could do that much better than a 6700K in normal conditions (playing + streaming Dota 2 with the usual settings).
> 
> More than half the audience that attended and watched the event won't realize how those settings differ from the usual ones, because that's all they were shown, and they probably won't go looking for more data about the settings, the test, or the hardware. So to them, a 6700K running Dota 2 while streaming will look and run like poo.


Who said anything about "normal settings"? (A subjective assertion, by the way.) It's an enthusiast chip, and it's not unusual to subject an enthusiast chip to extreme loads, streaming or otherwise.

I never said it wasn't misleading to a typical ignorant customer. But you're misrepresenting the situation.


----------



## Travieso

Quote:


> Originally Posted by *oxidized*
> 
> Are you sure it's "Emphasize" and not "Create"? Because I highly doubt either a Ryzen chip or a 6900K could do that much better than a 6700K in normal conditions (playing + streaming Dota 2 with the usual settings).
> 
> More than half the audience that attended and watched the event won't realize how those settings differ from the usual ones, because that's all they were shown, and they probably won't go looking for more data about the settings, the test, or the hardware. So to them, a 6700K running Dota 2 while streaming will look and run like poo.


There are also 6-core and 4-core Ryzen parts for mainstream consumers.


----------



## Blameless

Quote:


> Originally Posted by *Blameless*
> 
> I'm starting to think that AMD may have silently updated the test.
> 
> Redownloading.


Well, there goes that hypothesis:









Quote:


> Originally Posted by *oxidized*
> 
> Are you sure it's "Emphasize" and not "Create"? Because I highly doubt either a Ryzen chip or a 6900K could do that much better than a 6700K in normal conditions (playing + streaming Dota 2 with the usual settings).


Quantify "usual settings"? The pick-a-crappy-preset-and-go settings?

I suppose there's an argument for that, but I tune my settings to the task, and I'd expect serious streamers and hardware enthusiasts to do the same. No point in leaving CPU cycles unused.


----------



## Nenkitsune

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ocvn*
> 
> 6950X with 2 cores disabled, all DF, clock locked to 3.4GHz, so it should match a 6900K.
> 
> Sample 200: 34 s
> 
> 
> 
> The saga continues then.
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4050#post_25710774
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4040#post_25710589
> 
> I'm starting to think that AMD may have silently updated the test.
> 
> Redownloading.
Click to expand...

They didn't. Honestly, I have no idea what the deal with his scores is; they're WAY faster than anything else I've seen on similar rigs.


----------



## f1LL

Quote:


> Originally Posted by *oxidized*
> 
> Are you sure it's "Emphasize" and not "Create"? Because I highly doubt either a Ryzen chip or a 6900K could do that much better than a 6700K in normal conditions (playing + streaming Dota 2 with the usual settings).
> 
> More than half the audience that attended and watched the event won't realize how those settings differ from the usual ones, because that's all they were shown, and they probably won't go looking for more data about the settings, the test, or the hardware. So to them, a 6700K running Dota 2 while streaming will look and run like poo.


You keep going on about "usual" and "normal" streaming settings. What does that mean to you? In my opinion there's no such thing; I'd always go for the highest settings my bandwidth, hardware, and streaming platform will allow. So obviously a bigger CPU allows for better visual quality.

As for a 6700K being able to provide acceptable visual quality for streaming... I'm not too convinced of that. When I go to Twitch and watch professional broadcasts of esports events, even those are often of lackluster quality in my opinion, and I assume they don't skimp on hardware, since they make a living off it. So, in my opinion, we still have a long way to go on streaming quality.


----------



## ocvn

Quote:


> Originally Posted by *Blameless*
> 
> The saga continues then.
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4050#post_25710774
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4040#post_25710589
> 
> I'm starting to think that AMD may have silently updated the test.
> 
> Redownloading.


I re-ran sample 200, this time with 3.1GHz, 10 cores / 20 threads :thumb:


----------



## oxidized

Quote:


> Originally Posted by *Gumbi*
> 
> Who said anything about "normal settings"? (A subjective assertion by the way). It's an enthusiast chip and it's not unusual to subject an enthusiast chip to extreme loads, streaming or otherwise.


"Who said anything about normal settings?"?
One of their tests was playing and streaming Dota 2 at the same time; what would you call that, an extraordinary situation? It's basically one of the most common things gamers do nowadays, and they showed the 6700K performing terribly at it (thank god reality isn't even close to that). Streaming is something a 4C/8T chip can easily do while still getting a very good amount of FPS; damn, even my 2600K can do it no problem.
Quote:


> Originally Posted by *Travieso*
> 
> There are also 6-core and 4-core Ryzen parts for mainstream consumers.


Yeah, I'm sure of it, but then they shouldn't have used the 6700K as the competitor in that test; it would've been much better, and surely more appropriate, to show just Ryzen vs. the 6900K.


----------



## SoloCamo

Welp, my A8-6410 locked at 2.1GHz pulled an awesome time of just a hair over 4 minutes... with the sample size set to 100.

Not bad, I guess, for a $250 laptop with a 15W TDP APU based on Jaguar/Puma.


----------



## Nenkitsune

Quote:


> Originally Posted by *ocvn*
> 
> I re-ran sample 200, this time with 3.1GHz, 10 cores / 20 threads :thumb:


Why are your scores so insanely fast? Are you using that AVX thing, or whatever it's called (the experimental thing someone was pointing out earlier)?


----------



## SoloCamo

Quote:


> Originally Posted by *Blameless*
> 
> Well, there goes that hypothesis:
> 
> 
> Spoiler: Warning: Spoiler!


Interesting program. Do you have a link to it by chance? Found one on the net that looks like it but I'd rather play it safe.


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> "Who said anything about normal settings?"?
> One of their tests was playing and streaming Dota 2 at the same time; what would you call that, an extraordinary situation? It's basically one of the most common things gamers do nowadays, and they showed the 6700K performing terribly at it (thank god reality isn't even close to that). Streaming is something a 4C/8T chip can easily do while still getting a very good amount of FPS; damn, even my 2600K can do it no problem.
> Yeah, I'm sure of it, but then they shouldn't have used the 6700K as the competitor in that test; it would've been much better, and surely more appropriate, to show just Ryzen vs. the 6900K.


Why more appropriate?

What if the SR7 costs closer to the 6700K but performs like a 6900K? Do you understand now?


----------



## Darkpriest667

Quote:


> Originally Posted by *oxidized*
> 
> "Who said anything about normal settings?"?
> One of their tests was playing and streaming Dota 2 at the same time. What would you call that, an extraordinary situation? It's one of the most common things gamers do nowadays, and they showed the 6700K performing poorly at it (thank god it's not actually even close to that bad). Streaming is something a 4C/8T chip can easily do while still getting a very good amount of FPS; even my 2600K can manage it with no problems.
> Yeah, I'm sure of it, so they shouldn't have used the 6700K as the competitor in that test; it would have been much better, and surely much more appropriate, to see just Ryzen vs. the 6900K.


As someone who streams and had a 4GHz 2600K: you cannot stream and play most games on ultra with no problems, and I'd love to see video proof otherwise. I bought a 5820K because the 2600K was not keeping up; forget video encoding, streaming alone was killing my 2600K.

My hopes for Zen are high, my expectations are low. Bulldozer made me gun-shy. I'd love to buy an 8C/16T processor with Haswell or Broadwell IPC for 500 bucks.


----------



## Travieso

Quote:


> Originally Posted by *oxidized*
> 
> Yeah, I'm sure of it, so they shouldn't have used the 6700K as the competitor in that test; it would have been much better, and surely much more appropriate, to see just Ryzen vs. the 6900K.


Why? They just showed that 4 cores aren't enough for heavily loaded tasks anymore.

They might have wanted to convince all enthusiasts to move to 8 cores. Yeah, this benefits Intel as well, but I'm pretty sure right now everyone is holding out for the Zen release.


----------



## guttheslayer

Quote:


> Originally Posted by *Travieso*
> 
> Why? They just showed that 4 cores aren't enough for heavily loaded tasks anymore.
> 
> They might have wanted to convince all enthusiasts to move to 8 cores. Yeah, this benefits Intel as well, but I'm pretty sure right now everyone is holding out for the Zen release.


How does this benefit Intel when an 8-core Ryzen will crush a 4-core i7 in multithreading if they are priced similarly? The Intel quad core will be left in the dust.


----------



## GorillaSceptre

Man, I really hope it's every bit as good as this preview showed. I'd love to have an 8-core monster that didn't cost $1000+ (it had better not be near that). From what we've seen (yes, pinch of salt) it even matches or exceeds Intel's perf-per-watt; I never would have guessed that.

$499 too much to ask for, AMD?


----------



## Travieso

Quote:


> Originally Posted by *guttheslayer*
> 
> How does this benefit Intel when an 8-core Ryzen will crush a 4-core i7 in multithreading if they are priced similarly? The Intel quad core will be left in the dust.


Because Intel also has an 8-core CPU, and we don't know Zen's pricing yet.


----------



## guttheslayer

Quote:


> Originally Posted by *Travieso*
> 
> Because Intel also has an 8-core CPU, and we don't know Zen's pricing yet.


You know Intel's latest 8-core and 10-core pricing is out of reach for most people, right?


----------



## Blameless

Quote:


> Originally Posted by *SoloCamo*
> 
> Interesting program. Do you have a link to it by chance? Found one on the net that looks like it but I'd rather play it safe.


That's WinDiff; it used to ship with Windows, and I've been using it for almost twenty years.

If you don't have a Windows 2k or XP install disc or ISO, I can compress my directory and upload it for you.
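For anyone who doesn't want to hunt down an old install disc, a rough stand-in for the line-by-line comparison can be sketched with Python's stdlib (WinDiff itself is a GUI tool; this only mimics the basic diff output):

```python
import difflib

# Line-by-line unified diff of two text files; a rough, CLI-flavored
# stand-in for what WinDiff shows graphically.
def diff_files(path_a: str, path_b: str) -> str:
    with open(path_a) as fa, open(path_b) as fb:
        a, b = fa.readlines(), fb.readlines()
    return "".join(difflib.unified_diff(a, b, fromfile=path_a, tofile=path_b))
```

An empty result means the files are line-for-line identical.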
Quote:


> Originally Posted by *GorillaSceptre*
> 
> Man, I really hope it's every bit as good as this preview showed. I'd love to have an 8-core monster that didn't cost $1000+ (it had better not be near that). From what we've seen (yes, pinch of salt) it even matches or exceeds Intel's perf-per-watt; I never would have guessed that.
> 
> $499 too much to ask for, AMD?


If it's sub 500 and hits at least 4GHz, I'm probably going to be teaching all the punks accusing me of being an anti-AMD conspiracy theorist how to overclock the thing in the Zen thread in two months.


----------



## guttheslayer

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Man, I really hope it's every bit as good as this preview showed. I'd love to have an 8-core monster that didn't cost $1000+ (it had better not be near that). From what we've seen (yes, pinch of salt) it even matches or exceeds Intel's perf-per-watt; I never would have guessed that.
> 
> $499 too much to ask for, AMD?


Maybe $500 for the usual 8-core model and $800 for an extreme binned one that goes >4GHz.


----------



## oxidized

Quote:


> Originally Posted by *formula m*
> 
> Why more appropriate?
> What if the SR7 costs closer to the 6700K but performs like a 6900K? Do you understand now?


That's another thing, but I seriously doubt it'll cost less than $600/€600. AMD, prove me wrong, please!
Quote:


> Originally Posted by *Darkpriest667*
> 
> As someone who streams and had a 4GHz 2600K: you cannot stream and play most games on ultra with no problems, and I'd love to see video proof otherwise. I bought a 5820K because the 2600K was not keeping up; forget video encoding, streaming alone was killing my 2600K.
> 
> My hopes for Zen are high, my expectations are low. Bulldozer made me gun-shy. I'd love to buy an 8C/16T processor with Haswell or Broadwell IPC for 500 bucks.


As someone who has streamed a little Dota 2 (since that was in the test): I kept the game close to maximum settings. Of course I had framerate drops, but I stayed above 60 FPS, and the stream held at least 30 FPS, maybe downscaled to 720p (and I'm pretty sure the stream in AMD's test yesterday was also downscaled to that, or very close: https://youtu.be/4DEfj2MRLtA?t=2833)
Quote:


> Originally Posted by *Travieso*
> 
> Why? They just showed that 4 cores aren't enough for heavily loaded tasks anymore.
> 
> They might have wanted to convince all enthusiasts to move to 8 cores. Yeah, this benefits Intel as well, but I'm pretty sure right now everyone is holding out for the Zen release.


That's the point: streaming isn't that bad on a 6700K, and it's not even such a heavy task; it's not an i5 we're talking about here.


----------



## duganator

A 6700k will struggle hard gaming and streaming at 60fps. Anyone who thinks otherwise doesn't know what they are talking about.


----------



## Travieso

Quote:


> Originally Posted by *guttheslayer*
> 
> You know Intel's latest 8-core and 10-core pricing is out of reach for most people, right?


I hope AMD will price the 8-core Zen much more reasonably than Intel's counterparts.


----------



## oxidized

Quote:


> Originally Posted by *duganator*
> 
> A 6700k will struggle hard gaming and streaming at 60fps. Anyone who thinks otherwise doesn't know what they are talking about.


No it won't, and you don't know what you're talking about, especially with games like Dota 2.


----------



## ocvn

Quote:


> Originally Posted by *Nenkitsune*
> 
> Why are your scores so insanely fast? are you using that AVX thing or whatever it's called? (the experimental thing someone was pointing out earlier)


Someone in this thread posted links to both Blender and the file, so I downloaded those to run. I don't know whether that build has AVX instructions or not.


----------



## guttheslayer

Actually it depends on two things:

How well it performs.
How expensive it is.

If it performs significantly faster than the Intel 6900K, or even kicks the 6950X off the crown, AMD has the right to price it close to the $1000 range.

However, if it performs slightly slower or just as fast, it will make more sense to price it at $500 to build a good brand name for Zen.


----------



## Travieso

Quote:


> Originally Posted by *oxidized*
> 
> That's the point: streaming isn't that bad on a 6700K, and it's not even such a heavy task; it's not an i5 we're talking about here.


I don't stream myself, but from what I've heard from people here with streaming experience, you are completely wrong.


----------



## kd5151

My HP HTPC: Intel Core i3 4170 @ stock 3.7GHz, Windows 10 (clean boot), 1600MHz DDR3 memory at 11-11-11-28 1T (single channel only), 1TB 7200RPM WD Blue HDD, Blender 2.78.

Score last night @ 200 - 3:09
Today @ 200 - 3:10
Today @ 100 - 1:36 (tested it twice, same score)


----------



## Nenkitsune

Quote:


> Originally Posted by *ocvn*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> Why are your scores so insanely fast? are you using that AVX thing or whatever it's called? (the experimental thing someone was pointing out earlier)
> 
> 
> 
> Someone in this thread posted links to both Blender and the file, so I downloaded those to run. I don't know whether that build has AVX instructions or not.

did you get blender straight from the blender website?


----------



## oxidized

Quote:


> Originally Posted by *Travieso*
> 
> I don't stream myself, but from what I've heard from people here with streaming experience, you are completely wrong.


And from my experience they're completely wrong. So what do we do?


----------



## Mahigan

My 3930K @ 4.5GHz does it in 1:05:25.

So it seems to me like I'll be upgrading and going AMD if things pan out.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Blameless*
> 
> If it's sub 500 and hits at least 4GHz, I'm probably going to be teaching all the punks accusing me of being an anti-AMD conspiracy theorist how to overclock the thing in the Zen thread in two months.

Quote:


> Originally Posted by *guttheslayer*
> 
> Maybe $500 for the usual 8-core model and $800 for an extreme binned one that goes >4GHz.


My dream scenario is to get the king of the hill for $499 (impossible, I know). I want AMD and Intel to have a price war for the next 5 years, and for an unlucky shop to mail me one by accident. Santa, PLZ..


----------



## lolerk52

Quote:


> Originally Posted by *Blameless*
> 
> The saga continues then.
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4050#post_25710774
> 
> http://www.overclock.net/t/1601679/broadwell-e-thread/4040#post_25710589
> 
> I'm starting to think that AMD may have silently updated the test.
> 
> Redownloading.


I did a checksum of the file I've had on my computer and then of a fresh one downloaded from AMD; they are identical.
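For anyone who wants to repeat the comparison, hashing both downloads is enough; a minimal sketch using Python's stdlib (the filenames in the commented line are just placeholders, not AMD's actual file names):

```python
import hashlib

# Hash a file in chunks so a large download doesn't need to fit in RAM.
def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Identical digests => identical files, for all practical purposes, e.g.:
# print(sha256_of("demo_old.blend") == sha256_of("demo_new.blend"))
```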


----------



## Blameless

Quote:


> Originally Posted by *oxidized*
> 
> it's not even such a heavy task; it's not an i5 we're talking about here


Screen shot taken right after I stopped one of my Elite: Dangerous streams about 18 months ago. This was running on my first 5820K @ 4.2GHz.

80-85% of that load is the encoder.
Quote:


> Originally Posted by *lolerk52*
> 
> I did a checksum of the file I've had on my computer and then of a fresh one downloaded from AMD; they are identical.


Yeah I already compared them, good to have confirmation though.


----------



## ocvn

Quote:


> Originally Posted by *Nenkitsune*
> 
> did you get blender straight from the blender website?


No; I'm downloading it from the website now to retest.


----------



## Travieso

Quote:


> Originally Posted by *oxidized*
> 
> And from my experience they're completely wrong. So what do we do?


They already argued that streaming at normal settings with very average image quality (so that a 6700K can handle it) isn't good enough for them; they can easily tell normal settings from the highest settings.

Do you have any answer for them?

OK, I saw the proof from Blameless. There's no way a 6700K can handle that kind of load.

I think you, @oxidized, are full of crap.


----------



## lolerk52

Quote:


> Originally Posted by *Blameless*
> 
> Yeah I already compared them, good to have confirmation though.


Yeah, spotted that comparison after I posted it.


----------



## CrazyElf

I've done a bit more digging on AVX2:
https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.72/Cycles

Quote:


> Optimizations
> 
> Importance sampling for Glossy and Anisotropic BSDFs has been improved, which results in less noise per sample. The gives increased render time as well, but it's generally more than compensated by the reduction in noise.
> 
> Decreased memory usage during rendering. (49df707496e5, 0ce3a755f83b)
> 
> *CPUs with support for AVX2 (e.g. Intel Haswell) render a few percent faster. (866c7fb6e63d)*


I'm curious what the Stilt did. Probably full AVX2?

See here too:
https://blenderartists.org/forum/showthread.php?350476-Does-Blender-take-advange-of-CPU-instruction-sets

Quote:


> Modern CPU's should pretty much support all of those.
> 
> Exceptions might be AVX and AVX2, but the only part of Blender that uses them is Cycles and has fallbacks to SSE instructions if it's not supported on your machine.


Quote:


> Originally Posted by *Blameless*
> 
> Assuming render samples is the only difference, ~140 is what I suspect they used. That's just a ballpark figure though, and even if I'm only off by five in either direction, it not very useful.


We need AMD to confirm the test parameters.

There's really no other way at this point.

Quote:


> Originally Posted by *Nenkitsune*
> 
> Why are your scores so insanely fast? are you using that AVX thing or whatever it's called? (the experimental thing someone was pointing out earlier)


AVX is the instruction set extension introduced with Sandy Bridge; AVX2 further extends it and was introduced with Haswell.

Unfortunately for us, beyond the Xeon Phi, there haven't been plans for AVX-512 (aka AVX3) on desktop CPUs. The best we can hope for is that HEDT was used.

What The Stilt did was recompile Blender to use the AVX instruction set (I presume AVX2, but I will need to ask him).
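Whether a given Blender build actually uses AVX2 is a compile-time question, but you can at least check what your own CPU supports before comparing scores. A minimal sketch (Linux-only, reading the kernel's /proc/cpuinfo flags; returns an empty set elsewhere):

```python
# Collect the CPU feature flags the kernel reports (Linux-only; on other
# OSes the file doesn't exist and this just returns an empty set).
def cpu_flags() -> set:
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_flags()
print("AVX: ", "avx" in flags)   # Sandy Bridge and newer
print("AVX2:", "avx2" in flags)  # Haswell and newer
```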

Quote:


> Originally Posted by *Mahigan*
> 
> My 3930K @ 4.5GHz does it in 1:05:25.
> 
> So it seems to me like I'll be upgrading and going AMD if things pan out.


Right now we gotta figure out the test parameters.

But if the IPC is actually 2-3% better than Broadwell (so maybe 5% better than Haswell), this could be awesome.

The other takeaway, of course, is that the power efficiency looks great too.

Edit:
We have the slide from AMD:


Still leaves more questions than answers. One important thing to note for those of us with HEDT systems is that in their benchmarks, AMD was running the X99 system's RAM in dual channel at 2400.

We don't know, of course, what Zen's IMC is capable of compared to Broadwell's in terms of clock speed and timings, just that it's dual channel.
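For a rough sense of what dual channel at 2400 means, peak theoretical bandwidth is just channels x transfer rate x bus width (this ignores timings, which also matter); a back-of-envelope sketch:

```python
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    # DDR4 is 64 bits (8 bytes) per channel; MT/s already counts both clock edges.
    return channels * mt_per_s * bus_bytes / 1e3  # GB/s, decimal

print(peak_bandwidth_gbs(2, 2400))  # dual channel DDR4-2400 -> 38.4 GB/s
print(peak_bandwidth_gbs(4, 2400))  # what X99 could do in quad channel -> 76.8 GB/s
```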


----------



## oxidized

Quote:


> Originally Posted by *Travieso*
> 
> They already argued that streaming at normal settings with very average image quality (so that a 6700K can handle it) isn't good enough for them; they can easily tell normal settings from the highest settings.
> 
> Do you have any answer for them?


Man, what are you, some kind of judge? I used to stream like them, and I'm saying Dota doesn't run that badly on my 2600K; "enough for them" doesn't apply here. Now I'm doing my own tests; I'll post when I finish.


----------



## Nenkitsune

Quote:


> Originally Posted by *ocvn*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Nenkitsune*
> 
> did you get blender straight from the blender website?
> 
> 
> 
> No, download it from the website to retest

https://www.blender.org/download/

use this version of blender.


----------



## Darkpriest667

Quote:


> Originally Posted by *oxidized*
> 
> Man, what are you, some kind of judge? I used to stream like them, and I'm saying Dota doesn't run that badly on my 2600K; "enough for them" doesn't apply here. Now I'm doing my own tests; I'll post when I finish.


Yeah, do Battlefield 1, and you can even downscale to 720p (I don't). Let me know how that goes.


----------



## Travieso

Quote:


> Originally Posted by *oxidized*
> 
> Man, what are you, some kind of judge? I used to stream like them, and I'm saying Dota doesn't run that badly on my 2600K; "enough for them" doesn't apply here. Now I'm doing my own tests; I'll post when I finish.


I'm playing GTA V on a GTX 970 @ 1080p 60Hz. I think that's good enough for me.

But that doesn't mean it's good enough for enthusiasts who want the absolute best from their systems.

Same as your streaming argument.

You are full of crap. Please stop.


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> No it won't and you don't know what you're talking about. Especially games like Dota 2


Do you game at 1080p..?


----------



## EniGma1987

Quote:


> Originally Posted by *budgetgamer120*
> 
> How comes?
> It will be great for me. I don't overclock. What would lead you to even think about Zen not being able to overclock?
> 
> Before today I read here that it won't be 3ghz. Now we should expected overclocking.


Where did you read that? lol. I and many others were expecting a max speed of 3.5GHz, probably slightly under, and we nailed it right on the head with the announced 3.4GHz clock speed.


----------



## DNMock

I hear at the Legion of Doom NSA Headquarters, which I assume is either on the dark side of the moon, or in the heart of an active volcano somewhere, someone using their latest quantum computer completed the blender test in .87 seconds, so it's not really all that impressive.


----------



## AmericanLoco

Just wondering if there's been any mention at all of Opteron Zen variants yet?


----------



## CrazyElf

Take a look at this:
https://blenderartists.org/forum/showthread.php?397369-Possible-to-use-GPU-CPU-rendering-simultaneously

Would it be possible that they used 200 samples, but used the RX480 on each sample to render simultaneously with the respective CPUs?

Quick tutorial: (video embed)
If so, we will need the exact test set up to confirm.

I've read as well that dual channel actually renders slightly faster than quad channel on Blender, although I need confirmation on this one.

Quote:


> Originally Posted by *AmericanLoco*
> 
> Just wondering if there's been any mention at all of Opteron Zen variants yet?


Yes, but unconfirmed.


32 cores https://www.extremetech.com/extreme/222921-amd-is-supposedly-planning-a-32-core-cpu-with-an-eight-channel-ddr4-interface
up to 8 channels of DDR4
up to 64 lanes of PCIe 3 http://www.anandtech.com/show/10581/early-amd-zen-server-cpu-and-motherboard-details-codename-naples-32cores-dual-socket-platforms-q2-2017

These are still just rumors, though. Oh, and if you're interested: the Intel Skylake Xeon E5 2699v5 is rumored to be a 32-core monster as well. We believe it is 2.1GHz base, and this being Purley, it can support 6 channels of DDR4.

We will know more soon enough. I'm interested to see whether they actually managed to cram 32 cores onto a single massive die, or whether it's 16 cores + 16 cores on an MCM package.


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> A 3.4GHz octo-core only matching a 4.6GHz Ivy quad core in a highly threaded test when the former is supposed to have more IPC than the later is not impressive. If it was accurate, it would be hugely disappointing.
> 
> Blender scales almost perfectly with cores and clock speed. It's as CPU dependent and as parallel as it gets. You should need a ~7GHz hyperthreaded Ivy quad to match at stock 6900K or 3.4GHz Zen.
> 
> However, I do not think it's accurate, because the 100 sample size used is still not producing comparable scores to AMD's demo.


You forgot that the 3.4GHz octo-core also beat a 6900K? You must have forgotten; I didn't see it mentioned.

Bias is bias.


----------



## oxidized

Quote:


> Originally Posted by *formula m*
> 
> Do you game at 1080p..?


Of course I do; I just stream downscaled. After all, my system can't really do more than that (2600K default, turbo boost 4.46GHz - GTX 580 1.5GB).

Anyway, I've just finished my three test streams.

The first - I set OBS to 60 FPS streaming with no scaling from the source, so 1080p, as I'm playing it. In this one I also show the settings I'm using in Dota; those are the same for every stream I made.

https://www.twitch.tv/rockiieee/v/107399884

The second - I set OBS to 60 FPS, but this time with a 1.5x downscale to 720p, same settings.

https://www.twitch.tv/rockiieee/v/107400668

The third, the one I'd use with my current PC - I set OBS to 30 FPS and again a 1.5x downscale, so 720p again, same settings.

https://www.twitch.tv/rockiieee/v/107401548

Now, it's true the workload in Task Manager is higher than I expected, but it's nowhere close to what AMD showed (remember I'm running an i7 2600K at stock + boost | PNY GTX 580 1.5GB).

At the first settings it isn't very enjoyable for me: I'm around 50-60 FPS, it's not that smooth, and the stream is pretty much unwatchable. The second is far better for me: I get 70+ FPS, so totally enjoyable, and slightly better for the stream, but still unwatchable. With the third I'm all good: 70-80+ FPS, the stream runs at a solid 30 FPS (it drops a bit sometimes, but it's not even noticeable), and it is definitely watchable. Decent, nothing super, and I really have no FPS problems from my POV.
Also keep in mind I'm running two monitors (the second is pretty small, though: 5:4 1280x1024 60Hz; the main is a 23" FHD 60Hz), and I had at least 10 other tasks and at least 5 Chrome tabs open.

So in the end, yes, at 60 FPS streaming it struggles, especially at native resolution; but this was nothing close to what AMD showed, UNLESS they were playing at UHD and streaming at FHD or above.

Don't mind my Dota; I'm horrible. After all, I was just testing.


----------



## budgetgamer120

Quote:


> Originally Posted by *EniGma1987*
> 
> Where did you read that? lol. I and many others were expecting a max speed of 3.5GHz, and probably slightly under, and we nailed right right on the head with the announcement of 3.4GHz clock speed.


The CEO clearly said the slowest Zen will be 3.4GHz base, so there will be higher-clocked ones.


----------



## Disharmonic

Not to mention that it's clocked higher than the 6900K, and at a 95W TDP at that.


----------



## Blameless

Quote:


> Originally Posted by *CrazyElf*
> 
> Take a look at this:
> https://blenderartists.org/forum/showthread.php?397369-Possible-to-use-GPU-CPU-rendering-simultaneously
> 
> Would it be possible that they used 200 samples, but used the RX480 on each sample to render simultaneously with the respective CPUs?


I can use my R9 290Xs (either or both, but not in conjunction with my CPU for some reason) in Blender, and their performance is crap. Unless the RX 480 performs vastly better than the 290X in OpenCL (and I haven't seen that anywhere), using the OpenCL renderer would probably slow these tests down.

I'll experiment with some more settings to see if there is a way for my GPUs to significantly alter the times.
Quote:


> Originally Posted by *budgetgamer120*
> 
> You forgot that the 3.4GHz octo-core also beat a 6900K? You must have forgotten; I didn't see it mentioned.


No. In this case the 3.4GHz octo-core _was_ the 6900K (3.4GHz is its all-core turbo and the most likely clock it would have run at in Blender).

I was pointing out that a quad core Ivy would need to be clocked at around 7GHz to beat a 6900K.

Obviously, that implies it would need to be clocked at least that fast to beat the Zen chip demoed, as they are so close in performance. But that's neither here nor there: I was simply correcting the misconception that Blender does not scale well (it scales almost perfectly), not trying to engage in an AMD vs. Intel pissing match, when we've known for months that Zen is, clock for clock and core for core, a match for Broadwell in Blender.
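The back-of-envelope math behind that ~7GHz figure, assuming near-perfect scaling and setting aside IPC and Hyper-Threading differences:

```python
# If render throughput ~ cores x clock, this is the clock a chip with
# fewer cores needs to match a bigger one (crude model: equal IPC, no HT bonus).
def clock_to_match(target_cores: int, target_ghz: float, my_cores: int) -> float:
    return target_cores * target_ghz / my_cores

# 6900K: 8 cores at its 3.4GHz all-core turbo, matched by a quad core:
print(clock_to_match(8, 3.4, 4))  # -> 6.8 (GHz), roughly the ~7GHz claimed
```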


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> So in the end, yes, at 60 FPS streaming it struggles, especially at native resolution; but this was nothing close to what AMD showed, UNLESS they were playing at UHD and streaming at FHD or above.


*So you agree with everyone in this thread?*

And if you want to game at modern resolutions (1440p/4K) and record & stream with modern quality, AMD's Ryzen is a better choice than my Devil's Canyon?


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> I can use my R9 290Xes (either or both, but not in conjunction with my CPU for some reason) in Blender and their performance is crap. Unless RX480 performs vastly better than 290X in OpenCL (and I haven't seen this anywhere else), using the OCL renderer would probably slow down theses tests.
> 
> I'll experiment with some more settings to see if there is a way for my GPUs to significantly alter the times.
> No. In this case the 3.4GHz octo core _was_ the 6900K (3.4GHz is it's all core turbo and the most likely clock it would have run at in Blender).
> 
> I was pointing out that a quad core Ivy would need to be clocked at around 7GHz to beat a 6900K.
> 
> Obviously, that implies it would need to be clocked at least that fast to beat the Zen chip demoed, as they are so close in performance, but that's neither here nor there as I was simply correcting the misconception that Blender does not scale well (Blender scales almost perfectly), not trying to engage in an AMD vs. Intel pissing match when we've known that Zen is clock for clock, core for core, a match for Broadwell in Blender for months.


Ok understood.


----------



## Jpmboy

Quote:


> Originally Posted by *guttheslayer*
> 
> Actually it depend 2 things.
> How well it perform
> How expensive it is.
> If it perform significantly faster than Intel 6900K *or even kick 6950X off the crown*. AMD has the right to price close to $1000 range.
> However if it perform slightly slower or just as fast, it will make more sense to price it at $500 to set up a good brand name with Zen.


lol - that's hilarious. The tin foil hats are coming out now.

Guys... the bench AMD showed compared stock clocks. This is completely meaningless. This is overclock.net, not stockclock.net. Knowing the performance of the chip at default settings tells you nothing about its overclocking headroom. Unless the Zen 8-core can do a 50% OC, it will not perform better than a 3-year-old 5960X.


----------



## oxidized

Quote:


> Originally Posted by *formula m*
> 
> *So you agree with everyone in this thread..?*
> 
> And if you want to game at modern resolution (1440/4k)... and record & stream with modern quality, that AMD's Ryzen is a better choice, than my Devil's Canyon..?


I guess in my case, with a slightly better video card, everything would be much better, and I could also stream at FHD and at more than 30 FPS. Also, do you realize what bandwidth and what video card you'd need to stream at ultra above 1080p at 60 FPS? (If that's even possible; I don't know anyone who streams games above FHD, and I'm not sure it's doable on Twitch. Maybe on YouTube? I don't know.)


----------



## budgetgamer120

Quote:


> Originally Posted by *Jpmboy*
> 
> lol - that's hilarious. The tin foil hats are coming out now.
> 
> Guys... the bench AMD showed compared stock clocks. This is completely meaningless. This is overclock.net, not stockclock.net. Knowing the performance of the chip at default settings tells you nothing about its overclocking headroom. Unless the Zen 8-core can do a 50% OC, it will not perform better than a 3-year-old 5960X.


Maybe meaningless for you. I'm a PC user who does not overclock, and I see Zen beating a $1000 CPU. So it will all come down to price, seeing as I already have an X99 system.


----------



## Blameless

Quote:


> Originally Posted by *oxidized*
> 
> I guess in my case, with a slightly better video card, everything would be much better, and I could also stream at FHD and at more than 30 FPS. Also, do you realize what bandwidth and what video card you'd need to stream at ultra above 1080p at 60 FPS? (If that's even possible; I don't know anyone who streams games above FHD, and I'm not sure it's doable on Twitch. Maybe on YouTube? I don't know.)


Twitch is precisely where you want a fast CPU if you are streaming fast paced content, even at fairly low res; they have a measly 3.5megabit cap which means every ounce of quality you can get out of the encoder matters.

Youtube can handle about five times the bandwidth of Twitch and at 1080p or lower this is plenty even with less than perfect encoding.
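One way to see why encoder quality matters so much under Twitch's cap is bits-per-pixel at typical streaming settings (the 3.5 megabit figure is from the post above; the resolutions and framerates are just illustrative):

```python
# Bits the encoder can spend per pixel per frame at a given stream config;
# lower numbers mean the encoder has to work harder for watchable quality.
def bits_per_pixel(bitrate_bps: float, width: int, height: int, fps: int) -> float:
    return bitrate_bps / (width * height * fps)

print(round(bits_per_pixel(3.5e6, 1280, 720, 60), 3))  # 720p60 -> ~0.063
print(round(bits_per_pixel(3.5e6, 1280, 720, 30), 3))  # 720p30 -> ~0.127
```

Halving the framerate doubles the bit budget per pixel, which is why 720p30 streams tend to look cleaner at the same bitrate.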


----------



## hawker-gb

There is a lot of butthurt in this thread.
Why?

Xperia Z5 via Tapatalk


----------



## darealist

There's a reason they compared it to a 6700K in daily tasks and showcased an 8-core processor at a gaming event: it will be priced competitively against the 6700K with the performance of a 6900K.


----------



## oxidized

Quote:


> Originally Posted by *Blameless*
> 
> Twitch is precisely where you want a fast CPU if you are streaming fast paced content, even at fairly low res; they have a measly 3.5megabit cap which means every ounce of quality you can get out of the encoder matters.
> 
> Youtube can handle about five times the bandwidth of Twitch and at 1080p or lower this is plenty even with less than perfect encoding.


So Twitch doesn't allow streaming higher than FHD (and to do even that you need to be a partner), nor higher than 60 FPS, of course. And we can see you can accomplish that even with a 4C/8T like the 2600K, though you'd need a far better video card than my 580.

About YouTube I don't know; very few gamers stream on YouTube, and when they do it's usually not above FHD at 60 FPS, and again you could accomplish that with a good 4C/8T and a good video card.
Quote:


> Originally Posted by *darealist*
> 
> There's a reason they compared it to a 6700K in daily tasks and showcased an 8-core processor at a gaming event: it will be priced competitively against the 6700K with the performance of a 6900K.


I think that's what everyone here hopes, me included; otherwise throwing the 6700K into the 8C/16T battle doesn't make sense and won't look good for AMD.


----------



## Darkpriest667

Quote:


> Originally Posted by *Blameless*
> 
> Twitch is precisely where you want a fast CPU if you are streaming fast paced content, even at fairly low res; they have a measly 3.5megabit cap which means every ounce of quality you can get out of the encoder matters.
> 
> Youtube can handle about five times the bandwidth of Twitch and at 1080p or lower this is plenty even with less than perfect encoding.


It was Dota 2... a toaster could run it. For those of us playing games like Elite or Battlefield 1 or Civ VI, you're gonna need better than a 2600K and a 580 to stream. The reason they can even stream at a good rate with a 580 is, again, because it's Dota...


----------



## Jpmboy

Quote:


> Originally Posted by *budgetgamer120*
> 
> Maybe meaningless for you. I'm a PC user who does not overclock, and I see Zen beating a $1000 CPU. So it will all come down to price, seeing as I already have an X99 system.


if you do not overclock and need the performance you claim, then comparing unlocked CPUs is even more tin-hat territory.


Quote:


> Originally Posted by *darealist*
> 
> There's a reason they compared it to a 6700k in daily tasks, and showcased an 8-core processor in a gaming event. It will be competitively priced against a 6700k with a performance of a 6900k.


now that would generate the sales AMD needs.


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> So Twitch doesn't allow streaming at higher than FHD (and to do that you need to be a partner), and also not higher than 60FPS of course, and as you can see you can accomplish that even with a 4C/8T like the 2600K, but you'd need a far better video card than my 580
> 
> As for YouTube, I don't know; very few gamers stream there, and when they do it's usually not above FHD at 60 FPS, and again that could be done with a good 4C/8T and a good video card
> That's what everyone here hopes, me included; otherwise throwing the 6700K into the 8C/16T battle makes no sense and won't look good for AMD


*Do you understand that the requirements to encode and stream at higher resolutions increase significantly..?*

Nobody is talking about Twitch except you. Our standards are higher for what we are talking about. No wonder you think your 5-year-old rig is up to snuff; you're living in mediocrity.


----------



## DNMock

Quote:


> Originally Posted by *guttheslayer*
> 
> How does this benefit intel when Ryzen 8 cores will crush 4 cores I7 if they are priced similarly (for multithreading)... the intel quad core will be left in the dust.


Well, Ryzen SR7 probably won't launch at the same time as the other Zen SR products, but a little later. Intel mainstream CPUs will be 6-core starting sometime next year, I believe, so yeah. MOAR COREZ!


----------



## oxidized

Quote:


> Originally Posted by *Darkpriest667*
> 
> it was DOTA 2... A toaster could run it. For those of us playing games like ELITE or Battlefield 1 or CIV VI you're gonna need better than a 2600k and 580 to stream. *The reason they can even stream at a good rate with a 580 is again because its DOTA*...


So why are you talking about the video card now? I'll tell you why: because if you want to stream BF1 at ultra, it first needs to run well on your system. Once you have that, you can stream no problem, because you don't need double your PC's computational power to stream what you're playing.
Quote:


> Originally Posted by *formula m*
> 
> *Do you understand that the requirements to encode and stream at higher resolutions increase significantly..?*
> 
> Nobody is talking about Twitch except you. Our standards are higher for what we are talking about. No wonder you think your 5-year-old rig is up to snuff; you're living in mediocrity.


What's HIGHER? There's no higher; that's the max you can stream, there's nothing past that.


----------



## Blameless

Quote:


> Originally Posted by *Darkpriest667*
> 
> it was DOTA 2... A toaster could run it. For those of us playing games like ELITE or Battlefield 1 or CIV VI you're gonna need better than a 2600k and 580 to stream. The reason they can even stream at a good rate with a 580 is again because its DOTA...


This is true.
Quote:


> Originally Posted by *oxidized*
> 
> About youtube i don't know, very few gamers stream on youtube, and when they do most of the time isn't above FHD and at 60FPS, and again we could accomplish that with a good 4C/8T and a good videocard.


That CPU load graph earlier was from streaming on YouTube at 1080p60 (scaled down from the 1440p I was playing at) @ 13Mbps.

Do this with most quad-core CPUs, Quick Sync, or any GPU encoder (VCE, NVENC) and it will look like a muddy, macroblock-ridden smear in games that have some detail and fast motion. Six or more cores noticeably improves the situation, because you can use better encoder settings without overwhelming things and fit more quality into the same bitrate.
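To put numbers on why every ounce of encoder quality matters at Twitch bitrates, here is a rough bits-per-pixel comparison (an illustrative sketch only; the 3.5 Mbps and 13 Mbps figures come from the posts above, and the ~0.1 bpp rule of thumb is a loose assumption, not a hard threshold):

```python
# Rough bits-per-pixel (bpp) comparison: how much data the encoder can
# spend per pixel per frame at Twitch-like vs YouTube-like bitrates.
# Illustrative only -- real quality also depends on content, preset,
# and rate control, not just bpp.

def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average bits available per pixel per frame."""
    return bitrate_bps / (width * height * fps)

twitch = bits_per_pixel(3_500_000, 1920, 1080, 60)    # ~3.5 Mbps Twitch cap
youtube = bits_per_pixel(13_000_000, 1920, 1080, 60)  # 13 Mbps YouTube stream

print(f"Twitch 1080p60:  {twitch:.3f} bpp")   # ~0.028 bpp
print(f"YouTube 1080p60: {youtube:.3f} bpp")  # ~0.104 bpp
```

With barely a quarter of the bits per pixel at the Twitch cap, the only remaining lever is a slower (more CPU-hungry) x264 preset, which is where the extra cores come in.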


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> So why are you talking about the videocard now? I'll tell you why, because if you want to stream BF1 at ultra, it'll need to run good on your system, once you have that, you can stream no problems because it's not like you need double the computational power of your PC to stream what you're playing
> What's HIGHER? There's no higher, that's the max you can stream, there's nothing past that


You are so lost.

Battlefield is an incredibly CPU-intensive game. Most people can't play it on quad-cores, let alone play, record, and stream. The DOTA event illustrated this with that professional DOTA player playing, encoding, and streaming at that res on the fly.

Secondly, nobody is talking about Twitch. Most people publish their videos to YouTube, not Twitch. Many record at 3440x1440. Or is your machine not capable of running those videos on YouTube..?


----------



## oxidized

Quote:


> Originally Posted by *Blameless*
> 
> This is true.


It is true, but it's what they streamed; it wasn't me who picked Dota 2 as the game to show, they did. I just showed what my old PC can do on the same game, even at the same stream quality.
Quote:


> Originally Posted by *Blameless*
> 
> That CPU load graph earlier was from streaming on YouTube at 1080p60 (scaled down from the 1440p I was playing at) @ 13Mbps.
> 
> Do this with most quad-core CPUs, Quick Sync, or any GPU encoder (VCE, NVENC) and it will look like a muddy, macroblock-ridden smear in games that have some detail and fast motion. Six or more cores noticeably improves the situation, because you can use better encoder settings without overwhelming things and fit more quality into the same bitrate.


Hey, I'm not saying it's worse. I'm saying that for a user who wants to stream and play (not run 100 more background processes on top), a good 4C/8T is already enough, even at max, because (again) almost everyone who streams can't do it at more than 1080p/60FPS, and that can already be achieved. At some point the problem comes down to the video card, when you want YOUR game to run as smooth as possible while streaming it at 1080p/60fps.
Quote:


> Originally Posted by *formula m*
> 
> You are so lost.
> Battlefield is an incredibly CPU-intensive game. Most people can't play it on quad-cores, let alone play, record, and stream. The DOTA event illustrated this with that professional DOTA player playing, encoding, and streaming at that res on the fly.
> 
> Secondly, nobody is talking about Twitch. Most people publish their videos to YouTube, not Twitch. Many record at 3440x1440. _Or is your machine not capable of running those videos on YouTube..?_


You are so not funny.
Playing while streaming, and playing while recording to upload afterwards, are two different things. You're still going off what you saw there; you still don't understand that most of what you saw at that show doesn't matter, do you?


----------



## GorillaSceptre

Quote:


> Originally Posted by *darealist*
> 
> There's a reason they compared it to a 6700k in daily tasks, and showcased an 8-core processor in a gaming event. It will be competitively priced against a 6700k with a performance of a 6900k.


This happens every time.. how can I not be disappointed now..

That would turn the industry on its head.


----------



## IRobot23

Quote:


> Originally Posted by *formula m*
> 
> You are so lost.
> Battlefield is an incredibly CPU intensive game. Most people can't play it on quad-cores, let alone play, record & stream. The DOTA event illustrated by that Professional DOTA player, when he was playing, encoding & streaming that rez, on the fly.
> 
> Secondly, nobody is talking about Twitch. Most people publish their videos to YOUTUBE, not twitch. Many record at 3440 x 1440p. _Or, is your machine not capable of running those videos on YouTube..?_


There is no way you could play BF1 64-player maps and stream at 1080p@60 with an i7 6700K at 4.5GHz.

BF1 will max out a 6C/12T in some moments...


----------



## budgetgamer120

Quote:


> Originally Posted by *darealist*
> 
> There's a reason they compared it to a 6700k in daily tasks, and showcased an 8-core processor in a gaming event. It will be competitively priced against a 6700k with a performance of a 6900k.


This is exactly what I said.


----------



## SoloCamo

Quote:


> Originally Posted by *GorillaSceptre*
> 
> This happens every time.. how can I not be disappointed now..
> 
> That would turn the industry on its head.


I wouldn't get your hopes too high for 6700K pricing yet. I've a feeling it's going to land between 6800K and 6850K pricing, which is still pretty great. Even at $500 it's still half the cost of a similarly performing Intel 8C/16T chip.


----------



## Dragonsyph

Guys, Zen is gonna be 5x faster than a 6950X and will only cost 99 dollars.


----------



## PCGamer4Ever

The whole streaming demo seemed mishandled to me. I stream all the time with a 4790K at stock with next to zero performance hit. 1080p, by the way, is often frowned on for live streaming as it limits the viewers in many cases, which is why 720p streaming is so popular. Again, though, this is not an issue, as we are talking about scaling down the video.

The 1080p-and-higher talk refers to recording, not live streaming, which is a different beast.
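The gap between the 720p live streams described here and 1080p-quality recording can be roughed out as the raw pixel throughput the encoder has to chew through (an illustrative sketch; the resolutions and frame rates are just the common presets discussed in this thread):

```python
# Raw pixel throughput the video encoder must process per second for
# common stream presets. A crude proxy for encode load -- actual CPU
# cost also depends on codec and settings.

def pixels_per_second(width, height, fps):
    return width * height * fps

p720_30 = pixels_per_second(1280, 720, 30)    # typical live-stream preset
p1080_60 = pixels_per_second(1920, 1080, 60)  # "recording quality" preset

print(f"720p30 : {p720_30 / 1e6:.1f} Mpx/s")
print(f"1080p60: {p1080_60 / 1e6:.1f} Mpx/s ({p1080_60 / p720_30:.1f}x the work)")
```

The 4.5x jump in raw work is one reason a quad-core that streams 720p30 comfortably can fall over at 1080p60.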


----------



## oxidized

Quote:


> Originally Posted by *IRobot23*
> 
> There is no way you could play BF1 64-player maps and stream at 1080p@60 with an i7 6700K at 4.5GHz.


As long as your video card sucks, you can't even do it with a 6900K.
Quote:


> Originally Posted by *PCGamer4Ever*
> 
> The whole streaming demo seemed mishandled to me. I stream all the time with a 4790K at stock with next to zero performance hit. 1080p, by the way, is often frowned on for live streaming as it limits the viewers in many cases, which is why 720p streaming is so popular. Again, though, this is not an issue, as we are talking about scaling down the video.
> 
> The 1080p-and-higher talk refers to recording, not live streaming, which is a different beast.


exactly


----------



## JJBY

If it is priced at the cost of a 6700K/7700K for a beast 8-core CPU, so many people will upgrade to AMD (myself included).

That's so cheap I really can't justify not upgrading, even if it doesn't overclock well..........

In the end, 8 cores can just do more stuff and will let me congest my PC with more useless background programs!


----------



## GorillaSceptre

Quote:


> Originally Posted by *SoloCamo*
> 
> I wouldn't get your hopes too high yet for 6700k pricing. I've a feeling it's going to be between 6800k & 6850k pricing, which is still pretty great. Even at $500 it's still half the cost of a similar performing Intel 8c16t chip.


Yup, I'm keeping my expectation well in check. Would be awesome though.


----------



## formula m

Oxidized..

Your points are moot. The reason people want 8C/16T is so they can do these things on one computer, instead of two, to produce quality gaming and recording. If you are happy with 720p, why bother responding to people who have been seeking higher?

You obviously feel your system is flawless! But understand, many people simply won't spend their time watching your streams if they're not HD or better..

Anyone I follow uses high-quality video. If not, someone else will... and your stream goes unnoticed.


----------



## budgetgamer120

Quote:


> Originally Posted by *PCGamer4Ever*
> 
> The whole streaming demo seemed mishandled to me. I stream all the time with a 4790K at stock with next to zero performance hit. 1080p, by the way, is often frowned on for live streaming as it limits the viewers in many cases, which is why 720p streaming is so popular. Again, though, this is not an issue, as we are talking about scaling down the video.
> 
> The 1080p-and-higher talk refers to recording, not live streaming, which is a different beast.


The test is "mis-handled" because you stream at 720p?

If you are happy with 720p then fine.


----------



## Blameless

Quote:


> Originally Posted by *GorillaSceptre*
> 
> This happens every time.. How can i not be disappointed now..
> 
> That would turn the industry on it's head.


Zen looks good from what little has been shown, but I fear people are letting their hopes get the best of them. AMD is not going to run tests that don't show Zen in the most positive light possible.

The gaming tests were either primarily GPU limited, or more niche scenarios with an induced CPU limitation. Blender and x264 results were very impressive, but again, these are the exact type of applications one would expect Zen to excel in.

I'm standing by my original prediction (which I can find a link to if I look) of Zen's IPC falling somewhere between Ivy and Haswell overall, matching Skylake at best (x264), and falling somewhere south of Sandy levels at worst (AVX-256 heavy stuff). I expect SMT scaling to be very good, possibly better than Intel's HT (the Hot Chips architecture slides hint at this and the tests run today reinforce it). Modern Intel architectures will probably continue to do better for lightly threaded loads. All in all, it's going to come down to pricing and how far these things can OC... which is what most people not dedicated to one side or the other have been saying for months at this point.

Anyway, if an 8c/16t Zen part falls into the sub-$500 price range and can reach at least 4GHz on all cores, I will own one the month they are out. I think they'll do well, but I'm trying not to get ahead of myself.


----------



## oxidized

Quote:


> Originally Posted by *formula m*
> 
> Oxidized..
> 
> Your points are moot. The reason people want 8C/16T is so they can do these things on one computer, instead of two, to produce quality gaming and recording. If you are happy with 720p, why bother responding to people who have been seeking higher?
> 
> You obviously feel your system is flawless! But understand, many people simply won't spend their time watching your streams if they're not HD or better..
> 
> Anyone I follow uses high-quality video. If not, someone else will... and your stream goes unnoticed.


What are you saying, man? You're really out of your mind; you can do all of that on the same PC at the same time. Am I happy with 720p? It's not like I'm playing at 720p, just streaming at it, and no, I'm not happy with it either. Where did I say I feel my system is flawless? I've already said multiple times I need to upgrade my whole system, because I can't run the newer games I have to play.

"But understand, many people simply won't spend their time watching your streams if it is not HD, or better.."

Listen to yourself. How can I even answer that?


----------



## formula m

Quote:


> Originally Posted by *IRobot23*
> 
> There is no way you could play BF1 64-player maps and stream at 1080p@60 with an i7 6700K at 4.5GHz.
> 
> BF1 will max out a 6C/12T in some moments...


Right, everyone has been trying to tell Oxidized that.

He is suggesting that DOTA + recording + streaming is the epitome of taxing a system and of broadcast quality, and that gamers don't need Zen or more cores, all because he uses Twitch.

8C/16T @ 4GHz is what nearly everyone needs right now.


----------



## Blameless

Quote:


> Originally Posted by *formula m*
> 
> 8C/16T @ 4GHz is what nearly everyone needs right now.


I'm all for it.


----------



## oxidized

Quote:


> Originally Posted by *formula m*
> 
> Right, everyone has been trying to tell Oxidized that.
> 
> He is suggesting that DOTA + recording + streaming is the epitome of taxing a system and of broadcast quality, and that gamers don't need Zen or more cores, all because he uses Twitch.
> 
> 8C/16T @ 4GHz is what nearly everyone needs right now.


I never said such things. You're out of your mind. I'm done with you. Incredible...


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> Zen looks good from what little has been shown, but I fear people are letting their hopes get the best of them. AMD is not going to run tests that don't show Zen in the most positive light possible.
> 
> The gaming tests were either primarily GPU limited, or more niche scenarios with an induced CPU limitation. Blender and x264 results were very impressive, but again, these are the exact type of applications one would expect Zen to excel in.
> 
> I'm standing by my original prediction (which I can find a link to if I look) of Zen's IPC falling somewhere between Ivy and Haswell overall, matching Skylake at best (x264), and falling to somewhere south of Sandy levels at worst (AVX 256 heavy stuff). I expect SMT scaling to be very good, possibly better than Intel's HT (the hotchips architecture slides hint at this and the test run today reinforce this). Modern Intel architectures will probably continue to do better for lightly threaded loads. All in all, it's going to come down to pricing and how far these things can OC...which is what most people not dedicated to one side or the other have been saying for months at this point.
> 
> Anyway, if an 8c/16t Zen part falls into the sub-$500 dollar price range and can reach at least 4GHz on all cores, I will own one the month they are out. I think they'll do well, but I'm trying not to get ahead of myself.


Still not a bad prediction, though. Hopefully supply meets demand and it doesn't turn into an RX 480 fiasco.


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> What are you saying, man? You're really out of your mind; you can do all of that on the same PC at the same time. Am I happy with 720p? It's not like I'm playing at 720p, just streaming at it, and no, I'm not happy with it either. Where did I say I feel my system is flawless? I've already said multiple times I need to upgrade my whole system, because I can't run the newer games I have to play.
> 
> "But understand, many people simply won't spend their time watching your streams if it is not HD, or better.."
> 
> Listen to yourself. How can I even answer that?


Dude, those are the facts. As a consumer, people look for the best quality.

When you search YouTube for a BF1 video, do you think most people look for the lowest resolution available, or the best? When I am searching (at work or at home), I don't even bother with videos that are not HD. With sooo many to choose from, why would someone lower their standards to 720p just because YOU produced it?

You don't have to answer, or listen to me. You just have to understand that 720p is sub-par.


----------



## Ghoxt

AMD would not pull a second coming of Bulldozer with wildly fraudulent demos. Stockholders would sue them into the ground. That's the only likely truth I see here.

If anyone sees it differently, please explain why they think AMD would do that and knowingly suffer the consequences. I don't think so.


----------



## hawker-gb

AMD has a winner on its hands.
I was sceptical, but it looks like Keller did some magic.

Xperia Z5 via Tapatalk


----------



## formula m

Did someone say you can't stream to YouTube..?


----------



## CrazyElf

We may have figured it out. Maybe.

Big thanks to The Stilt for compiling Blender again to use AVX2. We are seeing pretty substantial gains once the AVX instruction sets are used. Thanks to aberrero too for linking. A well-deserved rep to you both.

We think it is because they are using a Blender variant with AVX. Anyway, The Stilt has a recompiled copy of Blender:

https://1drv.ms/u/s!Ag6oE4SOsCmDhFAm03vWlB3s_qeD
Password: "ryzen" (without the quotes)

Quote:


> Originally Posted by *Blameless*
> 
> I can use my R9 290Xs (either or both, but not in conjunction with my CPU for some reason) in Blender and their performance is crap. Unless the RX 480 performs vastly better than the 290X in OpenCL (and I haven't seen that anywhere else), using the OCL renderer would probably slow down these tests.
> 
> I'll experiment with some more settings to see if there is a way for my GPUs to significantly alter the times.
> No. In this case the 3.4GHz octo-core _was_ the 6900K (3.4GHz is its all-core turbo and the most likely clock it would have run at in Blender).


Just checked:
https://www.phoronix.com/scan.php?page=news_item&px=Blender-Slow-OpenCL-AMDGPU

Then there's this:
https://blenderartists.org/forum/showthread.php?239480-2-7x-Cycles-benchmark-%28Updated-BMW%29&p=3072292&viewfull=1#post3072292

Unless AMD's OpenCL drivers have matured a great deal since last summer, there goes that hypothesis. Hmm... it is possible Radeon Crimson ReLive may have improved support. Don't know.

I also have 2x 290X, similar to your setup, so that isn't going to help much because we both have a pair of Hawaii GPUs.

AVX2 performance will probably be weak, but everything else should be very strong indeed. We need more benchmarks, but I'm optimistic that AMD may be back.
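Whether a given machine can even use The Stilt's AVX2 build can be checked from the CPU's advertised flag list. A minimal sketch (assumes Linux-style `/proc/cpuinfo` formatting; the sample string below is a made-up excerpt for the demo, not real output from any of the CPUs discussed here):

```python
# Check whether a CPU advertises an instruction-set extension (e.g. avx2)
# by parsing /proc/cpuinfo-style text. The sample text is fabricated
# for demonstration purposes.

def cpu_has_flag(cpuinfo_text, flag):
    """Return True if `flag` appears in the first 'flags' line."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            return flag in line.split(":", 1)[1].split()
    return False

sample = "flags\t\t: fpu vme sse sse2 ssse3 sse4_1 sse4_2 avx avx2 fma bmi2"
print(cpu_has_flag(sample, "avx2"))  # True for this sample CPU

# On a real Linux box:
# with open("/proc/cpuinfo") as f:
#     print(cpu_has_flag(f.read(), "avx2"))
```

If the flag is missing, an AVX2-compiled Blender binary would fall back to crashing or refusing to run rather than just running slower, so this check matters before comparing times.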


----------



## hawker-gb

Let's see how Intel will now price their overpriced dual-core frauds.
Or they'll send those cheats to the history books where they belong.



----------



## Jpmboy

Quote:


> Originally Posted by *Ghoxt*
> 
> AMD would not pull a second coming of Bulldozer by wildly fraudulent demos. Stockholders would sue them into the ground. That's the only likely truth I see here.
> 
> If anyone see's different please explain today why they think AMD would do that? And knowingly suffer the consequences? I don't think so.


Dude, it works both ways: overly optimistic forecasts get you sued by _longs_ if not met; sandbagged forecasts get you sued by _shorts_ if exceeded. You either walk the centerline or do the perp walk, from the stock market's perspective.


----------



## Catscratch

https://www.google.com/finance?q=NASDAQ:AMD

Whoa, I hadn't realized they're really back to Nov 2007 stock levels.

A good chip will do wonders for them.


----------



## sugarhell

If this thing has Haswell IPC on average and can hit 4GHz for 400-500 bucks, then it is a success.

It will shake up the CPU market a lot and it will help AMD. I think AMD has been waiting on this release for quite a long time. It is not hyped at all by AMD compared to their other products. And that's a good thing.


----------



## Kuivamaa

Quote:


> Originally Posted by *SoloCamo*
> 
> I wouldn't get your hopes too high yet for 6700k pricing. I've a feeling it's going to be between 6800k & 6850k pricing, which is still pretty great. Even at $500 it's still half the cost of a similar performing Intel 8c16t chip.


I am very interested in it. I expect the platform switch to cost me around 1100 euros. 600 for the top 8C SKU and 400-500 for high end mobo plus RAM. Anything less for the CPU and I will be pleasantly surprised.


----------



## bossie2000

So Zen+ is coming later in 2017. Could be a good year for AMD..


----------



## Kuivamaa

Quote:


> Originally Posted by *bossie2000*
> 
> So Zen+ is coming later in 2017. Could be a good year for AMD..


Have a source for that? I would expect Ryzen+, or whatever the name will be, around a year after plain Ryzen. And I expect first gen to be in stores by early Feb 2017.


----------



## Newwt

I thought they already confirmed the release for Summit Ridge was 1/17/17.


----------



## Marios145

This is 2005 all over again.
"Dual-cores are only useful for rendering and video encoding"
"For MY needs, a single core is enough"
"There are only a few games that take full advantage of dual-cores"


----------



## oxidized

Quote:


> Originally Posted by *Marios145*
> 
> This is 2005 all over again.
> "Dual-cores are only useful for rendering and video encoding"
> "For MY needs, a single core is enough"
> "There are only a few games that take full advantage of dual-cores"


Yeah, quite like that, but the exact opposite


----------



## JakdMan

Quote:


> Originally Posted by *Kuivamaa*
> 
> Have a source for that? I would expect Ryzen+ or whatever the name will be,around a year after plain Ryzen. And I expect first gen to be at stores by early Feb 2017.


Well, we've all had their product map for a while, so, yeah. You two go at each other, I guess.


----------



## EniGma1987

Quote:


> Originally Posted by *budgetgamer120*
> 
> The CEO clearly said the slowest Zen will be 3.4GHz base. So there will be higher-clocked ones.


I guess the way you are thinking of "base" and the way I am are very different. When she said base clock, to me that means the default speed without boost, not the lowest base speed all the models will come at. The way she said 3.4GHz for the 8-core model and then said maybe higher looked to me like AMD was trying to squeeze a little more speed out of the processors but wasn't sure they could.


----------



## budgetgamer120

Quote:


> Originally Posted by *EniGma1987*
> 
> I guess the way you are thinking of "base" and the way I am is very different. When she said base clock that to me is the default speed without boost, not the lowest base speed all the models will come at. The way she said the 3.4Ghz for the 8 core model and then said maybe higher to me was looking like AMD was trying to squeeze a little more speed out of the processors to sell at but werent sure they could.


I forgot that everything regarding AMD should always be taken the worst way.

What she said only means one thing: Zen's slowest 8-core will be 3.4GHz.


----------



## EniGma1987

Quote:


> Originally Posted by *budgetgamer120*
> 
> I forgot that everything regarding AMD should always be taken the worst way.
> 
> What she said only means one thing: Zen's slowest 8-core will be 3.4GHz.


LMAO, I'm not trying to take it the worst way at all. Did you watch the stream and see what she said when announcing the clock speed? I guess one of us will realize how wrong we were when the parts come out.


----------



## ladcrooks

Quote:


> Originally Posted by *Ghoxt*
> 
> AMD would not pull a second coming of Bulldozer by wildly fraudulent demos. Stockholders would sue them into the ground. That's the only likely truth I see here.
> 
> If anyone see's different please explain today why they think AMD would do that? And knowingly suffer the consequences? I don't think so.


I agree; hang yourself once, that's enough.

I am confident this time round. So optimistic, in fact, that I sold my i7-6700K and am using a £55 Pentium Dual Core G4400 3.30GHz as a gap filler.

Then the gap filler can go elsewhere in the house.


----------



## SoloCamo

Quote:


> Originally Posted by *Marios145*
> 
> This is 2005 all over again.
> "Dual-cores are only useful for rendering and video encoding"
> "For MY needs, a single core is enough"
> "There are only a few games that take full advantage of dual-cores"


Yup, I hopped on the dual core bandwagon pretty early then skipped over quads as I stretched my overclocked socket 939 system and went straight to 8 threads in 2011. My next cpu will be 16 threads probably in late 2017-2018.


----------



## budgetgamer120

Quote:


> Originally Posted by *ladcrooks*
> 
> I agree; hang yourself once, that's enough.
> 
> I am confident this time round. So optimistic, in fact, that I sold my i7-6700K and am using a £55 Pentium Dual Core G4400 3.30GHz as a gap filler.
> 
> Then the gap filler can go elsewhere in the house.


I am contemplating selling my X99 setup for Zen. I definitely won't be buying Intel's $1000 CPU.


----------



## JackCY

So you will buy AMD's $1000 CPU instead?
Price wars aren't going to happen much when there are barely two competing companies.
I expect no better performance than Haswell, so it's not worth upgrading, and even if there is a price drop it will simply fall to where prices were two years ago when I bought.


----------



## BinaryDemon

Nice work AMD. It will be interesting to see how Intel responds with pricing cuts.


----------



## motoray

Quote:


> Originally Posted by *JackCY*
> 
> So you will buy AMD's $1000 CPU instead?
> Price wars aren't going to happen much when there are barely two competing companies.
> I expect no better performance than Haswell, so it's not worth upgrading, and even if there is a price drop it will simply fall to where prices were two years ago when I bought.


It won't be $1000. They need to take market share back. The only ways they will are to either outright blow out Intel in performance (not going to happen) or to make having more cores cost less. Assuming the rumors of matching Haswell IPC, it's a core-count upgrade that people need to want. And for the majority of people on Intel that means a new system, so they can't just match prices.


----------



## Ha-Nocri

Why do you people think they didn't show any single-threaded benchmark?! My guess is that Zen is not as good in that department. Hope I'm wrong, though.


----------



## duganator

All of that is well and good, but what render setting are you using? Super fast?
Quote:


> Originally Posted by *oxidized*
> 
> Of course I do, I just stream downsampled; after all, my system can't really do more than that (2600K at default, turbo boost 4.46GHz - GTX 580 1.5GB)
> 
> Anyway, i've just finished my 3 streamings
> 
> The first - I set OBS 60 FPS streaming and no scale from source, so as i'm playing it 1080p. In this i also show the settings i'm using on Dota, and those will be the same for every streaming i made.
> 
> https://www.twitch.tv/rockiieee/v/107399884
> 
> The second - I set OBS 60 FPS but this time with 1.5 downscale, to 720p, same settings.
> 
> https://www.twitch.tv/rockiieee/v/107400668
> 
> The third, the one i'd use with my current PC - I set OBS to 30 FPS and again 1.5 downscale, so 720p again, same settings.
> 
> https://www.twitch.tv/rockiieee/v/107401548
> 
> Now, it's true the workload in the task manager is higher than I expected, but it isn't anywhere close to what AMD showed (remember I'm running an i7 2600K stock, +boost | PNY GTX 580 1.5GB)
> 
> With the first settings it isn't very enjoyable for me; I'm around 50/60 fps, it's not that smooth, and the stream is pretty much unwatchable. The second is far better for me, I get 70+ FPS, so totally enjoyable, and slightly better for the stream, but still unwatchable. With the third I'm all good, with 70/80+ FPS, and the stream runs at a solid 30 fps; sometimes it drops a bit but it's not even noticeable, and this is surely watchable, decent, nothing super, and I've really no problems with FPS from my POV.
> Keep in mind I'm also running 2 monitors; the second is pretty small though, 5:4 1280x1024 60Hz, and the main is a 23" FHD 60Hz. I had at least 10 other tasks running and at least 5 Chrome tabs
> 
> 
> 
> 
> So in the end, yes, at 60FPS streaming and above it struggles, especially at native resolution, but this was nothing close to what AMD showed, UNLESS they were playing at UHD and streaming at FHD or above
> 
> Don't mind my dota, i'm horrible, after all i was just testing


----------



## SuperZan

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Why do you people think they didn't show any single-threaded benchmark?! My guess is that Zen is not as good in that department. Hope I'm wrong, though.


I don't think it means that Zen will be 'bad' in single-threaded workloads. Certainly it will be improved over the construction cores. I just think (purely my speculative opinion) that they would have had to go back to SB to show a string of clear 'wins' in ST performance where with the MT performance they were able to show parity with a latest-and-greatest Intel part. Bigger wow-factor than showing ST performance that bumps up against Haswell, sometimes slips to IB, and makes a good run at Skylake in some particular workload... though we as enthusiasts would know how significant all of that would be, the target audience for the New Horizon event likely would not find it as compelling.


----------



## andydabeast

If Vega gets 60 fps in Battlefront and a 1080 gets 48 fps, then Vega > GTX 1080? It is newer... so it should be, lol, but good to see it is capable.

http://www.techspot.com/review/1174-nvidia-geforce-gtx-1080/page4.html


----------



## SoloCamo

Quote:


> Originally Posted by *andydabeast*
> 
> If vega gets 60 fps in Battlefront and a 1080 gets 48 fps then vega > gtx 1080? It is newer... so it should lol but good to see it is capable.
> 
> http://www.techspot.com/review/1174-nvidia-geforce-gtx-1080/page4.html


Are we 100% sure they used max settings? Just lowering post-processing from ultra to medium nets me nearly 10-15 fps at 4K when everything else is at ultra on my 290X.


----------



## andydabeast

Quote:


> Originally Posted by *SoloCamo*
> 
> Were we 100% they used max settings? Just lowering post processing to medium from ultra nets me near 10-15fps at 4k when the rest is at ultra on my 290x.


That's the thing. I thought she said it was maxed, but you're right that they could easily have left one setting down a click. It would have been great if they had switched the stream to her computer and just clicked through the options.


----------



## aznsniper911

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Why do you people think they didn't show any single-threaded benchmark?! My guess is that Zen is not as good in that department. Hope I'm wrong, though


I feel general consumers don't care about IPC/ST performance anymore. It's really sad, actually, to see how big an impact marketing has on people's mentality.


----------



## }SkOrPn--'

I've been gone for over half a day now. Has anyone figured out the discrepancies with AMD's testing on the 6900K? Yesterday, everyone with a 6900K said they couldn't match AMD's test results, so people believed something was wrong, or that AMD had supplied the wrong test files, and emailed AMD to report it.

Has anything changed yet? Did we figure out why such a discrepancy existed? I am very curious to know.


----------



## Dragonsyph

Quote:


> Originally Posted by *andydabeast*
> 
> That is the thing. I thought she said it was max but you are right that they can easily leave one setting down a click. Would have been great if they switched the stream to her computer and just clicked through the options.


And it's just a single-player map, doing nothing but flying around in the new DLC.


----------



## budgetgamer120

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Why do you people think they didn't show any single-threaded benchmark?! My guess is that Zen is not as good in that department. Hope I'm wrong, though


You would have had a point if AMD hadn't compared Zen to a CPU with the same 8 cores and 16 threads. That means everything is close to par with Intel.

Who makes a 16-thread CPU and brags about single-thread performance? This isn't some garbage i3 or i5 competitor.


----------



## oxidized

Quote:


> Originally Posted by *duganator*
> 
> All of that is well and good, but what render setting are you using? Super fast?


Haven't touched it from the default setting, should be very fast...


----------



## IRobot23

Quote:


> Originally Posted by *andydabeast*
> 
> If vega gets 60 fps in Battlefront and a 1080 gets 48 fps then vega > gtx 1080? It is newer... so it should lol but good to see it is capable.
> 
> http://www.techspot.com/review/1174-nvidia-geforce-gtx-1080/page4.html


It depends on the map and gameplay; flying is usually less intensive.
Don't hype it without a decent comparison; some people say Vega will land between the GTX 1070 and GTX 1080.

The same goes for Ryzen: we saw only two benchmarks, we don't know pricing, and we don't know how well it will do in other important scenarios.

But we can expect it to destroy the last generation of AMD CPUs.


----------



## kd5151

Quote:


> Originally Posted by *IRobot23*
> 
> depend on map and gameplay. Usually flying is less intensive.


True.


----------



## Tobiman

I record my gameplay with Quick Sync on my 4670K clocked to 4.5GHz, and it's probably one of the best ways to record at 1440p while still staying above 60fps. NVENC, I hear, is also good for high-res recording and doesn't tax the CPU as much, but I don't have an Nvidia GPU, and my R9 290 can't record 1440p even at 30fps with ReLive. 1080p @ 60fps recording is decent, but being able to record 1440p @ 60fps is even better.

The games I record most are racing games at max settings, with regular CPU usage in the 40-60% range depending on the number of opponents and the complexity of the scene.

Once I start recording, that number jumps to 90-95%, and although it's rare, I get huge frame dips that sometimes ruin my race. It's not super common, but it happens from time to time. Obviously, I could use ReLive to record at 1080p60 and not get any dips, but I want to record at 1440p60.

If Zen can do x264 encoding on the fly, I'd buy it in a heartbeat.
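A rough way to see why 1440p60 software (x264) recording is so much heavier than 1080p60: encode cost grows roughly with pixels per second. This is a first-order estimate only; the real cost also depends on the x264 preset and the scene content:

```python
# Rough encode-load comparison: x264 work grows roughly with the number of
# pixels encoded per second, so 1440p60 is a substantially heavier software
# encode than 1080p60. First-order estimate; preset and content also matter.
def pixel_rate(width: int, height: int, fps: int) -> int:
    """Pixels per second the encoder has to process."""
    return width * height * fps

ratio = pixel_rate(2560, 1440, 60) / pixel_rate(1920, 1080, 60)
print(f"1440p60 vs 1080p60 encode load: {ratio:.2f}x")  # 1.78x
```

That ~1.8x jump in encoder work is consistent with CPU usage climbing from the 40-60% range into the 90s once recording starts.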


----------



## }SkOrPn--'

Quote:


> Originally Posted by *budgetgamer120*
> 
> You would have had a point of AMD didn't compare Zen to a cpu with the same 8 cores and threads. That means everything is close to par with Intel.
> 
> Who makes a 16-thread CPU and brags about single-thread performance? This isn't some garbage i3 or i5 competitor.


The Blender test just shows that AMD has caught up to Intel's 6900K performance. Now concentrate on the fact that Lisa Su kept focusing on Intel's CPU costing $1100; in fact, they mentioned it SIX times throughout the demos. What this means is that they plan on shocking the world later, probably at CES, with a very competitively priced chip. You do NOT keep mentioning how expensive your competitor is and then match their prices, or even get close for that matter.

AMD plans to drop a bomb and will probably price this 8-core Ryzen somewhere between $500 and $900; my guess is closer to $799-850. They don't just want to make money, they want market share back, and NO ONE is going to jump Intel's ship for AMD if it costs roughly the same. People will only jump ship if the performance is just as good for less money. THAT is how AMD gets market share back and competes with Intel again. Besides, AMD is used to pricing chips lower, so it's no skin off their backs.


----------



## Newwt

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> The blender test just shows that AMD has caught up to Intel's 6900K performance. Now you need to concentrate on the fact that Lisa Su was focusing on the fact that Intel's CPU is $1100. In fact they mentioned it SIX times throughout the demo's. What this means is they plan on shocking the world later, probably at CES with a very competitively priced chip. You do NOT mention how expensive your competitor is and then match their prices, or even get close for that matter.
> 
> AMD plans to drop a bomb and will probably price this 8 Core Ryzen somewhere between $500 and $900, and my guess is closer to $799-850. They don't just want to make money, they want market share back and NO ONE is going to jump Intel's ship for AMD if it costs relatively the same. People will only jump ship if the performance is just as good for less cost. THAT is how AMD gets market share back and competes with Intel again. Besides, AMD is very used to pricing chips lower so its no skin off their backs.


I honestly don't even believe it will be that expensive


----------



## Tobiman

Everyone is yapping about the Blender test, but what about the Handbrake one? Was that impressive or not?


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Newwt*
> 
> I honestly don't even believe it will be that expensive


Let's all keep our fingers crossed, because if it's, say, $500, it's going to be the single largest PC shake-up since the 1990s... And AMD can afford to do it, as they have spent years structuring their company to handle just this sort of thing.


----------



## hunterwindu

I agree with SkOrPn; my guess is that the top SKU will be around $800. And that's still $300 saved that you could spend on a motherboard + RAM. Of course, that assumes it's very competitive with the 6900K and Intel doesn't drop prices hard.


----------



## IRobot23

Quote:


> Originally Posted by *Tobiman*
> 
> Everyone is yapping about the blender test but what about the handbrake one? Was that impressive or not?


Yep.
Do you know what's most impressive?
That AMD's 8-core/16-thread part can keep up with Intel's 8-core/16-thread part. Since AMD is new to SMT, basically everyone would expect it to scale worse than Intel's Hyper-Threading, in which case AMD would need better IPC to perform the same.

So does SMT scale superbly, or does AMD actually have really good IPC? Maybe superb MT scaling?...
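The question above can be put in numbers with a quick back-of-envelope sketch. The run times below are hypothetical placeholders, not measurements from the event; to separate SMT scaling from raw IPC you would need the same chip timed with SMT enabled and disabled, which AMD did not show:

```python
# Back-of-envelope SMT yield estimate. The times are made-up examples:
# to disentangle SMT scaling from IPC, compare the same CPU on the same
# workload with SMT on vs off.
def smt_yield(t_smt_off: float, t_smt_on: float) -> float:
    """Fractional speedup from enabling SMT on a fixed workload."""
    return t_smt_off / t_smt_on - 1.0

# e.g. a render taking 32 s with SMT off and 25 s with SMT on
print(f"SMT yield: {smt_yield(32.0, 25.0):.0%}")  # 28%
```

Typical Hyper-Threading gains on well-threaded workloads are often quoted in the 20-30% range, so a figure in that neighborhood would suggest Zen's SMT is pulling its weight rather than exceptional IPC doing all the work.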


----------



## doza

Or maybe she was just comparing it to Intel's $1000 CPU to show it can match it performance-wise... she, and all of us, know how slow AMD CPUs have been compared to Intel in multithreaded work. Maybe it has nothing to do with Zen going low-price; she's just happy it can match it in that performance segment.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *IRobot23*
> 
> Do you know what is most impressive? That actually AMD 8 core/16 treads can keep up with INTEL 8 core/16 threads. Since AMD is new to the SMT technology, basically every one would expect to scale worse than INTELs


AMD may be inexperienced with hyper-threading, but it was Jim Keller who designed Zen in the first place, not AMD. You don't hire one of the most talented chip engineers on planet Earth to head Zen development and not expect him to know how to do hyper-threaded designs. As far as I'm concerned, it was Jim Keller, not AMD, that is giving us this new PC frontier; AMD and Mark Papermaster are just properly implementing Jim's work.


----------



## Newwt

If they were going to price it over $500, I don't think the 6700K would have been on stage with the 6900K. Also, I could be wrong, but I don't think AMD has released a CPU over $500 since the early 2000s... I think it was an Athlon 64 FX or something along those lines.

But who knows; at this point, pricing is all speculation.


----------



## }SkOrPn--'

Yeah, pricing is all speculation at this point. Man, I sure wish Lisa had given us pricing info during those demos. I hope AMD kicks back hard, just because I know what it will do for our PC industry; NOTHING is better than good competition. The next couple of years will be interesting to say the least, both for CPUs and GPUs, lol...


----------



## mouacyk

Quote:


> Originally Posted by *Newwt*
> 
> If they were going to price it over $500 I don't think the 6700k would of been on stage with the 6900k. Also I could be wrong but I don't think AMD has released a CPU over $500 since what like 1999 or the early 2000's? I could be wrong but I think it was like a Athlon 64 FX or something along those lines.
> 
> But who knows, at this point pricing is all speculation.


FX 9590 for $900!

src: http://www.anandtech.com/show/8316/amds-5-ghz-turbo-cpu-in-retail-the-fx9590-and-asrock-990fx-extreme9-review


----------



## IRobot23

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> AMD may be inexperienced with hyper-threading, but it was Jim Keller who designed Zen in the first place, not AMD. You don't hire one of the most talented chip engineers on planet Earth to head Zen development and not expect him to know how to do hyperthreading designs. As far as I am concerned it was Jim Keller, not AMD that is giving us this new PC frontier. AMD and Mark Papermaster is just properly implementing Jim's work.


One person cannot design a whole chip; saying he designed Zen by himself is just nonsense. Maybe you mean that Jim Keller was the leader of the Zen team.

As for AM4, it just cannot compare to LGA 2011-3; I think AM4 sits between LGA 1151 and LGA 2011-3. That's why it might become a really great deal at the right price. Buying Ryzen over an i7 6900K at the same price is just a bad deal for me; the same goes for the i7 6800K, since LGA 2011-3 offers more.
So I hope we will see pricing below $500...


----------



## rancor

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> I been gone for over half a day now. Has anyone figured out the discrepancies with AMD's testing on the 6900K? Yesterday everyone with a 6900K said they couldn't match AMDs test results, so people were believing something was wrong, or AMD supplied the wrong test files and emailed AMD to report it.
> 
> Has anything changed yet? Did we figure out why such a discrepancy existed? I am very curious to know.


The settings in the Blender file supplied by AMD are wrong; it is set to 200 samples.

The earlier test shown to the press is confirmed to have used 100 samples. Times from an equivalent system match AMD's result (6900K) at ~25 seconds.

The settings used for the livestream are unknown but may be 128 samples.
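Assuming Cycles render time scales roughly linearly with the sample count, a quick sanity check of the numbers in this thread looks like this. The linearity is an approximation (scene setup and BVH build add a fixed cost), and the 128-sample figure for the livestream is the guess above, not a confirmed setting:

```python
# Cycles render time is roughly proportional to the sample count, so a file
# left at its 200-sample default should take about twice as long as the
# 100-sample press run. Times are the ones reported in this thread.
def scaled_time(t_ref: float, samples_ref: int, samples: int) -> float:
    """Estimate render time at a new sample count, assuming linear scaling."""
    return t_ref * samples / samples_ref

press_6900k = 25.0  # seconds at 100 samples (press demo)
print(scaled_time(press_6900k, 100, 200))  # ~50 s expected from the supplied file
print(scaled_time(press_6900k, 100, 128))  # ~32 s, close to the ~35 s livestream
```

This is consistent with why people running AMD's supplied file could not reproduce the demoed times: at 200 samples, roughly double the press-run duration is exactly what you'd expect.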


----------



## dmasteR

Quote:


> Originally Posted by *andydabeast*
> 
> If vega gets 60 fps in Battlefront and a 1080 gets 48 fps then vega > gtx 1080? It is newer... so it should lol but good to see it is capable.
> 
> http://www.techspot.com/review/1174-nvidia-geforce-gtx-1080/page4.html


That benchmark is also heavily outdated, and we don't know the other variables like map, player count, etc.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> AMD may be inexperienced with hyper-threading, but it was Jim Keller who designed Zen in the first place, not AMD. You don't hire one of the most talented chip engineers on planet Earth to head Zen development and not expect him to know how to do hyperthreading designs. As far as I am concerned it was Jim Keller, not AMD that is giving us this new PC frontier. AMD and Mark Papermaster is just properly implementing Jim's work.


Let's not forget that the last time AMD had Keller, they had fantastically competitive CPUs. I doubt they would have worked on Ryzen for 4 years without being sure it's going to perform.

As for pricing, RedGamingTech got an email from the person who made up the prices that everyone's been passing around as "leaked" for the last few weeks (a few other sites got it too). So as far as prices are concerned... they were pulled out of somebody's ass, so take them with a mountain of salt.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *rancor*
> 
> The settings in blender file supplied by AMD are wrong. It is set to 200 samples.
> 
> The earlier test shown to the press is confirmed to use 100 samples. The times from equivalent system match the results from AMD (6900k) at ~25 seconds.
> 
> The setting used for the livestream are unknown but may be 128 samples.


OK, thanks for the confirmation. I thought I timed their Blender test at 36 seconds though? I tried it twice and get ~36 seconds, not ~25.

LOL, my measly hexa-core Xeon does it in 1:33... haha


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> Let's not forget that the last time AMD had Keller they had fantastically competitive CPU's, I doubt they would of worked on Ryzen for 4 years and not be sure it's going to perform.


My thoughts exactly. Jim is just a god when it comes to designing silicon, and his credentials prove that time and time again. Just not sure why he went to Tesla though, haha. Must have been a really big check to pull him out of the PC industry, where he shines.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> My thoughts exactly. Jim is just a God when it comes to designing silicon and his credentials prove that time and time again. Just not sure why he went to Tesla though, haha. Must have been a really big check to get out of the PC industry where he shines at.


I was thinking about him going to Tesla; could it be they need a processor for a smart car?

Either way, Ryzen is going to be at least good enough to put AMD back into the market for a while.

On another note, do you guys think we'll have reviews on launch day, or will it be days or weeks after launch? I want to jump on this fast, provided it performs and decent motherboards are out, and I know Aussie shops probably won't get much stock as usual.


----------



## rancor

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> OK thanks for the confirmation. I thought I counted off that blender test of theirs at 36 seconds though? I tried it twice and get ~36 seconds, not ~25.
> 
> LOL, my measly hexa Xeon only does it at 1:33.... haha


The livestream run is ~35s, and its settings are unknown.

This is from the press conference before the livestream; it is ~25s, and the settings are confirmed at 100 samples, not the 200 of the file AMD supplied. You can change it; there are directions earlier in the thread, though I don't remember how.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> I was thinking about him going to Tesla, could be they need a processor for a smart car?
> 
> Either way Ryzen is going to be at least good enough to put AMD back into the market for a while.
> 
> On another note, you guys think we'll have reviews on launch day or will it be days / weeks after launch? wanna jump on this fast providing it performs and decent motherboards are out and I know Aus shops probably wont get much stock as usual.


I don't think so; Tesla's in-vehicle supercomputer is powered by the NVIDIA DRIVE PX 2 AI computing platform for all new Teslas, for the next few years at least. But maybe they are going to build an in-house AI system so they don't have to rely on Intel or NVIDIA, or anyone else for that matter, which I find very smart if they're going that route. Why pay NVIDIA $10K per module if they can beat it for much, much less?

I think we will have day-one reviews from the big third parties such as Tom's and AnandTech, if not sooner.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *rancor*
> 
> The live stream is ~35s and the settings are unknown.
> 
> This is from the press conference before the livestream it is ~25s and the setting are confirmed at 100 samples not the 200 of the file AMD supplied. You can change it there are direction earlier in the thread I don't remember how.


Lol, yeah, no matter what, AMD has a winning design on their hands. They should be able to improve on it for several years to come. I'm not sure how Intel is going to respond with Kaby Lake now, as that is practically finished work. I think Intel will need to dump X99 and introduce a new top-tier chipset already.

A VERY interesting 2017 is ahead of us. VERY interesting indeed...


----------



## budgetgamer120

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> The blender test just shows that AMD has caught up to Intel's 6900K performance. Now you need to concentrate on the fact that Lisa Su was focusing on the fact that Intel's CPU is $1100. In fact they mentioned it SIX times throughout the demo's. What this means is they plan on shocking the world later, probably at CES with a very competitively priced chip. You do NOT mention how expensive your competitor is and then match their prices, or even get close for that matter.
> 
> AMD plans to drop a bomb and will probably price this 8 Core Ryzen somewhere between $500 and $900, and my guess is closer to $799-850. They don't just want to make money, they want market share back and NO ONE is going to jump Intel's ship for AMD if it costs relatively the same. People will only jump ship if the performance is just as good for less cost. THAT is how AMD gets market share back and competes with Intel again. Besides, AMD is very used to pricing chips lower so it's no skin off their backs.


I am thinking $599 tops


----------



## Dimaggio1103

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> AMD may be inexperienced with hyper-threading, but it was Jim Keller who designed Zen in the first place, not AMD. You don't hire one of the most talented chip engineers on planet Earth to head Zen development and not expect him to know how to do hyperthreading designs. As far as I am concerned it was Jim Keller, not AMD that is giving us this new PC frontier. AMD and Mark Papermaster is just properly implementing Jim's work.


I know you've stated you're an AMD fan, but giving one man all the credit comes off as biased against the rest of AMD. One man does not engineer a chip; that's just misleading. I know you meant it in an overly simplistic way, but still.

Others in here can disavow AMD all day long; it won't change the fact that they are still alive today, despite being almost entirely pushed out of market share on both fronts. They have always come back with something good. They did it with the Radeon department, and I expect the same now with Mr. Keller on board. Let's take the evidence so far and try to be pragmatic here: AMD NEEDS this chip to do well, so it stands to reason it will most likely be good enough to compete with consumer-class i7s. That's a really good thing for us all.


----------



## budgetgamer120

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> AMD may be inexperienced with hyper-threading, but it was Jim Keller who designed Zen in the first place, not AMD. You don't hire one of the most talented chip engineers on planet Earth to head Zen development and not expect him to know how to do hyperthreading designs. As far as I am concerned it was Jim Keller, not AMD that is giving us this new PC frontier. AMD and Mark Papermaster is just properly implementing Jim's work.


Wait... didn't AMD hire Jim?
If so, then AMD designed this chip. Jim didn't work for free.


----------



## M3TAl

We can thank these people (and likely many more) for Zen. All this "Jim Keller is GOD and made it by himself with his eyes closed" stuff gets really old. You might as well slap the rest of the team in the face.

http://www.mystatesman.com/business/amid-challenges-chipmaker-amd-sees-way-forward/KVpONwiwUaengs02EwigbK/


Quote:


> Advanced Micro Devices is counting on its new Zen processor core to help boost the company's fortunes. Members of the Zen design team are: Mike Clark, front left, and team leader Suzanne Plummer; in the background, from left, are Teja Singh, Lyndal Curry, Mike Tuuk, Farhan Rahman, Andy Halliday, Matt Crum, Mike Bates and Joshua Bell. RICARDO B. BRAZZIELL / AMERICAN-STATESMAN


----------



## Shatun-Bear

Quote:


> Originally Posted by *M3TAl*
> 
> We can thank these people (and likely many more) for Zen. All this Jim Keller is GOD and he made it by himself with his eyes closed stuff gets really old. Might as well slap the rest of the team in the face.
> 
> http://www.mystatesman.com/business/amid-challenges-chipmaker-amd-sees-way-forward/KVpONwiwUaengs02EwigbK/


Lol true. Other people deserve equal credit.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *M3TAl*
> 
> We can thank these people (and likely many more) for Zen. All this Jim Keller is GOD and he made it by himself with his eyes closed stuff gets really old. Might as well slap the rest of the team in the face.


Yeah, Keller didn't do it by himself, but you can't deny that without him Zen may not have been what it potentially is. AMD needed someone from the outside who is damn good at what they do to help them get out of the crappy CPU rut they've been in for 10 years. And as I said earlier, when Keller used to work at AMD, they had some extremely good processors; it's not really a coincidence that he comes back, works on Zen, and now AMD has something good enough to challenge Intel.

I'm not saying it was him alone, but he was probably a good driving force behind it.


----------



## Pro3ootector

So with the sampling adjustment, how does Ryzen perform in comparison to FX?


----------



## kd5151

Quote:


> Originally Posted by *Pro3ootector*
> 
> So with the sampling adjustment how does RYZEN perform in compasion to FX?


Post #644:

8350 @ 4.7GHz: 1:13 @ 100 samples


----------



## epic1337

Quote:


> Originally Posted by *kd5151*
> 
> Page 65
> 
> 8350 @4.7 1:13 @ 100


rather than page, mention the post# instead, on my settings we're still on page 21.


----------



## kd5151

Quote:


> Originally Posted by *epic1337*
> 
> rather than page, mention the post# instead, on my settings we're still on page 21.


Updated


----------



## X-Nine

I've read a lot of posts here and the only thing I can conclude is this:

Wait for real world benchmarks and results. We've been burned before.

Now, I'm not saying that Ryzen will be Bulldozer all over again. I was a die-hard AMD user until that fiasco, so please don't insinuate that I'm anti-AMD this or that. I truly do want them to succeed (it's stupid not to; even if the performance doesn't meet or exceed Intel's, if it even comes close, it brings prices down for EVERYONE, which is good!), but I think people should be cautious and not jump to any conclusions just yet.

Then again:


----------



## Colin1204

Quote:


> Originally Posted by *M3TAl*
> 
> We can thank these people (and likely many more) for Zen. All this Jim Keller is GOD and he made it by himself with his eyes closed stuff gets really old. Might as well slap the rest of the team in the face.
> 
> http://www.mystatesman.com/business/amid-challenges-chipmaker-amd-sees-way-forward/KVpONwiwUaengs02EwigbK/


While I completely agree with you, I think it's more like how people focus on a band's frontman much more than on the other members. Jim is a super-high-profile figure known for heading some of the best work in his field pretty much wherever he goes. I don't think he _doesn't_ deserve the credit, but these people need to get theirs as well.

Jim is a genius, and I don't think many will argue that, but the man simply could not do it alone, and AMD has some seriously talented engineers to pull off what they do with such a limited budget compared to Intel (despite the BD microarchitecture).


----------



## SuperZan

Quote:


> Originally Posted by *Colin1204*
> 
> While I completely agree with you, I think its more like how people focus on the lead of a band much more than the other band members. Jim is a super high profile figure who is known for heading some of the best work in his field pretty much wherever he goes. I don't think that he doesn't deserve so much credit, but these people need to get theirs as well.
> 
> Jim is a genius, I don't think many will try to argue that, but the man could simply not do it alone, and AMD has some seriously talented engineers to pull off what they do with such a limited budget when compared to Intel (Despite BD micro arch)


It's also always worth mentioning that Bulldozer wasn't just an idea spun out from the engineering department. Management had a particular track they wanted to take with BD and the engineering team was told to design around that concept. AMD has good engineers... there are only so many jobs at Intel for x86-focused people and AMD is the only other (real) game in town. The problem for AMD has been mismanagement leading to drastic disparity in resources relative to their competitors. Bringing in Jim Keller was a great move but I think that bringing him in was itself a signal that AMD was going to let engineers lead the way on Zen development. That is just as important as Keller's work itself.


----------



## Raghar

I read the words "prediction by neural network". Well, as someone who has done AI (in my spare time) for 20 years, I know that neural networks suck for this stuff. Then I saw the horrible part about automatic overclocking as high as the heatsink allows...

Is it time to rename this site to autooverclock.net?


----------



## SuperZan

Quote:


> Originally Posted by *Raghar*
> 
> I read words. "Prediction by neural network". Well as someone who did AI (in spare time) for 20 years I know that neural networks sucks for this stuff. Then I seen horrible part about automatic overclocking as high as heatsink allows...
> 
> Is it time to rename this site to autooverclock.net?


It's likely just AMD's latest Turbo iteration. I'd be very surprised to learn that we couldn't manually override it for our own overclocks.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *budgetgamer120*
> 
> I am thinking $599 tops


I really like your thinking.

Let's just hope they didn't focus on the $1100 figure only to charge $999; I would be really bummed out after all my anticipation of Zen.


----------



## lombardsoup

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> I really like your thinking.
> 
> Let's just hope they didn't focus on the $1100 figure only to charge $999; I would be really bummed out after all my anticipation of Zen.


Under $800 would be very reasonable considering what we're getting.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Raghar*
> 
> Is it time to rename this site to autooverclock.net?


It's just another feature that you can disable for manual overclocking. I think they even mentioned this, but I can't remember if they did or not.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *lombardsoup*
> 
> Under $800 would be very reasonable considering what we're getting.


Yesterday I predicted $899 myself, but today I'm thinking $799 or less, just because they keep mentioning how expensive Intel is. I would be surprised if they offer the 8-core at anything more than $700 now. $700-800 for a desktop consumer part still isn't cheap...

However, $500 would shake the industry, and probably even $600 would. I would LOVE to see Intel's faces if AMD sells it for $500, hahaha; THAT would be priceless.


----------



## oxidized

Quote:


> Originally Posted by *XNine*
> 
> I've read a lot of posts here and the only thing I can conclude is this:
> 
> Wait for real world benchmarks and results. We've been burned before.
> 
> Now, I'm not saying that Ryzen will be Bulldozer all over again. I was a die-hard AMD user until that fiasco, so, please don't insinuate that I'm an anti-AMD this or that. I truly do want them to succeed (it's stupid not to, even if the performance doesn't meet or exceed Intel, if it even comes close, it brings prices down for EVERYONE, which is good!), but I think people should be cautious and not jump to any conclusions just yet.
> 
> Then again:


This


----------



## CDub07

Unless AMD can flat-out beat Intel, I don't see Intel dropping prices to match AMD. As long as Intel is on top of their software game, they're still in the lead. My Sandy Bridge Core i5 laptop is just freaking fast, and it mostly just works, no issues ever. I don't see AMD at this level yet, but they're working toward it. And this is coming from an AMD fanboy; I thought about paying a ridiculous amount of money for 16GB of AMD Radeon memory but talked myself out of it.


----------



## }SkOrPn--'

Does anyone have any info on whether AMD is also working to compete with Intel or NVIDIA on autonomous-driving compute products? NVIDIA makes the Drive PX 2 and will be supplying Tesla with these parts, but I'm wondering if AMD has plans to try its hand at this type of product. I know Intel plans to take on NVIDIA's Drive PX 2. Or is AMD still focusing solely on the PC market, gaming, and now VR?


----------



## formula m

Quote:


> Originally Posted by *JackCY*
> 
> So you will buy an AMD's $1000 CPU instead?
> Price wars when there are only barely 2 competing companies are not gonna happen much.
> I expect no better performance than Haswell, so not worth upgrading and even if there is a price drop it will simply drop where the prices have been 2 years ago when I bought it.


Your insight is myopic.

The point is that there has been essentially only one CPU company for the last 5-6 years. Now there are going to be two companies seriously vying for the gaming consumer's dollar. Ryzen is a win/win/win on many fronts over Intel, and we don't even have the full specs yet.

The larger picture is what you are missing...

i.e.:

$269 for a 6c/12t Ryzen SR5 that will be on par with an X99 system..?

$189 for a 4c/8t Ryzen that will compete directly with Kaby Lake (LGA 1151)...?

For gamers...


----------



## Blameless

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Has anything changed yet? Did we figure out why such a discrepancy existed? I am very curious to know.


If you use AMD's Blender render file and the version of Blender they link to, you will not be able to duplicate their results with the default test.

There are two plausible possibilities:

1. AMD set a manual render scale on their test systems to something lower than the default of 200 (somewhere around or slightly below 150), possibly to get render times down to something they felt comfortable demoing.

2. AMD used a non-standard build of Blender. Some of us tested a build (provided by Stilt) compiled with support for all modern instruction sets and it brought 6900K equivalent processors to times close to what AMD demoed.

The first one is most likely as that would be the easiest mistake to make, while the second would likely give a less favorable comparison to Zen because of AVX use.

We also found other screens from the event, but not from the video, where we could see the build (2.77a x64) and samples (100, rather than 200) running, as well as different times (about 26 seconds). This build with these settings produces results in line with AMD's.

Long story short, nothing looks wrong with AMD's _own_ comparison, but the test they have supplied to the public is not what they ran.
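Cycles render time scales roughly linearly with sample count, so the sample-count theory is easy to sanity-check with back-of-the-envelope math (a rough sketch; the helper function and the times below are illustrative, not AMD's exact figures):

```python
# Rough sanity check: Cycles path tracing cost is approximately linear
# in the sample count, so a time measured at one sample level can be
# scaled to estimate the time at another.
def scale_render_time(time_s: float, samples_from: int, samples_to: int) -> float:
    """Estimate render time at a different sample count (linear model)."""
    return time_s * samples_to / samples_from

# Illustrative numbers: if a run at the default 200 samples takes ~48 s,
# the same scene at 150 samples should take about three quarters of that.
t200 = 48.0
t150 = scale_render_time(t200, 200, 150)
print(round(t150, 1))  # 36.0
```

Under this linear model, a render demoed at 150 (or 100) samples will look 25-50% faster than the public file at its default of 200, which is roughly the size of the discrepancy people were reporting.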
Quote:


> Originally Posted by *}SkOrPn--'*
> 
> AMD may be inexperienced with hyper-threading, but it was Jim Keller who designed Zen in the first place, not AMD.


AMD has worked closely with IBM over the years and IBM has even more experience with SMT than Intel. AMD has also had plenty of time to observe and learn from Intel, not to mention access to many of their patents.

I don't foresee any problems with AMD's SMT implementation...not that I think problems are impossible, but SMT isn't exactly a new thing and AMD isn't exactly without insight, even if they haven't had products with it before.


----------



## Eorzean

If this thing isn't over $600 CAD, I'll most likely be jumping ship. Before getting excited, though, I'm waiting for some real reviews/benchmarks after years of disappointing AMD hype trains.


----------



## formula m

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yesterday I predicted $899 myself, but today I am thinking $799 or less just because of the fact they keep mentioning how expensive Intel is. I would be surprised if they offer the 8C at anything more than $700 now. $700-800 for a desktop consumer part still isn't cheap...
> 
> However $500 would shake the industry, probably even $600. I would LOVE to see Intel's face's if AMD sells it for $500, hahahaha THAT would be priceless.


Here is my prediction:

SR7 *$449*

SR5 $269

SR3 $189

There will also be a handpicked "blackbird" version of the SR7, code-named the ZR7(1), @ $699 with a special AIO liquid cooler.


----------



## looniam

Quote:


> Originally Posted by *SuperZan*
> 
> It's likely just AMD's latest Turbo iteration. I'd be very surprised to learn that we couldn't *manually override it for our own overclocks*.


that is what will keep me waiting not just for benchmarks or several motherboards to be released, but to see if that "auto OC" can be disabled.

so yeah, waiting for ian cutress's dissertation.




----------



## Mad Pistol

I'm thinking that Ryzen in its 8-core/16-thread variant will retail around $500. If it is able to come within spitting distance of an i7 6900K, it will be a hit among enthusiasts.

Also, if AMD is comparing Ryzen to an i7 6900K, they are confident they have a winner. The i7 6900K is literally Intel's flagship CPU minus 2 cores. I highly doubt this will be another Bulldozer.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Blameless*
> 
> I don't foresee any problems with AMD's SMT implementation...not that I think problems are impossible, but SMT isn't exactly a new thing and AMD isn't exactly without insight, even if they haven't had products with it before.


Not to mention, 4 years is a really decent amount of time for engineers to get something right. Except for maybe the name, lol... Had they had the $ that Intel has, we would have seen this a year ago or sooner, I'm sure.

Ryzen up and take note, AMD is back... Corny?







lol


----------



## Aussiejuggalo

Anyone else more interested in the motherboards now?

I've seen enough on the CPUs to be happy till CES. I'm not even going to speculate on price, because whatever price it is, here in Aus we're going to get price-gouged as usual









----------



## inedenimadam

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> Anyone else more interested in the motherboards now?
> 
> I've seen enough on the CPU's to be happy enough till CES, not even going to speculate on price because I know whatever price it is here in Aus were going to get price gouged as usual
> 
> 
> 
> 
> 
> 
> 
> .


Yes, for sure. Very interested in seeing overclocking boards. Almost more interested in some BIOS screenies though. Overclocking sounds like it could be as simple as installing good cooling and letting software handle the rest...I hope we still have manual control of it all.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Mad Pistol*
> 
> I'm thinking that Ryzen in its 8-core/16-thread variant will retail around $500. If it is able to come within spitting distance of a i7 6900k, it will be a hit among enthusiasts.
> 
> Also, if AMD is comparing Ryzen to an i7 6900k, they are confident they have a winner. The i7 6900k is literally intel's flagship CPU minus 2 cores. I highly doubt this will be another Bulldozer.


One of the problems I have is that I promised myself I would not upgrade again until my next board+CPU is fully Optane-compliant. I doubt AMD will be supporting Optane, and if they do it's probably in a later series, Ryzen+ or something.

Anyone agree or disagree? Do you guys think AMD's chipsets will support Optane products once they hit the market?


----------



## Robenger

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> One of the problems I have is the fact that I promised myself I would not upgrade again until my next Board+CPU is fully Optane compliant. I doubt AMD will be supporting Optane, and if they do its probably a later series Ryzen+ or something.
> 
> Any agree or disagree? You guys think AMD's chipsets will support Optane products once they hit the market?


It is an Intel technology, so who knows.


----------



## formula m

What if..? (without cooler)

SR7 "Black Edition" @ $499 ..?

SR7 $349

SR5 $249

SR3 $149

*Would that^ be enough to upset the gaming world..? *

Incidentally, I see a $1,200 AMD gaming chip being a 10-core APU. (No gamer needs a 32-core Naples.)

An AM4+ chip that uses whatever video card you have & just bolsters the power, heterogeneously...


----------



## n4p0l3onic

Wait a minute, does zen only use dual channel memory?


----------



## }SkOrPn--'

Quote:


> Originally Posted by *Robenger*
> 
> It is an Intel technology, so who knows.


Yeah, true. However, since it's a PCIe tech, I am hopeful it is system-independent. If so, and all you need is PCIe 3.0 and software to recognize it, then yeah, I could see myself building another AMD system after a 10-year gap.


----------



## budgetgamer120

Quote:


> Originally Posted by *n4p0l3onic*
> 
> Wait a minute, does zen only use dual channel memory?


It seems so, judging by the board.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *inedenimadam*
> 
> Yes, for sure. Very interested in seeing overclocking boards. Almost more interested in some BIOS screenies though. Overclocking sounds like it could be as simple as installing good cooling and letting software handle the rest...I hope we still have manual control of it all.


I'm just worried that the boards are going to be made cheaper than Intel equivalents and be more likely to fail.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *budgetgamer120*
> 
> It seems so by the board.


Where did you see the board? lol link?
Quote:


> Originally Posted by *Aussiejuggalo*
> 
> I'm just worried that the boards are going to be made cheaper than Intel equivalents and be more likely to fail.


Now why on Earth would board manufacturers go and do something like that? Asus, Gigabyte etc aren't going to sabotage AMD like that because they have their own reputations to maintain.


----------



## kd5151

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Where did you see the board? lol link?


----------



## }SkOrPn--'

Quote:


> Originally Posted by *kd5151*


Wow, was that part of the Horizon event? I guess I'd better get to searching for more info. Yeah, that looks like dual channel to me too. But does it really matter with DDR4? Lisa said they have better things planned ahead soon. I wonder if they plan on 10- and 12-core variants with another setup using quad-channel memory to take on 2011-v3?


----------



## SuperZan

> Originally Posted by *}SkOrPn--'*
> 
> Now why on Earth would board manufacturers go and do something like that? Asus, Gigabyte etc aren't going to sabotage AMD like that because they have their own reputations to maintain.


Those of us who dabbled heavily in Vishera might disagree. ASUS was the only company that put equivalent effort into their AM3+ line. MSI had one great board (the GD-80) and the rest were very average. Gigabyte was a roll of the dice, what with all of the revisions, and their BIOS was just tragic.

FM2+ boards have been a bit better in terms of consistency, but still, one can only hope that AM4 will receive that extra buff and polish.


----------



## kd5151

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Wow was that part of the Horizon event? I guess I better go get to searching for more info. Yeah that looks like dual channels to me too. But does it really matter with DDR4? Lisa said they plan better things ahead soon, I wonder if they plan on a 10 and 12 core variant with another setup using quad channel memory to take on 2011-v3?


No, someone else posted a link to a Chinese forum. This picture came from WCCFTech, which is a complete shot of the whole mobo. There are other pictures, close-ups and such.


----------



## kd5151

http://tieba.baidu.com/p/4898582118?pn=1

Posted before the event. Post# 175 gets the real credit for sharing.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *SuperZan*
> 
> Those of us who dabbled heavily in Vishera might disagree. ASUS was the only company that put equivalent effort into their AM3+ line.. MSI had one great board (GD-80) and the rest were very average. Gigabyte was a roll of the dice what with all of the revisions, and their BIOS was just tragic.
> 
> FM2+ boards have been a bit better in terms of consistency but still, it is a thing to hope that AM4 will receive that extra buff and polish.


OK, I see. Yeah, agreed, that would be tragic. Let's hope that isn't going to happen again. Sorry to hear about that, though. The only AMD system I have built in the last ten years was an Asus A88X-Pro board with an AMD A10-7850K in 2014, and I only built it for my brother, who needed a decent HTPC on a budget. That is the ONLY experience I have with AMD since jumping ship for the Core series.

Let's hope we don't see any serious bugs or flat-out recalls on these new AMD products coming out, and everything is smooth sailing.


----------



## george241312

The speaker at this event was so cringy, I really hated her; it was honestly really unprofessional. She made it seem like the only reason people want advancement with CPUs is because of "Gaming". I'm pretty sure these new CPUs are going to be a bust, by the way they are already talking about their product. I really wanted AMD to outperform, but when she said "a quarter of the performance comes from " it sounds like they just did some optimization or trick to improve performance, which obviously won't help anywhere that's not optimized for Zen.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *george241312*
> 
> The speaker of this event was so cringy I really hated her it was honestly really unprofessional. She made it seem like the only reason people want advancement with CPUs is because of "Gaming". I'm pretty sure these new CPUs are going to be a bust by the way they are already talking about their product. I really wanted AMD to outperform but when she said "a quarter of the performance comes from " it sounds like they just did some optimization or trick to improve performance which obviously won't help anywhere that's not optimized for Zen.


I'm pretty sure you're going to be 100% wrong....


----------



## budgetgamer120

Quote:


> Originally Posted by *george241312*
> 
> The speaker of this event was so cringy I really hated her it was honestly really unprofessional. She made it seem like the only reason people want advancement with CPUs is because of "Gaming". I'm pretty sure these new CPUs are going to be a bust by the way they are already talking about their product. I really wanted AMD to outperform but when she said "a quarter of the performance comes from " it sounds like they just did some optimization or trick to improve performance which obviously won't help anywhere that's not optimized for Zen.


Are you for real?


----------



## tpi2007

I just read the last two hundred posts and came away more confused than anything about the Blender test settings.
Quote:


> Originally Posted by *Nenkitsune*
> 
> also, right here you can see it say 100 samples as well


- I don't know what that is, it's a hundred something alright, but only the first number is clearly distinguishable; I don't have good enough CSI skills to "enhance" that and tell for sure if it's 100, 128, or something else entirely;
- Has anybody asked AMD directly on Twitter or something to get to the bottom of this?

Quote:


> Originally Posted by *CrazyElf*
> 
> I've done a bit more digging on AVX2:
> https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.72/Cycles
> 
> Quote:
> 
> 
> 
> Optimizations
> 
> Importance sampling for Glossy and Anisotropic BSDFs has been improved, which results in less noise per sample. The gives increased render time as well, but it's generally more than compensated by the reduction in noise.
> 
> Decreased memory usage during rendering. (49df707496e5, 0ce3a755f83b)
> 
> *CPUs with support for AVX2 (e.g. Intel Haswell) render a few percent faster. (866c7fb6e63d)*
> 
> 
> 
> I'm curious what the Stilt did. Probably full AVX2?
> 
> See here too:
> https://blenderartists.org/forum/showthread.php?350476-Does-Blender-take-advange-of-CPU-instruction-sets
> 
> Quote:
> 
> 
> 
> Modern CPU's should pretty much support all of those.
> 
> Exceptions might be AVX and AVX2, but the only part of Blender that uses them is Cycles and has fallbacks to SSE instructions if it's not supported on your machine.
> 

- Why can a single guy, The Stilt, compile a build of Blender with better AVX2 results than the official one? AVX2 was introduced with Haswell in 2013; why does the official AVX2 implementation only yield results a few percent faster?

Quote:


> Originally Posted by *n4p0l3onic*
> 
> Wait a minute, does zen only use dual channel memory?


Yes, and for desktop usage it makes virtually no difference compared to Intel's HEDT quad channel. Assuming they have a good DDR4 memory controller, that is.

http://www.pcworld.com/article/2982965/components/quad-channel-ram-vs-dual-channel-ram-the-shocking-truth-about-their-performance.html
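For reference, theoretical peak bandwidth is just channels × transfer rate × 8 bytes per 64-bit channel, so the dual-vs-quad gap is easy to quantify (a quick sketch assuming DDR4-2400, a common speed at the time; the function name is mine):

```python
# Peak theoretical DDR bandwidth: channels * transfer rate (MT/s) * 8 bytes
# per transfer on a 64-bit channel, expressed in GB/s.
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s for a DDR memory configuration."""
    return channels * mt_per_s * bus_bytes / 1000

dual = peak_bandwidth_gbs(2, 2400)   # dual-channel DDR4-2400
quad = peak_bandwidth_gbs(4, 2400)   # quad-channel DDR4-2400
print(dual, quad)  # 38.4 76.8
```

Quad channel doubles the theoretical peak, but as the linked article shows, typical desktop workloads rarely saturate even dual channel, which is why the real-world difference is so small.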


----------



## Shogon

Are we talking about these settings?











At these settings my 5820k manages 30 seconds I think.


----------



## kd5151

It's 100% 100


----------



## Forceman

Quote:


> Originally Posted by *formula m*
> 
> What if..? (without cooler)
> 
> SR7 "Black Edition" @ $499 ..?
> SR7 $349
> SR5 $249
> SR3 $149
> 
> *Would that^ be enough to upset the gaming world..? *
> 
> Coincidentally, I see a $1,200 AMD gaming chip being a 10 Cores APU. (No Gamer needs a 32 core Naples)
> 
> An AM4+ that uses whatever video card you have & just bolsters the power, heterogeneously...


Expecting AMD to sell an 8/16 CPU with roughly equivalent IPC for the same price as Intel's 4/8 CPU is setting yourself up for disappointment. $499 for an SR7 would be an amazing price; expecting less than that is kind of foolish.


----------



## tpi2007

Quote:


> Originally Posted by *Shogon*
> 
> Are we talking about these settings?
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At these settings my 5820k manages 30 seconds I think.


Yes and no. People are talking about two sets of settings because there were two benchmarks presented at different times with different results.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *george241312*
> 
> *The speaker of this event was so cringy I really hated her it was honestly really unprofessional.* She made it seem like the only reason people want advancement with CPUs is because of "Gaming". I'm pretty sure these new CPUs are going to be a bust by the way they are already talking about their product. I really wanted AMD to outperform but when she said "a quarter of the performance comes from " it sounds like they just did some optimization or trick to improve performance which obviously won't help anywhere that's not optimized for Zen.


Oh, do you happen to mean AMD President & CEO Dr. Lisa T. Su? Yes, so very unprofessional. This event was aimed at gamers because of the Vega tease... you do realise that, don't you?

CES will be when they release everything and get far more in-depth. There's no point in AMD revealing everything and giving Intel any info that could give them an edge with Kaby Lake, and they sure as hell weren't giving away anything on Vega for Nvidia.


----------



## }SkOrPn--'

I am now finding myself wishing they had shown Ryzen tests with XFR enabled on both air and water, just for fun. I'm also curious how a 4.5 GHz overclocked 6900K competes with a 4.5 GHz Ryzen, or if Ryzen can even hit 4.5.


----------



## Mad Pistol

Quote:


> Originally Posted by *george241312*
> 
> The speaker of this event was so cringy I really hated her it was honestly really unprofessional. She made it seem like the only reason people want advancement with CPUs is because of "Gaming". I'm pretty sure these new CPUs are going to be a bust by the way they are already talking about their product. I really wanted AMD to outperform but when she said "a quarter of the performance comes from " it sounds like they just did some optimization or trick to improve performance which obviously won't help anywhere that's not optimized for Zen.


AMD knows their market. They want to get this product into the hands of gamers, and from there, it will trickle into the professional market.

Also, that "cringy" and "unprofessional" presenter was Dr. Lisa Su, President and CEO of AMD. She's actually an engineer and has worked for companies like IBM and Texas Instruments.

In other words, if "cringy" is her only crime and AMD rebounds under her leadership, she can present all the events that she wants. It's never stopped Jen-Hsun Huang, CEO and co-founder of Nvidia, and that guy is cringy as hell.


----------



## budgetgamer120

Quote:


> Originally Posted by *Forceman*
> 
> Expecting AMD to sell a 8/16 CPU that has roughly equivalent IPC for the same price as Intel's 4/8 CPU is setting yourself up for disappointment. $499 for an SR7 would be an amazing price, expecting less than that is kind of foolish.


I think you just want it to cost more. Everything points to a much cheaper CPU. AMD's position does not allow them to charge that much and still make a difference in the market.

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> I am now finding myself wishing they had shown Ryzen tests with XFR enabled on both AIR and Water, just for fun. I'm also curious how a 4.5Ghz overclocked 6900K competes with a 4.5Ghz Ryzen, or if it can even hit 4.5.


I doubt it can clock that high. Maybe 4.2 GHz


----------



## Forceman

Quote:


> Originally Posted by *budgetgamer120*
> 
> Everything points to a much cheaper cpu.


What exactly is "everything" that is pointing at a much cheaper CPU? Hype and dreams?


----------



## }SkOrPn--'

Quote:


> Originally Posted by *budgetgamer120*
> 
> I think you are wanting it to cost more. Everything points to a much cheaper cpu. AMD's position does not allow them to charge that much and make a difference in the market.
> I doubt it can clock that high. Maybe 4.2ghz


Yeah, even the 6900K struggles to get that. At least a 1 GHz stable overclock would be icing on the cake though, especially if they shock us with pricing.


----------



## dmasteR

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yeah even the 6900K struggles to get that. At least a 1Ghz stable overclock would be icing on the cake though, especially if they shock us with pricing.


What?

There's plenty of 6900Ks that do 4.4. I'd even say that's average, assuming you're on air/AIO.


----------



## tpi2007

Quote:


> Originally Posted by *Forceman*
> 
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> Everything points to a much cheaper cpu.
> 
> 
> 
> What exactly is "everything" that is pointing at a much cheaper CPU? *Hype and dreams?*

It's what makes the world keep turning. And rumour sites. Especially rumour sites.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *dmasteR*
> 
> What?
> 
> There's plenty of 6900K's that do 4.4. I'd even say that's average assuming you're on air/AIO.


Sorry, I meant for the Ryzen chip to do 4.4 (a 1 GHz OC). 6900Ks are doing 4.5 as far as I know, and some even more, no?


----------



## verovdp

Quote:


> Originally Posted by *Mad Pistol*
> 
> Also, if AMD is comparing Ryzen to an i7 6900k, they are confident they have a winner. The i7 6900k is literally intel's flagship CPU minus 2 cores. I highly doubt this will be another Bulldozer.


While I would like to also come to the same conclusion that AMD is confident about Ryzen given the comparison to the 6900K, let's not forget that during the build-up to the Bulldozer launch, AMD was comparing performance to an i7 980X (Gulftown) and claiming it was competitive. Basically, what I'm saying is, we can extrapolate from these demos that AMD has achieved decent SMT and an excellent IPC increase over the construction-core CPUs, but the complete picture isn't going to be clear until at least late Q1, or maybe even the start of Q2.


----------



## dmasteR

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Sorry, I meant for the Ryzen chip to do 4.4 (a 1Ghz OC). 6900K's are doing 4.5 as far as I know and some even more, no?


Yep. That's why I was confused when you said the 6900K was struggling to hit those speeds.

All good though!


----------



## budgetgamer120

Quote:


> Originally Posted by *Forceman*
> 
> What exactly is "everything" that is pointing at a much cheaper CPU? Hype and dreams?


They compared it to a 6700K. Enough said. Just as in the past they compared their 8-core to i5s


----------



## }SkOrPn--'

Quote:


> Originally Posted by *dmasteR*
> 
> Yep. That's why I was confused when you said the 6900K was struggling to hit those speeds.
> 
> All good though!


Yeah, the first overclocking review I ever read about the 6900K, the guy couldn't get past 4.3 GHz, and that just stuck in my mind. But I know for a fact others have hit a stable 4.5 on theirs, so I just extrapolated that 4.5, or thereabouts, is probably the 6900K's struggle point. My main point was I would LOVE to see Ryzen hit at least 1 GHz overclocks stable.

For some reason I just find 1 GHz overclocks to be awesome, lol... If any chip I own can do that, I am more than happy with it


----------



## kingduqc

Can't wait for benches. I hope they can be competitive and that people will support them, because over the last 5 releases Intel has done nothing for us performance-wise. We need this competition so badly.


----------



## Forceman

Quote:


> Originally Posted by *budgetgamer120*
> 
> They compared it to a 6700k. Enough said. Just as in the past they compared their 8 core to i5s


Never mind, believe what you want to believe.


----------



## Diogenes5

I hope AMD does well, because competition is good for everyone. I have a feeling it'll be a successful product, but nothing that will shake up the market the way the Athlon did (by destroying Intel in benchmarks). My basis of comparison is their Polaris GPUs: smaller die size, slightly re-engineered and tuned, but not a very good overclocker, because they have to contract to GlobalFoundries, which also probably decreased yields. Also still a little behind Nvidia, because they've just been behind for so long.

I'm guessing base Haswell IPC and mild overclocking. If their 8-core chip is 3.4 GHz stock, you might get 3.8 GHz with a good cooler. I wouldn't expect the 1 GHz+ overclock people are expecting. They are not going to leave that much performance on the table. They want to price $50-100 cheaper than Intel at all performance points so they can return to profitability.

So it'll be much better than the crap they've had to produce for the last few years, because they've been stuck on larger process nodes due to TSMC and GlobalFoundries being so delayed in adopting 14nm tech. But it won't be close to what Intel has recently, because they'll hit the same engineering wall Intel did when going to smaller and smaller nodes.

So they'll compete with CPUs that are behind Intel in single-core IPC at all price levels but offer more cores. It'll definitely be a competitive product, but they'll probably need another generation to work out the manufacturing kinks, increase yields, and mature their design for better clocks. It's just the nature of engineering design when you don't own your own fabs.


----------



## mercs213

Quote:


> Originally Posted by *kd5151*
> 
> It's 100% 100


https://www.reddit.com/r/Amd/comments/5ie7f0/summoning_uamd_robert_how_can_we_do_the_blender/


----------



## S.M.

I wonder how many times they ran Blender and Handbrake beforehand. I'm wondering this mostly because of their "prefetch prediction" technology.


----------



## Travieso

Quote:


> Originally Posted by *verovdp*
> 
> While I would like to also come to the same conclusion that AMD is confident about Ryzen with the comparison to the 6900K, let's not forget that during the build up to the Bulldozer launch, AMD was comparing performance to an i7 980x (Gulftown) and claiming it was competitive. Basically what I'm saying is, we can extrapolate that AMD has achieved decent SMT and excellent IPC increase over the construction core cpus based on these demos, but the complete picture isn't going to be clear until at least late Q1, or maybe even the start of Q2.


As I remember, they never benchmarked their own FX-8150 against the 980X.

They only did so against the 2600K and 2500K.


----------



## MadRabbit

Quote:


> Originally posted by *AMD_james* (AMD Employee) on Reddit:
> 
> Set render samples to 150. A new file with the sample level set correctly will be uploaded shortly. Apologies for the confusion.
> 
> Blender 2.78a x64 is what we used, binary download from blender.org.


----------



## SuperZan

Quote:


> Originally Posted by *Travieso*
> 
> as i remember they never benchmarked their own FX-8150 against 980X.
> 
> they only did against 2600K and 2500K.


They were also much more vocal and used loads of euphemisms and vague language that I'm not seeing this time 'round. They've been rather quiet and are giving (relatively) simple examples. We'll surely know more after a more 'techie' oriented event than this New Horizon one seemed to be, but AMD is not acting at all like they did in the run-up to BD.


----------



## tpi2007

Quote:


> Originally Posted by *mercs213*
> 
> Quote:
> 
> 
> 
> Originally Posted by *kd5151*
> 
> It's 100% 100
> 
> 
> 
> https://www.reddit.com/r/Amd/comments/5ie7f0/summoning_uamd_robert_how_can_we_do_the_blender/

Aha! I knew nobody could say with 100% certainty that that blurry mess was "100"; it's 150.

Rep+

Quote:


> Originally Posted by *Travieso*
> 
> Quote:
> 
> 
> 
> Originally Posted by *verovdp*
> 
> While I would like to also come to the same conclusion that AMD is confident about Ryzen with the comparison to the 6900K, let's not forget that during the build up to the Bulldozer launch, AMD was comparing performance to an i7 980x (Gulftown) and claiming it was competitive. Basically what I'm saying is, we can extrapolate that AMD has achieved decent SMT and excellent IPC increase over the construction core cpus based on these demos, but the complete picture isn't going to be clear until at least late Q1, or maybe even the start of Q2.
> 
> 
> 
> as i remember they never benchmarked their own FX-8150 against 980X.
> 
> they only did against 2600K and 2500K.

No, they also tested against the 980X in the video below, right at the start, in the gaming test. In fact, back then some complaints were made because in the first video that was taken down they even had the Cinebench Intel CPU mislabelled as the 980X when in fact it was the 2500K. A mention of that is in the video description, although the link to their site is now dead. The notes at the end of the video were always right, but not many read them.





I hope they don't repeat anything that happened with this video. They were very misleading about the general performance and conveyed a lot of mixed messages about value.



Spoiler: Warning: Spoiler!



(Well, they misspelled "Platform", but that's not performance related.

)


----------



## flippin_waffles

From what people in the industry are suggesting, Zen is highly attractive in the enterprise too. The Naples platform is quite impressive, with the ability to reach a wide market. Radeon Instinct has a perfect home, and the software stack they have developed is so good. It's hard to believe how they have managed to develop such incredible technology in so little time, it really is. I think the award they just received from EE Times was for tech company of the year, and they truly deserve it.


----------



## kd5151

Quote:


> Originally Posted by *mercs213*
> 
> https://www.reddit.com/r/Amd/comments/5ie7f0/summoning_uamd_robert_how_can_we_do_the_blender/


Interesting. This changes everything. Now we're getting somewhere. So the live stream was 150, and the event before the live stream was 100.


----------



## Travieso

Quote:


> Originally Posted by *tpi2007*
> 
> No, they also tested against the 980X in the video below, right at the start, in the gaming test. In fact, back then some complaints were made because in the first video that was taken down they even had the Cinebench Intel CPU mislabelled as the 980X when in fact it was the 2500K. A mention of that is in the video description, although the link to their site is now dead. The notes at the end of the video were always right, but not many read them.
> 
> 
> 
> 
> 
> I hope they don't repeat anything that happened with this video. They were very misleading of the general performance and conveyed a lot of mixed messages about value.
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> (Well, they misspelled "Platform", but that's not performance related.
> 
> )


Playing games at 1440p back then was very GPU-limited; it's not surprising that both had the same frame rate. But they benched the FX-8150 in heavily CPU-bound workloads against CPUs with similar performance, like the 2500K and 2600K (in some benches).

I just cannot see how they could deceive consumers and all the enthusiasts at the event with heavily CPU-bound benchmarks like Blender and Handbrake.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *kd5151*
> 
> Interesting. This changes everything. Now we are getting somewhere? So the live stream was 150 and the event before the live stream was 100.


OK, so that changes a lot. When I set render samples to 150, instead of 1:33 for my hex-core Xeon, it's now 1:10. Still not anywhere near Ryzen, I know, but closer now. Ryzen seemed crazy good and now it's just really good, lol. Priced right, I still want it.


----------



## Clocknut

lol, I think I am sticking with my Core i5s and waiting for Zen+


----------



## kd5151

Quote:


> Originally Posted by *tpi2007*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> (Well, they misspelled "Platform", but that's not performance related.
> 
> )


I guess the sample count wasn't the only thing they got wrong. Good eye.


----------



## jologskyblues

Would it be too much to ask for 6 cores, 12 threads, Skylake IPC and a <100W TDP for $299? Because if that happens, I'm definitely going to switch back to AMD CPUs.


----------



## budgetgamer120

Quote:


> Originally Posted by *jologskyblues*
> 
> Would it be too much to ask for 6 cores, 12 threads, Skylake IPC and a <100W TDP for $299? Because if that happens, I'm definitely going to switch back to AMD CPUs.


So you buy an inferior i7-4790K for more and hope to get an at least 60% faster cpu (guesstimate) for less.

With that said I am kinda wanting the same lol.


----------



## Roaches

Reran the render at 150 samples. Got 28.76 seconds with dual Xeon E5-2670s.



Waiting for the Haswell/Broadwell-E dudes results. @Jpmboy and others?


----------



## jologskyblues

Quote:


> Originally Posted by *budgetgamer120*
> 
> So you buy an inferior i7-4790K for more and hope to get an at least 60% faster cpu (guesstimate) for less.












I don't quite get what you're trying to say, but I bought my 4790K used and cheap too, and it's all good for now. Certainly better than any FX for my use case.

All I want is moar cores for future proofing, as well as the bells and whistles that usually come with newer motherboard chipsets: NVMe, more PCIe 3.0 lanes, USB 3.1 Gen2 Type-C, etc. Kaby Lake is looking like a big disappointment. If Ryzen doesn't deliver the goods, I will wait for the mainstream 6-core Coffee Lake instead.


----------



## tpi2007

Quote:


> Originally Posted by *Travieso*
> 
> playing games at 1440P back then was very gpu limited. it's not surprising that both had the same frame rate. but they benched FX-8150 in heavily cpu loaded works against cpus with similar performance like 2500K and 2600K (in some benches).
> 
> i just cannot see how they could deceive consumers and all enthusiasts in the event in heavily cpu benchmarks like Blender and Handbrake.


They couldn't, of course (unless they wanted to get into legal trouble); it's not the benchmarks themselves but rather the picture they paint that may not be indicative of general performance. As you can see from the video, the FX-8150 took exactly the same time in Handbrake as the 2600K, but we all know which CPU was better overall.

This time around the CPU has a lower TDP than Intel's offerings and things are looking more promising overall, but people should not go into overdrive with excitement before we get the full picture.

For example, the gaming benchmark they did with simultaneous streaming is all well and good, but fewer than one tenth of concurrent Steam users stream while gaming, and once you factor in Origin, GOG, etc. gamers, the percentage is even lower. So, while it may be a great CPU for streaming, how high can the 4- and 6-core variants clock to compete with Intel's offerings? A 4-core variant will be going against Intel's 4.2 - 4.5 GHz Kaby Lake 4C/8T CPUs, so we still need to know more: how much headroom the architecture has, how well the lower-core variants clock versus the top-end one they showed, and what the pricing is.

Quote:


> Originally Posted by *kd5151*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tpi2007*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> (Well, they misspelled "Platform", but that's not performance related.
> 
> )
> 
> 
> 
> 
> 
> 
> I guess the sample wasn't only thing they got wrong. Good eye.
Click to expand...

Credit where it's due: http://www.hardocp.com/news/2016/12/13/amd_proofreading_fail_day
Quote:


> AMD Proofreading FAIL of the Day
> 
> It's a good thing AMD let *it's* upcoming Ryzen processor performance do most of the "talking" today during its New Horizon event because overall, I think it was a very solid showing. It's just too bad the company didn't let Ryzen do the spelling too. [H] forum reader Gigantopithecus gets credit for spotting this goof.


Ironically the writer made a mistake himself (in bold).


----------



## Travieso

Quote:


> Originally Posted by *tpi2007*
> 
> For example, the gaming bench they did with streaming at the same time is all well and good, but less than one tenth of the equivalent number of concurrent Steam users stream while gaming, then factor in Origin, GOG, etc gamers, and the percentage is even lower. So, while it may be a great CPU for streaming, how high can the 4 and 6 core variants clock to compete with Intel's offerings? A 4 core variant will be going against Intel's 4.2 - 4.5 Ghz Kaby lake 4C/8T CPU, so we still need to know more. How much headroom does the architecture have; how well do lower core variants clock versus the top end one they showed and what is the pricing.
> Credit where it's due: http://www.hardocp.com/news/2016/12/13/amd_proofreading_fail_day
> Ironically the writer made a mistake himself (in bold).


i think that's because most of the margin lies in the enthusiast high-end 8-core "ultimate edition" (or whatever it's called) segment.

intel doesn't care that their 8-core cpus sell at a rate of 1/10th of the mainstream 4-cores, since each 8-core probably earns them more than 10 times the profit of a mainstream cpu.

i myself don't care about the 8-core Zen cpu at all (since i know it will be priced at $500 at least), but if some kids fall for AMD's marketing and feel like they need an 8-core Zen to play games on crappy outdated engines like LoL or CS:GO, well, that's not my business; i don't need to educate them.

and i think people in this forum are well educated enough anyway.


----------



## Boomer1990

Retested my potato with render at 150.

Win 8.1
2133 ddr3 ram
Amd A10-5800k

At stock 3.8GHz I got 4:17.70
Overclocked to 4.3GHz I got 3:56.29

If this comes in at $499 or under I think it will be my next cpu.


----------



## KarathKasun

If that performance is indicative of overall performance without turbo active, you will not see a top-bin 8c/16t chip anywhere close to $500.

I'd bet it's going to start at $800 for the top-bin octa, with top-bin quads in the $250 ballpark and top-bin hexes in the $350 ballpark.


----------



## inedenimadam

did anybody notice how long the render time was on the ryzen logo in blender in the video?


----------



## lrch

Quote:


> Originally Posted by *tpi2007*
> 
> [...].
> Credit where it's due: http://www.hardocp.com/news/2016/12/13/amd_proofreading_fail_day
> Ironically the writer made a mistake himself (in bold).


That's a grammatical error though.


----------



## Shogon

Quote:


> Originally Posted by *Roaches*
> 
> Rerun the render at 150 Samples. Got 28.76 Seconds with dual Xeon E5-2670.
> 
> 
> 
> Waiting for the Haswell/Broadwell-E dudes results. @Jpmboy and others?


150 samples



5820k @ 4.3 / 3200 MHz memory


----------



## Nick the Slick

4.6GHz 4770k nets me 56.37s @ 150 samples. I'm almost back to my original level of impressed. Can't wait for the launch now for the juicy details


----------



## doza

why did they use dual channel ram in both the amd and intel setups? maybe quad is better on intel, so amd chose dual, where their chip compares better? or is it the opposite and amd will crush intel in quad channel, but Lisa left that scenario for the launch reviews?
maybe i'm talking nonsense, idk.


----------



## Blameless

Quote:


> Originally Posted by *n4p0l3onic*
> 
> Wait a minute, does zen only use dual channel memory?


AM4 has always been slated to be dual-channel.
Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Wow was that part of the Horizon event? I guess I better go get to searching for more info. Yeah that looks like dual channels to me too. But does it really matter with DDR4? Lisa said they plan better things a head soon, I wonder if they plan on a 10 and 12 core variant with another setup using quad channel memory to take on 2011-v3?


There is a server socket for Naples, but I doubt there will ever be consumer boards based on it.

AM4 is the consumer socket for AMD. It will cover everything from their APUs to their fastest consumer 8c/16t parts.
Quote:


> Originally Posted by *}SkOrPn--'*
> 
> I am now finding myself wishing they had shown Ryzen tests with XFR enabled on both AIR and Water, just for fun. I'm also curious how a 4.5Ghz overclocked 6900K competes with a 4.5Ghz Ryzen, or if it can even hit 4.5.


4.5GHz is not a practical 24/7 overclock for most 6900Ks.

We really have no idea how Zen will OC, but I would be surprised if 4.5GHz was remotely practical there either.
Quote:


> Originally Posted by *S.M.*
> 
> I wonder how many times they ran Blender and Handbrake beforehand. I'm wondering this mostly because of their "prefetch prediction" technology.


Probably not relevant.

While there is a small benefit to second and later runs on any system, this comes from making sure all assets are in memory/cache. Any new Zen prefetching would be very unlikely to get better run to run, at least in a single-task environment like this benchmark.

It's not like any modern out-of-order CPU doesn't have a lot of logic devoted to branch prediction and prefetching.
Quote:


> Originally Posted by *inedenimadam*
> 
> did anybody notice how long the render time was on the ryzen logo in blender in the video?


35.38 seconds for Ryzen, 35.44 seconds for the 6900K, if I recall correctly.
Quote:


> Originally Posted by *doza*
> 
> why did they use dual channel ram in both the amd and intel setups? maybe quad is better on intel, so amd chose dual, where their chip compares better? or is it the opposite and amd will crush intel in quad channel, but Lisa left that scenario for the launch reviews?
> maybe i'm talking nonsense, idk.


Probably so they could just use exactly the same memory configuration, without having to install four DIMMs.

AM4 is only dual channel and Blender tends to perform slightly better in dual-channel than quad on LGA-2011v3.


----------



## Blameless

Quote:


> Originally Posted by *Shogon*
> 
> 150 samples
> 
> 
> 
> 5820k @ 4.3 / 3200 MHz memory


My 5820K @ 4.3/4.1 core/uncore with memory at 2666 CL 12 (my normal 24/7 settings).

RyzenGraphic_27.blend, 150 samples, in the following Windows Blender builds (best of three runs each), in seconds:

2.77a x64 official (what AMD used) = 39.06

2.78a x64 official (what AMD recommended) = 39.56

2.78a x64 compiled to use all available instruction sets (courtesy of The Stilt) = 27.21
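If anyone wants to time their own runs the same way, here's a rough sketch of a headless render with wall-clock timing. The binary and .blend paths are assumptions for your own setup (set the Cycles sample count to 150 inside the .blend first to match the demo), and I'm only using Blender's basic `-b`/`-f` background-render flags:

```python
import shutil
import subprocess
import time

# Assumed paths: point BLENDER at your Blender 2.77a build and BLEND at
# wherever you saved AMD's benchmark scene.
BLENDER = shutil.which("blender") or "blender"
BLEND = "RyzenGraphic_27.blend"
cmd = [BLENDER, "-b", BLEND, "-f", "1"]  # -b: no GUI, -f 1: render frame 1

if shutil.which("blender") is None:
    # Blender isn't installed/on PATH; just show what would be run.
    print("blender not on PATH; command would be:", " ".join(cmd))
else:
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # raises if the render fails
    print(f"render wall time: {time.perf_counter() - start:.2f}s")
```

Best of three runs is what I reported above, so loop this a few times and take the minimum if you want comparable numbers.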


----------



## Ban13

Quote:


> Originally Posted by *Blameless*
> 
> My 5820K @ 4.3/4.1 core/uncore with memory at 2666 CL 12 (my normal 24/7 settings).
> 
> RyzenGraphic_27.blend, 150 samples, in the following Windows Blender builds (best of three runs each), in seconds:
> 
> 2.77a x64 official (what AMD used) = 39.06
> 
> 2.78a x64 official (what AMD recommended) = 39.56
> 
> 2.78a x64 compiled to use all available instruction sets (courtesy of The Stilt) = 27.21


So do you now think 6900K and RYZEN's numbers are believable?


----------



## Blameless

Quote:


> Originally Posted by *Ban13*
> 
> So do you now think 6900K and RYZEN's numbers are believable?


Always did.

The issue wasn't Ryzen vs. 6900K; it was the test AMD ran vs. the test they told us to run.


----------



## Mellifleur

here you gents go, i did a couple of runs of the ryzen blender benchmark at 100, 150 and 200 samples so we can compare AMD's old reigning champion (the FX-9590 space heater at 5.2ghz) with their new bad boy.. man am i lookin forward to zen








1:04.69 @100 samples


1:?? @150 samples


2:?? @200 samples


link to imgur album for higher res versions, someone let me know if there is some way to upload higher res ones to overclock.net plx plax plx








http://imgur.com/a/SUEgS


----------



## mypcisugly

On the bright side of things, this shows it could hang in there with Intel. Now we just have to see how well it handles everyday software.


----------



## Blameless

Quote:


> Originally Posted by *Mellifleur*
> 
> here you gents go i did a couple runs of the ryzen blender benchmark at 100,150 and 200 samples so we can compare AMD's old reigning champion with their new bad boy.. man am i lookin forward to zen
> 
> 
> 
> 
> 
> 
> 


Thanks for the benchmarks. I dismantled my FX-9590 setup just the other day and I'd been wondering how Vishera would do here.

Any chance you could run the 150 sample test at 3.4GHz, so we can more easily get an IPC comparison? I'm expecting 148 seconds (2:28) or so, but an actual test would be preferred.
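That 148-second figure is just naive scaling of the 5.2GHz 100-sample run posted above, assuming render time scales inversely with clock and linearly with sample count (Cycles is close to ideal on both); a quick sketch of the arithmetic:

```python
# FX-9590 @ 5.2 GHz took 1:04.69 (64.69 s) at 100 samples (posted above).
t_100_52 = 64.69

# Assumption: time ~ samples / clock.
# Scale 100 -> 150 samples and 5.2 -> 3.4 GHz.
estimate = t_100_52 * (150 / 100) * (5.2 / 3.4)
print(f"predicted 3.4 GHz, 150-sample time: {estimate:.0f}s")  # ~148 s
```

An actual measurement will show how far the real chip deviates from that ideal scaling.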


----------



## Marios145

AMD tested blender twice.

Tech summit:
Blender v2.77
Sample size: 100

Time:
ryzen: 24.82sec
6900k: unknown (but similar)

New horizon event:
Blender v2.77
Sample size: 150

Time:
ryzen: 00:35.3s
6900k: 00:35.94s

Sources for settings and test time below:

Tech summit





and Sweclockers

New Horizon
New Horizon - Twitch (when quality is set to "Source" it's clearer than YouTube)
and the 100 sample size comes from an AMD employee on reddit, but he also said they used 2.78, though it looks like 2.77 to me


----------



## oxidized

Quote:


> Originally Posted by *Ban13*
> 
> So do you now think 6900K and RYZEN's numbers are believable?


What still remains is the Dota 2 streaming test against the 6700K; the rest was kinda believable, besides them never showing an FPS counter in the game tests, but that doesn't quite matter now


----------



## Mellifleur

Quote:


> Originally Posted by *Blameless*
> 
> Thanks for the benchmarks. I dismantled my FX-9590 setup just the other day and I'd been wondering how Vishera would do here.
> 
> Any chance you could run the 150 sample test at 3.4GHz, so we can more easily get an IPC comparison? I'm expecting 148 seconds (2:28) or so, but an actual test would be preferred.


here you go, fine sir. sry it took a hot minute, the baby really didn't like a 17x multi and needed some tinkering.

1:42.16 @100 samples


2:30.52 @150 samples


3:19.54 @ 200 samples


imgur album for higher res versions

http://imgur.com/a/JsS26

if ryzen can really do the same workload in 35s vs 2:30 at the same clock speed, then i would say amd is back in business









kinda want to drag my old phenom II 965 out of the closet and give it a run just to see how it compares.


----------



## Myst-san

I was wondering, does the Infinity Fabric have anything to do with the interconnect fabric AMD got from SeaMicro?


----------



## Serios

Quote:


> Originally Posted by *EniGma1987*
> 
> I guess the way you are thinking of "base" and the way I am is very different. When she said base clock that to me is the default speed without boost, not the lowest base speed all the models will come at. The way she said the 3.4Ghz for the 8 core model and then said maybe higher to me was looking like *AMD was trying to squeeze a little more speed out of the processors to sell at but werent sure they could.*


I'm 100% sure they can get higher speeds or she would not have mentioned it.


----------



## duganator

Crank the streaming preset on a 6700k and watch it start dropping frames
Quote:


> Originally Posted by *oxidized*
> 
> What still remains is the Dota 2 streaming test with the 6700K, rest was kinda believable, besides them never showing FPS counter on tests with games, but that doesn't quite matter now


----------



## oxidized

Quote:


> Originally Posted by *duganator*
> 
> Crank the streaming preset on a 6700k and watch it start dropping frames


What do you mean by "crank"? Unless they were streaming at QHD or even UHD there's no way it does that, and i proved it with my tests on my pc with an old 2600k


----------



## Marios145

Quote:


> Originally Posted by *oxidized*
> 
> What do you mean by "crank" unless they were streaming at QHD or even UHD there's no way it does that, and i proved it with my tests on my pc with a old 2600k


They were playing at 4K; both the 6900k and Ryzen were smooth, only the 6700k had issues.
Btw there are more settings besides resolution that will bring a multi-core CPU to its knees. High resolution alone doesn't guarantee high quality.


----------



## oxidized

Quote:


> Originally Posted by *Marios145*
> 
> They were playing at 4K, both 6900k and Ryzen were smooth, only the 6700k had issues.
> Btw there are more settings besides resolution that will bring a multi-core to its knees. High-resolution alone doesn't guarantee high-quality.


It's not about the resolution they were PLAYING at, but the one they were STREAMING at. If it is as you say, then it makes even less sense; AMD using certain streaming settings (which are almost never used) just to show the advantage of an 8c/16t over a 4c/8t isn't really something they should be doing, UNLESS they aim to price it somewhere very close to intel's 4c/8t chips.

But the doubt still remains: how much better (if at all) would their 8c/16t perform compared to a 4c/8t, theirs or intel's, at normal settings?


----------



## Blameless

Quote:


> Originally Posted by *oxidized*
> 
> What do you mean by "crank" unless they were streaming at QHD or even UHD there's no way it does that, and i proved it with my tests on my pc with a old 2600k


If I wanted to I could use my archival settings to stream with...and those knock the encode speed of 1080p content down to 2-3 frames per second on fast Intel hex-core parts. This is an extreme example, and such settings are so far past the point of diminishing returns for streaming that no one should be using them, but there are certainly settings that can be useful which will be impossible for even an overclocked 4c/8t Skylake or Kaby Lake to encode in real time, let alone leave any CPU cycles for anything else.
Quote:


> Originally Posted by *oxidized*
> 
> AMD using certain streaming settings (which are next to never used) just to show the advantage of an 8c/16t over a 4c/8t isn't really something they should be doing


They are going to spin their demo in their favor any way they can, and that includes using uncommonly demanding streaming settings.

It's one of the only ways to make one reasonably high-end CPU show an advantage over another while gaming.
Quote:


> Originally Posted by *oxidized*
> 
> UNLESS, unless they aim at pricing that somewhere very close to intel's 4c/8t chips


This is a distinct possibility for one of the 8c SKUs.


----------



## 1216

Quote:


> Originally Posted by *Mellifleur*
> 
> Vishera 3.4GHz
> 2:30.52 @150 samples
> 
> kinda want to drag my old phenom II 965 out of the closet and give it a run just to see how it compares.


Broke into your house, sorry

Phenom II X4 at 3.4GHz, 150 samples:
NB at 2000: 03:01.78
NB at 2400: 03:00.03
NB at 2600: 02:59.41

Memory at 1600 8-9-8-25 1T for all of the runs


----------



## oxidized

Quote:


> Originally Posted by *Blameless*
> 
> If I wanted to I could use my archival settings to stream with...and those knock the encode speed of 1080p content down to 2-3 frames per second on fast Intel hex-core parts. This is an extreme example, and such settings are so far past the point of diminishing returns for streaming that no one should be using them, but there are certainly settings that can be useful which will be impossible for even an overclocked 4c/8t Skylake or Kaby Lake to encode in real time, let alone leave any CPU cycles for anything else.


I'm sure of it, but as you said, if nobody (or close to nobody) uses such settings, what's the point in showing superior performance based on those settings? It's just to show off; ok, no problem with that, but we users should realise that


----------



## Serios

Quote:


> Originally Posted by *Mad Pistol*
> 
> AMD knows their market. They want to get this product into the hands of gamers, and from there, it will trickle into the professional market.
> 
> Also, that "cringy" and "unprofessional" presenter was Dr. Lisa Su, President and CEO of AMD. She's actually an engineer and has worked for companies like IBM and Texas Instruments.
> 
> In other words, if "cringy" is her only crime and AMD rebounds under her leadership, she can present all the events that she wants. It's never stopped Jen-Hsun Huang, CEO and co-founder of Nvidia, and that guy is cringy as hell.


Yeah, "insane counter level 4" or "irresponsible amount of performance".
Very funny guy, I like his passion.
Lisa seems similar, only she doesn't use words that are as flashy.


----------



## Serios

Quote:


> Originally Posted by *verovdp*
> 
> While I would like to also come to the same conclusion that AMD is confident about Ryzen with the comparison to the 6900K, let's not forget that during the build up to the Bulldozer launch, AMD was comparing performance to an i7 980x (Gulftown) and claiming it was competitive. Basically what I'm saying is, we can extrapolate that AMD has achieved decent SMT and excellent IPC increase over the construction core cpus based on these demos, but the complete picture isn't going to be clear until at least late Q1, or maybe even the start of Q2.


You are talking about a few "leaked" slides in comparison to a live presentation by AMD's CEO?
Doesn't look like the same thing to me.


----------



## Tobiman

Has anyone tried to run the handbrake test?


----------



## Blameless

Quote:


> Originally Posted by *Tobiman*
> 
> Has anyone tried to run the handbrake test?


Do you have a link to the file and the setting they used?


----------



## kd5151

Quote:


> Originally Posted by *Marios145*
> 
> AMD tested blender twice.
> 
> Tech summit:
> Blender v2.77
> Sample size: 100
> 
> Time:
> ryzen: 24.82sec
> 6900k: unknown (but similar)
> 
> New horizon event:
> Blender v2.77
> Sample size: 150
> 
> Time:
> ryzen: 00:35.3s
> 6900k: 00:35.94s
> 
> Sources for settings and test time below:
> 
> Tech summit
> 
> 
> 
> 
> 
> and Sweclockers
> 
> New Horizon
> New Horizon - Twitch(when quality is set to "Source" it's clearer than youtube)
> and 100 sample size from AMD employee in reddit but he also said they used 2.78 but it looks like 2.77 for me


Yeah, the amd Robert guy said they used 2.78 at both events. I see 2.77 and the 100 sample setting clearly at the press event.


----------



## Newwt

Quote:


> Originally Posted by *oxidized*
> 
> It's not about the resolution they were PLAYING at, more like the one they were STREAMING at. If it is as you say then well, it makes even less sense, and AMD using certain streaming settings (which are next to never used) just to show the advantage of an 8c/16t over a 4c/8t isn't really something they should be doing.


Um what?!? If their CPU can game and stream at a higher quality why wouldn't they showcase it?


----------



## Marios145

Quote:


> Originally Posted by *Newwt*
> 
> Um what?!? If their CPU can game and stream at a higher quality why wouldn't they showcase it?


"Because everyone else can't."


----------



## Mellifleur

Quote:


> Originally Posted by *Marios145*
> 
> "Because everyone else can't."


hey guys, let's not get bogged down in red vs green, or red vs blue, or idk anymore, but as my tests have shown, if amd is telling the truth with blender we are talking about an almost 500% IPC increase over vishera, like god #$#$#% damn amd nice #$#%#$ work.


----------



## Blameless

Quote:


> Originally Posted by *kd5151*
> 
> Yeah the amd Robert guy said they used 2.78 at both events. I see 2.77 and 100 sample setting clearly at the press event.


They used 2.77a in at least one of the tests, probably both.

2.77a is a little bit faster than 2.78a.
Quote:


> Originally Posted by *Mellifleur*
> 
> hey guys, let's not get bogged down in red vs green, or red vs blue, or idk anymore, but as my tests have shown, if amd is telling the truth with blender we are talking about an almost 500% IPC increase over vishera, like god #$#$#% damn amd nice #$#%#$ work.


It is a huge increase over Vishera. However, do keep in mind that these IPC gains are unlikely to translate universally.

Blender in particular is one of the most SMT friendly applications I have ever encountered and also has near perfect core and clock scaling.


----------



## oxidized

Quote:


> Originally Posted by *Newwt*
> 
> Um what?!? If their CPU can game and stream at a higher quality why wouldn't they showcase it?


My english is not good, i know, but i never imagined it was this bad. What i wrote is different from that; my point is another one. It's not that they shouldn't show the technology or the performance, whatever you want to call it; they just shouldn't use particular settings (which nobody uses, and which it's not likely anyone will use in the future) in order to make a chip that would run fine at normal settings (by normal i mean settings that 100% of streamers use) run badly because of those unusual settings. That's what i've been repeating for days, and i'm pretty bored of it. Now if you can't see, or don't want to see, what i'm saying, that's another matter, but the Dota 2 streaming on the 6700k doesn't look that bad in normal conditions.
I don't want to talk about it anymore, i just want AMD to present its stuff, and then we wait for proper tests, benchmarks and whatnot


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> It is a huge increase over Vishera. However, do keep in mind that these IPC gains are unlikely to translate universally.
> 
> Blender in particular is one of the most SMT friendly applications I have ever encountered and also has near perfect core and clock scaling.


yes, this is the case: the disparity between CMT 4M/8C and SMT 8C/16T is as much as 4x in FP or 2x in INT before factoring in the IPC increase.
so at the very least, we'd need to disable SMT to get a rough estimate of how much IPC increase Zen has over Piledriver.

on a side note, it does make one wonder how good AMD's SMT is; a comparison between SMT-on and SMT-off would be a good topic.
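to illustrate the point, a quick back-of-the-envelope with the numbers from this thread. the 30% SMT uplift here is a purely assumed figure (in the ballpark often quoted for HT in renderers), not anything measured on Zen:

```python
# 150-sample Blender run, both chips at ~3.4 GHz, 8 cores each.
fx_time = 150.52     # Mellifleur's FX-9590 downclocked to 3.4 GHz
ryzen_time = 35.30   # AMD's New Horizon demo

total_speedup = fx_time / ryzen_time  # same clock, so clock drops out

# ASSUMPTION: +30% throughput from SMT's extra 8 threads.
assumed_smt_uplift = 1.30
per_core_gain = total_speedup / assumed_smt_uplift

print(f"total: {total_speedup:.2f}x, excluding assumed SMT: {per_core_gain:.2f}x")
```

so even after handing SMT a generous share, the remaining per-core gain over Piledriver would be huge; an actual SMT-off run is the only way to pin the split down.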


----------



## cssorkinman

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *kd5151*
> 
> Yeah the amd Robert guy said they used 2.78 at both events. I see 2.77 and 100 sample setting clearly at the press event.
> 
> 
> 
> They used 2.77a in at least one of the tests, probably both.
> 
> 2.77a is a little bit faster than 2.78a.
> Quote:
> 
> 
> 
> Originally Posted by *Mellifleur*
> 
> hey guys, let's not get bogged down in red vs green, or red vs blue, or idk anymore, but as my tests have shown, if amd is telling the truth with blender we are talking about an almost 500% IPC increase over vishera, like god #$#$#% damn amd nice #$#%#$ work.
> 
> Click to expand...
> 
> It is a huge increase over Vishera. However, do keep in mind that these IPC gains are unlikely to translate universally.
> 
> Blender in particular is one of the most SMT friendly applications I have ever encountered and also has near perfect core and clock scaling.
Click to expand...

It's relatively easy for me to believe that Zen's use of logical cores is more efficient than Intel's Hyper-Threading, versus Zen having leapfrogged Skylake in single-thread performance.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> yes this is the case, the disparity between CMT 4M/8C and SMT 8C/16T is as much as 4x FP or 2x INT without factoring in the IPC increase.
> 
> so at the very least, we'd need to disable SMT to get a rough estimate of how much IPC increase Zen has over Piledriver.


Well, SMT itself is an important source of IPC in anything with enough threads to utilize it, but yeah, if we want to compare the improvement in more lightly threaded scenarios, it would need to be disabled.


----------



## epic1337

Quote:


> Originally Posted by *cssorkinman*
> 
> It's relatively easy for me to believe that Zen's use of logical cores is more efficient than Intel's Hyper-Threading, versus Zen having leapfrogged Skylake in single-thread performance.


i have the same inclination; it's easier to accept it that way anyway, although it's quite impressive for AMD to do SMT this well on their first try.


----------



## kd5151

Quote:


> Originally Posted by *Blameless*
> 
> The used 2.77a in at least one of the tests, probably both.


Look at this video. I can make out the time, the date, the sample setting and which version they used at the press event. 2.77 was used at one of these events, no doubt. Someone also posted a still image from one of these videos; it's even clearer in the image. I see two 77s. Hard to make out, but I see it.

I'm more concerned about getting the sample count right!














----------



## Particle

Quote:


> Originally Posted by *epic1337*
> 
> i have the same inclination; it's easier to accept it that way anyway, although it's quite impressive for AMD to do SMT this well on their first try.


Just be careful with that kind of logic. It's the same thought process that brought us years of people thinking AMD's 8 core processors were actually 4 core processors because that would "make more sense" in terms of the performance characteristics between them and Intel's quad cores.


----------



## kd5151

tested again on my stock 3.7GHz i3-4170 htpc hooked up to a samsung 1080p tv.

sample 100 - 1:36
sample 150 - 2:22
sample 200 - 3:09


----------



## pokerapar88

I'm really sorry to see so many stupid comments that are biased and quite possibly want AMD to fail, without realizing that would affect them directly, as it would allow Intel to keep raising prices and developing something every 6 to 12 months that improves IPC by 1-5% and forces you to buy a new motherboard for it.

Myself, I can't wait for Ryzen to launch and hope it's a success for the sake of all of us and the whole industry.

If this processor is anywhere near the X99 i7's (even if it doesn't beat a 6900K) and retails between 350 and 500 USD... it'll be a steal. It will force Intel to lower prices and develop a much-needed new architecture (an actual new architecture, not just shrunk dies with a new name). Heck, if you hate AMD you'll still be able to buy i7's, but at a lower price. What's not to like? I get the scepticism, but c'mon!

Over and out


----------



## Jpmboy

Quote:


> Originally Posted by *pokerapar88*
> 
> I'm really sorry to see so many stupid comments that are biased and quite possibly want AMD to fail, without realizing that would affect them directly, as it would allow Intel to keep raising prices and developing something every 6 to 12 months that improves IPC by 1-5% and forces you to buy a new motherboard for it.
> 
> Myself, I can't wait for Ryzen to launch and hope it's a success for the sake of all of us and the whole industry.
> 
> If this processor is anywhere near the X99 i7's (even if it doesn't beat a 6900K) and retails between 350 and 500 USD... it'll be a steal. It will force Intel to lower prices and develop a much-needed new architecture (an actual new architecture, not just shrunk dies with a new name). Heck, if you hate AMD you'll still be able to buy i7's, but at a lower price. What's not to like? I get the scepticism, but c'mon!
> 
> Over and out


I hope so too. I really want to pick up an AMD cpu and mb that overclocks well.
The "inconvenient truth" here is that AMD is matching/barely exceeding Intel's year-old part, with Intel bringing out X299/socket 2066 shortly after. The good news is that AMD has made major progress in cutting the gap down to only a year.


----------



## Blameless

Quote:


> Originally Posted by *cssorkinman*
> 
> It's relatively easy for me to believe that Zen's use of logical cores is more efficient than Intel's Hyper-Threading, versus Zen having leapfrogged Skylake in single-thread performance.


I'm inclined to agree. The architectural details from the Hot Chips presentation slides (which are on AMD's site), the generally larger L1 and L2 caches, and to a more uncertain extent AMD's new prefetchers (which may or may not be superior to Intel's current ones), do suggest that AMD's SMT will be strong. There are a few contraindications to this, namely the inferior L1/L2 cache bandwidth vs. Haswell or newer Intel architectures and one fewer execution port (if I recall correctly) than Skylake, but I still think the balance is in favor of potent SMT for AMD.
Quote:


> Originally Posted by *kd5151*
> 
> I'm more concerned about getting the sample right!


Well the sample size is responsible for a vastly larger anomaly so that makes sense, but if you want the most accurate comparison possible, it's not hard to use the version they used.
Quote:


> Originally Posted by *Particle*
> 
> Just be careful with that kind of logic. It's the same thought process that brought us years of people thinking AMD's 8 core processors were actually 4 core processors because that would "make more sense" in terms of the performance characteristics between them and Intel's quad cores.


Thinking that AMD might have modestly better SMT scaling than Intel isn't the same thing.

No one, at least no one with any sense, is claiming any fundamental difference between SMT on Ryzen and SMT on Intel's parts, just that Zen is a comparably wide architecture with bigger caches (and thus less cache contention) and possibly prefetching that puts them to good use, which may require SMT to take full advantage of everything and provide a larger advantage when it does.

Given the tests they've decided to show and the architectural details they have revealed, this is not unreasonable speculation.


----------



## budgetgamer120

Quote:


> Originally Posted by *oxidized*
> 
> What do you mean by "crank" unless they were streaming at QHD or even UHD there's no way it does that, and i proved it with my tests on my pc with a old 2600k


Streaming at 1080p on the 6700K causes issues. Weren't you and someone discussing this yesterday?


----------



## SoloCamo

Quote:


> Originally Posted by *Boomer1990*
> 
> Retested my potato with render at 150.
> 
> Win 8.1
> 2133 ddr3 ram
> Amd A10-5800k
> 
> Stock 3.8 I got 4:17.70
> Overclocked to 4.3 and I got 3:56.29
> 
> If this comes in at $499 or under I think it will be my next cpu.


Your potato is quicker than the potato in my laptop... though not by as much as I thought.

Just some comparison info I did out of curiosity.

Keep in mind, the A8-6410 is limited to single-channel memory and I'm currently running the max speed it supports, 1866MHz; you are running dual-channel 2133MHz. Puma is actually fairly impressive considering it's a 15W TDP chip found in $200-$300 laptops.

At 4.3GHz, the 6410's clock is 51% lower than the 5800K's:

2.1GHz (stock) 6410 = 390 seconds
5800K at 4.3GHz = 236 seconds

difference in render time: 39.5%

___

At the 5800K's stock 3.8GHz, the 6410's clock is 45% lower:

2.1GHz (stock) 6410 = 390 seconds
5800K stock 3.8GHz = 257 seconds

difference in render time: 34%

So even with the single-channel limitation and slower memory overall, my rough math says Puma is roughly 10-12% faster per clock than Trinity/Richland. Really wish AMD could clock these little chips higher.
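Out of curiosity, the same comparison can also be done with ratios instead of subtracting percentages. A quick sketch, using only the render times and clocks quoted above (so just rough numbers, and single-channel memory muddies any exact figure):

```python
# A8-6410 (Puma, 2.1 GHz) vs A10-5800K (Piledriver/Richland, 4.3 GHz)
# Blender render times in seconds, from the posts above
t_6410, f_6410 = 390.0, 2.1
t_5800k, f_5800k = 236.0, 4.3

perf_ratio = t_6410 / t_5800k         # 5800K finishes ~1.65x faster
clock_ratio = f_5800k / f_6410        # but runs a ~2.05x higher clock
per_clock = perf_ratio / clock_ratio  # ~0.81: per clock, the 5800K does ~81% of Puma's work

print(f"Puma per-clock advantage: {1 / per_clock - 1:.0%}")  # roughly 24%
```

Done as a ratio, the per-clock gap comes out somewhat larger than the 10-12% rough figure, but the direction of the conclusion is the same.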


----------



## pas008

I'm rooting for amd but with previous previews and showcases I keep thinking

Fool me once, shame on you. Fool me twice, shame on me

if they wanted to actually give people a real taste, they could have brought in a bunch of online personalities (I'm sure many would have gone for free) to help with the demo and show a few other benchmarks

anyways, I'll wait for the real numbers from third parties


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> I'm inclined to agree. The architectural details from the Hot Chips presentation slides (which are on AMD's site), the generally larger L1 and L2 caches, and to a more uncertain extent AMD's new prefetchers (which may or may not be superior to Intel's current ones), do suggest that AMD's SMT will be strong. There are a few contraindications to this, namely the inferior L1/L2 cache bandwidth vs. Haswell or newer Intel architectures and one fewer execution port (if I recall correctly) than Skylake, but I still think the balance is in favor of potent SMT for AMD.
> Well the sample size is responsible for a vastly larger anomaly so that makes sense, but if you want the most accurate comparison possible, it's not hard to use the version they used.
> Thinking that AMD might have modestly better SMT scaling than Intel isn't the same thing.
> 
> No one, at least no one with any sense, is claiming any fundamental difference between SMT on Ryzen and SMT on Intel's parts, just that Zen is a comparably wide architecture with bigger caches (and thus less cache contention) and possibly prefetching that puts them to good use, which may require SMT to take full advantage of everything and provide a larger advantage when it does.
> 
> Given the tests they've decided to show and the architectural details they have revealed, this is not unreasonable speculation.


Where can I find the settings to change in blender so I can run this bench?


----------



## SoloCamo

Quote:


> Originally Posted by *pas008*
> 
> I'm rooting for amd but with previous previews and showcases I keep thinking
> 
> Fool me once, shame on you. Fool me twice, shame on me
> 
> if they wanted to actually give people real taste they could have brought in many online personalities(i'm sure many would have gone to it for free) to help the demo and show few other benchmarks
> 
> anyways I'll wait for the real numbers from 3rd party


This was a preview....


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> Where can I find the settings to change in blender so I can run this bench?


When you open the render file there will be a panel along the right hand side of the window. Under the "sampling" section there will be "samples" and "render". You want to change the default render samples from 200 to 150, then press F12 to run the benchmark.
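For anyone who'd rather script it than click through the UI, the same change can be made from Blender's built-in Python console. This is just a sketch assuming the bundled `bpy` module and the Cycles engine the demo file uses; it won't run in a plain Python interpreter:

```python
import bpy  # only available inside Blender itself

scene = bpy.context.scene
scene.cycles.samples = 150  # match the demo settings (the file defaults to 200)
bpy.ops.render.render()     # same as pressing F12
```

Running Blender headless with something like `blender -b ryzen_demo.blend -P script.py` should also work for timing runs without the UI.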


----------



## oxidized

Quote:


> Originally Posted by *budgetgamer120*
> 
> Streaming at 1080p on the 6700k causes issue. Weren't you and someone discussing this yesterday?


It was me, and it causes no issues with regular settings, but again, I don't want to talk about it anymore. My tests are there, my proof is there, the rest is just talk, and AMD is showing something untrue, or at the very least not specifying whatever settings they used.


----------



## budgetgamer120

Quote:


> Originally Posted by *oxidized*
> 
> It was me, and it causes no issues with regular settings, but again, I don't want to talk about it anymore. My tests are there, my proof is there, the rest is just talk, and AMD is showing something untrue, or at the very least not specifying whatever settings they used.


Your proof of what?

The fact is the 6700K cannot handle settings that Ryzen or a 6900K can. Did your proof dispute that?


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> When you open the render file there will be a panel along the right hand side of the window. Under the "sampling" section there will be "samples" and "render". You want to change the default render samples from 200 to 150, then press F12 to run the benchmark.


Thanks, I got 1:13 with my 2.4GHz Xeon.


----------



## pas008

Quote:


> Originally Posted by *SoloCamo*
> 
> This was a preview....


and your point?


----------



## LongtimeLurker

Quote:


> Originally Posted by *Travieso*
> 
> as i remember they never benchmarked their own FX-8150 against 980X.
> 
> they only did against 2600K and 2500K.


Oh yes they did


----------



## SoloCamo

Quote:


> Originally Posted by *pas008*
> 
> and your point?


That is the point. People are expecting way too much out of a preview. If they showed a bunch more benchmarks, it might as well be the full reveal at that point. There is still time before release; there is no point in revealing basically everything about performance and then sitting on it for a while.

If there are weaknesses compared to the competition, do you really think they want to showcase them and have the masses call it Bulldozer 2.0? Best to wait.

AMD has always been called out for overhyping, yet this time they didn't do much of it, and now people say they should have done more. I'm confident we will receive a decent CPU when the time comes, but I am certainly not going to hype myself up over it.


----------



## oxidized

Quote:


> Originally Posted by *budgetgamer120*
> 
> Your proof of what?
> 
> The fact is 6700k cannot handle setting Ryzen or a 6900k can. Did your proof dispute that?


God
I showed that on my system (2600K + GTX 580), which is much older and slower than the one with the 6700K, the same game (Dota 2, at max settings) doesn't run anywhere near as badly as they showed at 1080p (currently the maximum resolution used by virtually all streamers, partly because many streaming sites don't allow more than that). My stream was skipping frames, but not even remotely as badly as the system with the 6700K, which is of course far better than my 2600K, and whatever video card they were using was surely much better than my GTX 580. So the poor performance of that 6700K system isn't trustworthy and can't be used as a comparison with anything.

Now please stop insisting on this.


----------



## formula m

Quote:


> Originally Posted by *Forceman*
> 
> Never mind, believe what you want to believe.


It seems you are^...

Perhaps you should just understand that nothing you have said is compelling, nor do you offer a suggestion of your own; you just say everyone is always wrong. What justification is there for a $900 AMD Summit Ridge, please?

Educate us and show us what a smart, knowledgeable, in-depth thinker you are...


----------



## LongtimeLurker

Quote:


> Originally Posted by *tpi2007*
> 
> They couldn't of course (unless they wanted to get in legal trouble); it's not the benchmarks themselves but rather the picture that they paint may not be indicative of general performance. As you can see from the video the FX-8150 took exactly the same time in Handbrake as the 2600K, but we all know which CPU was overall better.
> 
> This time around the CPU has a lower TDP than Intel's offerings and things are overall looking more promising, but it's just to say that people should not go into overdrive with excitement before we get the full picture.


This. Anandtech, Tom's Hardware, etc, reviews can't come soon enough...


----------



## budgetgamer120

Quote:


> Originally Posted by *oxidized*
> 
> God
> I showed that on my system (2600K + GTX 580), which is much older and slower than the one with the 6700K, the same game (Dota 2, at max settings) doesn't run anywhere near as badly as they showed at 1080p (currently the maximum resolution used by virtually all streamers, partly because many streaming sites don't allow more than that). My stream was skipping frames, but not even remotely as badly as the system with the 6700K, which is of course far better than my 2600K, and whatever video card they were using was surely much better than my GTX 580. So the poor performance of that 6700K system isn't trustworthy and can't be used as a comparison with anything.
> 
> Now please stop insisting on this.


I'll take their word over yours. You have been trying to dispute all their tests without any facts, beyond citing "what other streamers use".

That is not the point. The point is that Ryzen and the 6900K can handle more than a 6700K. That's what they proved.


----------



## Newwt

Quote:


> Originally Posted by *formula m*
> 
> It seems you are^...
> 
> Perhaps you should just understand that nothing you have said is compelling, nor do you offer a suggestion of your own; you just say everyone is always wrong. What justification is there for a $900 AMD Summit Ridge, please?
> 
> Educate us and show us what a smart, knowledgeable, in-depth thinker you are...


AMD would be crazy to charge that much for a CPU. I don't see the SR7 over $450, or $500 max, especially given how, over the last few years, they have stated over and over that they're bringing affordable hardware to everyone.


----------



## SoloCamo

Quote:


> Originally Posted by *Newwt*
> 
> AMD would be crazy to charge that much for a CPU. I don't see the SR7 over $450, or $500 max, especially given how, over the last few years, they have stated over and over that they're bringing affordable hardware to everyone.


http://arstechnica.com/gadgets/2015/05/amd-admits-it-cant-be-the-cheaper-solution-will-refocus-on-performance/

We will see what they deem as "not the cheaper solution"


----------



## oxidized

Quote:


> Originally Posted by *budgetgamer120*
> 
> I'll take their word over yours. You have been trying to dispute all their tests without any facts and saying "what other streamers use".
> 
> That is not the point. The point is Ryzen and 6900K can handle more than a 6700k. That's what they proved.


Which tests? I brought facts, they didn't, so anyone normal and unbiased would take my word over theirs. Of course those CPUs are faster than a 6700K, but not to the degree that test showed. Now believe what you want, I don't care, I won't talk about this anymore.


----------



## Pro3ootector

My score update:

Xeon E5-2670 8c/16t, 1600mhz DDR, SSD

200 - 1:12

150 - 55.10

100 - 37.36


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> It was me, and it causes no issues with regular settings, but again, I don't want to talk about it anymore. My tests are there, my proof is there, the rest is just talk, and AMD is showing something untrue, or at the very least not specifying whatever settings they used.


What are "regular" settings...?

Secondly, as everyone here has told you, your tests don't prove anything. Your tests (streams) only prove that your computer is not capable of HD encoding and playing at the same time, let alone playing at 4K and encoding to 1440p like the rest of us want to do.

What you recorded and showed (all three) is NOT acceptable to a good many gamers. I understand it's "good enough" for you, because you don't care about quality or top-notch performance. If you are happy with a 1080p monitor and 720p video, then OK. But the rest of the world is building new rigs and left 1080p behind a long time ago; most are moving on to 4K. Those are facts... nothing in your basement will change them.

You keep trying to convince everyone here that "because I did this stream at home, AMD lied to me".

Your remarks are laughable...


----------



## hawker-gb

Quote:


> Originally Posted by *oxidized*
> 
> Which tests? I brought facts, they didn't, so anyone normal and unbiased would take my word over theirs. Of course those CPUs are faster than a 6700K, but not to the degree that test showed. Now believe what you want, I don't care, *I won't talk about this anymore*.


Lets hope.


----------



## Newwt

Quote:


> Originally Posted by *oxidized*
> 
> Which tests? I brought facts, they didn't, so anyone normal and unbiased would take my word over theirs. Of course those CPUs are faster than a 6700K, but not to the degree that test showed. Now believe what you want, I don't care, I won't talk about this anymore.


Dude, I'm not trying to be mean, but you're delusional. You're totally discrediting their test because you don't know what settings they used and/or those settings don't match what you and "other streamers" use.

All that matters is that they had 3 computers running the same setup and one of the 3 sucked. If all 3 had been streaming at 720p it would have looked exactly the same, so what's the point?


----------



## Dimaggio1103

Quote:


> Originally Posted by *Raghar*
> 
> I read the words "prediction by neural network". Well, as someone who has done AI (in my spare time) for 20 years, I know that neural networks suck for this stuff. Then I saw the horrible part about automatic overclocking going as high as the heatsink allows...
> 
> Is it time to rename this site to autooverclock.net?


You seem to have no idea about neural networks. They are actually really good for prediction; with a little training time they achieve high efficiency and prediction rates. The way they implemented it, I'm sure it's a watered-down net focused primarily on predicting CPU behavior.


----------



## formula m

Quote:


> Originally Posted by *oxidized*
> 
> God
> I showed that on my system (2600K + GTX 580), which is much older and slower than the one with the 6700K, the same game (Dota 2, at max settings) doesn't run anywhere near as badly as they showed at 1080p (currently the maximum resolution used by virtually all streamers, partly because many streaming sites don't allow more than that). My stream was skipping frames, but not even remotely as badly as the system with the 6700K, which is of course far better than my 2600K, and whatever video card they were using was surely much better than my GTX 580. So the poor performance of that 6700K system isn't trustworthy and can't be used as a comparison with anything.
> 
> Now please stop insisting on this.


*I am lmao right now....*

You don't even realize that YOUR GAMEPLAY in Dota is hiding in the woods and rarely engaging many, if any, other players. That minimalistic style of gameplay won't stress any system, and clicking on the ground and moving around isn't what stresses the CPU in Dota... it's when there are massive team fights going on and particles all over the place, and the CPU is crunching.

Your TEST showed nothing, because you are not a PROFESSIONAL DOTA PLAYER.... lol


----------



## cssorkinman

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cssorkinman*
> 
> It's relatively easy for me to believe that Zen's use of logical cores is more efficient than Intel's Hyperthreading vs Zen having leapfrogged SKylake in single thread performance.
> 
> 
> 
> I'm inclined to agree. The architectural details from the Hot Chips presentation slides (which are on AMD's site), the generally larger L1 and L2 caches, and to a more uncertain extent AMD's new prefetchers (which may or may not be superior to Intel's current ones), do suggest that AMD's SMT will be strong. There are a few contraindications to this, namely the inferior L1/L2 cache bandwidth vs. Haswell or newer Intel architectures and one fewer execution port (if I recall correctly) than Skylake, but I still think the balance is in favor of potent SMT for AMD.
> 
> .
Click to expand...

I'm left in the odd position of hoping that the 8C/16T Zen is too expensive for me ( meaning it doesn't have a weakness in an area yet to be demonstrated ).

FWIW the rumblings about overclockability seem to be optimistic. I am, however, a bit fearful of anything "automatic" or anything being tied to temps, as Lisa alluded to - hope they don't take away any overclocking options.


----------



## mypcisugly

This web site never fails to deliver


----------



## keikei

Quote:


> Originally Posted by *cssorkinman*
> 
> I'm left in the odd position of hoping that the 8C/16T Zen is too expensive for me ( meaning it doesn't have a weakness in an area yet to be demonstrated ).
> 
> FWIW the rumblings about overclockability seem to be optimistic. I am, however, a bit fearful of anything "automatic" or anything being tied to temps, as Lisa alluded to - hope they don't take away any overclocking options.


Ryzen looks to be on par with Intel's current offerings given those core/thread specs. Does that Intel chip have weaknesses (minus price, of course)?


----------



## Blameless

Quote:


> Originally Posted by *cssorkinman*
> 
> FWIW the rumblings about overclockability seem to be optimistic. I am, however, a bit fearful of anything "automatic" or anything being tied to temps, as Lisa alluded to - hope they don't take away any overclocking options.


I don't think they'd do that. Can't say for certain, obviously, but any board intended for OCing is likely to have options to disable all of that stuff.

On top of the normal Intel SpeedStep, Turbo Boost, and C-state stuff, BW-E has adaptive voltages, negative AVX multiplier offsets, a dynamic uncore clock, and a special boost multiplier for the pre-determined strongest core... but I'm still able to turn all of that crap off and run whatever clocks I feel like on all cores, all the time, if I like.

Not having similar levels of control on the higher-end AM4 platforms would be silly.
Quote:


> Originally Posted by *keikei*
> 
> Does that intel chipset have weaknesses (minus price ofc)?


Any such weaknesses will be relative to Ryzen and will only really be known when we can compare the parts in detail. We are using the 6900K as a baseline because it is clearly the closest equivalent part in overall design (core/thread count and stock clock speeds).

We can speculate based on what architectural differences we know of, but not much more.


----------



## delboy67

I think auto OC and the new turbo are a way of winning benchmarks in reviews; good to see they've learned. I just hope it can be turned off and a manual OC applied. I don't need 8C/16T tbh, I would like to have it of course, but I can't wait to see how the 4-core performs vs my Ivy i7. I think the newer platform will push me to upgrade. Exciting times ahead


----------



## Blameless

Quote:


> Originally Posted by *delboy67*
> 
> I think auto OC and the new turbo are a way of winning benchmarks in reviews; good to see they've learned


It is an important marketing point, and as long as it's not producing unstable combinations, it will be a big help to those who don't care for manual OCing.


----------



## Roaches

Quote:


> Originally Posted by *Roaches*
> 
> Re-ran the render at 150 samples. Got 28.76 seconds with dual Xeon E5-2670s.
> 
> Waiting for the Haswell/Broadwell-E dudes' results. @Jpmboy and others?


Quote:


> Originally Posted by *Pro3ootector*
> 
> My score update:
> 
> Xeon E5-2670 8c/16t, 1600mhz DDR, SSD
> 
> 200 - 1:12
> 
> 150 - 55.10
> 
> 100 - 37.36


Hmm, interesting scaling right there compared to my dual-CPU setup's result. Same 1600MHz memory config here.

Thanks.

Gonna try comparing with my 3570K setup when I get home from work. I also still have my 4930K, but it's not assembled atm since my transition to a Xeon setup.


----------



## Raghar

Quote:


> Originally Posted by *Dimaggio1103*
> 
> You seem to have no idea about neural networks. They are actually really good for prediction; with a little training time they achieve high efficiency and prediction rates. The way they implemented it, I'm sure it's a watered-down net focused primarily on predicting CPU behavior.


Well, how much can a NN be simplified before it's indistinguishable from a non-NN? I remember writing adaptive algorithms that wrote and altered predictors on the fly, which were not NNs, or at least I wouldn't call them NNs. (Oh well. For 10 years the majority of AA games were called RPGs. Then the term RPG lost its sales value, and they were renamed "action adventures".)

BTW, if they made a real NN, how did they protect it against degeneration?
Quote:


> Originally Posted by *Blameless*
> 
> It is an important marketing point, and as long as it's not producing unstable combinations, it will be a big help to those who don't care for manual OCing.


That's actually kind of scary. As technology progresses to smaller nodes, there is less and less headroom for OC. I think those who don't care about OC are happier with reliable CPUs where it doesn't happen behind their backs.

It can also wreak havoc with adaptive algorithms. Until now they could easily find max speed with a simple test. But when, 20 minutes later, the max speed is much lower, they either have to retest and adapt again, creating overhead, or cause microstuttering. (Well, not many algorithms care about the real-time clock, so it would probably be fine, as long as a variable max FPS is fine.)

I hate when companies abuse review methodologies. I disliked NVIDIA Boost for creating artificial speed boosts at low workloads. Now we might have it with CPUs as well.


----------



## Blameless

Quote:


> Originally Posted by *Roaches*
> 
> Hmm, interesting scaling right there compared to my dual cpu setup result. Same memory 1600mhz config here.
> 
> Thanks.
> 
> Gonna try compare with my 3570K setup when I get home from work, I also still have my 4930K but its not assembled atm since my transition to a Xeon setup..


You could run half the number of threads and force affinity to one of your physical processors to analyze scaling on your Xeon setup.
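On Linux, the half-the-threads-on-one-CPU idea can be approximated from Python with just the standard library (Windows would need `psutil` or `start /affinity` instead). Note this is a sketch: which logical CPU numbers map to which physical socket varies by system, so the "first half" below is only a stand-in for one socket's cores:

```python
import os

# current set of logical CPUs this process is allowed to run on
cpus = sorted(os.sched_getaffinity(0))

# pin ourselves to the first half (ideally, one socket's cores on a dual-CPU box)
half = set(cpus[: max(1, len(cpus) // 2)])
os.sched_setaffinity(0, half)

print(f"now running on CPUs: {sorted(os.sched_getaffinity(0))}")
```

Launching Blender from a shell that has already been pinned this way would constrain the render threads the same way, then the thread count can be halved in Blender's performance settings to match.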
Quote:


> Originally Posted by *Raghar*
> 
> It can also wreak havoc with adaptive algorithms. Until now they could easily find max speed with a simple test. But when, 20 minutes later, the max speed is much lower, they either have to retest and adapt again, creating overhead, or cause microstuttering. (Well, not many algorithms care about the real-time clock, so it would probably be fine, as long as a variable max FPS is fine.)


One of the many reasons I use fixed clocks whenever practical.
Quote:


> Originally Posted by *Raghar*
> 
> I hate when companies abuse review methodologies. I disliked NVIDIA Boost for creating artificial speed boosts at low workloads. Now we might have it with CPUs as well.


It's undesirable, but also inevitable.

I like my hardware to spit out consistent performance in any app, or combination of apps, I could conceivably run on it, but the masses are nowhere near as picky. A company that doesn't cut corners when designing VRMs and then keep them safe with a TDP limiter is paying more for its boards than its competitors are. A company that doesn't allow its parts to exceed nominal clocks when low loads allow for it looks worse in most comparisons.


----------



## budgetgamer120

Quote:


> Originally Posted by *Raghar*
> 
> Well, how much can a NN be simplified before it's indistinguishable from a non-NN? I remember writing adaptive algorithms that wrote and altered predictors on the fly, which were not NNs, or at least I wouldn't call them NNs. (Oh well. For 10 years the majority of AA games were called RPGs. Then the term RPG lost its sales value, and they were renamed "action adventures".)
> 
> BTW, if they made a real NN, how did they protect it against degeneration?
> That's actually kind of scary. As technology progresses to smaller nodes, there is less and less headroom for OC. I think those who don't care about OC are happier with reliable CPUs where it doesn't happen behind their backs.
> 
> It can also wreak havoc with adaptive algorithms. Until now they could easily find max speed with a simple test. But when, 20 minutes later, the max speed is much lower, they either have to retest and adapt again, creating overhead, or cause microstuttering. (Well, not many algorithms care about the real-time clock, so it would probably be fine, as long as a variable max FPS is fine.)
> 
> I hate when companies abuse review methodologies. I disliked NVIDIA Boost for creating artificial speed boosts at low workloads. Now we might have it with CPUs as well.


How different is it going to be compared to Windows power-saving features, as far as dynamic clocking goes? So far it hasn't wreaked havoc on any program; my CPU goes from 1.34GHz to 2.44GHz at any given time.

The primary use of my PC is content creation and programming.

So I'm not sure why this clocking tech might cause problems.


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> So I'm not sure why this clocking tech might cause problems.


For most people and most tasks it won't.

However, if you have a task of a time critical nature or need to ensure the lowest latency possible, dynamic clocks can easily interfere with this.


----------



## Raghar

If you're talking about EIST, it scales from 800MHz up to max. When a program needs full power, it delivers *max*. But the AMD CPU defines max as dependent on temperature. Programs typically benchmark during initialization and then use the computed values; when max drops a few minutes later, these programs would still be using the original limit.

As for EIST, I've seen it cause VERY minor problems in certain situations. But anyone with some skill can go into the BIOS, disable EIST, do their stuff, then re-enable EIST once they have correct data. EIST has a VERY minor impact when the CPU speeds up, and programs are affected only for the first few microseconds, which isn't a real problem for real-world programs. But it can be a problem for algorithm testing and when someone needs stable cycles for an external device.
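The calibrate-once pitfall Raghar describes can be shown with a toy model (all numbers below are made up for illustration): a program measures its speed at startup while clocks are boosted, sizes its per-frame work from that measurement, then overruns its budget once sustained clocks drop:

```python
# hypothetical throughput figures: work units per second at boot vs. later
frame_budget = 1 / 60     # 16.7 ms per-frame target
boot_rate = 1000.0        # measured during initialization (boost clocks active)
sustained_rate = 800.0    # what the CPU actually delivers once temps climb

# the planner sizes each frame's workload off the boot-time measurement...
units_per_frame = boot_rate * frame_budget

# ...so at the sustained rate, every frame runs 25% long: visible stutter
actual_frame_time = units_per_frame / sustained_rate
overrun = actual_frame_time - frame_budget

print(f"frame overrun: {overrun * 1000:.2f} ms per frame")
```

A program that re-calibrates periodically avoids this, at the cost of the retest overhead Raghar mentions.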


----------



## Ultracarpet

I feel like AMD's SMT has better scaling than Intel's... and the IPC will still be 10-20% behind in most things that aren't heavily multithreaded. Still excited though.


----------



## lolerk52

Quote:


> Originally Posted by *Ultracarpet*
> 
> I feel like AMD's SMT has better scaling than Intel's... and the IPC will still be 10-20% behind in most things that aren't heavily multithreaded. Still excited though.


Would really be quite the achievement. Intel took 10 years to get to where they are now with Hyperthreading. Meanwhile this is AMD's first go at SMT.


----------



## epic1337

Quote:


> Originally Posted by *lolerk52*
> 
> Would really be quite the achievement. Intel took 10 years to get to where they are now with Hyperthreading. Meanwhile this is AMD's first go at SMT.


They have experience with CMT; remember that the BD uarch shares the same FP pipeline between two threads, which may have helped them make SMT more efficient.
On the other hand, Intel didn't have much in the way of other uarchs to draw on, so they mostly concentrated on what they already knew about SMT.


----------



## budgetgamer120

Quote:


> Originally Posted by *lolerk52*
> 
> Would really be quite the achievement. Intel took 10 years to get to where they are now with Hyperthreading. Meanwhile this is AMD's first go at SMT.


That is actually pretty funny lol


----------



## LancerVI

A little late to the re-run Blender party, but I figured I'd throw my re-test up.

My Talino 5820K @ stock w/ 64GB DDR4 @ 2400, Windows 10, Blender @ 150 samples:

48.72


----------



## SystemTech

I hope they plan the launch well.
If it is expected on Jan 17th, then at CES they should stay quiet: maybe bring out some of the same benchmarks and Vega, but keep hush on anything new Ryzen-related. Just reiterate what we already know.

Meanwhile, motherboards should already be in production for them, with mass production ramping up soon.

On release day, have full reviews go live at multiple sites, and have CPUs and motherboards on shelves and in stock at major retailers.

And then price it aggressively ($500-$600 for an SR7).
If they can do that without letting the cat out of the bag, Intel will poop themselves in a matter of minutes, provided the performance we have seen holds true (within 5% of a 6900K).

What we don't want is a drawn-out release where they put out some more info, then more info, then reviews, and only then actual CPUs and basic mobos, with better mobos only a few months after release.

I see very little chance of the SR7 being over $600.

Ryzen+ is coming late 2017, which I'm guessing will occupy the $600-$999 market with 2 different models.


----------



## Jpmboy

Quote:


> Originally Posted by *Raghar*
> 
> If you're talking about EIST, it scales from 800MHz up to max. When a program needs full power, it delivers *max*. But the AMD CPU defines max as dependent on temperature. Programs typically benchmark during initialization and then use the computed values; when max drops a few minutes later, these programs would still be using the original limit.
> 
> As for EIST, I've seen it cause VERY minor problems in certain situations. But anyone with some skill can go into the BIOS, disable EIST, do their stuff, then re-enable EIST once they have correct data. EIST has a VERY minor impact when the CPU speeds up, and programs are affected only for the first few microseconds, which isn't a real problem for real-world programs. But it can be a problem for algorithm testing and when someone needs stable cycles for an external device.


Actually, there's no need to disable SpeedStep in the BIOS with dynamic/adaptive frequency/voltage if you are using Windows. Assuming you have C-states disabled in the BIOS (which would be the best way to run dynamic frequency and voltage in this usage scenario), just use the Windows High Performance plan, or create your own with "minimum processor state = 100%". Lastly, if you want to have all cores and phases available at all times in any OS, you need to disable SpeedStep and C-states, and disable any phase parking.
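For reference, the "minimum processor state = 100%" tweak can be scripted instead of clicked through; a minimal sketch, assuming a reasonably recent Windows build where the built-in `powercfg` aliases below are available:

```shell
:: Set the minimum processor state of the active power plan to 100%
:: (SCHEME_CURRENT / SUB_PROCESSOR / PROCTHROTTLEMIN are built-in powercfg aliases)
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100

:: Re-apply the plan so the change takes effect immediately
powercfg /setactive SCHEME_CURRENT
```

This pins cores at their nominal multiplier without touching SpeedStep in the BIOS; to revert, set the value back to whatever your plan used before (the Balanced plan defaults to 5).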


----------



## BinaryDemon

With my 5960X @ 4.223 GHz

I get:

100 samples = 20.65 s
150 samples = 31.32 s
200 samples = 40.94 s
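As an aside, those three points sit close to a straight line, i.e. render time is roughly a per-sample rate plus a small fixed overhead. A quick least-squares sketch (illustrative only, using the numbers above):

```python
# Fit time = overhead + rate * samples to the three data points above.
samples = [100, 150, 200]
times = [20.65, 31.32, 40.94]  # seconds

n = len(samples)
mean_x = sum(samples) / n
mean_y = sum(times) / n

# Ordinary least squares for a single predictor.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(samples, times))
sxx = sum((x - mean_x) ** 2 for x in samples)
rate = sxy / sxx              # seconds per sample (~0.20)
overhead = mean_y - rate * mean_x  # fixed setup cost (~0.5 s)

print(f"{rate:.4f} s/sample, {overhead:.2f} s overhead")
```

A near-zero overhead suggests scene setup is negligible at these sample counts, so comparisons between runs at different sample settings scale fairly cleanly.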


----------



## PostalTwinkie

Quote:


> Originally Posted by *Blameless*
> 
> For most people and most tasks it won't.
> 
> However, if you have a task of a time critical nature or need to ensure the lowest latency possible, dynamic clocks can easily interfere with this.


Yes.

Also, any word back on the e-mail you sent?


----------



## Ultracarpet

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Yes.
> 
> Also, any word back on the e-mail you sent?


If this is about the settings dispute with blender, an AMD rep on reddit confirmed it was a mistake and the sample should be set to 150.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Ultracarpet*
> 
> If this is about the settings dispute with blender, an AMD rep on reddit confirmed it was a mistake and the sample should be set to 150.


Awesome!!

Thanks.


----------



## Kuivamaa

Quote:


> Originally Posted by *BinaryDemon*
> 
> With my 5960x @ 4.223 ghz
> 
> I get:
> 
> 100 samples = 20.65 s
> 150 samples = 31.32 s
> 200 samples = 40.94 s


Your RAM timings and frequency, out of curiosity?


----------



## BinaryDemon

Quote:


> Originally Posted by *Kuivamaa*
> 
> Your RAM timings and freq,out of curiosity?


My RAM sucks; I was an early adopter and DDR4 was expensive as crap then. So my DDR4-2133 is slightly overclocked, to roughly DDR4-2200 speed.



As I watch prices drop I keep toying with the idea of replacing it.


----------



## Blameless

Quote:


> Originally Posted by *lolerk52*
> 
> Would really be quite the achievement. Intel took 10 years to get to where they are now with Hyperthreading. Meanwhile this is AMD's first go at SMT.


Intel has had pretty solid SMT since Nehalem in 2009. Even NetBurst's SMT, limited as it was by the narrow architecture, was far from useless. Gains have been pretty gradual for a while, but that's not necessarily a bad thing when it comes to architectures that feature SMT.

SMT exists to salvage otherwise wasted ILP. AMD's implementation, if it does show scaling as well as some of us are expecting, may well result not solely from a focus on SMT, but from Zen being an immature architecture with room to grow. SMT may be picking up the slack for some wish-list features that weren't practical to add yet.
Quote:


> Originally Posted by *epic1337*
> 
> they have experience with CMT, remember that BD uarch shares the same FP pipeline for two threads, that may have helped them make SMT more efficient.
> on the other hand intel had nothing much with regards to other uarch, so they're mostly concentrated on what they already know about SMT.


Other than the shared front-end, AMD's CMT doesn't really have that much in common with SMT. They are very different means to a similar end.

That said, I don't think AMD is as ignorant of SMT as their lack of past products might suggest.


----------



## Ultracarpet

Quote:


> Originally Posted by *lolerk52*
> 
> Would really be quite the achievement. Intel took 10 years to get to where they are now with Hyperthreading. Meanwhile this is AMD's first go at SMT.


The reason I say that is because there hasn't been a single benchmark leaked or presented that is single-threaded or has the 8 extra threads disabled. I feel like the hype train is at full tilt because it is matching Intel's 8c/16t in these heavily multi-threaded workloads; people are now expecting Skylake-level IPC, and I think they are going to be massively disappointed.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> Awesome!!
> 
> Thanks.


Np, big props to @Blameless for calling this whole thing out.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> Other than the shared front-end, AMD's CMT doesn't really have that much in common with SMT. They are very different means to a similar end.
> 
> That said, I don't think AMD is as ignorant of SMT as their lack of past products might suggest.


No, I meant making Zen's SMT more parallelized than Intel's SMT, since CMT is technically a more parallelism-oriented architecture.

E.g. the experience in sharing resources on CMT might've inspired them to make some changes to the regular SMT structure to effectively utilize more resources than normally allowed.
Or in other words, you can think of it as an SMT architecture reinforced with CMT architectural characteristics and features.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> no i meant making Zen SMT more parallelized than intel's SMT, since CMT is technically a more parallel oriented architecture.
> 
> e.g. the experience in sharing resources on CMT might've inspired to make some changes on regular SMT structure to effectively utilize more resources than normally allowed.
> or in other words, you can think of it as an SMT architecture reinforced with a CMT architectural characteristics and features.


I'm not sure how you'd go about making SMT more parallel in a way that's at all analogous to CMT.

We have one set of execution pipelines per core. You can inject simultaneous threads into them as many times as you care to duplicate the physical architectural state, but both HT and AMD's SMT implementations are two-way. CMT was duplicating whole execution pipelines and feeding them with one shared fetch (and earlier on, decode) unit.

You can make SMT better at utilizing wasted execution resources, or you can have more wasted resources for SMT to work with, or both, but neither of those bear much resemblance to CMT.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> I'm not sure how you'd go about making SMT more parallel in a way that's at all analogous to CMT.
> 
> We have one set of execution pipelines per core. You can inject simultaneous threads into them as many times as you care to duplicate the physical architectural state, but both HT and AMD's SMT implementations are two-way. CMT was duplicating whole execution pipelines and feeding them with one shared fetch (and earlier on, decode) unit.
> 
> You can make SMT better at utilizing wasted execution resources, or you can have more wasted resources for SMT to work with, or both, but neither of those bear much resemblance to CMT.


I was talking about AMD CMT's FP component being shared, though; that part of their CMT is the closest relative to SMT.
In fairness, they managed to make their FP unit keep up with overall performance despite having only 4 pipelines serving 8 threads.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> i was talking about AMD CMT's FP component being shared though, that part of their CMT is the closest relative to SMT.


This is an angle I hadn't considered and now that I think about it, I do see some similarities.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> This is an angle I hadn't considered and now that I think about it, I do see some similarities.


Yes, that's what I was pointing towards: their experience with the CMT uarch sharing the FP pipeline may have made them more proficient at implementing SMT.
Especially considering that later revisions became much better than the initial Bulldozer release; Excavator in particular was excellent in both INT and FP multi-core performance.


----------



## lolerk52

Quote:


> Originally Posted by *Blameless*
> 
> Intel has had pretty solid SMT since Nehalem in 2009. Even Netburst's SMT, limited as it was by the narrow architecture, was far from useless. Gains have been pretty gradual for a while, but that's not necessarily a bad thing when it comes to all architectures that features SMT.
> 
> SMT exists to salvage otherwise wasted ILP. AMD's implementation, if it does show scaling as well as some of us are expecting, may well result not solely from a focus on SMT, but from Zen being an immature architecture with room to grow. SMT may be picking up the slack for some wish-list features that weren't practical to add yet.


Yeah, I have thought of that. Still an impressive piece of engineering for a first gen.
They did mention that a lack of time or of transistor budget made them push back some Zen features, and Zen+ appears to be a pretty large step yet again, if their graphs are to be believed.


----------



## Blameless

Quote:


> Originally Posted by *lolerk52*
> 
> Still impressive piece of engineering for a first gen.


No argument here.


----------



## Aussiejuggalo

So JayzTwoCents did a "my thoughts" video on this... I don't mind his videos, but he really didn't do much research into Ryzen, because he talked about whether there was going to be "4 core 8 thread or even a four core with no hyper threading"... err... um... we already know there's going to be a pretty good lineup.


----------



## flopper

its AmaZen


----------



## epic1337

Quote:


> Originally Posted by *flopper*
> 
> its AmaZen


I had actually thought of _amazon_ for some reason.

Edit: I just realized Ryzen can also be read as _rising_.


----------






## CrazyElf

Re-ran the results at 150 for the Blender render.

I get 31.07 s on a 5960X at 4.5 GHz with 32 GB of RAM @ 2666 13-13-13-32.

Factor in though:

- Broadwell will be perhaps 2% or so faster in terms of IPC
- Quad channel doesn't do as well as dual channel
- This was with a 5960X @ 4.5 GHz; most 6900Ks seem to top out at 4.4 GHz, so this should be roughly comparable

This was not with AVX enabled (with the Stilt's AVX build, it would probably be under 25 s with my 5960X @ 4.5 GHz).

No idea if AMD using 2.77 is going to make any difference.

Quote:


> Originally Posted by *Tobiman*
> 
> Has anyone tried to run the handbrake test?


There's already a thread on it. Edit: deleted, as this was on the wrong subject.

See here:
https://www.reddit.com/r/Amd/comments/5i6ohv/download_ryzen_benchmarks/
Quote:


> The statement for download yourself was for the blender file. However you can test the handbrake one yourself as follows:
> 
> Go to http://bbb3d.renderfarming.net/download.html and download the Standard 2D 4K, Quad-Full-HD (3840x2160) 60fps video file.
> 
> Use a video editor to clip the first 60s of the movie file.
> 
> Use HandBrake 0.10.5-x64 GUI to convert the file to the AppleTV3 preset.
> 
> Watch that status bar at the bottom for transcode time.
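Those quoted steps can be scripted end to end; a rough sketch, assuming `ffmpeg` and `HandBrakeCLI` 0.10.x are on the PATH, and that the downloaded 4K file is named `bbb_sunflower_2160p_60fps_normal.mp4` (check your actual filename):

```shell
# Clip the first 60 s without re-encoding (stream copy keeps the source bits intact)
ffmpeg -i bbb_sunflower_2160p_60fps_normal.mp4 -t 60 -c copy clip_60s.mp4

# Transcode with the AppleTV 3 preset and time the run
# (-Z selects a built-in preset; in the 0.10.x CLI the name is "AppleTV 3")
time HandBrakeCLI -Z "AppleTV 3" -i clip_60s.mp4 -o clip_60s_atv3.m4v
```

The elapsed time reported by `time` should correspond to the transcode time the GUI shows in its status bar.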


Quote:


> Originally Posted by *Blameless*
> 
> I'm not sure how you'd go about making SMT more parallel in a way that's at all analogous to CMT.
> 
> We have one set of execution pipelines per core. You can inject simultaneous threads into them as many times as you care to duplicate the physical architectural state, but both HT and AMD's SMT implementations are two-way. CMT was duplicating whole execution pipelines and feeding them with one shared fetch (and earlier on, decode) unit.
> 
> You can make SMT better at utilizing wasted execution resources, or you can have more wasted resources for SMT to work with, or both, but neither of those bear much resemblance to CMT.


I think then that, at long last, we can consider the mystery resolved.

I'd be interested to know if the concept of CMT itself was fatally flawed or if Bulldozer was just a terrible implementation of it. Perhaps there is simply no way to get floating-point performance to be competitive with CMT? Your guess is as good as mine. It would be fascinating to look back at AMD's design decisions; Bulldozer was clearly motivated by a much more multi-threaded world, for example. I'd be interested to know why AMD didn't just scale up the Jaguar cores (kind of like what Intel did with their Pentium M parts, which ultimately resulted in the very successful Conroe architecture).

Hopefully some of IBM's experience with SMT transfers over. If so, as you say, we may very well see a better implementation of SMT right out of the gate than Intel's. Skylake-E will come out perhaps 6 months later (if it is Q3 2017, which is close to 2 years after mainstream Skylake).

I think we can draw a few educated guesses here:

- AMD seems to have gotten power consumption down (assuming their demo of Ryzen using less power than the 6900K is accurate)
- IPC for non-AVX workloads must be very good compared to Bulldozer, and perhaps better than Broadwell (likely they met the 40% IPC target)
- The unknowns are how good Zen is with applications that can use AVX2 instructions, and how good the multithreading is

I suspect the SMT must be pretty decent for it to keep up with the 6900K, though.


----------



## Jpmboy

[email protected], [email protected], [email protected]

150 -> 28.6 sec.


100 -> 19.17 sec.


Regarding the Handbrake comparo... it would use AVX unless this is disabled.


----------



## Blameless

Quote:


> Originally Posted by *CrazyElf*
> 
> There's already a thread on it


That's for Blender, not Handbrake.

I haven't seen the Handbrake test files they used anywhere.
Quote:


> Originally Posted by *CrazyElf*
> 
> I'd be interested to know if the concept of CMT itself was fatally flawed or if Bulldozer was just a terrible implementation of it. Perhaps there is simply no way to get Floating Point performance to be competitive with CMT? Your guess is as good as mine.


CMT doesn't necessarily imply a shared FPU, and the idea of a shared FPU wasn't a bad one; FPU work is less common and later iterations of the Bulldozer family had some decent improvements to the FPU and CMT as a whole.

Also, CMT and SMT aren't mutually incompatible. I would not be entirely surprised to see CMT show up again, but regardless of that SMT is almost certainly here to stay.
Quote:


> Originally Posted by *CrazyElf*
> 
> It would be fascinating to look back at AMD"s design decisions - Bulldozer was clearly motivated by a much more multi-threaded world for example. I'd be interested to know why AMD didn't just scale up the Jaguar cores (kind of like what Intel did for their Pentium M parts that ultimately resulted in the very successful Conroe architecture).


As much as I hate the overuse of Netburst analogies, AMD likely underestimated power and overestimated the clock speeds they could reasonably achieve. Combined with a bad cache design (L1 and L2 were slow and the L3 was mostly dead weight), this turned a reasonable concept into a poor implementation.
Quote:


> Originally Posted by *CrazyElf*
> 
> (assuming their demo of Ryzen using less power than the 6900K is accurate)


People present at the demonstration have reported that AMD had power consumption figures shown and that the power consumption of the two systems was largely a wash.
Quote:


> Originally Posted by *CrazyElf*
> 
> I suspect the SMT must be pretty decent for it to keep up with the 6900K though.


Yes, though how it handles workloads we haven't seen...remains to be seen.
Quote:


> Originally Posted by *Jpmboy*
> 
> Regarding the Handbrake comparo... it would use AVX unless this IS is disabled.


x264 is relatively light on AVX code...it's definitely used, but I'd be surprised if much of it was 256-bit.

256-bit AVX2 is where Ryzen is almost guaranteed to fall behind Intel; it can't fuse its 2x128/4x64 FMACs into a single 256-bit one, so it takes a minimum of two cycles to execute these instructions where Intel's AVX2 CPUs can do it in one.

If you are talking about Blender, the standard builds of Windows Blender don't use AVX...they actually have a really outdated set of instruction support. Some of us have been benching the current Blender source run through updated compilers with all instructions enabled (courtesy of the Stilt), and the difference is pretty enormous. My primary sig system gets ~39.5 seconds in the Ryzen render (150 samples) with the public 2.78 build, but ~27.5 seconds in the AVX2-enabled build.
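That cycle-count difference translates directly into peak FLOP throughput. A back-of-the-envelope sketch (illustrative peak figures only, assuming 2 FMA pipes per core on both designs, which matches the public descriptions of Zen and Broadwell):

```python
# Peak double-precision FLOPs per core per cycle via FMA (1 FMA = 2 FLOPs per lane).
def peak_dp_flops_per_cycle(fma_units, simd_width_bits):
    doubles_per_op = simd_width_bits // 64
    return fma_units * doubles_per_op * 2  # multiply + add per lane

broadwell = peak_dp_flops_per_cycle(fma_units=2, simd_width_bits=256)  # native 256-bit FMA
zen       = peak_dp_flops_per_cycle(fma_units=2, simd_width_bits=128)  # 2x128-bit FMACs

# Broadwell's wider units give it 2x the peak FMA throughput per cycle.
print(broadwell, zen, broadwell / zen)
```

This is peak math only; real AVX2 workloads rarely sustain it, which is why the gap shows up strongly in the Stilt's AVX2 Blender build but barely at all in x264.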


----------



## epic1337

Once Zen is out, someone has to thoroughly compare Zen with SMT on and SMT off; this would solve a lot of its mysteries.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> Once Zen is out, someone has to thoroughly compare Zen with SMT on and SMT off; this would solve a lot of its mysteries.


I have no doubt this will be a significant feature of many launch day reviews.


----------



## CrazyElf

If you factor in how big the disparity in resources is, in terms of R&D money and everything else, AMD's performance here has been pretty amazing considering what they have to work with. Either that, or throwing money at the problem has rapidly diminishing returns. I'm reluctant to pin the credit solely on Jim Keller, but the AMD team has done a pretty impressive job considering their lack of funds.

The same goes for the GPU front, if some of the Vega estimates are true.

Hopefully Zen and Vega play a role in turning things around. Especially important is whether AMD can sell some high-margin Opteron parts and perhaps gain share in the HPC field. Some GPU compute niches are still young enough that Vega might be able to gain a foothold.

Quote:


> Originally Posted by *Blameless*
> 
> That's for Blender, not Handbrake.
> 
> I haven't seen the Handbrake test files they used anywhere.


Thanks - deleted.

Repost from above:
https://www.reddit.com/r/Amd/comments/5i6ohv/download_ryzen_benchmarks/
Quote:
The statement for download yourself was for the blender file. However you can test the handbrake one yourself as follows:

Go to http://bbb3d.renderfarming.net/download.html and download the Standard 2D 4K, Quad-Full-HD (3840x2160) 60fps video file.

Use a video editor to clip the first 60s of the movie file.

Use HandBrake 0.10.5-x64 GUI to convert the file to the AppleTV3 preset.

Watch that status bar at the bottom for transcode time.

I think that given the discrepancy in the Blender benchmark, it would be prudent to test this one and see.

Quote:


> Originally Posted by *Blameless*
> 
> CMT doesn't necessarily imply a shared FPU, and the idea of a shared FPU wasn't a bad one; FPU work is less common and later iterations of the Bulldozer family had some decent improvements to the FPU and CMT as a whole.
> 
> Also, CMT and SMT aren't mutually incompatible. I would not be entirely surprised to see CMT show up again, but regardless of that SMT is almost certainly here to stay.
> As much as I hate the over use of the Netburst analogies, AMD likely underestimated power and overestimated the clock speeds they could reasonably achieve. Combined with a bad cache design (L1 and L2 were slow and the L3 was mostly dead weight), this turned a reasonable concept into a poor implementation.


Maybe.

I suspect though that AMD will focus on Zen and Zen+ for the near future, which may involve focusing on single threaded performance. I don't know to be honest.

AMD for Bulldozer:

- Overestimated what GlobalFoundries could deliver in speed and power consumption
- Expected a much more multithreaded world circa 2011 than what actually happened
- As you note, there was a cache gap

Bulldozer was a fatally flawed design from the start.

They would have been better off making a "Super Thuban", spending all the money that went into Bulldozer on a "K10+" type of architecture instead.

Quote:


> Originally Posted by *Blameless*
> 
> People present at the demonstration have reported that AMD had power consumption figures shown and that the power consumption of the two systems was largely a wash.


We need the real numbers then.

Just wondering, what did they do? The official story of course is this:

*Idle*
Zen: 93W
Intel: 106W

*Load*
Zen: 187W
Intel: 191W

Assuming this is accurate, that means the Zen system drew 94W more under load than at idle, and the 6900K system drew 85W more. That's slightly worse for Zen, although Zen in this case was also slightly faster.

A very slight advantage for Intel in that case. To be expected, as they do have process leadership: Intel has a "true" 14nm versus the 14/20nm hybrid process GlobalFoundries licensed from Samsung.
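For anyone checking, the arithmetic behind those deltas is just load minus idle; these are wall-socket, whole-system figures, so the delta is only a rough proxy for CPU package power (a sketch, not a measurement):

```python
# Whole-system wall power as reported at the demo (watts).
zen_idle, zen_load = 93, 187
intel_idle, intel_load = 106, 191

zen_delta = zen_load - zen_idle        # extra draw under the Blender load
intel_delta = intel_load - intel_idle

print(f"Zen delta: {zen_delta} W, 6900K delta: {intel_delta} W")
# Caveat: different boards, PSUs, and idle power management make the
# two deltas only loosely comparable, as others point out below.
```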

Quote:


> Originally Posted by *Blameless*
> 
> Yes, though how it handles workloads we haven't seen...remains to be seen.
> x264 is relatively light on AVX code...it's definitely used, but I'd be surprised if much of it was 256-bit.
> 
> 256-bit AVX2 is where Ryzen is almost guaranteed to fall behind Intel; it can't fuse it's 2x128/4x64 FMACs into a single 256-bit one so it takes a minimum of two cycles to execute these instructions where Intel AVX2 CPUs can often do this in one.


It will be very interesting to see the x265 benchmarks. I wonder if future implementations of Zen will get full-speed AVX2 capabilities.

Intel Haswell-E (and I would assume Broadwell-E) CPUs get about 20% more with AVX2 in use.

Overall, my guess:

- Slightly better than Broadwell at non-AVX loads, maybe approaching Skylake
- Sandy Bridge-level performance for AVX2 loads, maybe a bit less than that
- Somewhat worse performance per watt

Still, it is a huge leap over Bulldozer no matter what, and it is very probable that they got their 40% IPC.

I'd be very interested to see if Skylake-E gets native AVX-512, and once the software catches up, to see what we get there. Xeon Skylake is getting the partial instructions, but I'm not sure about desktop.

This rumor from Bits and Chips is what we have: http://www.bitsandchips.it/hardware/9-hardware/5294-avx-512-solo-per-gli-xeon-skylake-e-non-i-core-skylake


This would also imply further extensions on 10nm Cannonlake. It seems like die shrinks are now getting feature updates (e.g. TSX on Broadwell-E, although that was more due to a bug in the original Haswell implementation, so it is not a perfect comparison).


----------



## SoloCamo

Quote:


> Originally Posted by *Blameless*
> 
> I have no doubt this will be a significant feature of many launch day reviews.


An 8-core 6900K w/ HT off vs. an 8-core Ryzen w/ SMT off at the same clocks is hopefully a benchmark we're guaranteed to see at launch.


----------



## GorillaSceptre

Quote:


> Originally Posted by *CrazyElf*
> 
> Just wondering, what did they do? The official story of course is this:
> 
> *Idle*
> Zen: 93W
> Intel: 106W
> 
> *Load*
> Zen: 187W
> Intel: 191W
> 
> Assuming this is accurate, that means that Zen drew 94W under load and the 6900K drew 85W. That's slightly worse, although Zen in this case was also slightly faster in IPC.
> 
> Very slight advantage for Intel in that case. To be expected, as they do have process leadership. Intel has a "true" 14nm versus the 14/20nm Global Foundries has (from Samsung).
> It will be very interesting to see the x265 benchmarks. I wonder if future implementations of Zen would get full speed AVX2 capabilities.


Wait... are you really using a delta between load and idle to declare a "winner"? That's completely pointless.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *CrazyElf*
> 
> I think we can draw a few educated guesses here:
> 
> AMD's seems to have gotten power consumption down (assuming their demo of Ryzen using less power than the 6900K is accurate)
> IPC for non-AVX based workloads must be very good compared to Bulldzoer and perhaps better than Broadwell (likely they met the 40% IPC target)
> The unknowns are how good AMD's Zen is with applications that can use AVX2 instructions and how good the multithreading is
> I suspect the SMT must be pretty decent for it to keep up with the 6900K though.


I shortened your quote so as not to make for a huge post.

I'd like to say that even though Ryzen does seem to be quite decent, the real test, imo, just as it was years ago after the release of AMD64, will be *"keeping up with Intel"*. If Intel fights back as they did all those years ago with the Core series, who knows if AMD will slip behind again? My hope is that they manage to keep up this time, especially considering the signs that Intel is slowly moving into other areas of interest. How AMD pulls that off in competition with a behemoth such as Intel is going to be interesting to watch. Do you think they can keep up this time around?

I can't wait to see what Intel does with their products once Ryzen has been released into the wild. *But for now, to me anyway, Ryzen appears more advanced, and I want it*. I already have several friends who are looking at their new Skylake builds and saying meh, lol. I'm finally thinking the same about my 6c/12t Xeon, since it's a little long in the tooth now. But that is because it is time for an upgrade regardless. I was waiting for Kaby Lake or any Z270-based system, but now I'm not so sure, haha. That will depend on Asus, Gigabyte, or maybe even ASRock releasing some very interesting boards for Ryzen consumption.


----------



## epic1337

Quote:


> Originally Posted by *SoloCamo*
> 
> 8 core 6900k w/ HT off vs 8 core Ryzen w/ SMT off at the same clocks is hopefully a guaranteed benchmark made available to us on launch.


I wonder if they'll also LN2 FroZen bench them for maximum OC.


----------



## fleetfeather

Quote:


> Originally Posted by *epic1337*
> 
> i wonder if they'll also LN2 FroZen bench them for maximum OC.


I wonder if they'll use a sealed medical facility to ensure no environmental influences impact the maximum overclock.

It's a CPU review, not a peer-reviewed submission to Science.


----------



## epic1337

Quote:


> Originally Posted by *fleetfeather*
> 
> I wonder if they'll use a sealed medical facility to ensure no environmental influences impact on maximum overclock.
> 
> It's a CPU review, not a peer-reviewed submission to Science


Uhhh, an LN2 bench is always mandatory in crowning the king of overclockers.
Finding out whether it can exceed the world record is also part of reviewing a new uarch, right?

On a side note, Zen's new feature automatically overclocks it depending on temperature, so it's a valid question.


----------



## n4p0l3onic

Quote:


> Originally Posted by *formula m*
> 
> _What are "regular" settings...?_
> 
> Secondly, as everyone here has told you, Your test don't prove anything. Your test (streams), only prove that your computer is not capable of doing HD encoding & playing. Let alone being able to play at 4k, and encode to 1440p. Like the rest of us want to do.
> 
> What you recorded and showed (all three), is NOT acceptable to a good many gamers. It understand is "good enough" for you, because you do not care about quality, or top-notch performance. If you are happy with a 1080p monitor.. and 720p video then ok. But the rest of the world are building new rigs and left 1080p a long time ago. Most are moving on to 4k. Those are facts... nothing in your basement will change them.
> 
> You keep trying to convince everyone here... "because I did this stream at home, AMD lied to me".
> Your remarks are laughable...


You're really out of touch with the reality of PC gaming out there, especially the part with a MOBA preference.

Those kinds of gamers usually play on laptops worse than that oxidized guy's PC, and they play at max 1080p, stream, or watch streams at 720p.

For them a 6700K is already way overkill and almost an unnecessary luxury.

If AMD really wanted to showcase the 8c/16t advantage in gaming to the enthusiasts, they should've used other games, or not bothered at all, because there are no games that actually take advantage of 8c/16t.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *epic1337*
> 
> i wonder if they'll also LN2 FroZen bench them for maximum OC.


Lol yeah...

I am wondering if Ryzen will be 100% automatic depending on temps, OR if we will be allowed to actually manually select the type of cooling in the BIOS? Air, Water or LN2?


----------



## coolhandluke41

old Ivy 4930K @ stock /150=58sec, doesn't look all that bad


Spoiler: 3.6


----------



## oxidized

Quote:


> Originally Posted by *n4p0l3onic*
> 
> You're really out of touch with the reality of PC gaming out there, especially the part with a MOBA preference.
> 
> Those kinds of gamers usually play on laptops worse than that oxidized guy's PC, and they play at max 1080p, stream, or watch streams at 720p.
> 
> For them a 6700K is already way overkill and almost an unnecessary luxury.
> 
> If AMD really wanted to showcase the 8c/16t advantage in gaming to the enthusiasts, they should've used other games, or not bothered at all, because there are no games that actually take advantage of 8c/16t.


Don't even mind his words; I have him blocked, and I've never blocked anybody before. He started spouting just too much BS, dragging other matters into the conversation. The guy clearly doesn't know what he's talking about, and his need to hype up the atmosphere blinds him totally.

What worries me, though, is that he's not alone.


----------



## formula m

Quote:


> Originally Posted by *n4p0l3onic*
> 
> You're really out of touch with the reality of PC gaming out there, especially the part with a MOBA preference.
> 
> Those kinds of gamers usually play on laptops worse than that oxidized guy's PC, and they play at max 1080p, stream, or watch streams at 720p.
> 
> For them a 6700K is already way overkill and almost an unnecessary luxury.
> 
> If AMD really wanted to showcase the 8c/16t advantage in gaming to the enthusiasts, they should've used other games, or not bothered at all, because there are no games that actually take advantage of 8c/16t.


*?*

Most PC players don't play just MOBAs; they play FPS, racing, MMORPGs, etc... & MOBAs.

Mobile gamers can only really play platform games such as 3D isometric. I've been doing that since the Amiga days; it's grown old. Racing games, or Battlefield/Arma/Project Cars... the type of PC games that require hardware (CPUs & GPUs)... or else you really aren't playing, you're just pretending..!

Laptops are not for PC gaming; they are a compromise. This shouldn't be news to you.


----------



## formula m

Quote:


> Originally Posted by *formula m*
> 
> *I am lmao right now....*
> 
> You don't even realize that YOUR GAMEPLAY in DOTA is hiding in the woods and rarely engaging many, if any other players. That style of minimalistic gameplay, won't stress any system... & clicking on the ground and moving around, isn't what stresses the CPU in DOTA... it is when there is massive wars going and particulate all over the places. And the CPU is crunching.
> 
> Your TEST showed nothing, because you are not a PROFESSIONAL DOTA PLAYER.... lol


*oxidized*, you have not responded in this thread since I posted this^.

You are unable even to rebut that, because you are clueless about how baseless your statements are. They don't follow logic. You don't like to hear it, so now you are trying to say you are "muting" everyone who brings this up..?

Someone else asked you "what facts..?" And we are still waiting. Your videos only prove you do not know what you are talking about. You don't even seem to understand the actual requirements for doing such a task. That is all you have proven to this community. And at some point you are just going to have to admit your comments are over the top and baseless.

But thanks for the laughs...


----------



## jeffdamann

Quote:


> Originally Posted by *formula m*
> 
> *?*
> 
> Laptops are not for PC gaming, they are a compromise. This shouldn't be news to you.


Lol, I know this isn't a typical laptop, but you can't just throw a broad generalization like that around.

Show me someone with a rig in this thread that this won't be on par with.

Just scroll down and look at the specs

http://www.ebay.com/itm/302137409621?rmvSB=true


----------



## Shogon

Quote:


> Originally Posted by *formula m*
> 
> *?*
> 
> Laptops are not for PC gaming


More generalizations incoming from knowledgeable armchair experts. How did those enthusiasts ever manage to stream games without Zen before? The poor souls.


----------



## formula m

Quote:


> Originally Posted by *jeffdamann*
> 
> Lol, I know this isn't a typical laptop, but you can't just throw a broad generalization like that around.
> 
> Show me someone with a rig in this thread that this won't be on par with.
> 
> Just scroll down and look at the specs
> 
> http://www.ebay.com/itm/302137409621?rmvSB=true


So what...?

My rig has a 700W PSU and a GPU the size of that laptop. You have to have that laptop PLUGGED INTO THE WALL.. AND INTO THE INTERNET.. to do any gaming on it.. so what's the fraking point of it..?

I'd rather cart a LAN box around than a $5k pseudo-gaming laptop. It's an utter oxymoron... because where do you sit with your $2k sound system..? Or your $1k chair..? Do you bring those with you too..? Serious gamers have PCs (which require a dedicated room or two).

Or... xbox/station. <--- and you go house to house logging in..


----------



## epic1337

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Lol yeah...
> 
> I am wondering if Ryzen will be 100% automatic depending on temps, OR if we will be allowed to actually manually select the type of cooling in the BIOS? Air, Water or LN2?


how would they even detect that? although i could see temperature range detection as one indication.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Lol yeah...
> 
> I am wondering if Ryzen will be 100% automatic depending on temps, OR if we will be allowed to actually manually select the type of cooling in the BIOS? Air, Water or LN2?


It'll probably be just like GPUs: the lower the temps, the faster it runs, regardless of how it keeps the temps down. No need to complicate things.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *epic1337*
> 
> how would they even detect that? although i could see temperature range detection as one indication.


Yeah, Lisa said hundreds of sensors are inside Ryzen, so it is probably making assumptions about what kind of cooling you've got just by how it is behaving. Seems easy enough. However, I want to manually select it if I can, lol. I think most of us here would agree a manual selection would give peace of mind that we are taking full advantage of their XFR feature. Although most will probably disable that too and manually overclock.


----------



## formula m

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yeah Lisa said hundreds of sensors are inside Ryzen, so it is probably making assumptions to what kind of cooling you got just by how it is working. Seems easy enough. However I want to manually select it if I can, lol. Like most of us here probably agree a manual selection would give peace of mind that we are taking full advantage of their XFR feature. Although most will probably disable that too and manually overclock.


Manually select what..?

I don't think people understand Zen yet. It maintains itself at a certain thermal threshold; if the threshold isn't met, it increases the clock until the thermal limit is near or reached.

From air to water to LN2... whatever you are using to cool Zen with, doesn't matter. Just how cool it gets...

(or more scientifically, how much heat you can dissipate)
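
For what it's worth, the behaviour described above can be sketched as a simple control loop. This is purely illustrative Python, not AMD's actual algorithm; the thresholds, step size, and toy thermal model are all invented for the example:

```python
# Illustrative sketch of temperature-driven boosting (XFR-style).
# All numbers and the thermal model are made up for demonstration.

def boost_clock(read_temp_c, base_mhz=3400, max_mhz=4100,
                step_mhz=25, thermal_limit_c=75):
    """Raise the clock in small steps while the reported temperature
    stays under the thermal limit; back off one step once it's reached."""
    clock = base_mhz
    while clock < max_mhz and read_temp_c(clock) < thermal_limit_c:
        clock += step_mhz
    if read_temp_c(clock) >= thermal_limit_c:
        clock -= step_mhz  # settle just under the limit
    return clock

# Toy thermal model: a better cooler removes more degrees at a given clock,
# so the loop naturally settles at a higher clock.
def temp_with_cooler(dissipation):
    return lambda mhz: 30 + (mhz - 3400) / 10 - dissipation

air = boost_clock(temp_with_cooler(0))     # weaker cooling -> 3825 MHz
water = boost_clock(temp_with_cooler(10))  # stronger cooling -> 3925 MHz
```

The point is that the loop never needs to know *what* the cooler is, only the temperature it reads, which is exactly the argument above.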


----------



## SoloCamo

Quote:


> Originally Posted by *n4p0l3onic*
> 
> You really out of touch with the reality of pc gaming out there especially ones with moba preference.
> 
> Those kind of gamers usually play with laptops worse than that Oxidized guy pc and they play at max 1080p stream or watch stream at 720p
> 
> For them a 6700k is already way overkill and almost an unnecessary luxury
> 
> If amd really want to showcase 8c16t advantage in gaming to the enthusiasts, they should've used other games or don't bother at all because there are no games that actually take advantage of 8c16t.


And those players you speak of aren't exactly the target audience of this CPU. They are likely on Pentiums, i3s and FX-6300s, aka the lower-end budget sector. Nothing wrong with those parts, but you can't discredit the test for being run at higher settings (who wouldn't want better quality settings for their stream?) when it showed the weakness of weaker CPUs, or those with far fewer threads.

Moba may be popular, but it's not the only genre in town. Try streaming BF1 on those laptops, or even on a 2600K like oxidized has. Even on my 4790K it's a joke to get any reasonable quality settings out of it.

I would gladly take a similar ipc 8c16t cpu.


----------



## }SkOrPn--'

Quote:


> Originally Posted by *formula m*
> 
> Manually select what..?
> I don't think people understand Zen yet. It maintains itself to a certain threshold, if it not met, it increases clock, until thermal thresh is near/met.
> 
> From air to water to LN2... whatever you are using to cool Zen with, doesn't matter. Just how cool it gets...
> (or more scientifically, how much heat you can dissipate)


After 13 years working for AMD and over 9 for Intel I am fully aware of how it will probably work. I just stated I would "like", or should I say "prefer" a manual selection in the bios for peace of mind. I hope that is OK with you? Or do I need your approval to want that feature? That is why for the past 30+ years I always build systems that have extensive BIOS releases, such as my ROG systems. I like manually selecting features, but that's just me man. If it is not a manually selectable feature, sure I can handle that too of course.

Again, I am just hoping I get a manual selection for the type of cooling, if only for the peace of mind of it. Really very simple to understand...


----------



## Jpmboy

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> After 13 years working for AMD and over 9 for Intel I am fully aware of how it will probably work. I just stated I would "like", or *should I say "prefer" a manual selection in the bios for peace of mind*. I hope that is OK with you? Or do I need your approval to want that feature? That is why for the past 30+ years I always build systems that have extensive BIOS releases, such as my ROG systems. I like manually selecting features, but that's just me man. If it is not a manually selectable feature, sure I can handle that too of course.
> 
> Again, I am just hoping I get a manual selection for the type of cooling, if only for the peace of mind of it. Really very simple to understand...


^^ This.


----------



## n4p0l3onic

Quote:


> Originally Posted by *formula m*
> 
> *?*
> Most PC players don't play just mobas, they do FPS, racing, MMORPG, etc... & moba.
> 
> Mobile gamers can only really play platform games such as 3D isometric. Been doing that since the Amiga days; it's grown old. Racing games, or Battlefield/Arma/Project Cars type of PC games, require hardware (CPUs & GPUs)... or else you really aren't playing, you're just pretending!
> 
> Laptops are not for PC gaming, they are a compromise. This shouldn't be news to you.


You are the one who is completely out of touch with the world, dude.

I'm talking about moba gamers, not the enthusiastic hardcore PC gamers like the ones we usually encounter on western PC gaming forums like this one.

Moba gamers usually don't have high-end rigs, nor do they actually play hardcore PC games... They usually don't know much about PC hardware, nor do they care about it, and certainly not about the things AMD seems to want to attract them with via the Dota 2 demonstration. Furthermore, moba games are usually completely playable on a 10-year-old PC, so this state-of-the-art 8c16t CPU isn't something that attracts their attention. Heck, the fact is moba gamers play moba games because they are free, meaning this is a group of gamers who can't even be bothered to spend money on the game itself, let alone on expensive hardware.

Enthusiastic hardcore PC gamers, on the other hand, are usually the opposite: we don't play mobas much, if at all.

So in a sense, yeah, AMD marketing failed to hit the intended audience with that Dota 2 demo.


----------



## SoloCamo

Quote:


> Originally Posted by *n4p0l3onic*
> 
> You are the one who are completely ignorant with the world dude
> 
> I'm talking about moba gamers, not the enthusiastic hardcore pc gamers like the ones we e usually encountered on western pc gaming related forum like this forum.
> 
> Moba gamers usually don't have high end rigs nor they actually play hardcore pc games... They usually don't even know much about pc hardware nor they care about it, certainly not with the things amd seem want to attract them by dota2 demonstration. Furthermore moba games are usually completely playable with a 10 years old pc, so this state of the art 8c16t cpu isn't something that attract their attention, heck the fact is moba gamers play moba games because they are free, meaning that these are a gamers group in which they couldn't even be bothered to spend $$$ to buy the game, moreover these expensive hardware.
> 
> Enthusiastic hardcore pc gamers on the other hand are usually the opposite, we don't play moba that much, if any.
> 
> So in a sense, yeah amd marketing failed to hit the intended audience with that dota2 demo.


So the people you claim are happy with 10-year-old PC performance, and who, as you said..
Quote:


> "the fact is moba gamers play moba games because they are free, meaning that these are a gamers group in which they couldn't even be bothered to spend $$$ to buy the game, moreover these expensive hardware"


..would be watching this show in the first place, let alone even know what Zen is?

Be real. The people watching this know and care.


----------



## Pro3ootector

Xbox Scorpio might get RYZEN.


----------



## Particle

My system completed the 150-sample render in 1m02s. It's an AMD FX-9590 (stock) running Blender 2.78a.


----------



## Travieso

Yeah... benchmarking a CPU on AAA titles, most of which are very GPU-limited, is the best way to present your CPU.

lol


----------



## Fyrwulf

Quote:


> Originally Posted by *formula m*
> 
> Manually select what..?
> I don't think people understand Zen yet. It maintains itself to a certain threshold, if it not met, it increases clock, until thermal thresh is near/met.
> 
> From air to water to LN2... whatever you are using to cool Zen with, doesn't matter. Just how cool it gets...
> (or more scientifically, how much heat you can dissipate)


When they announced that feature, I had to chuckle. The days of using watercooling so people's dainty ears are protected are GONE. Meanwhile I've got a case set up with twin loops; 12 SP120 HPs for the rads and six FF5-120s plus one FF4-140 for the intake.


----------



## flippin_waffles

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> I shortened your quote as to not make for a large post.
> 
> I'd like to say even though Ryzen does seem to be quite decent, the real test imo, just as they went through years ago after the release of AMD64, will be *"keeping up with Intel"*. If Intel fights back as they did all those years ago with Core series, who knows if AMD will slip behind again? My hope is that they manage to keep up this time, especially considering the signs that Intel is slowly moving into other areas of interest. Now how AMD pulls that off in competition with a behemoth such as Intel is going to be interesting to watch. Do you think they can keep up this time around?
> 
> I can't wait to see what Intel does with their products once Ryzen has released into the wild. *But for now, to me anyway Ryzen appears more advanced and I want it*. I already have several friends who are looking at their new Skylake builds and saying meh, lol. I'm thinking the same about my 6c12t Xeon now finally too since that is now a little long in the tooth. But that is because it is time for an upgrade regardless. I was waiting for Kaby Lake or any Z270 based system, but now not so sure, haha. That will depend on Asus, Gigabyte or maybe even ASRock to release some very interesting boards for Ryzen consumption.


What can Intel do though? Just releasing more of the same with incremental performance increases doesn't sound very exciting, really. AMD can offer a complete gaming platform with enthusiast-level hardware for both GPU and CPU. To me this seems very significant, and from what I'm reading, this new platform coming from AMD rocks.

As good as that is said to be, SP3 for Naples appears to have these people from the industry pretty excited too.

(the first half of vid is about the recently announced Radeon Instinct)

https://youtu.be/s1jvhHJ6xw8


----------



## Roaches

Living room rig: 3570K @ 4.4GHz, 1866MHz DDR3 @ 9-9-9-27

1:05.76 @ 100 samples



1:38.93 @ 150 samples



2:11.43 @ 200 samples



Scaling is almost linear!
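
A quick sanity check of that claim (illustrative Python, using the three times posted above converted to seconds):

```python
# "Almost linear" means seconds per sample should be roughly constant
# across the three runs.
results = {100: 65.76, 150: 98.93, 200: 131.43}  # samples -> seconds

per_sample = {n: t / n for n, t in results.items()}
spread = max(per_sample.values()) - min(per_sample.values())
# per-sample cost is ~0.658 s in every run; the spread is under 0.003 s
```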


----------



## tpi2007

Quote:


> Originally Posted by *flippin_waffles*
> 
> What can intel do though? Just releasing more of the same with incrimental performance increases doesnt sound very exciting really. AMD can offer a complete gaming platform with enthusiast level hardware for both GPU and CPU. To me this seems very significant and from what im reading this new platform coming from AMD rocks.
> 
> As good as that is said to be, SP3 for Naples appears to have these people from the industry pretty excited too.
> 
> (the first half of vid is about the recently announced Radeon Instinct)
> 
> https://youtu.be/s1jvhHJ6xw8


As excited as we are, perhaps Intel only needs to do one very simple thing with X99 to stay in the game before they release the next platform: shift all the CPUs down one slot.


- i7-6800K: discontinued;
- i7-6850K: the new entry level for X99, with 5820K pricing. Next year the yields will be good enough to get more volume at the higher clock speeds, and the PCIe lane segmentation was always artificial to begin with, so it's easy peasy for Intel to leave them all enabled. Now they get to say that the whole X99 CPU line-up has 40 lanes on-die;
- i7-6900K: now occupies the $500 segment;
- i7-6950X: 10 cores, now at $1k;
- Eventual 12-core i7-6970X at $1.7k just to make an extra point.

Now, if AMD does release a locked 8 core chip at the 5820K price point as some rumours say, it may confuse things a bit.


----------



## epic1337

Quote:


> Originally Posted by *tpi2007*
> 
> Now, if AMD does release a locked 8 core chip at the 5820K price point as some rumours say, it may confuse things a bit.


An 8C locked chip with disabled HT sounds reasonable; the native 8C/16T part would still beat it fairly comfortably, and even the 6C/12T would have a chance thanks to handling 50% more threads.


----------



## comagnum

My 6600k @ 4.7 did it in 1:02:34... I guess I'll need to upgrade


----------



## Fyrwulf

Quote:


> Originally Posted by *epic1337*
> 
> 8C locked chip + disabled HT sounds reasonable, the native 8C/16T part would still beat it fairly up front, and even a 6C/12T has a chance due to 50% more thread handling.


Why would AMD play Intel's game? If I'm running AMD, I want to release an overclockable, SMT-enabled 4-core Zen APU at a price point where Intel is embarrassed it ever thought of trying to sell locked, non-HT CPUs in the first place. Bonus points if Intel is humiliated enough that they have to cut prices so severely they take a loss.


----------



## epic1337

Quote:


> Originally Posted by *Fyrwulf*
> 
> Why would AMD play Intel's game? If I'm running AMD, I want to release an overclockable, SMT-enabled 4 core Zen APU at a price point where Intel is embarrassed it ever thought of trying to sell locked, not-HT CPUs in the first place. Bonus points if Intel is humiliated enough they have to cut prices so severely they take a loss.


why does it have to be against Intel? i'm thinking more of the price placement of AMD's lineup.
e.g. if the 6C/12T is priced at $250 then an 8C/8T can be priced at $300, with the 8C/16T at $500.
being locked can nudge some buyers towards choosing either the 6C/12T or the 8C/16T instead, which makes it sound reasonable.

and it's simply because of binning: not every chip can OC to 4GHz+ and not everyone can cool a 4GHz+ 8-core chip.


----------



## ladcrooks

Quote:


> Originally Posted by *flopper*
> 
> its AmaZen


Intel will see it as Damzen - Intel: 'C'mon boys, back to work, we've got some catching up to do.'


----------



## dbLIVEdb

So is it confirmed that the numbers are based on a 150-sample render?

Here is mine: 39.45. If Zen has a turbo mode and OCs well, not only would it be a better CPU than my 6800K, it also has more PCIe lanes! I'm jelly.


----------



## Fyrwulf

Quote:


> Originally Posted by *epic1337*
> 
> why does it have to be against Intel? i'm thinking more of price placement of AMD's lineup.
> e.g. if 6C/12T is to be priced at $250 then 8C/8T can be priced at $300, where 8C/16T is at $500.
> 
> and its simply because of binning, not every chip can OC to 4GHz++ and not everyone can cool a 4GHz++ 8core chip.


Here's the thing, though: AMD has confirmed that every Zen chip is going to have SMT and will be overclockable; those are baked into the architecture. We might see 6 and maybe 4 core products to recover failed dies. Your prices are low, though. I'm betting on the 8-core Zen being around $600, with any hypothetical 6-core product in the $400-$450 range and the 4-core in the $300 range. Obviously, as those are die-recovery products, AMD will be willing to play chicken with prices depending on where performance falls and what Intel does with prices.


----------



## IRobot23

Well, Blender scales pretty well with SMT designs (Intel HT / AMD SMT), but what about Handbrake? How well does HT scale there...


----------



## JakdMan

So far it seems AMD is doing a good job. I just hope that we don't need to be too cautious. Some of y'all act as though once you leave heavily multi-threaded applications, Ryzen may perform like a potato and leave you stuck at a loading screen in Word for a decade

(especially considering half of you swear by Intel Xeons and then somehow bring up how their single-threaded performance isn't as amazing)

Also, I'm gonna have the nerve to voice my annoyance at a certain self-appointed de facto authority on game streaming......... Didn't realize we all play nothing but mobas on ancient tech and are all delusional enough to believe we're doing something. Literally all you've done is help push AMD's message that if you want to actually play a modern game at actually nice settings and stream flawlessly at modern-day acceptable resolutions, Sandy Bridge ain't cutting it. Plain and simple.


----------



## Blameless

Quote:


> Originally Posted by *CrazyElf*
> 
> Repost from above:
> https://www.reddit.com/r/Amd/comments/5i6ohv/download_ryzen_benchmarks/
> Quote:
> The statement for download yourself was for the blender file. However you can test the handbrake one yourself as follows:
> 
> Go to http://bbb3d.renderfarming.net/download.html and download the Standard 2D 4K, Quad-Full-HD (3840x2160) 60fps video file.
> 
> Use a video editor to clip the first 60s of the movie file.
> 
> Use Handbrake 0.10.5-x64 GUI to convert the file to the AppleTV3 preset.


Thanks, I'll test it in a moment.
Quote:


> Originally Posted by *CrazyElf*
> 
> I suspect though that AMD will focus on Zen and Zen+ for the near future, which may involve focusing on single threaded performance. I don't know to be honest.


Well, we'll have to see--hard to speculate much on Zen+ when Zen hasn't been released yet--but I would not be surprised if this were the case. Should be some room for fairly easy refinement, just as there was between Nehalem and Sandy.
Quote:


> Originally Posted by *CrazyElf*
> 
> It seems like die shrinks are now getting feature updates (ex: TSX on Broadwell E, although that was more due to a bug in the original implementation on Haswell, so it is not a perfect comparison).


TSX is actually still disabled on Broadwell-E. It was supposed to be one of the only potentially significant changes between HW-E and BW-E, but Intel was apparently still having issues so they disabled it again. If you look at the specification update you can see it in the errata.

It's one of the reasons I ended up cheaping out and getting a 6800K instead of a 6900K for my second X99 board... didn't want to drop a grand on a CPU that Zen might roughly match for much less, and it didn't have much of anything over my existing 5820K setup (TSX doesn't do anything now, but aside from my X79 parts, I tend to keep my hex cores a very long time and it may have been relevant at some point). I have been pleasantly surprised by my second 6800K sample though... it matches or beats my 5820K at the same core clock with worse uncore/memory settings, and does it while using almost 70-80W less under load (both at 4.3GHz).
Quote:


> Originally Posted by *GorillaSceptre*
> 
> Wait... are you really using a delta between load and idle to declare a "winner"? That's completely pointless..


If both chips were allowed to reach full idle states, power should be very close since idle power draw on modern CPUs (even BW-E) is very low.

However, we don't know this for certain, nor do we know precisely how much power differential there is from the platform/board itself (X99 tends to have a higher baseline power draw than the mainstream chipsets/boards, but it's not a huge difference), and per sample variance can easily be the difference by more than a few watts...which is why I consider the power consumption figures a wash.

No firm conclusions can be drawn from them except to demonstrate that TDP ratings are not necessarily tightly linked to power consumption, which most of us should know already.
Quote:


> Originally Posted by *}SkOrPn--'*
> 
> If Intel fights back as they did all those years ago with Core series, who knows if AMD will slip behind again?


If they can regain rough process node parity and avoid complacency/stupid mistakes, they should be able to regain market share and stay competitive for a long while.

AMD had pretty awesome stuff from the Super Socket 7 days until 2006, but they were far too slow in adapting to the threat Core presented. Core 2 did not come from nowhere; 18 months prior to its release we were starting to see the potential of the Pentium M, and with NetBurst floundering, it was almost a given that it would be retooled for the desktop and that it would beat K8 once it was. Phenom was lackluster. Phenom II, while much better, was still too little, too late. We all know how Bulldozer turned out.
Quote:


> Originally Posted by *epic1337*
> 
> how would they even detect that? although i could see temperature range detection as one indication.


Temperature, because it really doesn't matter how you get there.
Quote:


> Originally Posted by *Pro3ootector*
> 
> Xbox Scorpio might get RYZEN.


While I would expect to see Ryzen in consoles at some point, if Scorpio is supposed to be a super XBone, totally changing the CPU architecture will make it difficult to maintain consistent performance scaling.
Quote:


> Originally Posted by *Roaches*
> 
> Scaling is almost linear!


Yep, and this applies to cores as well. Blender also sees a 40-50% jump from SMT.

This, and the fact that it can be touted as real-world (because it is, and fairly popular as well), is one of the reasons it's used to show off cores and threads.
Quote:


> Originally Posted by *tpi2007*
> 
> Eventual 12-core i7-6970X at $1.7k just to make an extra point.


I don't see this as being likely on BW-E.

The 12 core part would need to be on the MCC die rather than the LCC die that all production i7-6000 series parts are on. While it's not impossible that they would do this, I think they'd wait for the next HEDT platform for such a shift, rather than sort out a new SKU on bigger EP die when it would just be replaced within months.

If they feel the need for a 6960/70X, they'll probably just tighten up the binning and release a higher clocked 10-core, discontinuing the current part.


----------



## coolhandluke41

[email protected]/150 (for Sale)


Spoiler: Warning: Spoiler!


----------



## Blameless

Quote:


> Originally Posted by *IRobot23*
> 
> Well blender scales pretty much great with SMT design (HT-AMD SMT), but what about handbrake? How well does HT scales there...


Generally not anywhere near as much as the official Blender builds.

I just ran the Handbrake test AMD was using. Times are derived from the encode log, "starting job" to "encode finished". Granularity is only one second in the log, so I've included the average encoding speed for more precision, which is what I'll use for most comparisons.

Primary signature system, HT enabled: 60 seconds (average encoding speed 62.415413 fps)

HT disabled: 64 seconds (average encode speed 58.870796 fps)

That's only a bit over a 6% advantage for HT, which is quite small.
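
Deriving that figure from the logged average encode speeds (a quick check in Python, using only the two numbers quoted above):

```python
# HT advantage from the average encode speeds in the log.
ht_on_fps = 62.415413
ht_off_fps = 58.870796

speedup = ht_on_fps / ht_off_fps - 1
# ~0.060, i.e. just over 6%
```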

I also happen to have a comparison I made well over six years ago which features HT on vs. off in x264 (the encoder Handbrake uses here) and Blender, on the 4.2GHz Nehalem (my favoritest i7-920 D0 ever, may she rest in peace) primary system I had at the time: http://www.overclock.net/t/756266/hyper-threading-enabled-vs-disabled-in-real-apps#post_9706297

Some general conclusions I can draw from past and modern Blender and x264 tests, based on these results and other demonstrable facts:

- Reference Windows Blender builds are inefficient and leave a lot for SMT to work with...probably because of the outdated compiler settings they are habitually built with.
- Blender cares about core count and clock speed far above other factors.
- Windows x264 public binaries are rather efficient and have been getting more efficient over time. SMT isn't very helpful here.
- x264 is much more memory/cache dependent than Blender. My signature system loses to a stock 6900K by about 7-8% in Blender, but only by 1-2% in x264. Some of this may be due to the lower boost state reached when running x264 (it has more AVX code, defaults to more threads, and is generally more demanding), but some of it is also due to the fact that I'm running my 5820K's uncore 1.3GHz above the default BW-E uncore clock and have well tuned memory.

Some tentative conclusions I can draw with relation to the Zen vs. BW-E comparisons:

- Zen likely has a competitive cache/memory subsystem, unlike Bulldozer and its descendants. This is a great relief.
- Some of the advantage shown by Ryzen in x264 is likely from the BW-E being limited to dual channel...dual-channel doesn't hurt Blender at all, but it does hurt x264 (yes, I've tried).
- Zen probably handles light/non-256-bit AVX code very well. x264 doesn't use much AVX (there is only about a 9% difference between non-AVX and AVX2 builds), but whatever it does use certainly doesn't appear to be getting in the way of Zen.
- Better SMT scaling than Intel, at least in certain loads, is quite possibly part of Zen's advantage in X264. Intel doesn't get much from SMT here, but Zen might. Will need more Zen testing to confirm, obviously.


----------



## tpi2007

Quote:


> Originally Posted by *Blameless*
> 
> I don't see this as being likely on BW-E.
> 
> The 12 core part would need to be on the MCC die rather than the LCC die that all production i7-6000 series parts are on. While it's not impossible that they would do this, I think they'd wait for the next HEDT platform for such a shift, rather than sort out a new SKU on bigger EP die when it would just be replaced within months.
> 
> If they feel the need for a 6960/70X, they'll probably just tighten up the binning and release a higher clocked 10-core, discontinuing the current part.


You're probably right, I hadn't thought of the core arrangement.

Or they could forgo binning, go 150W again, and do a "3970X".


----------



## formula m

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> After 13 years working for AMD and over 9 for Intel I am fully aware of how it will probably work. I just stated I would "like", or should I say "prefer" a manual selection in the bios for peace of mind. I hope that is OK with you? Or do I need your approval to want that feature? That is why for the past 30+ years I always build systems that have extensive BIOS releases, such as my ROG systems. I like manually selecting features, but that's just me man. If it is not a manually selectable feature, sure I can handle that too of course.
> 
> Again, I am just hoping I get a manual selection for the type of cooling, if only for the peace of mind of it. Really very simple to understand...


Your answer is: It makes me feel good & I like it. <--I don't disagree with your answer. Just want you to know... that is your technical answer.


----------



## Mahigan

Seems that my old score was wrong.

I set it to 150 samples and...


Spoiler: Warning: Spoiler!

00:36.33

Not bad... but at 4.5GHz. Which means that Zen @ 3.4GHz is a match for my 3930K @ 4.5GHz. That's very nice performance indeed.
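
A crude way to see why that's plausible (illustrative Python, assuming render throughput scales linearly with cores times clock and ignoring IPC/SMT differences, which it only roughly does):

```python
# Naive cores-x-clock model of the two chips in the comparison above.
zen_core_ghz = 8 * 3.4       # 8C Zen engineering sample at 3.4 GHz
i7_3930k_core_ghz = 6 * 4.5  # 6C 3930K at 4.5 GHz

ratio = zen_core_ghz / i7_3930k_core_ghz
# ~1.01: under this crude model the two setups bring almost identical
# aggregate core-GHz, consistent with near-identical render times.
```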


----------



## epic1337

Quote:


> Originally Posted by *Fyrwulf*
> 
> Here's the thing, though, AMD has confirmed that every Zen chip is going to have SMT and will be overclockable, those are baked into the architecture. We might see 6 and maybe 4 core products to recover failed dies. Your prices are low, though. I'm betting on 8 core Zen being around $600, with any hypothetical 6 core product being in the $400-$450 range and 4 core being in the $300 range. Obviously, as those are die recovery products, AMD will be willing to play chicken with prices depending on where performance falls in and what Intel does with prices.


those were the price rumors going around multiple sites.
SR3 = $150 | SR5 = $250 | cutdown SR7 = $350 | SR7 = $500
https://www.overclock3d.net/news/cpu_mainboard/amd_summit_ridge_cpu_pricing_leaked/1

furthermore, AMD promised APU prices will stay cost-effective, and that includes their A8-9600 (4C/8T 6CU) and A12-9800 (4C/8T + 8CU) chips.
if you're gonna put their 4-core APUs at $300 then they would end up way too overpriced compared to former APUs.
take note that Godavari came out at a $137 MSRP; you can't just suddenly almost triple the price of its successors.


----------



## mohiuddin

Quote:


> Originally Posted by *dbLIVEdb*
> 
> so is it confirmed that the numbers are based on 150 sample size?


Guys, I want to know that as well, please.


----------



## fleetfeather

Quote:


> Originally Posted by *mohiuddin*
> 
> Guys i want to know that as well. please.


Well, an AMD employee on Reddit said they used 150 samples. So unless he's also misinformed, yes, it was 150 samples.

https://www.reddit.com/r/Amd/comments/5ie7f0/summoning_uamd_robert_how_can_we_do_the_blender/


----------



## mohiuddin

Quote:


> Originally Posted by *fleetfeather*
> 
> Well, an AMD employee on Reddit said they used 150 samples. So unless he's also misinformed, yes, it was 150 samples.


Thanks man.
By the way, my Sandy Bridge-E E5-2670 8c/16t @ 3GHz all-core needs 56.38 seconds. And I got it for just $70.


----------



## oxidized

Quote:


> Originally Posted by *epic1337*
> 
> those were the price rumors going around multiple sites.
> SR3 = $150 | SR5 = $250 | cutdown SR7 = $350 | SR7 = $500
> https://www.overclock3d.net/news/cpu_mainboard/amd_summit_ridge_cpu_pricing_leaked/1
> 
> furthermore, AMD promised APU prices to stay cost effective, that includes their A8-9600 (4C/8T 6CU) and A12-9800 chip (4C/8T + 8CU).
> if you're gonna put their 4core APUs at $300 then it would end up being way too overpriced compared to former APUs.
> take note that godavari came out at $137 msrp, you can't just suddenly almost triple the price of it's successors.


If those are really the prices, then AMD has a huge advantage in pretty much every price segment of the market, assuming independent benchmarks are even close to what AMD showed.


----------



## mohiuddin

someone made a ryzen wallpaper over the reddit ...lol
https://drive.google.com/drive/folders/0BzTBTtNDe7HuMnc3TGV3U2loMU0


----------



## KarathKasun

Quote:


> Originally Posted by *epic1337*
> 
> those were the price rumors going around multiple sites.
> SR3 = $150 | SR5 = $350 | cutdown SR7 = $350 | SR7 = $500
> https://www.overclock3d.net/news/cpu_mainboard/amd_summit_ridge_cpu_pricing_leaked/1
> 
> furthermore, AMD promised APU prices to stay cost effective, that includes their A8-9600 (4C/8T 6CU) and A12-9800 chip (4C/8T + 8CU).
> if you're gonna put their 4core APUs at $300 then it would end up being way too overpriced compared to former APUs.
> take note that godavari came out at $137 msrp, you can't just suddenly almost triple the price of it's successors.


A $300 APU would be just under double the initial price bracket of AMD APUs. Initial high end APU pricing was ~$175.

They will likely have four thread parts covering $150 and below and eight thread parts covering from there to $300.

Also, you should never have a product stack with duplicate price points; it's bad for business.


----------



## epic1337

Quote:


> Originally Posted by *oxidized*
> 
> If those are really the prices than AMD has a huge advantage over pretty much every price section of the market. Assuming indie benchmark are even close to what AMD showed


I don't think it'll drastically shift buyers towards AMD's camp, but they'll earn more market share out of it.

Plus, it wouldn't be unprecedented for AMD to undercut Intel's price points; take the Phenom II X6 1055T, for example.
MSRP of $199, much cheaper and faster than Intel's $300+ chips.


----------



## oxidized

Quote:


> Originally Posted by *epic1337*
> 
> i don't think it'll have much impact on drastically shifting buyers towards AMD's camp, they'll earn more market shares out of it though.
> 
> plus it isn't a precendent in AMD undercutting Intel's price points, take Phenom II X6 1055T for example.
> MSRP at $199, much cheaper and faster than intel's $300+ chips.


Well, it depends on the real performance. Also, how is that Phenom faster than Intel's?


----------



## KarathKasun

Quote:


> Originally Posted by *epic1337*
> 
> i don't think it'll have much impact on drastically shifting buyers towards AMD's camp, they'll earn more market shares out of it though.
> 
> plus it isn't a precendent in AMD undercutting Intel's price points, take Phenom II X6 1055T for example.
> MSRP at $199, much cheaper and faster than intel's $300+ chips.


Umm, the Phenom II X6 was slower in general than the higher-end LGA1366 parts, ESPECIALLY Intel's hex-cores. It only won on price.


----------



## oxidized

Quote:


> Originally Posted by *KarathKasun*
> 
> Umm, Phenom II x6 was slower in general than the higher end LGA1366 parts. ESPECIALLY Intels hex cores. It only won on price.


This is what I remember too, idk.


----------



## epic1337

Quote:


> Originally Posted by *KarathKasun*
> 
> Also, you should never have a product stack that has duplicate price points, its bad for business.


My typo; SR5 is $250 on that link.

Quote:


> Originally Posted by *KarathKasun*
> 
> Umm, Phenom II x6 was slower in general than the higher end LGA1366 parts. ESPECIALLY Intels hex cores. It only won on price.


I wasn't talking about their high-end parts; Intel had low-end i7s at the $300 price point that were slower than the PII X6. The i7-870, for example, was $305.

Quote:


> Originally Posted by *oxidized*
> 
> This is what i remember too, idk


I dunno, I remember the 1055T being on par with the i7-930.


----------



## KarathKasun

Quote:


> Originally Posted by *epic1337*
> 
> my typo, SR5 is $250 on that link.
> i wasn't talking about their high end parts, intel had low end i7 at the $300 price point and is slower than PII X6, i7-870 for example is $305.


In most user applications the i7 was faster. The PII only really took the lead in rendering and multithreaded compression.

The PII wasn't BAD, but it did not win solidly over most of Intel's product stack at the time. Its claim to fame was bringing six-core CPUs to the masses at a low price.


----------



## epic1337

Quote:


> Originally Posted by *KarathKasun*
> 
> PII wasn't BAD, but it did not win solidly over most of Intels product stack at the time. Its claim to fame was bringing six core CPUs to the masses with its low price.


It brings up one point though: they did introduce a full 6-core chip at a $199 price point.

On a side note, doesn't that sound like Zen?


----------



## KarathKasun

Quote:


> Originally Posted by *epic1337*
> 
> it brings up one point though, they did introduce a full 6core chip at $199 price point.
> 
> on a side note, doesn't that sound like Zen?


No, it sounds like FX/Bulldozer.

Comparative pricing in the market is a direct indication of performance. FX was cheap for a reason; the same goes for Phenom II.


----------



## epic1337

Quote:


> Originally Posted by *KarathKasun*
> 
> No, it sounds like FX/BullDozer.


Bulldozer was down on all fronts except one: heavily threaded workloads.

Zen, on the other hand, indicates possibly inferior IPC but superior thread scaling (superior hyperthreading).
E.g. it does well in apps with good thread scaling like Blender, but would fall behind in more single-thread- or IPC-oriented apps.

Edit: on a side note, will there be any effect on the viewpoint of the uninformed?
E.g. seeing Zen's 6-core chip priced much higher than Piledriver's 8-core chip.
Most people think more cores = better, regardless of IPC.


----------



## KarathKasun

Quote:


> Originally Posted by *epic1337*
> 
> bulldozer was down on all fronts except one, specially threaded workloads.


So was Phenom II. It had the IPC of Core 2 after the new Core i# series launched. They sold a cheap six-core part because they had no other way to compete, simple as that.

BD still had C2D-level IPC and followed the same path: throw more cores at the problem.

I don't look back with rose-tinted shades. I see the decisions for what they were: blind, last-ditch efforts to remain relevant after they had fallen one generation behind in core design with Phenom II, and two generations behind with BD.

Zen looks to be a step back in the right direction from an engineering point of view. I hope they can keep this forward momentum going with future R&D.


----------



## flippin_waffles

Quote:


> Originally Posted by *tpi2007*
> 
> As excited as we are, perhaps Intel only needs to do one very simple thing with X99 to stay in the game before they release the next platform: shift all the CPUs down one slot.
> 
> 
> i7-6800K: discontinued;
> i7-6850K: the new entry level for X99, with 5820K pricing. Next year the yields will be good to get more volume at the higher clockspeeds and the PCIe lane segmentation was always artificial to begin with, so it's easy peasy for Intel to leave them all enabled. Now they get to say that all the X99 CPU line-up has 40 lanes on-die;
> i7-6900K: now occupies the $500 segment;
> i7-6950X: 10 cores now at $1k;
> Eventual 12 core i7-6970X at $1.7k just to make an extra point.
> Now, if AMD does release a locked 8 core chip at the 5820K price point as some rumours say, it may confuse things a bit.


Sure, lowering prices is probably what Intel will have to do, but does that really make the platform much more attractive relative to what AM4 is rumored to bring? If Intel is willing to cut their margins on those chips virtually in half for price/perf parity, I think the AM4 platform is still going to be more attractive to enthusiasts. My prediction is that an all-AMD system with an enthusiast-class Ryzen CPU and an enthusiast-class Vega GPU on an AM4 platform is going to give gamers the best experience, performance, and features. Time will tell.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Blameless*
> 
> If both chips were allowed to reach full idle states, power should be very close since idle power draw on modern CPUs (even BW-E) is very low.
> 
> However, we don't know this for certain, nor do we know precisely how much power differential there is from the platform/board itself (X99 tends to have a higher baseline power draw than the mainstream chipsets/boards, but it's not a huge difference), and per sample variance can easily be the difference by more than a few watts...which is why I consider the power consumption figures a wash.
> 
> No firm conclusions can be drawn from them except to demonstrate that TDP ratings are not necessarily tightly linked to power consumption, which most of us should know already.


Agreed, they're undoubtedly close in any case.

I'm just saying that using a delta is silly.

This was his reasoning:
Quote:


> Idle
> Zen: 93W
> Intel: 106W
> 
> Load
> Zen: 187W
> Intel: 191W
> 
> = Intel winning.


With that logic I could say:

Idle
Zen: 150W
Intel: 50W

Load:
Zen: 200W
Intel: 150W

= Zen wins because delta is 50W vs 100W..
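The point can be made concrete with a few lines of Python, using the hypothetical wattages above (illustrative numbers, not measurements):

```python
# Hypothetical wall-power readings in watts (illustrative numbers only)
systems = {
    "Zen":   {"idle": 150, "load": 200},
    "Intel": {"idle": 50,  "load": 150},
}

# Ranking by absolute load draw: what the meter actually shows
winner_by_load = min(systems, key=lambda s: systems[s]["load"])

# Ranking by load-minus-idle delta: the flawed metric
winner_by_delta = min(systems, key=lambda s: systems[s]["load"] - systems[s]["idle"])

print(winner_by_load)   # Intel: draws 50W less at the wall under load
print(winner_by_delta)  # Zen: its delta (50W) is smaller than Intel's (100W)
```

Same numbers, opposite conclusions, which is exactly why a delta comparison proves nothing on its own.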


----------



## formula m

Quote:


> Originally Posted by *epic1337*
> 
> it brings up one point though, they did introduce a full 6core chip at $199 price point.
> 
> on a side note, doesn't that sound like Zen?


Correct^

And how long ago was that..? People tend to forget.

The price of Ryzen isn't the cost of Zen.

I can't remember the last time I was so geek'd. I'm pushing 50.

AMD's SKUs look phenomenal from a buyer's standpoint. And AMD's product stack being an 8-core/6-core/4-core design means that AMD's yields are quite profitable and that Ryzen's uarch is quite dimensional and robust. Zen's chip AI is being glazed over; it means better yields, as they laser chips down to 6- and 4-core variants.

That is how/why AMD might be able to sell an SR7 for $399... and an SR7 "Black Edition" w/AIO water for $499! Because any chips that don't meet those two tiers become 6-core SR5s and 4-core SR3s.

AMD is going to compete in the market space on cost and price. Intel (currently) is fractured with its customers. Z170 is dead, X99 is dead, and the new stuff is 10 months out. Even then, what exactly will it offer the customer? And then, what over Zen?


----------



## Blameless

Quote:


> Originally Posted by *KarathKasun*
> 
> PII wasn't BAD, but it did not win solidly over most of Intels product stack at the time. Its claim to fame was bringing six core CPUs to the masses with its low price.


Yes.
Quote:


> Originally Posted by *epic1337*
> 
> on a side note, doesn't that sound like Zen?


Hard to not see the similarities, but Phenom II wasn't quite competitive enough in the long run to be something to aspire to again.


----------



## cssorkinman

Quote:


> Originally Posted by *formula m*
> 
> Quote:
> 
> 
> 
> Originally Posted by *epic1337*
> 
> it brings up one point though, they did introduce a full 6core chip at $199 price point.
> 
> on a side note, doesn't that sound like Zen?
> 
> Correct^
> And how long ago was that..? People tend to forget.
> 
> The price of Ryzen isn't the cost of Zen.
> I can't remember the last time I was so geek'd. I'm pushing 50.
> 
> AMD's SKUs look phenomenal from a buyer's standpoint. And AMD's product stack being an 8-core/6-core/4-core design means that AMD's yields are quite profitable and that Ryzen's uarch is quite dimensional and robust. Zen's chip AI is being glazed over; it means better yields, as they laser chips down to 6- and 4-core variants.
> 
> That is how/why AMD will be able to sell an SR7 for $399... and an SR7 "Black Edition" w/AIO water for $599! Because any chips that don't meet those two tiers become 6-core SR5s and 4-core SR3s.
> 
> AMD is going to compete in the market space on cost and price. Intel (currently) is fractured with its customers. Z170 is dead, X99 is dead, and the new stuff is 10 months out. Even then, what exactly will it offer the customer? And then, what over Zen?
Click to expand...

I've seen a half century myself and am also quite anxious to see this product come to market.

If the rumored SR3 is a 4-core, 8-thread-capable part, it should match the 6700K in the tasks demonstrated at the press event. Coupled with the rumored $200 price tag... it's not hard to imagine enthusiasts of any age being excited for Zen.


----------



## Marios145

Since no release date was announced, I hope AMD doesn't pull an AMD with Zen's launch.


----------



## Undervolter

Quote:


> Originally Posted by *Jpmboy*
> 
> Regarding the Handbrake comparo... it would use AVX unless that instruction set is disabled.


x264 uses AVX2 much more than it uses AVX, but it remains a mainly integer workload. As for disabling AVX or AVX2, I've never seen Handbrake or the x264 settings offer this option.


----------



## Phixit

Good news if AMD is back in the game. As much as I love my i7-6700K, Intel needs more competition.

I had great experiences with AMD in the past!


----------



## Blameless

Quote:


> Originally Posted by *Undervolter*
> 
> x264 uses AVX2 much more than it uses AVX, but remains a mainly integer operation. As for disabling AVX or AVX2, I 've never seen Handbrake or the x264 settings to offer this option.


You can disable certain instructions with x264 in the command line.

If you add "--asm mmx2,sse2,sse2fast,ssse3,sse42" to the command line or custom options you will use all non-AMD specific instruction sets prior to AVX.


----------



## Jpmboy

Quote:


> Originally Posted by *formula m*
> 
> Correct^
> And how long ago was that..? People tend to forget.
> The price of Ryzen isn't the cost of Zen.
> I can't remember the last time I was so geek'd. I'm pushing 50.
> AMD's SKUs look phenomenal from a buyer's standpoint. And AMD's product stack being an 8-core/6-core/4-core design means that AMD's yields are quite profitable and that Ryzen's uarch is quite dimensional and robust. Zen's chip AI is being glazed over; it means better yields, as they laser chips down to 6- and 4-core variants.
> That is how/why AMD might be able to sell an SR7 for $399... and an SR7 "Black Edition" w/AIO water for $499! Because any chips that don't meet those two tiers become 6-core SR5s and 4-core SR3s.
> *AMD is going to compete in the market space on cost and price. Intel (currently) is fractured with its customers. Z170 is dead, X99 is dead, and the new stuff is 10 months out. Even then, what exactly will it offer the customer? And then, what over Zen?*


The only issue with that logic is that both platforms cited are over one and nearly three years old... and AMD is only now matching their performance? As I said earlier, at least AMD has closed the gap to ~1 year.

Quote:


> Originally Posted by *Undervolter*
> 
> x264 uses AVX2 much more than it uses AVX, but remains a mainly integer operation. As for disabling AVX or AVX2, I 've never seen Handbrake or the x264 settings to offer this option.


Yeah, you can disable the instruction-set families, as Blameless points out below. Additionally, this can be done at the BIOS level; cf. all the non-K unlock BIOS mods.
Quote:


> Originally Posted by *Blameless*
> 
> You can disable certain instructions with x264 in the command line.
> 
> If you add "--asm mmx2,sse2,sse2fast,ssse3,sse42" to the command line or custom options you will use all non-AMD specific instruction sets prior to AVX.


----------



## Newwt

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> So JayzTwoCents did a "my thoughts" video on this... I don't mind his videos but he really didn't do much research into Ryzen because he talked about if there was going to be "4 core 8 thread or even a four core with no hyper threading"... err... um... we already know there's going to be a pretty good line up.


I usually watch some of his videos as well...

After I saw this, though, I was just like, come on dude...


----------



## Shau76434

AMD's stock went past the $11 mark.


----------



## Blameless

Quote:


> Originally Posted by *Barca130*
> 
> AMD's stock went past the $11 mark.


Makes me wish I sat on my 1500 shares, instead of cashing out to gamble it away on those Chinese small caps a few years back!


----------



## Newwt

Quote:


> Originally Posted by *formula m*
> 
> Correct^
> And how long ago was that..? People tend to forget.
> 
> The price of Ryzen isn't the cost of Zen.
> I can't remember the last time I was so geek'd. I'm pushing 50.
> 
> AMD's SKUs look phenomenal from a buyer's standpoint. And AMD's product stack being an 8-core/6-core/4-core design means that AMD's yields are quite profitable and that Ryzen's uarch is quite dimensional and robust. Zen's chip AI is being glazed over; it means better yields, as they laser chips down to 6- and 4-core variants.
> 
> That is how/why AMD might be able to sell an SR7 for $399... and an SR7 "Black Edition" w/AIO water for $499! Because any chips that don't meet those two tiers become 6-core SR5s and 4-core SR3s.
> 
> AMD is going to compete in the market space on cost and price. Intel (currently) is fractured with its customers. Z170 is dead, X99 is dead, and the new stuff is 10 months out. Even then, what exactly will it offer the customer? And then, what over Zen?


Agreed.

This brings up another point about what people were able to do with the Phenoms: buy a lower-core chip with "damaged" cores and unlock them...


----------



## Newwt

Quote:


> Originally Posted by *Blameless*
> 
> Makes me wish I sat on my 1500 shares, instead of cashing out to gamble it away on those Chinese small caps a few years back!


Makes me sick to my stomach, since I was contemplating buying some shares at the start of the year with my bonus...


----------



## Shau76434

Quote:


> Originally Posted by *Blameless*
> 
> Makes me wish I sat on my 1500 shares, instead of cashing out to gamble it away on those Chinese small caps a few years back!


Did you buy those shares when AMD was at its lowest around January (~$1.80)?


----------



## LazarusIV

SR3 - 4C/8T - $249
SR5 - 6C/12T - $349
SR7 - 8C/16T - $499
SR7+ - 8C/16T - $599

APU - 4C/4T 6CU - $179
APU - 4C/8T 8CU - $274

Just had to get in on this... This is based on nothing but conjecture and fairy dust.


----------



## CrazyElf

If anyone is looking for a larger sample size of CPUs and performances, I have something that might help.

Searching around, I just found this:
https://docs.google.com/spreadsheets/d/1O2Or6XOETZr3a4gLAuCuYEx8m5X_lv23LdwZiCZpBcM/edit#gid=0

It's a list of Blender scores; note that the submitters listed their sample counts there. I'm reluctant to use the "N/A" sample data, because AMD previously had a 200-sample file and re-uploaded it at 150 samples, so we can't tell which setting "N/A" corresponds to.

This is from the following thread, the AMD Subreddit:
https://www.reddit.com/r/Amd/comments/5idz87/ryzen_blender_benchmark_comparison_spreadsheet/

I'm going to contact the author and see if we can get this done consistently at 150 samples.
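If render time in this benchmark scales roughly linearly with sample count (a simplification that ignores fixed per-frame overhead), mixed-sample submissions can at least be put on a common footing. `normalize` here is a hypothetical helper, not something from the spreadsheet:

```python
def normalize(seconds: float, samples: int, target_samples: int = 150) -> float:
    """Rescale a render time to a common sample count, assuming time
    grows linearly with samples (ignores fixed scene-setup overhead)."""
    return seconds * target_samples / samples

# e.g. a time recorded with the older 200-sample file, rescaled to 150 samples
print(round(normalize(48.0, 200), 2))  # -> 36.0
```

Times recorded at exactly 150 samples pass through unchanged, so the rescaled and native scores stay comparable.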

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Wait... are you really using a delta between load and idle to declare a "winner"? That's completely pointless..


We just want a ballpark figure at this point; all we've got is limited information. If it's a complete failure, we will know, just as we will know if a figure is unrealistic.

But of course, if it is a wash as Blameless said, then we are going to need a lot more information.


Quote:


> Originally Posted by *}SkOrPn--'*
> 
> Yeah Lisa said hundreds of sensors are inside Ryzen, so it is probably making assumptions to what kind of cooling you got just by how it is working. Seems easy enough. However I want to manually select it if I can, lol. Like most of us here probably agree a manual selection would give peace of mind that we are taking full advantage of their XFR feature. Although most will probably disable that too and manually overclock.


I would hazard a guess that they are using Adaptive Voltage and Frequency Scaling (AVFS) in Zen. The way AVFS works, they would need to put hundreds of sensors in.

Here is an article:
https://www.extremetech.com/gaming/203898-amd-details-new-power-efficiency-improvements-update-on-25x20-project

They used it in their Carrizo APUs.

It's almost certain they will use it in Vega; they already used it in Polaris. On the downside, downclocking and perhaps overclocking headroom may be affected. We don't know at this point.

Quote:


> Originally Posted by *}SkOrPn--'*
> 
> I shortened your quote as to not make for a large post.
> 
> I'd like to say even though Ryzen does seem to be quite decent, the real test imo, just as they went through years ago after the release of AMD64, will be *"keeping up with Intel"*. If Intel fights back as they did all those years ago with Core series, who knows if AMD will slip behind again? My hope is that they manage to keep up this time, especially considering the signs that Intel is slowly moving into other areas of interest. Now how AMD pulls that off in competition with a behemoth such as Intel is going to be interesting to watch. Do you think they can keep up this time around?
> 
> I can't wait to see what Intel does with their products once Ryzen has released into the wild. *But for now, to me anyway Ryzen appears more advanced and I want it*. I already have several friends who are looking at their new Skylake builds and saying meh, lol. I'm thinking the same about my 6c12t Xeon now finally too since that is now a little long in the tooth. But that is because it is time for an upgrade regardless. I was waiting for Kaby Lake or any Z270 based system, but now not so sure, haha. That will depend on Asus, Gigabyte or maybe even ASRock to release some very interesting boards for Ryzen consumption.


I suspect that they may lower price slightly but not much else.

Their ability to respond with a better product (e.g. better single-threaded performance) is what I am really interested in seeing. Are we at the limits of silicon, or has Intel simply been holding back because they wanted to milk the market? If we are at the limits of silicon, Intel might not be able to respond as much as you think. In that case, they may be forced to resort to some of the dirty tactics they used in the days when Prescott did not do so well against the AMD K8, just to prevent AMD from gaining market share. If we are at the limits of silicon, AMD may be the victim of some unethical business practices again. If we are not, then hopefully Intel does indeed pull that rabbit out of the hat and give us another leap. I suspect that we are at the limits of silicon.

Another big question is the OEMs themselves: will they go along with Intel's ruse? And for these products to be used to their full potential, they need dual-channel RAM. A lot of OEMs shipped subpar Carrizo-era APU systems with single-channel RAM, which is especially bad for APUs because they need the bandwidth for their iGPUs.

Zen actually has a huge value proposition here. Right now AMD cannot offer competitive single-threaded CPU performance, but it can offer good graphics performance. Zen's value proposition is that it offers "good enough" (for most users) CPU performance while offering a competitive GPU solution. Keep in mind that the GPU solution will be under siege soon, because if Intel does indeed have the rights to AMD iGPUs, then they will be able to integrate them while offering superior single-threaded performance and a superior process (they have a true 14nm versus AMD's 14/20nm hybrid via GlobalFoundries). It's unlikely that anyone is going to reach process parity with Intel.

Quote:


> Originally Posted by *Blameless*
> 
> Well, we'll have to see--hard to speculate much on Zen+ when Zen hasn't been released yet--but I would not be surprised if this were the case. Should be some room for fairly easy refinement, just as there was between Nehalem and Sandy.
> TSX is actually still disabled on Broadwell-E. It was supposed to be one of the only potentially significant changes between HW-E and BW-E, but Intel was apparently still having issues so they disabled it again. If you look at the specification update you can see it in the errata.
> 
> It's one of the reasons I ended up cheaping out and getting a 6800K instead of a 6900K for my second X99 board...didn't want to drop a grand on a CPU that Zen might roughly match for much less and it didn't have much of anything over my existing 5820K setup (TSX doesn't do anything now, but aside from my X79 parts, I tend to keep my hex cores a very long time and it may have been relevant at some point). I have been pleasantly surprised by my second 6800K sample though...matches or beats my 5820K at the same core clock with worse uncore/memory settings, and does it while using almost 70-80w less under load (both at 4.3GHz).
> If both chips were allowed to reach full idle states, power should be very close since idle power draw on modern CPUs (even BW-E) is very low.


Thanks. I did not realize that at all.

That actually got me thinking. That will likely slow down TSX adoption by a few years. Perhaps with Skylake E or Cannonlake E, we will see the bugs fixed. Actually by then, a Zen+ Opteron might be able to incorporate TSX support too, negating AMD's drawback.

There is one other matter: 128-bit AVX might be more competitive on Zen than 256-bit AVX. Zen has 2x 128-bit (or 4x 64-bit) FMACs, so a 256-bit AVX instruction cannot be done in one cycle. What about 128-bit AVX operations?

Theoretically, there were already big benefits to AVX, and Zen might be able to run the 128-bit operations at full rate. I wonder if it would be possible to find out. Hmm... maybe if we could disable AVX2 on Haswell-E and just have AVX?

Here is Linpack with AVX on and off: http://forums.evga.com/Intel-AVX-vs-NoAVX-Benchmarks-m870442.aspx

Most applications will not scale as well as Linpack, but anecdotally I remember many applications gaining significantly, and as we've seen, Blender with AVX2 sees huge improvements. That run was on a 2600K at 4.5GHz, per the poster.

Quote:


> Originally Posted by *Mahigan*
> 
> Seems that my old score was wrong.
> 
> I set it to 150 samples and...
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 00:36.33
> 
> Not bad... but at 4.5GHz. Which means that Zen @ 3.4GHz is a match for my 3930K @ 4.5Ghz. That's very nice performance indeed.


This is looking very good for AMD. Keep in mind this is an 8-core part compared to the 6-core part you are running, and that dual-channel RAM setups get slightly better scores in this benchmark than quad-channel setups.

But it is a pretty decent IPC uplift over what they had, and yes, it is a very competitive solution against X79 and perhaps even X99.
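As a rough sanity check on that comparison, assuming near-perfect scaling with cores and clocks (which no real workload achieves), the quoted times work out to near parity per core-GHz:

```python
def work_rate_per_core_ghz(seconds: float, cores: int, ghz: float) -> float:
    """Relative throughput per core per GHz for a fixed workload; higher is
    better. Assumes perfect multithreaded scaling, so treat it as a ceiling."""
    return 1.0 / (seconds * cores * ghz)

zen_es = work_rate_per_core_ghz(36.33, cores=8, ghz=3.4)     # matching the 3930K's time
i7_3930k = work_rate_per_core_ghz(36.33, cores=6, ghz=4.5)

print(round(zen_es / i7_3930k, 2))  # -> 0.99, roughly Sandy Bridge-E-level per core-GHz
```

In other words, 8 cores at 3.4GHz is about the same core-GHz budget as 6 cores at 4.5GHz, so equal times in this one test suggest roughly equal per-core, per-clock throughput, with all the caveats above.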

AVX2 is probably where it will be weaker, though 128-bit AVX might be interesting. As I was saying to Blameless, Zen has 2x 128-bit (or 4x 64-bit) FMACs, so a 256-bit AVX instruction cannot be done in one cycle. But 128-bit AVX? There is a possibility.

Which reminds me, could someone with Sandy Bridge or Ivy Bridge, or alternatively any X79 E-series CPU, please run The Stilt's AVX build?

https://1drv.ms/u/s!Ag6oE4SOsCmDhFAm03vWlB3s_qeD
Password: "ryzen" (without the quotes)

It would be much appreciated. We need a test from a CPU that has 128-bit AVX (which Sandy and Ivy Bridge have) but not AVX2 (as on Haswell, Broadwell, and Skylake).


----------



## Blameless

Quote:


> Originally Posted by *Jpmboy*
> 
> Regarding the Handbrake comparo... it would use AVX unless that instruction set is disabled.


Quote:


> Originally Posted by *Undervolter*
> 
> x264 uses AVX2 much more than it uses AVX, but remains a mainly integer operation. As for disabling AVX or AVX2, I 've never seen Handbrake or the x264 settings to offer this option.


I just ran the Handbrake test again to compare AVX enabled (default) vs. disabled on my primary system.

*Default settings:*
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2
...
[10:01:02] work: average encoding speed for job is 62.540569 fps

*After turning off everything more recent than SSE 4.2:*
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
...
[10:11:33] work: average encoding speed for job is 61.065166 fps

As you can see, the x264 build in Handbrake 10.5 is very light on AVX work, only a ~2.5% advantage with the newer instruction sets.

You can run this test yourself by checking the "use advanced tab" option and pasting ":asm=MMX2,SSE2Fast,SSSE3,SSE4.2" into the advanced tab's "x264 encoder options" (_after_ everything else; do not delete any of the defaults already there).
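For reference, the speed-up falls straight out of the two average-fps figures quoted above:

```python
fps_avx = 62.540569  # default build: AVX/AVX2/FMA3 enabled
fps_sse = 61.065166  # capped at SSE4.2

speedup = fps_avx / fps_sse - 1.0
print(f"{speedup:.1%}")  # -> 2.4%, in line with the ~2.5% advantage quoted
```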
Quote:


> Originally Posted by *Barca130*
> 
> Did you buy those shares when amd was at its lowest around January (~1.80 dollar).


I bought them about four years ago when the price was around $2 and sold them at $4 about six months later. Would have picked up some more at their more recent lows, but I just bought a house and haven't had much free change.
Quote:


> Originally Posted by *CrazyElf*
> 
> But of course, if it is a wash as Blameless said, then we are going to need a lot more information.


I call it a wash because the 5-10W spread we see is lower than the spread I'd see if I bought five random 6900K samples and compared them to each other in the same system at stock. The per-sample variability of CPUs exceeds the margins shown in the AMD demo.
Quote:


> Originally Posted by *CrazyElf*
> 
> That actually got me thinking. That will likely slow down TSX adoption by a few years. Perhaps with Skylake E or Cannonlake E, we will see the bugs fixed. Actually by then, a Zen+ Opteron might be able to incorporate TSX support too, negating AMD's drawback.


TSX works in LGA1151 Skylake and Kaby Lake, as well as the current Xeon E7/EX parts, and I'd be very surprised if it didn't work in Skylake-E. Still, the lack of support on the current workstation socket may well delay things slightly.
Quote:


> Originally Posted by *CrazyElf*
> 
> There is one other matter. AVX128 might be more competitive with Zen than AVX256. They've got 2x 128 and 4x 64 FMACs. So an AVX 256 instruction cannot be done in 1 cycle. What about AVX1, which is 128 bit?


Presumably, up to two 128-bit AVX instructions per cycle...assuming no other bottlenecks, of course.
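To put the FMAC-width difference in raw numbers, here is a back-of-envelope peak-FLOPS estimate (core counts and clocks are illustrative assumptions, and an FMA counts as two floating-point ops):

```python
def peak_gflops(cores: int, ghz: float, fma_pipes: int, vector_bits: int) -> float:
    """Theoretical single-precision peak: cores * GHz * FMA pipes *
    SIMD lanes (bits / 32) * 2 ops per FMA. Ignores memory and thermals."""
    lanes = vector_bits // 32
    return cores * ghz * fma_pipes * lanes * 2

# Zen per the rumored layout: two 128-bit FMA pipes per core, 3.4GHz assumed
print(round(peak_gflops(8, 3.4, fma_pipes=2, vector_bits=128), 1))  # -> 435.2
# A Haswell/Broadwell-style core: two 256-bit FMA pipes, 3.2GHz assumed
print(round(peak_gflops(8, 3.2, fma_pipes=2, vector_bits=256), 1))  # -> 819.2
```

Even at a lower clock, the 256-bit design has nearly twice the theoretical FP peak, which is why AVX2-heavy code is where Zen would be expected to trail; on 128-bit code, the two layouts issue the same number of lanes per cycle.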


----------



## seanpatrick

I'd like someone's opinion on whether it would be worthwhile to upgrade to Zen from my Intel i5-6500, which I've been able to OC to 4.65GHz. Lateral move or worthwhile (from the limited info we have so far)?

Thanks!


----------



## Blameless

Quote:


> Originally Posted by *seanpatrick*
> 
> I'd like someones opinion on whether it would be worthwhile to upgrade to Zen over my Intel i5 6500 that I've been able to OC to 4.65. Lateral move or worthwhile (from the limited info we have so far).


Most likely a modest downgrade for lightly threaded tasks and a huge upgrade for multi-threaded ones, assuming we are talking about at least a hex-core Zen.


----------



## Shau76434

Quote:


> Originally Posted by *Blameless*
> 
> Snip


Congrats on the house. I'm pretty new to this, but your posts are really informative. +Rep


----------



## Blameless

Quote:


> Originally Posted by *Barca130*
> 
> Congrats on the house. Im pretty new to this but your posts are really informative +Rep.


Thanks.


----------



## Newwt

Quote:


> Originally Posted by *seanpatrick*
> 
> I'd like someones opinion on whether it would be worthwhile to upgrade to Zen over my Intel i5 6500 that I've been able to OC to 4.65. Lateral move or worthwhile (from the limited info we have so far).
> 
> Thanks!


We don't know yet.


----------



## SoloCamo

Quote:


> Originally Posted by *seanpatrick*
> 
> I'd like someone's opinion on whether it would be worthwhile to upgrade to Zen over my Intel i5 6500 that I've been able to OC to 4.65GHz. Lateral move or worthwhile (from the limited info we have so far)?
> 
> Thanks!


That depends entirely on your workload. *Most* games would likely see slightly worse performance, as they don't take advantage of the extra threads - assuming you're CPU-bound in the first place and not sitting on the GPU. However, new titles are starting to take advantage of more threads, and it shows.

Now, if you stream, render, encode, or do anything else that scales well with threads, or even if you're a heavy multitasker, yeah, it's probably going to be a nice upgrade for you.


----------



## mouacyk

To put pricing and performance together in the context of the RyZen Blender benchmark: my 980 Ti can do it in 7 seconds at 150 samples on Linux in ver 2.78a. That's $300, if you scour the right sources.


----------



## CrazyElf

Another interesting benchmark for Zen will be AES encryption. Intel made a huge leap in AES-NI throughput with Haswell.

Source: https://www.pcper.com/reviews/Processors/Intel-Core-i7-6700K-Review-Skylake-First-Enthusiasts/Clock-Clock-Skylake-Broadwel


There was a huge jump.

Anyways, if anyone is interested, some bedtime reading:
https://pdfs.semanticscholar.org/0eec/2da1ac0a2fcaea0fdeb2194a89e167dfdd4a.pdf

I'm curious as to the Broadwell regression though. In theory, performance should have gone up. Source: http://www.anandtech.com/show/10158/the-intel-xeon-e5-v4-review/3



TrueCrypt was discontinued, but other AES benchmarks should be comparable. Could the smaller L3 cache have had an adverse impact? Either that, or is it throttling?

Quote:


> Originally Posted by *Blameless*
> 
> Presumably, up to two 128-bit AVX instructions per cycle...assuming no other bottlenecks, of course.


Then it might end up being better than Sandy Bridge and Ivy Bridge at AVX1. Might. I've been looking for comparisons of AVX128 vs AVX256: https://www.extremetech.com/computing/157337-the-haswell-paradox-the-best-cpu-in-the-world-unless-youre-a-pc-enthusiast



I'm thinking Zen might be equal to AVX128 in that regard. If anyone has any AVX128 vs AVX256 benches, that would be much appreciated.

OT, but my original plan was to compare the SkyOC Skylake Xeons against the Zen Opterons (which might be unlocked btw). Not possible.
http://hwbot.org/newsflash/3307_non_k_skylake_overclocking_hurts_avx2_performance_problem_related_to_256_bit_vector_warm_up_%28hardware.fr%29/

Edit:
Reached out to the author of the Reddit post.

Still, the Opteron might be amazing:

- 32 cores
- 8-channel DDR4 RAM
- perhaps 64 PCIe 3.0 lanes
Against this, Intel's Xeon E5 2699v5 is supposed to have 32 cores @ 2.1 GHz, 6 channels of DDR4, and 40 (or 48) lanes of PCIe 3.0.

Plus, if the Opteron is unlocked, that would lead to some insane 2P setups for multithreaded performance.


----------



## Fyrwulf

Quote:


> Originally Posted by *epic1337*
> 
> those were the price rumors going around multiple sites.
> SR3 = $150 | SR5 = $250 | cutdown SR7 = $350 | SR7 = $500
> https://www.overclock3d.net/news/cpu_mainboard/amd_summit_ridge_cpu_pricing_leaked/1
> 
> furthermore, AMD promised APU prices to stay cost effective, that includes their A8-9600 (4C/8T 6CU) and A12-9800 chip (4C/8T + 8CU).
> if you're gonna put their 4core APUs at $300 then it would end up being way too overpriced compared to former APUs.
> take note that Godavari came out at $137 MSRP; you can't just suddenly almost triple the price of its successors.


Don't take those prices as gospel. I've been keeping a spreadsheet with a range of scenarios for where Zen would fit performance-wise, and the average case was bang on for the 8 core. If 4-core Zen comes in at 4.25GHz (I'm probably being conservative with this, because there would be no iGPU to take up power and heat budget), it will only be 7% slower than a 6700K. Even with the price cuts that I believe AMD has received for wafers, $250 is basically no profit on a 4-core part, while $150 is taking a loss. AMD can't afford that.

Godavari came in where it did because the CPU itself is lackluster and the iGPU is barely capable, plus the process node was so mature that the chips were dirt cheap to make. Raven Ridge, _if_ the rumors are true about the 1048 stream processor Vega core, is absolutely worth three times what any APU has released for. That's a processor almost as fast as Skylake and an iGPU in the RX 470-480 class, which is basically the holy grail of APUs.


----------



## Fyrwulf

Quote:


> Originally Posted by *seanpatrick*
> 
> I'd like someone's opinion on whether it would be worthwhile to upgrade to Zen over my Intel i5 6500 that I've been able to OC to 4.65GHz. Lateral move or worthwhile (from the limited info we have so far)?
> 
> Thanks!


If you want more performance for the here and now, it's not going to do anything for you. If you want more performance in the future as programs take better advantage of a greater number of threads, absolutely pull the trigger.


----------



## Newwt

Quote:


> Originally Posted by *Fyrwulf*
> 
> Don't take those prices as gospel. I've been keeping a spreadsheet with a range of scenarios for where Zen would fit performance-wise, and the average case was bang on for the 8 core. If 4-core Zen comes in at 4.25GHz (I'm probably being conservative with this, because there would be no iGPU to take up power and heat budget), it will only be 7% slower than a 6700K. Even with the price cuts that I believe AMD has received for wafers, $250 is basically no profit on a 4-core part, while $150 is taking a loss. AMD can't afford that.
> 
> Godavari came in where it did because the CPU itself is lackluster and the iGPU is barely capable, plus the process node was so mature that the chips were dirt cheap to make. Raven Ridge, _if_ the rumors are true about the 1048 stream processor Vega core, is absolutely worth three times what any APU has released for. That's a processor almost as fast as Skylake and an iGPU in the RX 470-480 class, which is basically the holy grail of APUs.


From what I understand there aren't separate dies for SR3/5/7. I've read Zen chips are all based on the 8c die, and the damaged cores will be disabled and sold as 4c/6c chips, so there is limited waste.

Much like how the Phenom II chips were processed.


----------



## Fyrwulf

Quote:


> Originally Posted by *Newwt*
> 
> From what I understand there aren't separate dies for SR3/5/7. I've read Zen chips are all based on the 8c die, and the damaged cores will be disabled and sold as 4c/6c chips, so there is limited waste.
> 
> Much like how the Phenom II chips were processed.


Which is exactly my point. If my math is right about how much AMD is paying per wafer, an 8-core Zen processor is going to cost $150 for AMD to manufacture. For a retailer to put that chip out at $150, AMD has to be taking a huge loss or everybody else in the supply chain has to. AMD might as well just throw the chip out at that point.


----------



## Tojara

Quote:


> Originally Posted by *Fyrwulf*
> 
> Don't take those prices as gospel. I've been keeping a spreadsheet with a range of scenarios for where Zen would fit performance-wise, and the average case was bang on for the 8 core. If 4-core Zen comes in at 4.25GHz (I'm probably being conservative with this, because there would be no iGPU to take up power and heat budget), it will only be 7% slower than a 6700K. Even with the price cuts that I believe AMD has received for wafers, $250 is basically no profit on a 4-core part, while $150 is taking a loss. AMD can't afford that.


$250 would definitely be a lot of profit. At ~$7,000 per wafer, a single die comes in at around $25-35. Even with $100 of costs/packaging/R&D, that's a fair bit of profit on each $200 CPU, not to mention the higher-end ones. I have no idea how you got the cost of a single chip well above $150.

It's not like you have to make a massive profit on the worst dies, they're worth it as long as they make money over what making and selling them costs.
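Tojara's $25-35 figure is easy to sanity-check with the standard dies-per-wafer approximation. The ~200 mm² die area below is my own placeholder guess for an 8-core Zen die, and yield losses are ignored, so treat the output as illustrative only:

```python
import math

def dies_per_wafer(wafer_diameter_mm=300, die_area_mm2=200):
    """Gross die count: wafer area over die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

candidates = dies_per_wafer()
print(candidates)                      # ~306 die candidates per 300mm wafer
print(round(7000 / candidates, 2))    # ~$22.88 per gross die at $7,000/wafer
```

Even if only two-thirds of those dies came out usable, the per-die cost would still land in the $30-35 range, consistent with the estimate above.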


----------



## Newwt

Quote:


> Originally Posted by *Fyrwulf*
> 
> Which is exactly my point. If my math is right about how much AMD is paying per wafer, an 8-core Zen processor is going to cost $150 for AMD to manufacture. For a retailer to put that chip out at $150, AMD has to be taking a huge loss or everybody else in the supply chain has to. AMD might as well just throw the chip out at that point.


Don't know what your math is, or really the process for bringing CPUs to market, but I would think making some sort of money on a chip that would normally be trashed would add up somewhere. Especially if complete/better-working dies are being sold for 2x-3x the lower-binned part.


----------



## Fyrwulf

Quote:


> Originally Posted by *Tojara*
> 
> $250 would definitely be a lot of profit. At ~$7 000 per wafer a single die comes at around $25-35. Even with $100 of costs/packaging/R&D that's a fair bit of profit for each $200 CPU, not to mention the higher end ones. I have no idea how you got the cost of a single chip well above $150.
> 
> It's not like you have to make a massive profit on the worst dies, they're worth it as long as they make money over what making and selling them costs.


I've got to get ready for work, but you're missing a lot of the supply chain. $35 is the cost of the die itself; packaging doubles that. Then factor in GloFo's profit. That's what AMD pays GloFo. AMD has to get theirs, so they build in profit when selling it to their distributor. The distributor has to get theirs, so there's another layer of profit before it goes to the retailer. Then the retailer has to get theirs, so that's yet another layer of profit. These things build up quickly.
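The stacked-margins argument is just compounding percentages. Every markup figure below is a made-up illustration (I have no insight into actual GloFo, AMD, or retail margins); the point is only how quickly the layers multiply a $35 die into a shelf price:

```python
def shelf_price(die_cost, markups):
    """Compound each supply-chain stage's markup onto the running cost."""
    price = die_cost
    for stage, markup in markups:
        price *= 1 + markup
    return price

# Hypothetical stages: packaging/test doubles the die cost, then each
# party in the chain takes a cut before the chip hits a shelf.
stages = [
    ("packaging/test", 1.00),
    ("fab margin", 0.30),
    ("AMD margin", 0.40),
    ("distributor", 0.10),
    ("retailer", 0.10),
]
print(round(shelf_price(35, stages), 2))   # ~$154 from a $35 die
```

Swap in smaller margins and the same $35 die supports a much lower retail price, which is why the two posters above disagree so sharply from the same starting number.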


----------



## OCmember

Quote:


> Originally Posted by *seanpatrick*
> 
> I'd like someone's opinion on whether it would be worthwhile to upgrade to Zen over my Intel i5 6500 that I've been able to OC to 4.65GHz. Lateral move or worthwhile (from the limited info we have so far)?
> 
> Thanks!


I'm going to go ahead and say it will be an improvement. But let's wait and see, huh?


----------



## Raghar

Quote:


> Originally Posted by *Fyrwulf*
> 
> Even with the price cuts that I believe AMD has received for wafers, $250 is basically no profit on a 4 core part, while $150 is taking a loss. AMD can't afford that.


How did you calculate these numbers? Shouldn't a 14nm wafer be a bit more efficient than 28nm?


----------



## mouacyk

Today's common wafer sizes are 300mm, so at 14nm per die... that makes 21,428,571 dies theoretically per wafer, including wastage.


----------



## Jpmboy

Quote:


> Originally Posted by *Blameless*
> 
> I just ran the Handbrake test again to compare AVX enabled (default) vs. disabled on my primary system.
> 
> *Default settings:*
> x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2
> ...
> [10:01:02] work: average encoding speed for job is 62.540569 fps
> 
> *After turning off everything more recent than SSE 4.2:*
> x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
> ...
> [10:11:33] work: average encoding speed for job is 61.065166 fps
> 
> As you can see*, the x264 build in Handbrake 10.5 is very light on AVX work*, only a ~2.5% advantage with the newer instruction sets.
> 
> You can run this test yourself by checking the "use advanced tab" option and pasting this ":asm=MMX2,SSE2Fast,SSSE3,SSE4.2" into the advanced tab's "x264 encoder options" (_after_ everything else, do not delete any of the default already there).
> I bought them about four years ago when the price was around $2 and sold them at $4 about six months later. Would have picked up some more at their more recent lows, but I just bought a house and haven't had much free change.
> I call it a wash because the 5-10w spread we see is lower than the spread I'd see if I bought five random 6900K samples and compared them to each other in the same system at stock. The per sample variability of CPUs exceeds the margins shown in the AMD demo.
> TSX works in LGA-1151 Skylake and Kaby lake, as well as the current Xeon E7/EX parts, and I'd be very surprised if it didn't work in Skylake-E. Still the lack of support in the current workstation socket may well delay things slightly.
> Presumably, up to two 128-bit AVX instructions per cycle...assuming no other bottlenecks, of course.


Thanks - useful info!


----------



## Blameless

Quote:


> Originally Posted by *mouacyk*
> 
> Today's common wafer sizes are 300mm, so at 14nm per die... that makes 21,428,571 dies theoretically per wafer, including wastage.


Uh...only if each die is a 14nm thick hair.


----------



## budgetgamer120

Quote:


> Originally Posted by *seanpatrick*
> 
> I'd like someones opinion on whether it would be worthwhile to upgrade to Zen over my Intel i5 6500 that I've been able to OC to 4.65. Lateral move or worthwhile (from the limited info we have so far).
> 
> Thanks!


An i7 would be a nice upgrade. So I would say yes. This Zen chip will run circles around that i5.


----------



## budgetgamer120

Quote:


> Originally Posted by *Fyrwulf*
> 
> If you want more performance for the here and now, it's not going to do anything for you. If you want more performance in the future as programs take better advantage of a greater number of threads, absolutely pull the trigger.


What? Programs already use the 12 threads I have. This isn't 1999, it's 2016.


----------



## mouacyk

Quote:


> Originally Posted by *budgetgamer120*
> 
> An i7 would be a nice upgrade. So I would say yes. This Zen chip will run circles around that i5.


CPU Benchmark Single Threading: https://www.cpubenchmark.net/singleThread.html

You should go count how many i3's and i5's are ranked above the 6900K there. It doesn't run circles around the i5 -- in multi-threaded loads, sure.


----------



## formula m

Quote:


> Originally Posted by *Jpmboy*
> 
> only issue with that logic is that both platforms cited are over 1 and nearly 3 years old... and AMD is now matching performance? As I said earlier, at least AMD has closed the gap to ~1 year.
> yeah, you can disable the IS families... as Blameless points out below. Additionally, this can be done at the BIOS level. CF all non-K unlock BIOS mods.


You are looking at the past..

People shopping today don't care about the past, only today.

Both X99 and Z170 are EOL. So, by YOUR own logic, it seems AMD is now ahead by 10 months..?


----------



## budgetgamer120

Quote:


> Originally Posted by *mouacyk*
> 
> CPU Benchmark Single Threading: https://www.cpubenchmark.net/singleThread.html
> 
> You should go count how many i3's and i5's are ranked above the 6900K there. It doesn't run circles around the i5 -- in multi-threaded loads, sure.


What is your point? Is the i3 or i5 more powerful than a 6900K? No.

Telling someone to hang on to an i5 is as bad as advice gets. At least get an i7.

Your single-thread benchmark proves nothing, really. It's 2016.

How does that single-thread performance help in Battlefield? Lol


----------



## mouacyk

Quote:


> Originally Posted by *budgetgamer120*
> 
> What is your point? Is the i3 or i5 more powerful than a 6900K? No.
> 
> Telling someone to hang on to an i5 is as bad as advice gets. At least get an i7.
> 
> Your single-thread benchmark proves nothing, really. It's 2016.
> 
> How does that single-thread performance help in Battlefield? Lol


Here's the thing: we don't know how RyZen 6c+/12t+ will be priced. Someone currently on an i5 is there for affordability, and primarily for better performance in lightly threaded use cases -- games or light office work. If they were about maximum productivity for minimal cost, AMD would have offered more for less. Horses for courses... and the Zen we were shown is not an end-all replacement for the i5, but a potential contender for the i7 and HEDT. We have zero clue on Zen's single-threaded performance to date. That's what i3 and i5 users need to pay attention to, and so making 8c/16t recommendations to them is disingenuous.


----------



## Jpmboy

Quote:


> Originally Posted by *formula m*
> 
> You are looking at the past..
> People shopping today don't care about the past, only today.
> Both X99 and Z170 are EOL. So, by YOUR own logic, it seems AMD is now a head by 10 months..?


lol - c'mon, that's silly spin... let me explain this simply, since it seems MY logic escapes you: if AMD is only now able to match the performance of an EOL Intel part, who's playing catch-up? Unless you believe a competitor's EOL platform performance is a stretch objective.

btw - I have and have had both brands, and shop every day. And yes, I am looking forward to Zen's real performance numbers, and to what Intel does with the 2066 socket family.


----------



## budgetgamer120

Quote:


> Originally Posted by *mouacyk*
> 
> Here's the thing: we don't know how RyZen 6c+/12t+ will be priced. Someone currently on an i5 is there for affordability, and primarily for better performance in lightly threaded use cases -- games or light office work. If they were about maximum productivity for minimal cost, AMD would have offered more for less. Horses for courses... and the Zen we were shown is not an end-all replacement for the i5, but a potential contender for the i7 and HEDT. We have zero clue on Zen's single-threaded performance to date. That's what i3 and i5 users need to pay attention to, and so making 8c/16t recommendations to them is disingenuous.


People buy i5s mainly because others tell them that's all they need. Regardless of what you say, the 16-thread chip will run circles around any i3 or i5. The fact that the user is asking for advice suggests cost isn't really a problem.

Zen matching a 6900K gives us a pretty good indication of single-thread performance. It's either on par or slightly slower. But who cares about single-thread performance when considering a 16-thread CPU?


----------



## epic1337

Quote:


> Originally Posted by *Fyrwulf*
> 
> Godavari came in where it did because the CPU itself is lackluster and the iGPU is barely capable, plus the process node was so mature that the chips were dirt cheap to make. Raven Ridge, _if_ the rumors are true about the *1048 stream processor Vega core*, is absolutely worth three times what any APU has released for. That's a processor almost as fast as Skylake and *an iGPU in the RX 470-480 class*, which is basically the holy grail of APUs.


how can a 1048SP (typo? should be 1024) part match a 2048SP or 2304SP discrete card when an 896SP RX 460 is barely over half as fast as them?

Vega is also GCN 4th gen, and will be no different with regard to its architecture; the only differences are HBM support and larger core configurations.
e.g. it's like the difference between Tonga and Fiji: same architecture, with Fiji supporting HBM and double the core config.


----------



## Ultracarpet

Quote:


> Originally Posted by *Fyrwulf*
> 
> Don't take those prices as gospel. I've been keeping a spreadsheet with a range of scenarios for where Zen would fit performance-wise, and the average case was bang on for the 8 core. If 4-core Zen comes in at 4.25GHz (I'm probably being conservative with this, because there would be no iGPU to take up power and heat budget), it will only be 7% slower than a 6700K. Even with the price cuts that I believe AMD has received for wafers, $250 is basically no profit on a 4-core part, while $150 is taking a loss. AMD can't afford that.
> 
> Godavari came in where it did because the CPU itself is lackluster and the iGPU is barely capable, plus the process node was so mature that the chips were dirt cheap to make. Raven Ridge, _if_ the rumors are true about the 1048 stream processor Vega core, is absolutely worth three times what any APU has released for. That's a processor almost as fast as Skylake and an iGPU in the RX 470-480 class, which is basically the holy grail of APUs.


Nah, a 1048 SP Vega iGPU would probably be closer to an RX 460. Unless they put HBM on there, the DDR4 will still hold the GPU back, and even with HBM... that's half the SPs of an RX 470.


----------



## epic1337

Quote:


> Originally Posted by *Ultracarpet*
> 
> Nah, a 1048 SP Vega iGPU would probably be closer to an RX 460. Unless they put HBM on there, the DDR4 will still hold the GPU back, and even with HBM... that's half the SPs of an RX 470.


The RX 460 isn't really bandwidth-starved either; more bandwidth would probably be enough to push a 1024SP GCN 4th gen part past GTX 1050 performance, but it wouldn't get past the GTX 1050 Ti.
Otherwise AMD would've already tried HBM at the low end for those niche ultra-low-power cards; imagine a totally passive 1080p-capable card.

Quote:


> Originally Posted by *budgetgamer120*
> 
> Zen matching a 6900k gives id's a pretty good indication of single thread performance. Its either on par or slightly slower. But who cares about single thread performance when considering a 16thread cpu?


there are architectural differences, e.g. Zen doing great in Blender but worse in other benchmarks.
this isn't even the first time AMD (and even Intel) has cherry-picked benchmarks, so there's no saying whether this preview is conclusive evidence.

and there's still Zen's SMT scaling; Intel's Hyper-Threading isn't +100% performance on multi-threaded tasks.
so if AMD's implementation can exceed Intel's, then Zen matching the 6900K in MT workloads is reasonable.
this would also imply that Zen's actual IPC is fairly lower than Broadwell's, which isn't unreasonable either.

on a side note, 90% of Broadwell puts it at Ivy Bridge IPC, while 95% puts it at Haswell IPC.
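The SMT-scaling point above is simple arithmetic. Assuming, purely for illustration, that Intel's Hyper-Threading adds ~25% multi-threaded throughput and Zen's SMT adds ~35% (both numbers are guesses, not measurements), you can back out the per-core IPC Zen would need to tie a 6900K at equal clocks:

```python
def mt_throughput(cores, ipc, smt_uplift):
    """Relative multi-threaded throughput at a fixed clock."""
    return cores * ipc * (1 + smt_uplift)

# 8-core Broadwell-E with HT, normalized to IPC = 1.0
target = mt_throughput(cores=8, ipc=1.0, smt_uplift=0.25)

# IPC an 8-core Zen needs to match it, given a stronger SMT gain
zen_ipc = target / mt_throughput(cores=8, ipc=1.0, smt_uplift=0.35)
print(round(zen_ipc, 2))   # ~0.93 of Broadwell IPC
```

That ~93% figure would fall between the Ivy Bridge (90%) and Haswell (95%) reference points mentioned above, so a stronger SMT implementation really could let a lower-IPC core tie the 6900K in MT workloads.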


----------



## formula m

Quote:


> Originally Posted by *Jpmboy*
> 
> lol - c'mon, that's silly spin... let me explain this simply since it seems the MY logic escapes you: If AMD is only now able to match the performance of an EOL Intel part, who's playing catch-up? Unless you believe a competitor's EOL platform performance is a stretch objective.
> 
> 
> 
> 
> 
> 
> 
> 
> btw - I have and have had both brands, and shop every day
> 
> 
> 
> 
> 
> 
> 
> . And yes, I am looking forward to Zen's real performance numbers, and to what Intel does with the 2066 socket family.


Spin..?

It is called logic.

And I used your own standards as a metric. Again, nobody shopping Zen today is cross-shopping the Z170 chipset (it's EOL). And if you didn't notice, AMD's latest offering is beating Intel's $1,100 CPU, and it is rumored to be priced at half the cost. Not only that, Intel's greatest offerings are on an old platform (X99), which is also EOL.

So, as it stands right now, it looks like AMD is ahead, and Intel has months to go before they release their gaming platform and new uarch designs & technologies. Because I will be buying that too.

X299 looks to be interesting... but by that time Naples will be out, so...


----------



## formula m

Ryzen: (Rye-ZEN)

SR7 "Black Edition" $499 w/AIO water

SR7 $349

SR5 $249

SR3 $149

Anyone else want to see an OC'd SR3 @ 4.5GHz, and see if it beats a 4790K..?


----------



## epic1337

Quote:


> Originally Posted by *formula m*
> 
> Ryzen: (Rye-ZEN)
> 
> SR7 "Black Edition" $499 w/AIO water
> SR7 $349
> SR5 $249
> SR3 $149
> 
> Anyone else want to see an OC'd SR3 @ 4.5 Ghz, and see if beat a 4790K..?


i want to see SR5 completely mangle Intel's entire mainstream platform.

anything from i5s to i7s would be out-performed by SR5, which leaves only i3s and Pentiums, but there's also SR3 at their price point.

on a side note, i don't think it's a good idea to bundle an AIO water cooler though, especially if it's one of the generic AIOs.

on a further note, let's expand the possibility of this lineup:
$499 | 8C/16T | 24-lane PCI-E = SR7 "Black Edition"
$349 | 8C/16T | 16-lane PCI-E = SR7
$249 | 6C/12T | 16-lane PCI-E = SR5
$149 | 4C/08T | 16-lane PCI-E = SR3


----------



## MadGoat

Quote:


> Originally Posted by *formula m*
> 
> Spin..?
> It is called logic.
> 
> And I used your own standards as a metric. Again, nobody shopping Zen today is cross-shopping the Z170 chipset (it's EOL). And if you didn't notice, AMD's latest offering is beating Intel's $1,100 CPU, and it is rumored to be priced at half the cost. Not only that, Intel's greatest offerings are on an old platform (X99), which is also EOL.
> 
> So, as it stands right now, it looks like AMD is ahead, and Intel has months to go before they release their gaming platform and new uarch designs & technologies. Because I will be buying that too.
> 
> X299 looks to be interesting... but by that time Naples will be out, so...


I can see both points here, and I think these are both valid perspectives.

My own thoughts are that it's good to see a level of performance that was once relegated to the higher-tier price market come to the masses via AMD's offerings. I don't think anyone will be comparing against tech from Intel that "CAN" do the same thing but is EOL, but rather against what Intel currently HAS. You're not going to steal the wind out of someone's sails and convince them to go red just because you have a solution that can do the same (or even slightly better) than what they already have.

However, it will bring those once-unattainable tiers that Intel has kept from the general market into a competitive space. That is what will push the market back into innovation, instead of the evolutionary rut it has been in.

Now, the disclaimer here is that I do like AMD. Not for any reason other than I'm cheap; it has afforded me the ability to upgrade without having to spring for a new board with every proc upgrade.

With that said, I would like to think I'm unbiased... but I know the reality is that I probably swing for the red team every time. That being said, I won't deny the reality in facts and numbers: Intel has had a better product for a long time. It's simply good to see the possibility of some competition again. Competition is always good for the consumer and for advancement in tech.


----------



## Jpmboy

Quote:


> Originally Posted by *formula m*
> 
> Spin..?
> *It is called logic.*
> And I used your own standards as a metric. Again, nobody shopping Zen today is cross-shopping the Z170 chipset (it's EOL). And if you didn't notice, *AMD's latest offering is beating Intel's $1,100 CPU, and it is rumored to be priced at half the cost.* Not only that, Intel's greatest offerings are on an old platform (X99), which is also EOL.
> So, as it stands right now, it looks like AMD is ahead, and Intel has months to go before they release their gaming platform and new uarch designs & technologies. Because I will be buying that too.
> X299 looks to be interesting... but by that time Naples will be out, so...


Please try to not degenerate into a ridiculous fanboi.


----------



## budgetgamer120

Quote:


> Originally Posted by *Jpmboy*
> 
> Please try to not degenerate into a ridiculous fanboi.


It's a proven fact it bested the 6900K in Blender and BF1. Anyone saying otherwise is making stuff up. Until we see independent reviews, all we can say now is Zen is faster. Nothing else.


----------



## formula m

Quote:


> Originally Posted by *Jpmboy*
> 
> Please try to not degenerate into a ridiculous fanboi.


Are you trying to suggest that RyZen, at AMD's "New Horizons" event, didn't beat Intel's offerings in those benches..? Or that the Intel chip isn't SKU'd at $1,100? Or that you haven't heard the SR7 is rumored to be $499..?

So remedial mathematics yields savings of over 50% (half) of the cost. I was being conservative, not ridiculous.

What I don't get... is why you are looking in the mirror^.


----------



## Gumbi

Quote:


> Originally Posted by *budgetgamer120*
> 
> It's a proven fact it bested the 6900K in Blender and BF1. Anyone saying otherwise is making stuff up. Until we see independent reviews, all we can say now is Zen is faster. Nothing else.


We didn't see numbers for the BF1 test. Are you taking it on faith that it was faster? That test is useless anyway, as it was almost certainly a GPU bound scenario (as they were quoting numbers like 60 - 70 FPS). So the test is useless. Plus it wasn't a predetermined bench, so you can't fully compare like with like.


----------



## Jpmboy

Quote:


> Originally Posted by *budgetgamer120*
> 
> It's a proven fact it bested the 6900K in Blender and BF1. Anyone saying otherwise is making stuff up. Until we see independent reviews, all we can say now is Zen is faster. Nothing else.


Quote:


> Originally Posted by *formula m*
> 
> Are you trying to suggest that RyZen, at AMD's "New Horizons" event, didn't beat Intel's offerings in those benches..? Or that it isn't SKU'd at $1,100? Or that it isn't rumored to be $499..?
> So remedial mathematics yields savings of over 50% (half) of the cost. I was being conservative, not ridiculous.
> 
> What I don't get... is why you are looking in the mirror^.


Lol, side-by-side tests done by the vendor of one... c'mon, sheeple. Do you believe every comparison a vendor puts out? (I pity those that do)
That said, let's see how well it does in OUR hands.
EOD


----------



## budgetgamer120

Quote:


> Originally Posted by *Gumbi*
> 
> We didn't see numbers for the BF1 test. Are you taking it on faith that it was faster? That test is useless anyway, as it was almost certainly a GPU bound scenario (as they were quoting numbers like 60 - 70 FPS). So the test is useless. Plus it wasn't a predetermined bench, so you can't fully compare like with like.


Do I take what is in front of me or what people here are making up?

You tell me.
Quote:


> Originally Posted by *Jpmboy*
> 
> Lol, side by side tests done by the vendor of one... c'mon sheeple. Do you believe every comparison a vendor puts out? (I pity those that do)
> That said, let's see how well it does in OUR hands.
> EOD


What tests have you conducted that proves otherwise?


----------



## PostalTwinkie

Quote:


> Originally Posted by *epic1337*
> 
> i want to see SR5 completely mangle Intel's entire mainstream platform.
> 
> anything from i5s to i7s would be out-performed by SR5, which leaves only i3s and Pentiums, but there's also SR3 at their price point.
> 
> on a side note, i don't think it's a good idea to bundle an AIO water cooler though, especially if it's one of the generic AIOs.
> 
> on a further note, let's expand the possibility of this lineup:
> $499 | 8C/16T | 24-lane PCI-E = SR7 "Black Edition"
> $349 | 8C/16T | 16-lane PCI-E = SR7
> $249 | 6C/12T | 16-lane PCI-E = SR5
> $149 | 4C/08T | 16-lane PCI-E = SR3


I highly doubt AMD is going to pull an Intel and screw us on PCI-E lanes.

Realistically, their top-shelf SR7 will be $599 to $799 and compete easily with Intel's $1,200 6900K. The other price segments are likely to see a similar disruption in terms of performance and price.


----------



## formula m

Quote:


> Originally Posted by *Jpmboy*
> 
> Lol, side by side tests done by the vendor of one... c'mon sheeple. Do you believe every comparison a vendor puts out?
> That said, let's see how well it does in OUR hands.
> EOD


OK, I get it now. You are a sheep herder..


----------



## MadGoat

Quote:


> Originally Posted by *Jpmboy*
> 
> Please try to not degenerate into a ridiculous fanboi.


Quote:


> Originally Posted by *formula m*
> 
> OK, I get it now. You are a sheep herder..


hey now, I don't think that's it at all... Experience with these releases dictates that there is ALWAYS a bunch of fluff and unseen "gotchas" by the time a product releases. There is no problem with wanting to see the end product yourself. (I for one am one of those.)

If we blindly shoved our money at products that companies "SAY" are great... well, we'd all be playing No Man's Sky on an 8150 and blindly claiming "8 cores are better no matter what!"...

Being a skeptic is a good thing in today's world. Nothing wrong with that...


----------



## LancerVI

Quote:


> Originally Posted by *formula m*
> 
> I'm pushing 50.


Quote:


> Originally Posted by *cssorkinman*
> 
> I've seen a half century myself and am also quite anxious to see this product come to market.


Same here. Pushing the big 50.

Truly excited for Ryzen!!

How many launches have we seen in our day, boys?? I'm so nostalgic right now, I might have to break out my Trash-80!!


----------



## mouacyk

Your age really shows your priorities. Retiring on AMD stock is nice and fine, provided Zen sells well.

On a point that actually matters on this forum, let's hope it performs well when reviewed independently. Surely your priorities don't have anything against that? No? Good.


----------



## oxidized

Quote:


> Originally Posted by *Jpmboy*
> 
> Please try to not degenerate into a ridiculous fanboi.


It's way too late for that


----------



## flopper

Quote:


> Originally Posted by *Jpmboy*
> 
> Lol, side by side tests done by the vendor of one... c'mon sheeple. Do you believe every comparison a vendor puts out? (I pity those that do)
> That said, lets see how well it does in OUR hands.
> EOD


You can't fake results; what do you think Intel would do if AMD did?
*Your logic fails.*
Ryzen at 3.4GHz beats Intel's at 3.5GHz.
Seems good to me, and if Intel thought anything was fishy, you'd hear about it.

Clear enough example for me.


----------



## formula m

Quote:


> Originally Posted by *MadGoat*
> 
> hey now, I don't think that's it at all... Experience in these releases dictates that there is ALWAYS a bunch of fluff and unseen "gotchyas" by the time a product releases. There is no problem with wanting to see the end product yourself. (I for one am one of those).
> 
> If you blindly shoved our money at products that companies "SAY" are great... well we'd all be playing No Man's Sky on a 8150 and blindly claiming "8 cores are better no matter what!" ...
> 
> Being skeptic is a good thing in today's world. Nothing wrong with that...


Being a skeptic is vastly different from saying someone is lying to the public and falsifying benches. People are responding to what we know, or what is rumored to be known. We are not focused on past releases of AMD CPUs; it was a different company then. Holding on to that "skepticism" is unfounded here.

Granted, we have not heard the whole story, but... for what a top-end gamer is looking for?

*Ryzen seems to be the answer for the next-gen gaming CPU*. What remains are the other pieces, mainly Summit Ridge's chipset & mobo. Once we know these details, I don't think it will be hard to assume AMD has a win/win/winner on their hands.


----------



## Fyrwulf

Quote:


> Originally Posted by *Ultracarpet*
> 
> Nah, 1048 sp Vega igpu would probably be closer to an rx 460. Unless they put hbm on there, the ddr4 will still hold the gpu back, and even with hbm... that's half the sp of an rx470.


The rumor is specifically that it will have on-chip HBM. Sorry, should have specified that.


----------



## epic1337

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I highly doubt AMD is going to pull an Intel and screw us on PCI-E lanes.
> 
> Realistically their top shelf SR7 will be $599 to $799 and compete easily with Intel's $1200 6900K. The other price segments are likely to see a similar disruption in terms of performance and price.


well, i don't really see much use for that many lanes, x16 is OK considering you'd still get some from the PCH.

i think their top shelf (>$500) is reserved for their 10-core Zen, there have been rumors that they'll release one for Zen+ and target Skylake-E.
you know how they go about these things, the big chips come right after.

we could think about it this way: Zen's initial release is meant to target the mainstream market and some enthusiasts,
whereas Zen+ will target all enthusiasts and some mainstream; with Zen in parallel they'll cover the entire range.


----------



## Tojara

Quote:


> Originally Posted by *Fyrwulf*
> 
> I've got to get read for work, but you're missing a lot of the supply chain. $35 is the cost of the die itself, packaging doubles that. Then factor in GloFo's profit. That's what AMD pays GloFo. AMD has to get theirs, so they build in profit when selling it to their distributor. The distributor has to get theirs, so there's another layer of profit before it goes to the retailer. Then the retailer has to get theirs, so that's yet another layer of profit. These things build up quickly.


I know that. $150 is still way too much over fixed costs for them to actually be losing money. If that were the case, sub-$100 CPUs/APUs would not exist.
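For what it's worth, the layered-markup argument from the quote is easy to make concrete. A minimal sketch, where the die cost and the "packaging doubles that" step come from the quote above, but every markup percentage is purely an illustrative assumption:

```python
# Illustrative markup chain from die to retail shelf.
# Only the $35 die cost and the packaging doubling come from the quote;
# the markup percentages are made-up placeholders, not known figures.
cost = 35.0                                # quoted die-cost estimate
cost *= 2                                  # "packaging doubles that"
for markup in (0.15, 0.30, 0.08, 0.10):    # foundry, AMD, distributor, retailer (assumed)
    cost *= 1 + markup
print(round(cost, 2))
```

Even with modest single-digit and double-digit markups at each hop, $35 of silicon lands well north of $120 at retail, which is the "these things build up quickly" point, while still leaving room under a $150 street price.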
Quote:


> Originally Posted by *Raghar*
> 
> How did you calculate these numbers? Shouldn't be 14 nm wafer bit more efficient than 28 nm?


14nm wafers are more expensive than 28nm ones. The difference is that 14nm should already be far cheaper per transistor, as well as clocking better and using less power.

Quote:


> Originally Posted by *epic1337*
> 
> RX460 isn't really bandwidth starved either, probably enough to make a 1024SP GCN 4th exceed GTX1050 performance, but it wouldn't get past GTX1050Ti performance.
> otherwise AMD would've already tried doing HBM low-end for those niche ulta-low power cards, imagine a totally passive 1080P capable card.
> theres architectural differences, e.g. Zen doing great on blender, but doing worse in other benchmarks.
> this isn't even the first time AMD (and even Intel) cherry picked their benchmarks, so theres no saying whether this preview is conclusive evidence.
> 
> and theres still Zen's Hyperthread scaling, Intel's Hyperthread isn't +100% performance on multi-threaded tasks.
> so if AMD's implementation of it can exceed Intel's, then Zen matching 6900K in MT workloads is reasonable.
> this would also imply that Zen's actual IPC would be fairly lower than Broadwell, which isn't unreasonable either.
> 
> on a side note, 90% of Broadwell puts it at IvyB IPC, while 95% puts it at Haswell IPC.


Broadwell isn't even 5% faster than Haswell, and Haswell-E lacks the L4 altogether, which provides most of that increase.

In the best case an SMT2 implementation gives about a 40% increase, while Intel's implementation typically gives 25-30% in well-threaded workloads. That gives AMD a 1.4/1.25 = 1.12, i.e. a 12% benefit at most. If Zen only matches HW-E in such workloads because of SMT, sitting at roughly 95% of HW-E on average while SMT is well utilized, that means Zen has at worst 0.95/1.12 ≈ 85% of the IPC HW-E has. That's about 81% of Skylake (outside of heavy AVX2 workloads) in the absolute worst case for the majority of workloads. So realistically, Skylake is 5-10% faster per clock than Zen, as it did actually beat HW-E in the x264 demo at a slightly lower clock speed.
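That back-of-the-envelope estimate is simple to reproduce. A minimal sketch, where the SMT scaling factors, the 95% multithreaded figure, and the Skylake-vs-HW-E gap are the assumptions stated above, not measured values:

```python
# Assumed scaling factors from the estimate above (not measurements)
zen_smt = 1.40       # best-case SMT2 gain attributed to Zen
intel_smt = 1.25     # typical Intel Hyper-Threading gain (25-30%)

smt_edge = zen_smt / intel_smt             # Zen's maximum SMT advantage

mt_ratio = 0.95                            # assumed: Zen at 95% of HW-E in MT workloads
zen_ipc_vs_hwe = mt_ratio / smt_edge       # worst-case per-thread IPC vs HW-E

skylake_vs_hwe = 1.05                      # assumed: Skylake ~5% faster per clock
zen_ipc_vs_skylake = zen_ipc_vs_hwe / skylake_vs_hwe

print(round(smt_edge, 2), round(zen_ipc_vs_hwe, 2), round(zen_ipc_vs_skylake, 2))
```

Running it reproduces the 1.12, ~85%, and ~81% figures in the paragraph above; change any of the assumed inputs and the worst-case IPC shifts accordingly.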

Quote:


> Originally Posted by *formula m*
> 
> Being a skeptic, is vastly different than saying someone is lying to the public, & falsifying benches. People are responding to what we who know, or rumored to know. We are not focused on the past releases of AMD cpus, it was a different company then. Holding on to that "skepticism" is unfounded here.
> 
> Granted, we have no heard the whole story, but.. for what a top-end gamer is looking for?
> 
> *Ryzen seems to be the answer for next gen gaming CPU*. What remains are the other pieces and mainly Summit Ridge's chipset & mobo. Once we knows these details, I don't think it will be hard to assume AMD has a win/win/winner on their hands.


Gaming is probably one of the places where Zen doesn't shine, the other being AVX2 workloads. Granted, having more than four threads still helps with multi-tasking, but the actual performance difference in about half of the games released this year is still negligible. It's still not going to be anything like Bulldozer or Piledriver, where you're completely SOL when looking for 60+ FPS in the most demanding single-threaded games.


----------



## formula m

Tojara..

Battlefield called, to tell you that you are wrong.


----------



## Benny89

Quote:


> Originally Posted by *Tojara*
> 
> Gaming is probably one of the places where Zen doesn't shine, the other one being AVX2 workloads. Granted, having over four threads still helps with multi-tasking, but actual performance difference in about half of games released this year is still negligible. It's still not going to be anything like Bulldozer or Piledriver where you'll be completely SOL with anything when looking for 60+ FPS or even 60FPS in the most demanding single-thread games.


I don't see how Ryzen is weak for gaming... It will be as good as or better at gaming than any other top 8-core CPU (if the benches are right, which remains to be seen). Sure, last year 4-cores were better in most cases for gaming, but we know that games are starting to use more cores.

Considering that 90% of games are still driven mostly by GPU power, Ryzen or any other 8-core CPU is a good future-proof investment for gaming.

Of course, again, we need benches first, but I am mostly optimistic about Zen


----------



## motoray

Quote:


> Originally Posted by *formula m*
> 
> Tojara..
> Battlefield called, to tell you that you are wrong.


Battlefield can utilize 6 cores effectively. But he may be correct if you are playing an ancient game, and assuming the IPC is not as good. We'll have to wait to find out if that's true. Either way, I'm sold.


----------



## budgetgamer120

Quote:


> Originally Posted by *motoray*
> 
> Battlefield can utilize 6 cores effectively. But he may be correct if you are playing an ancient game. That would be assuming IPC is not as good. But we wait to find out if that is true. Either way im sold.


Battlefield uses all my 12 threads effectively. Even Crysis 3 does sometimes.


----------



## NuclearPeace

If you are getting CPU usage from Windows, don't. Windows only measures how much CPU time is scheduled to your threads; it can't measure how busy the execution units in your CPU cores actually are.

With all else being equal, temperature may even be a better measure of CPU load than Windows Task Manager or other monitoring software. A "power vampire" application such as Prime95 and a video game like BF1 might both peg a few cores at "100%", but the CPU is doing far more work in Prime95 and will be running hotter than in BF1.


----------



## cssorkinman

Quote:


> Originally Posted by *NuclearPeace*
> 
> If you are getting CPU usage from Windows, don't. Windows only measures how much CPU time is reserved by the CPU and can't measure how much the execution units in your CPU cores are actually busy.
> 
> With all else being equal, temperature may even be a better measure of CPU usage than Windows Task Manager or other monitoring software is. "power vampire" applications such as Prime95 and playing a video game like BF1 might peg a few cores at "100%" but the CPU is doing far more work in Prime95 and would be running hotter than in BF1.


Do you play BF1 multiplayer?


----------



## budgetgamer120

Quote:


> Originally Posted by *NuclearPeace*
> 
> If you are getting CPU usage from Windows, don't. Windows only measures how much CPU time is reserved by the CPU and can't measure how much the execution units in your CPU cores are actually busy.
> 
> With all else being equal, temperature may even be a better measure of CPU usage than Windows Task Manager or other monitoring software is. "power vampire" applications such as Prime95 and playing a video game like BF1 might peg a few cores at "100%" but the CPU is doing far more work in Prime95 and would be running hotter than in BF1.


Quote:


> Originally Posted by *motoray*
> 
> Battlefield can utilize 6 cores effectively. But he may be correct if you are playing an ancient game. That would be assuming IPC is not as good. But we wait to find out if that is true. Either way im sold.


Battlefield uses all my 12 threads effectively. Even Crysis 3 does sometimes.

I will go by Task Manager.


----------



## motoray

Quote:


> Originally Posted by *budgetgamer120*
> 
> Battlefield uses all my 12 threads effectively. Even crysis 3 does some
> 
> I will go by task manager.


And once again I'm not denying that, that being 6 physical cores / 12 threads. I was just discussing what the other dude was trying to say. I agree BF1 is amazing, but if you compare how many games can do that vs. how many cannot, there is a big gap still to be filled. Granted, each new game will improve that, which once again verifies my whole point that Zen should perform well in all newer titles.


----------



## budgetgamer120

Quote:


> Originally Posted by *motoray*
> 
> And once again im not denying that. That being 6 physical cores 12 threads. I was just discussing what the other dude was trying to say. I agree bf1 is anazing. But if you compare how many games can do that vs how many can not. There is a big gap still to be filled. Granted each new game will improve that. So once again verifying my whole point that zen should perform well on all newer titles.


Sorry I didn't mean to post the same thing twice. This site sometimes messes up my post


----------



## Cyrious

Quote:


> Originally Posted by *CrazyElf*
> 
> Which reminds me, could someone with Sandy Bridge (or Ivy Bridge) or alternatively, any X79 CPU of the E version's please run The Stilt's AVX build?
> 
> https://1drv.ms/u/s!Ag6oE4SOsCmDhFAm03vWlB3s_qeD
> Password: "ryzen" (without the quotes)
> 
> It would be much appreciated. We need a test with a CPU that has AVX128 (which Sandy and Ivy Bridge have), but not AVX256 (as on Haswell, Broadwell, and Skylake).


Yeah sure, gimme a sec and I'll do it on my Xeon.


----------



## PostalTwinkie

Quote:


> Originally Posted by *epic1337*
> 
> well, i don't really see much use on too much lanes, x16 is OK considering you'd still get something from the PCH.
> 
> i think their top shelf (>$500) is reserved for their 10core Zen, theres been rumors that they'll release one for Zen+ and will target Skylake-E.
> you know how they go about these things, the big chips comes right after.
> 
> we could think about it this way, Zen initial release is meant to target the mainstream market and some enthusiast.
> where as Zen+ will target all of enthusiast and some mainstream, with Zen in parallel they'll cover the entire range.


AMD has stated several times that they will release their flagship Zen processor first.


----------



## n4p0l3onic

Quote:


> Originally Posted by *epic1337*
> 
> i want to see SR5 completely mangle Intel's entire mainstream platform.
> 
> anything from i5s to i7s would be out-performed by SR5, which leaves only i3s and pentiums, but theres also SR3 on their price point.
> 
> on a side note, i don't think its a good idea to bundle an AIO water cooler though, specially if its one of the generic AIO.
> 
> on a further note, lets expand the possibility of this lineup.
> $499 | 8C/16T | 24lane PCI-E = SR7 "Black Edition"
> $349 | 8C/16T | 16lane PCI-E = SR7
> $249 | 6C/12T | 16lane PCI-E = SR5
> $149 | 4C/08T | 16lane PCI-E = SR3


What? AMD only going to have max 24 pcie lanes???


----------



## Fyrwulf

Quote:


> Originally Posted by *n4p0l3onic*
> 
> What? AMD only going to have max 24 pcie lanes???


Ignore that. You would need a significantly changed core design to do that, which means a different mask set and an $80 million hit for the new design (going by what it costs to develop a chip on the 14nm node). Some people here didn't get the memo that Lisa Su said AMD can't afford to be the value proposition anymore. AMD doesn't win anything by playing Intel's game; it wins by doing better up and down the stack and being cheaper than price-gouger Intel. That doesn't mean cheap.


----------



## Cyrious

Results are in for the AVX build. For comparison, the stock non-AVX version was also benched. Do note, all 6 runs had some interference from other background processes and the like (approx 1-2% CPU usage).


Spoiler: Images here


AVX run:
200 Samples: 50.80s
150 Samples: 36.99s
100 Samples: 25.15s

Stock run:
200 Samples: 1:08.97 (68.97s)
150 Samples: 0:51.20 (51.20s)
100 Samples: 0:34.23 (34.23s)

Delta between AVX and Stock:
200 Samples: 18.17s
150 Samples: 14.21s
100 Samples: 9.08s
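For anyone who wants the relative numbers, the timings above imply the AVX build is roughly 35-38% faster than stock. A quick sketch that just recomputes the ratios from the figures posted:

```python
# AVX-vs-stock speedups computed from the timings posted above (seconds)
avx   = {200: 50.80, 150: 36.99, 100: 25.15}
stock = {200: 68.97, 150: 51.20, 100: 34.23}

for samples in (200, 150, 100):
    ratio = stock[samples] / avx[samples]
    print(f"{samples} samples: {ratio:.2f}x ({(ratio - 1) * 100:.0f}% faster)")
```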


----------



## LancerVI

Quote:


> Originally Posted by *Cyrious*
> 
> Results are in for AVX build. For comparison, the stock non-avx version was also benched. Do note, all 6 runs had some interference with other background processes and the like (approx 1-2% CPU usage).
> 
> 
> Spoiler: Images here
> 
> AVX run:
> 200 Samples: 50.80s
> 150 Samples: 36.99s
> 100 Samples: 25.15s
> 
> Stock run:
> 200 Samples: 1:08.97
> 150 Samples: 0:51.20
> 100 Samples: 0:34.23
> 
> Delta between AVX and Stock:
> 200 Samples: 18.7s
> 150 Samples: 14.21s
> 100 Samples: 9.08s


Nice...thanks and well done.


----------



## Fyrwulf

Quote:


> Originally Posted by *Tojara*
> 
> I know that. $150 is still way too much over fixed costs to be actually losing money. If that were the case any sub-$100 CPU/APUs would not exist.


I owe you a mea culpa. I realized that this entire time I've been mentally referencing the discussion I had about the RX 480 vis-à-vis the cost of the actual GPUs themselves. I'll get to crunching numbers soon.


----------



## epic1337

Quote:


> Originally Posted by *PostalTwinkie*
> 
> AMD has stated several times that they will release their flagship Zen processor first.


i didn't say anything about flagships?

plus a flagship doesn't always mean their strongest chip, it's simply their "iconic product" within the lineup, the bearer of the flag so to speak.
you could find news regarding a server Zen chip with 32 cores out there, yet it isn't an AM4 flagship chip.


----------



## formula m

Quote:


> Originally Posted by *motoray*
> 
> And once again im not denying that. That being 6 physical cores 12 threads. I was just discussing what the other dude was trying to say. I agree bf1 is anazing. But if you compare how many games can do that vs how many can not. There is a big gap still to be filled. Granted each new game will improve that. So once again verifying my whole point that zen should perform well on all newer titles.


You really aren't saying much; nobody buys a CPU for yesterday's games, they are already playing those on their current rig. You buy a new CPU for new & FUTURE titles...

But thanks for your "verification" of Zen's performance. I know many here will sleep well at night, knowing you verified everything.


----------



## motoray

Quote:


> Originally Posted by *formula m*
> 
> You really aren't saying much, nobody buys a CPU for yesterday games, they are already playing those with their current rig. You buy a new CPU for new & FUTURE titles...
> 
> But thanks for your "verification" of Zen's performance. I know many here will sleep at night, knowing you verified everything.


The whole point was another person complaining that Zen will lose in games that don't utilize multiple cores. So I was saying exactly what you just said. But thanks for being the standard dick on this forum.


----------



## fleetfeather

This thread is turning into a cancer risk. Better hit the unsub button to protect myself.


----------



## NFL

Quote:


> Originally Posted by *epic1337*
> 
> i want to see SR5 completely mangle Intel's entire mainstream platform.
> 
> anything from i5s to i7s would be out-performed by SR5, which leaves only i3s and pentiums, but theres also SR3 on their price point.
> 
> on a side note, i don't think its a good idea to bundle an AIO water cooler though, specially if its one of the generic AIO.
> 
> on a further note, lets expand the possibility of this lineup.
> $499 | 8C/16T | 24lane PCI-E = SR7 "Black Edition"
> $349 | 8C/16T | 16lane PCI-E = SR7
> $249 | 6C/12T | 16lane PCI-E = SR5
> $149 | 4C/08T | 16lane PCI-E = SR3


If the 6C/12T is really $249, I will not be able to throw my money at the screen fast enough


----------



## DaaQ

Quote:


> Originally Posted by *mouacyk*
> 
> Your age really shows your priorities. Retiring on AMD stocks is nice and fine, provided Zen sells well.
> 
> *On a point that actually matters here on this forum, let's hope it performs well when reviewed independently. Obviously, your priorities have something against that. No? Good.*


Just want to add my opinion on the independent or 3rd-party reviews a lot of people keep speaking of.
I myself find user reviews much more informative than the leading review sites' reviews, due to all the differing methodologies used; I find review-site reviews often misleading at best and downright shilled at worst.
Case in point: @cssorkinman (sorry if misspelled) and his BF1 info.
Too many take the graphs and such from review sites as gospel, without regard to any bias they may have.
USER reviews from members here are much more valuable as a whole, since there is no monetary or free-gear incentive for their time and effort.
Just wanted to point this out.


----------



## looniam

Quote:


> Originally Posted by *fleetfeather*
> 
> This thread is turning into a cancer risk. Better hit the unsub button to protect myself.


it's already turned, but it's been highly entertaining for several pages (50 posts per page!)

but what is better is the fruit in your sig rig's name.

how did you do that?


----------



## delboy67

Quote:


> Originally Posted by *DaaQ*
> 
> Just would like to add my opinion on independent or 3rd party reviews alot keep speaking of.
> I myself find user reviews much more informative than leading review sites reviews. Due to the fact of all the differing methodologies used, meaning that review sites I find often misleading at best and downright shilled at worst.
> Case in point @cssorkinman (sorry if misspelled) bf1 info.
> Too many take the graphs and such from review sites as gospel without regards to any bias they may have.
> USER reviews from members here are much more valuable as a whole since there is no monetary or free gear incentives for their time and effort.
> Just want to point this out.


I completely agree with this; I've been bitten before by "review guides". Mainstream reviews are sometimes no better than marketing, not all the time ofc.


----------



## DaaQ

Quote:


> Originally Posted by *delboy67*
> 
> I completely agree with this, been bit before by 'review guides', mainstream reviews sometimes are no better than marketing, not all the time ofc.


With the RX 480 reviews it was blatantly obvious. Comparing across review sites, even the ones that listed their test settings, was an epic fail; it became apparent that it was pointless.
Unfortunately, users seem to throw these reviews around as proof without realizing the pointlessness of doing so.

User testing should be the holy grail on this site. Unfortunately, it seems to be too inconvenient for most.


----------



## iLeakStuff

Every 2500K Sandy Bridge owner, and all who have waited while Intel released one MEH chip after another, should buy Zen and support AMD to even out the market.

AMD is going to have a huge success with Zen, for sure.


----------



## iLeakStuff

Quote:


> Originally Posted by *Jpmboy*
> 
> Lol, side by side tests done by the vendor of one... c'mon sheeple. Do you believe every comparison a vendor puts out? (I pity those that do)
> That said, lets see how well it does in OUR hands.
> EOD


One should be sceptical about comparisons presented by companies for sure, but AMD is 100% transparent about what settings they used during the tests so that people can compare for themselves.
So there is no doubt Zen will be really fast, and that AMD speaks the truth when they are so open about it.

Have a read:
https://m.reddit.com/r/Amd/comments/5ie7f0/summoning_uamd_robert_how_can_we_do_the_blender/


----------



## oxidized

Quote:


> Originally Posted by *fleetfeather*
> 
> This thread is turning into a cancer risk. Better hit the unsub button to protect myself.


Why unsub from such an interesting and passionate thread? Just ignore (I actually mean put them on the ignore list) those spreading cancer; problem solved.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *iLeakStuff*
> 
> Every 2500K Sandy Bridge owner and all who have waited while Intel released the MEH chips aftwr another, should buy Zen and support AMD to even out the market.
> 
> AMD gonna have a huge sucess with Zen for sure.


For me it's not even the fact that all the chips since Sandy have been kinda crap (I only have Ivy because my 2500K died). I haven't upgraded because I want more than 4 stupid cores without paying the insane premium for X99; it's extremely expensive in Australia.

As long as Zen hits Skylake performance, or near enough, I'll get it; if not, I'll wait for Coffee Lake.


----------



## vtheofilis

Regarding user reviews, we live in the age of astroturfing. You must double- and triple-check everything before you decide to purchase a component. Also, a hands-on test in your typical use-case scenario wouldn't hurt. In addition, both professionals and users can make mistakes in their tests that create a favourable picture of a product.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *vtheofilis*
> 
> Regarding user reviews, we live in the age of astroturfing. You must double and triple check everything before you decide a purchase. Also, a hands on test in your typical use case scenario wouldn't hurt.
> In addition, both professionals and users can make mistakes in their tests that create a favourable picture for a product.


Let's not forget reviewer bias too... a lot of people hold a grudge against AMD for Bulldozer.


----------



## vtheofilis

Quote:


> a lot of people hold a grudge against AMD for Bulldozer


I can't speak for the original Bulldozer, but the Vishera x3xx CPUs were a decent choice for those who couldn't afford Intel's bigger platforms. Socket 115x i7s are a bad choice for me where I live. i5s may perform well in tests or in less optimized, not-up-to-date games, but they do so in a "clean" testing environment. And 115x i7s aren't so cheap compared to the flagship 20xx i7s, if you purchase a Z mobo loaded as much as an X mobo.


----------



## Blameless

Quote:


> Originally Posted by *Jpmboy*
> 
> Do you believe every comparison a vendor puts out? (I pity those that do)
> That said, lets see how well it does in OUR hands.


The issue is less the veracity of AMD's comparison than people's inability to put what's been shown in context, or to account for how it was shown and what was not shown.

I have a feeling that plenty of people are going to take what would otherwise be a stellar launch of a new product, discover that it doesn't live up to certain baseless expectations they have built up in their minds, and find a way to be disappointed.
Quote:


> Originally Posted by *Cyrious*
> 
> Results are in for AVX build. For comparison, the stock non-avx version was also benched. Do note, all 6 runs had some interference with other background processes and the like (approx 1-2% CPU usage).
> 
> 
> Spoiler: Images here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AVX run:
> 200 Samples: 50.80s
> 150 Samples: 36.99s
> 100 Samples: 25.15s
> 
> Stock run:
> 200 Samples: 1:08.97
> 150 Samples: 0:51.20
> 100 Samples: 0:34.23
> 
> Delta between AVX and Stock:
> 200 Samples: 18.7s
> 150 Samples: 14.21s
> 100 Samples: 9.08s


Here is the difference I was getting at 150 samples on my 5820K system with various Builds:

http://www.overclock.net/t/1618534/blender-ryzen-scene-benchmark#post_25714916
Quote:


> Originally Posted by *iLeakStuff*
> 
> One should be sceptical about comparisons presented by companies for sure, but AMD is 100% transparent about what settings they used during the tests so that people can compare for themselves.


It took them a day and a half to acknowledge they had the default settings incorrect in the original Blender render file they offered for download, and even longer to correct it. They are also still referencing a different version of Blender than the one they actually used.

Also, the Handbrake benchmark info isn't on their Ryzen demo page; you have to go to third-party sites like Reddit or OCN for that info.

This isn't quite what I'd call 100% transparency, even if all the omissions and delays are entirely unintentional.
Quote:


> Originally Posted by *Aussiejuggalo*
> 
> a lot of people hold a grudge against AMD for Bulldozer.


Those people only have themselves to blame. We knew what Bulldozer would do well before release, but people who weren't satisfied with its actual performance/value chose to buy it anyway, on the rather foolish hope that multiple corroborating leaks were somehow all wrong or not representative of final performance.


----------



## Dygaza

I find that people's expectations for Zen have taken a bit too large a leap. People need to remember that AMD doesn't need to match Intel's IPC or performance in general; they just need to become competitive so we get real CPU wars going again, and prices come down in general.

I'm just saying this because I feel that when the real results come and Zen isn't as fast in most applications as these few tests show, people will suddenly be disappointed.

For me, one thing is for sure: I can't justify getting another 4C/8T CPU after this 3770K gets retired. Already we are seeing games getting better and better multithreaded performance, and this trend will just keep growing (even if it's slow). A 4C/8T CPU just ain't "future proof" anymore, especially if you are looking at a CPU investment for the next 3-5 years.


----------



## GorillaSceptre

Not sure how some people who have been in this thread for a couple of days are warning against false hype. We must have been reading a different thread..

The "hype" I've seen revolves around what they price Ryzen at, not that it'll easily be beating a $1100+ CPU in every scenario..

If you read the content of this thread, it's gone full circle from AMD lying and deliberately sabotaging Intel, to them being nefarious and intentionally misleading people on performance by posting a false test. A few pages later the test is accurate, but absolutely a best case for Ryzen, and it won't match Intel in games; for some reason we also need to run a custom AVX version of Blender to show what Intel can really do, I guess?

After 140 pages of negativity and doubt, and the impossibility of what AMD have shown during a _preview_ , some of the same people are now acting like there's a bunch of lunatics in this thread who think Ryzen is going to slaughter a 6900K for $99..

Well I disagree; from what I've seen, most people are cautiously optimistic about finally having some competition. Most aren't even looking for something that completely outclasses Intel's best, they just want a good 8-core CPU that can bring some competition back to the industry for a "fair" price. All in all, with the pessimism I've seen, AMD can only go up from here.


----------



## lolerk52

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Not sure how some people who have been in this thread for a couple days are warning against false hype. We must have been reading a different thread..
> 
> The "hype" I've seen revolves around what they price Ryzen at, not that they'll easily be beating a $1100+ CPU in every scenario..
> 
> If you read the content of this thread it's gone full circle from AMD lying and deliberately sabotaging Intel, to them being nefarious and intentionally misleading people on performance by posting a false test. A few pages later the test is accurate but absolutely a best-case for Ryzen, and it won't match Intel in games, for some reason we also need to run a custom AVX version of blender to show what Intel can really do i guess?
> 
> After 140 pages of negativity and doubt, and the impossibility of what AMD have shown during a _preview_ , some of the same people are now acting like there's a bunch of lunatics in this thread who think Ryzen is going to slaughter a 6900k for $99..
> 
> Well, I disagree. From what I've seen, most people are cautiously optimistic about finally having some competition. Most aren't even looking for something that completely outclasses Intel's best; they just want a good 8-core CPU that can bring some competition back to the industry for a "fair" price. All in all, with the pessimism I've seen, AMD can only go up from here.


It sometimes boggles my mind just HOW negative some people are. AMD is one of three companies in the desktop CPU and GPU markets, and the only one competing (or trying to) on the high end in both. You DON'T survive as long as AMD has in this market (well over two decades) without some serious engineering talent. Yeah, there were mess-ups and rough spots (Intel and NVIDIA had those too), but the talent is still there.


----------



## Marios145

Quote:


> Originally Posted by *lolerk52*
> 
> It sometimes boggles my mind just HOW negative some people are. AMD is one of three companies who are in the desktop CPU and GPU markets, and the only one competing (or trying to) on the high end with both. You DON'T survive as long as AMD did in this market (well over two decades) without having some serious engineering talent. Yeah, there were mess ups and rough spots, Intel and NVIDIA had those too, but the talent is still there.


But...FAILDOZER...HERP DERP


----------



## sepiashimmer

Quote:


> Originally Posted by *lolerk52*
> 
> It sometimes boggles my mind just HOW negative some people are. *AMD is one of three companies who are in the desktop CPU and GPU markets*, and the only one competing (or trying to) on the high end with both. You DON'T survive as long as AMD did in this market (well over two decades) without having some serious engineering talent. Yeah, there were mess ups and rough spots, Intel and NVIDIA had those too, but the talent is still there.


What are the other two?


----------



## lolerk52

Quote:


> Originally Posted by *sepiashimmer*
> 
> What are the other two?


NVIDIA and Intel.


----------



## Digitalwolf

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Not sure how some people who have been in this thread for a couple days are warning against false hype. We must have been reading a different thread..
> 
> The "hype" I've seen revolves around what they price Ryzen at, not that they'll easily be beating a $1100+ CPU in every scenario..
> 
> If you read the content of this thread, it's gone full circle: from AMD lying and deliberately sabotaging Intel, to them being nefarious and intentionally misleading people on performance by posting a false test. A few pages later the test is accurate but absolutely a best case for Ryzen, and it won't match Intel in games; for some reason we also need to run a custom AVX version of Blender to show what Intel can really do, I guess?


This is pretty much the scenario with any "discussion" here that involves competing brands. Eventually it just becomes a way to start adding people to your block list so you can see actual discussion. That part played out as usual... I was a tad surprised to see people whose posts I previously liked to read releasing their inner "snark" as soon as the AMD demo had concluded.

If AMD brings a good product at a decent price, it can only benefit the community as a whole. Prices have gotten rather stupid (my opinion). If you think you might not be getting a bonus at work because you might lose some market segment... I would understand some of the, ahh... well, I'll just say "negative stuff" I see posted in threads like this. Why the average person, with nothing to lose and everything to gain from competition, posts stuff like that is beyond my ability to comprehend.


----------



## vtheofilis

Quote:


> Originally Posted by *Digitalwolf*
> 
> If AMD brings a good product at a decent price it can only benefit the community as a whole. Prices have gotten rather stupid (my opinion). I mean if you think you might not be getting a bonus at work because you might lose some market segment... I would understand some of the ahh.... well I'll just say "negative stuff" I see posted in threads like this. Why the average person with nothing to lose and everything to gain from competition posts stuff like that... is beyond my ability to comprehend.


It's good for the consumer, and it's good for the PC as a platform, to have competition in the CPU space. If AMD hadn't released the Athlon 64 and Athlon 64 X2, I wouldn't have my Core 2 Duo. AMD became less competitive starting with the first Core i series, leading to the disparity we've had and will have until Ryzen is released. Whoever doesn't understand this either takes some form of payment from Intel or is heavily biased.


----------



## ciarlatano

Quote:


> Originally Posted by *formula m*
> 
> Conclusive..?
> Perhaps not. But we already know things, and they do not have to be "truly conclusive" for there to be a win (i.e., a buy) in many enthusiasts' minds.
> 
> Concrete..?
> Yes, there is concrete evidence that SR7 @ 3.4 GHz is comparable to a 6900K in Blender at nearly the same clocks. That is irrefutable!
> 
> It doesn't matter what the actual numbers are; it illustrates that AMD's IPC is within a few % of Intel's latest. Then AMD illustrated Ryzen doing better on thermals and power draw. But who the actual winner is... is moot. Which again doesn't matter if AMD & Intel are comparable as shown. So, if that is the case, and AMD's & Intel's gaming platforms have about the same IPC, then you start to look at other features, etc.
> 
> Price..? Wattage? OC'ability..? Mobo.? Chipsets..? Features..? etc...
> 
> You are right. When someone isn't looking at the over-all picture, it is easy to spot their bias within their posts. They are after one thing. And they can't defend from all angles of the consumer.


Sorry, I can't surmise a whole performance picture from two cherry picked benchmarks. Frankly, I think anyone that does is foolish. We have seen where that can lead in the past. I am far less concerned with there being a "winner" as I am with AMD becoming a relevant competitor in the market again, even if it is only at specific levels. Competition in the market is good for the consumer. It leads to better products at more competitive prices.

BTW - I have absolutely no allegiance to Intel or AMD. I use whichever better fits my needs at the time I build. While I would very much like to see Zen succeed, I can't even speculate on the true outcome with the facts on hand. And I am fine with that. Things will unfold, and so far all that has come to light has been very promising.


----------



## epic1337

To base everything about a product's worth on just one or two benchmarks is nonsense to begin with.
Otherwise, why would reviewers bother using other benchmarks and not just use Blender or Handbrake?

Pretty much the only thing you can get from this preview is "Zen doesn't suck".
Other than that there's nothing else, e.g. no price, no overclocking, no single-thread benchmarks.


----------



## oxidized

Quote:


> Originally Posted by *epic1337*
> 
> pretty much the only thing you can get from this preview is "Zen doesn't suck".
> other than that theres nothing else, e.g. no price, no overclocking, no single-thread benchmarks.


This.
There's no doubt it doesn't suck, and it's the only thing there's no doubt about.


----------



## Blameless

Quote:


> Originally Posted by *ciarlatano*
> 
> Until then, all you are posting is your dreams or nightmares, depending on which side you are on.


And if I'm not on any side?

Frankly, I've always found even incomplete information to be useful when trying to forecast or predict the end-result of new architectures. If I simply waited for hardware to be out in the wild before looking at anything, most of the time I wouldn't be especially more informed, just further behind and paying more.

No doubt there are many things we cannot know yet, but there are most certainly things we can know, and more that we can infer with respectable confidence.

I have deliberately avoided making predictions in some areas, and I expect some of the predictions I have made to be incorrect, but just as with every other major architecture release in the last fifteen years, I expect to be right four times in five.


----------



## ciarlatano

Quote:


> Originally Posted by *Blameless*
> 
> And if I'm not on any side?
> 
> Frankly, I've always found even incomplete information to be useful when trying to forecast or predict the end-result of new architectures. If I simply waited for hardware to be out in the wild before looking at anything, most of the time I wouldn't be especially more informed, just further behind and paying more.
> 
> No doubt there are many things we cannot know yet, but there are most certainly things we can know, and more that we can infer with respectable confidence.
> 
> I have deliberately avoided making predictions in some areas, and I expect some of the predictions I have made to be incorrect, but just as with every other major architecture release in the last fifteen years, I expect to be right four times in five.


Then you are simply being rational. Is that allowed here?


----------



## 7850K

Quote:


> Originally Posted by *Digitalwolf*
> 
> This is pretty much the scenario with any "discussion" here that involves competing brands. *Eventually it just becomes a way to start adding people to your block list so you can see actual discussion.* That part played out as usual... I was a tad surprised to see people whose posts I previously liked to read releasing their inner "snark" as soon as the AMD demo had concluded.


Yes. The big Zen thread on the Anandtech forums is considerably worse. There is really good discussion there once you block all the trolls.


----------



## Catscratch

i5-2500K @ 4 GHz: 119 seconds @ 150 samples
X6 1090T @ 3.3 GHz: 126 seconds @ 150 samples

And Zen does this in what, 30 seconds? Wow.


----------



## Blameless

Quote:


> Originally Posted by *oxidized*
> 
> There's no doubt it doesn't suck, and it's the only thing there's no doubt about.


That's a fair enough assessment for those not wanting to speculate any further.
Quote:


> Originally Posted by *ciarlatano*
> 
> Then you are simply being rational. Is that allowed here?


Well, some people don't like it!
Quote:


> Originally Posted by *Catscratch*
> 
> i5 2500k 4ghz 119 seconds @ 150 samples
> x6 1090t 3.3ghz 126 seconds @ 150 samples
> 
> And zen does this in what 30 seconds ? Wow.


35-36 seconds.


----------



## epic1337

Quote:


> Originally Posted by *Catscratch*
> 
> i5 2500k 4ghz 119 seconds @ 150 samples
> x6 1090t 3.3ghz 126 seconds @ 150 samples
> 
> And zen does this in what 30 seconds ? Wow.


Well, this Zen has twice the cores and four times the threads, so whether it can be interpreted as impressive is a matter of opinion.

edit: I tried directly scaling that i5's result down to 3.4 GHz: 119 s x (4.0 / 3.4) = ~140 seconds.
It looks like Zen is around four times faster than the i5-2500K in Blender at the same clocks.
That points to one thing, Zen having higher IPC, since no app scales 100% across threads in multi-threaded workloads.
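That clock normalization can be sketched in a few lines. This is illustrative arithmetic only, using the figures quoted in this thread (the 35.5 s Zen time is assumed as the midpoint of the 35-36 s mentioned above) and assuming perfect frequency scaling:

```python
# Illustrative only: figures quoted in this thread, not official numbers.
# i5-2500K: 119 s at 4.0 GHz (150 samples); Zen ES: ~35.5 s at 3.4 GHz.

def scale_time_to_clock(time_s, clock_ghz, target_ghz):
    """Estimate runtime at a different clock, assuming runtime is
    inversely proportional to frequency (a best-case assumption)."""
    return time_s * (clock_ghz / target_ghz)

i5_time_at_3p4 = scale_time_to_clock(119.0, 4.0, 3.4)  # ~140 s
zen_speedup = i5_time_at_3p4 / 35.5                    # ~3.9x

print(f"i5-2500K scaled to 3.4 GHz: {i5_time_at_3p4:.0f} s")
print(f"Estimated Zen speedup at equal clocks: {zen_speedup:.1f}x")
```

A ~3.9x speedup from four times the hardware threads would mean near-perfect thread scaling, which is why the post above reads the result as a per-core throughput (IPC) gain rather than thread count alone.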


----------



## cssorkinman

Quote:


> Originally Posted by *DaaQ*
> 
> Quote:
> 
> 
> 
> Originally Posted by *mouacyk*
> 
> Your age really shows your priorities. Retiring on AMD stocks is nice and fine, provided Zen sells well.
> 
> *On a point that actually matters here on this forum, let's hope it performs well when reviewed independently. Obviously, your priorities have something against that. No? Good.*
> 
> 
> 
> Just to add my opinion on the independent or third-party reviews a lot of people keep speaking of.
> I myself find user reviews much more informative than the leading review sites' reviews, because of all the differing methodologies used, which makes site reviews often misleading at best and downright shilled at worst.
> Case in point: @cssorkinman (sorry if misspelled) and his BF1 info.
> Too many take the graphs and such from review sites as gospel without regard to any bias they may have.
> USER reviews from members here are much more valuable as a whole, since there are no monetary or free-gear incentives for their time and effort.
> Just wanted to point this out.

Hopefully I can soon provide user benchmarks with the 8C/16T Zen.










Spoiler: Warning: Spoiler!



64-player St. Quentin game - FX 8-core at 3.4 GHz, Fury at stock, low graphics preset (CPU testing) - latest update to the game and video drivers as of 12/15/16. Lows during gameplay were ~75 fps; the lowest shown are during player deaths.

Considering making a thread about the BF1 multiplayer performance of my various rigs.


----------



## budgetgamer120

Quote:


> Originally Posted by *oxidized*
> 
> Why unsub from such an interesting and passionate thread? Just ignore (i actually mean to put them into the ignore list) those spreading cancer, problem solved.


You finally understand your i7 can't stream at the same settings as Zen.


----------



## budgetgamer120

Quote:


> Originally Posted by *ciarlatano*
> 
> Sorry, I can't surmise a whole performance picture from two cherry picked benchmarks. Frankly, I think anyone that does is foolish. We have seen where that can lead in the past. I am far less concerned with there being a "winner" as I am with AMD becoming a relevant competitor in the market again, even if it is only at specific levels. Competition in the market is good for the consumer. It leads to better products at more competitive prices.
> 
> BTW - I have absolutely no allegiance to Intel or AMD. I use whichever better fits my needs at the time I build. While I would very much like to see Zen successful, and I can't even speculate on the true outcome with the facts on hand. And I am fine with that. Things will unfold, and so far all that has come to light has been very promising.


Well, you really shouldn't have anything negative to say about Zen then. People here are forming negative summaries from I don't know what, and giving people a hard time for forming positive summaries.

People with positive summaries are working with facts, while the negative ones are making stuff up.


----------



## budgetgamer120

Quote:


> Originally Posted by *Catscratch*
> 
> i5 2500k 4ghz 119 seconds @ 150 samples
> x6 1090t 3.3ghz 126 seconds @ 150 samples
> 
> And zen does this in what 30 seconds ? Wow.


The 1090t is unable to overclock?


----------



## budgetgamer120

Quote:


> Originally Posted by *epic1337*
> 
> well, this zen has twice the cores and 4times the thread handling, so whether it can be interpreted as impressive is subjective to opinion.
> 
> edit: i tried directly down sampling that i5's results to 3.4Ghz. 4 / 3.4 = 1.1765 * 119sec = 140seconds.
> it looks like Zen is around 4times faster than i5-2500K in blender at the same clocks.
> this points us to one thing, Zen has higher IPC, since no app scales 100% in multi-thread workloads.


So what about it being a 6900K? Did you forget about that?


----------



## ciarlatano

Quote:


> Originally Posted by *budgetgamer120*
> 
> Well, you really shouldn't have anything negative to say about Zen then. People here are forming negative summaries from I don't know what, and giving people a hard time for forming positive summaries.
> 
> People with positive summaries are working with facts, while the negative ones are making stuff up.


I have never said anything negative about Zen. I think I have been very clear that I feel that there is not enough information to say anything negative, while two benchmarks are not enough to project across the spectrum to give a wholly positive evaluation. What we have seen so far has been very good, but it has also been very limited. I will have nothing positive or negative to say until there is a sufficient amount of info to base it on.

I have no idea where you get that I have anything negative to say about Zen.....


----------



## epic1337

Quote:


> Originally Posted by *budgetgamer120*
> 
> So what about it being a 6900k. Did you forget about that?


Zen compared to the i7-6900K in Blender is sort of "oh, good", not "wow, that's great".
It's a different story when you compare it with the rumored price, though; at $500 or so its performance is _impressive_.
But this is only in reference to Blender benchmarks; we still don't know its single-thread performance or how well it can overclock.

On a side note, rather than comparing Zen SR7 to the i7-6900K price-wise, I think it's more logical to compare it to the i7-5820K.
Although the i7-5820K is only a 6C/12T chip, it's one of LGA2011's best-selling value chips.

E.g., if Zen SR7 is 30% faster than the i7-5820K, then Zen SR7 should be 40%~50% more expensive, which puts its price point close to $500.
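That rule of thumb is easy to sanity-check. Assuming the ~$390 i7-5820K street price cited elsewhere in the thread (an assumed figure, not an official one):

```python
# Hypothetical arithmetic: a ~$390 i7-5820K street price, plus the
# 40%-50% premium suggested above for ~30% more performance.
base_price = 390.0
implied_band = (base_price * 1.40, base_price * 1.50)

print(f"Implied SR7 price band: ${implied_band[0]:.0f}-${implied_band[1]:.0f}")
```

Note the band works out nearer $546-585 than $500, which is the gap the next few replies argue over.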


----------



## lombardsoup

Quote:


> Originally Posted by *epic1337*
> 
> Zen compared to i7-6900K in blender is sort of "oh good" not that sort of "wow thats great".
> its a different story when you'll compare it with how the rumored price though, at $500 or so it's performance is _impressive_.
> but this is only in reference to blender benchmarks, we still don't know it's single-thread performance or how well it can overclock.
> 
> on a side note, rather than comparing Zen SR7 to i7-6900K price wise, i think its more logical to compare it to i7-5820K.
> yes although i7-5820K is only an 6C/12T chip, its one of LGA2011's best selling value chips.
> 
> e.g. if Zen SR7 is 30% faster than i7-5820K, then Zen SR7 should be 40%~50% more expensive than i7-5820K, that puts it's price point close to $500.


Thought this through a bit more and came to a conclusion: I don't even mind a minor hit in Zen single threaded performance, if it means forcing Intel to do away with 4C/8T or 2C/2T cpus lacking in Hyperthreading or SMT. Games are starting to scale with more than 8 threads (Division, BF1, etc).


----------



## looniam

Quote:


> Originally Posted by *epic1337*
> 
> on a side note, rather than comparing Zen SR7 to i7-6900K price wise, i think its more logical to compare it to i7-5820K.
> yes although i7-5820K is only an 6C/12T chip, its one of LGA2011's best selling value chips.
> 
> e.g. if Zen SR7 is 30% faster than i7-5820K, then Zen SR7 should be 40%~50% more expensive than i7-5820K, that puts it's price point close to $500.


Just a quick check on Newegg and Amazon puts the 5820K at $390/$400 respectively, so +50% is ~$600, no?

Call me a dreamer, but I hope it matches that price outright. That would keep it in reach of those lowballing at $350 and be a steal for those expecting $500. FWIW, I believe AMD has to swim upstream in a PR battle; remember, Bulldozer released at ~i5-2500K pricing.


----------



## formula m

Quote:


> Originally Posted by *epic1337*
> 
> Zen compared to i7-6900K in blender is sort of "oh good" not that sort of "wow thats great".
> its a different story when you'll compare it with how the rumored price though, at $500 or so it's performance is impressive.
> but this is only in reference to blender benchmarks, we still don't know it's single-thread performance or how well it can overclock.
> 
> on a side note, rather than comparing Zen SR7 to i7-6900K price wise, i think its more logical to compare it to i7-5820K.
> yes although i7-5820K is only an 6C/12T chip, its one of LGA2011's best selling value chips.
> 
> e.g. if Zen SR7 is 30% faster than i7-5820K, then Zen SR7 should be 40%~50% more expensive than i7-5820K, that puts it's price point close to $500.


Why would you be comparing a 5820K to an SR7, and not an SR5?


----------



## Shogon

Quote:


> Originally Posted by *ciarlatano*
> 
> I have no idea where you get that I have anything negative to say about Zen.....


It's natural when your delusions of grandeur have gone past the point of no return for your favorite stock-portfolio company. If you look at his post history it's pretty self-explanatory, and quite comical.


----------



## epic1337

Quote:


> Originally Posted by *formula m*
> 
> Why would you be comparing a 5820k to a SR7, and not a SR5..?


Because SR5 is better off sitting between LGA1151 and LGA2011 - that is to say, priced below $400 with performance close to the i7-5820K.

On a side note, Zen isn't exactly meant to clash head-on with Intel's enthusiast platform; heck, they don't even have a chip to go up against Intel's Core i7-6950X.
So why would they waste their time targeting a chip they're barely on par with, when they could utterly dominate the next tier below?

Quote:


> Originally Posted by *looniam*
> 
> just a quick check on new egg and amazon puts the 5820K $390/$400 respectively so +50% ~$600, no?
> 
> call me a dreamer but i hope it matches the price out right. that would keep it in reach of those low balling at $350 and a steal for those expecting $500. fwiw, i believe AMD has to swim upstream in a PR battle. remember bulldozer released ~i5-2500K pricing.


i7-5820K pricing swings around $350~$440 as noted on PCPP; the average price is roughly $390.
http://pcpartpicker.com/product/6tXfrH/intel-cpu-bx80648i75820k?history_days=730
Though in either case $600 is still reasonable, so long as it can OC well enough, that is.

Bulldozer's R&D took so long that pricing it high was one of their ways to get back all the money invested.
I mean, look: they could've simply refreshed and refined their Stars cores if they wanted to, but they kept doing CMT instead.


----------



## Ghoxt

Quote:


> Originally Posted by *Cyrious*
> 
> Results are in for AVX build. For comparison, the stock non-avx version was also benched. Do note, all 6 runs had some interference with other background processes and the like (approx 1-2% CPU usage).
> 
> 
> Spoiler: Images here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AVX run:
> 200 Samples: 50.80s
> 150 Samples: 36.99s
> 100 Samples: 25.15s
> 
> Stock run:
> 200 Samples: 1:08.97
> 150 Samples: 0:51.20
> 100 Samples: 0:34.23
> 
> Delta between AVX and Stock:
> 200 Samples: 18.7s
> 150 Samples: 14.21s
> 100 Samples: 9.08s


Aside from the data itself, what is this supposed to mean performance-wise regarding Zen? Excuse my ignorance - other than knowing what AVX instructions are in Intel CPUs, is there some summary this exposes, in whole or in part?


----------



## ciarlatano

Quote:


> Originally Posted by *Shogon*
> 
> It's natural when your delusions of grandeur have exceeded past the point of no return for your favorite stock portfolio company. If you look at his post history it's pretty self explanatory, and quite comical.


The funny part of that being.....AMD makes up a large portion of my own investment portfolio right now. Which is all the more reason not to be delusional about it. You won't find me holding if it looks like things are going south. Just like you won't find me selling if all is going well.


----------



## looniam

Quote:


> Originally Posted by *epic1337*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> just a quick check on new egg and amazon puts the 5820K $390/$400 respectively so +50% ~$600, no?
> 
> call me a dreamer but i hope it matches the price out right. that would keep it in reach of those low balling at $350 and a steal for those expecting $500. fwiw, i believe AMD has to swim upstream in a PR battle. remember bulldozer released ~i5-2500K pricing.
> 
> 
> 
> 
> 
> 
> 
> i7-5820K price swings around $350~$440 as noted on PCPP, the average price is roughly $390.
> http://pcpartpicker.com/product/6tXfrH/intel-cpu-bx80648i75820k?history_days=730
> though in either case, $600 is still reasonable, so long as it can OC well enough that is.
> 
> bulldozer's RND took so long so its one of their ways to get back all that money invested.
> i mean look, they could've simply refreshed and refined their star cores if they wanted to, but they kept doing CMT instead.

I'll completely agree on the potential OCing; if it ends up being locked down, i.e. it only goes as far as your cooling helps, that would be a big damper for enthusiasts. The "reasonable" part is a little more subjective _to me_. I'm looking at a ~$800/$850 budget to upgrade (if others can afford twice as much, more power to them!); $600 for just the CPU would eat too much of that, with mobos and RAM still to look at... along with either a new bracket if not a water block. Sure, $250 might be doable, but why buy junk?

I mention Bulldozer in passing since it seemed AMD priced it for the competing platform - Intel's mainstream Socket 1155 - at the price of the least expensive enthusiast SKU. IF AMD is going for Intel's HEDT platform, it might be comparable to price it again at the least expensive SKU.

Hey, I could be in pleb fantasy land. I'm OK with that.


----------



## budgetgamer120

Quote:


> Originally Posted by *epic1337*
> 
> Zen compared to i7-6900K in blender is sort of "oh good" not that sort of "wow thats great".
> its a different story when you'll compare it with how the rumored price though, at $500 or so it's performance is _impressive_.
> but this is only in reference to blender benchmarks, we still don't know it's single-thread performance or how well it can overclock.
> 
> on a side note, rather than comparing Zen SR7 to i7-6900K price wise, i think its more logical to compare it to i7-5820K.
> yes although i7-5820K is only an 6C/12T chip, its one of LGA2011's best selling value chips.
> 
> e.g. if Zen SR7 is 30% faster than i7-5820K, then Zen SR7 should be 40%~50% more expensive than i7-5820K, that puts it's price point close to $500.


Believe what you want to make up. I'll take the facts that are in front of me.


----------



## Cyrious

Quote:


> Originally Posted by *Ghoxt*
> 
> Aside from the data itself, what is this supposed to mean performance wise regarding Zen? Excuse my ignorance, other than knowing what AVX instructions are in Intel CPU's. IE is there some summary that this exposes in whole or in part?


Assuming the Zen runs were done with the stock Blender program, Zen running without AVX is a tad faster (a couple of percentage points, but still faster) than my Xeon running with AVX. Taking AVX out of the equation and pitting it thread for thread and clock for clock against my Xeon, it's going to be something like 20-30% faster in MT workloads. I could be off on those numbers (someone better at math than me can figure that out), but it's still a not-insignificant amount.

Depending on how well it does in other benchmarks and how its priced, it could be a very compelling buy for a large number of people.
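For reference, the AVX-vs-stock deltas quoted above work out to a fairly consistent ratio once you divide the times (seconds taken from Cyrious's posted runs):

```python
# Stock vs. AVX-build Blender times (seconds) from the quoted runs,
# keyed by sample count.
runs = {200: (68.97, 50.80), 150: (51.20, 36.99), 100: (34.23, 25.15)}

# Speedup ratio of the AVX build at each sample count.
ratios = {samples: stock / avx for samples, (stock, avx) in runs.items()}

for samples in sorted(runs):
    stock, avx = runs[samples]
    print(f"{samples:>3} samples: AVX build is {ratios[samples]:.2f}x faster "
          f"({(1 - avx / stock) * 100:.0f}% less render time)")
```

So on this CPU the AVX build shaves roughly a quarter off the render time at every sample count, which is the scale of benefit in play when asking whether Zen's numbers came from a stock (non-AVX) build.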


----------



## budgetgamer120

Quote:


> Originally Posted by *Cyrious*
> 
> Assuming the Zen runs were done with the stock Blender program, it running w/o AVX is a tad faster (couple percentage points, but still faster) than my Xeon running with AVX. Taking AVX out of the equation and pitting it thread to thread and clock for clock against my Xeon, Its going to be something like 20-30% faster in MT workloads. I could be off on those numbers, someone better at math than me can figure that out, but it's still a not-insignificant amount.
> 
> Depending on how well it does in other benchmarks and how its priced, it could be a very compelling buy for a large number of people.


Does AMD use AVX, or is it an Intel exclusive?

It all comes down to price. It will have to be cheap enough for me to drop my X99 platform, which is so good.


----------



## Cyrious

Quote:


> Originally Posted by *budgetgamer120*
> 
> Does AMD use AVX or is it an Intel exclusive?
> 
> It all comes down to price. Will have to be cheap enough for me to drop my x99 platform which is so good.


AMD does use AVX; it's integrated into all of the Construction cores, but their floating-point units struggle with it, and it's made further worse by the fact that the Construction cores' caches were all crap.


----------



## Tojara

Quote:


> Originally Posted by *budgetgamer120*
> 
> Does AMD use AVX or is it an Intel exclusive?
> 
> It all comes down to price. Will have to be cheap enough for me to drop my x99 platform which is so good.


AMD can use AVX. Bulldozer struggled with it, and Piledriver even more so, but AFAIK Zen should be able to do AVX in one cycle. AVX2 will have to be split up.


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> Does AMD use AVX or is it an Intel exclusive?


AMD has access to AVX and Zen has the same AVX, AVX2, and FMA3 instruction sets the newest Intel parts have. No AVX-512 yet, but only Intel's E7 server products and coprocessors have those, so far.

However, AVX2 performance is one of the worrying unknowns about Zen. We know that Zen cannot do single cycle 256-bit AVX2 instructions, like Intel's parts have been capable of since Haswell, and we know that Zen doesn't have anywhere near the L1/L2 cache bandwidth Intel gave Haswell (and later) to help handle these instructions. Both of these facts can be seen on AMD's own Zen slides.

What sort of impact this will have remains to be seen. The default Windows builds of Blender don't use AVX at all and x264 (used in handbrake), while built to support AVX/AVX2, seems to barely make use of it.

Heavy AVX2 use is still fairly rare in most consumer apps, but where it is used we would expect Zen to fall behind (core for core and clock for clock) any Intel architecture Haswell or newer.


----------



## Shatun-Bear

Quote:


> Originally Posted by *looniam*
> 
> just a quick check on new egg and amazon puts the 5820K $390/$400 respectively so +50% ~$600, no?
> 
> call me a dreamer but i hope it matches the price out right. that would keep it in reach of those low balling at $350 and a steal for those expecting $500. fwiw, i believe AMD has to swim upstream in a PR battle. remember bulldozer released ~i5-2500K pricing.


This isn't a sensible way to approach the price proposition of these new CPUs. Looking at it the way you are, AMD can never win, since you're comparing a CPU that has been on the market for two years with the possible RRP of a brand-new product.


----------



## looniam

Quote:


> Originally Posted by *Shatun-Bear*
> 
> This isn't a sensible way to approach the price proposition of these new CPUs. Looking it the way you are, AMD can never win as you're comparing a CPU that has been on the market for 2 years with a possible RRP of a brand new product.


huh?









OK - then instead of the 5820K I first used, I'll mention the newer 6800K, which is ~the same price.

better?


----------



## Catscratch

Quote:


> Originally Posted by *budgetgamer120*
> 
> The 1090t is unable to overclock?


I don't have the motherboard to do it. My 790FX board, a K9A2 Platinum, barely supports it. I was only comfortable with 3.9 GHz @ 1.45 V, and didn't use that for long. I had settled on 1.35 V manual and a 3.7/4.0 GHz mode, just like Zambezi.







Then this machine went to somewhere else after i got 2500k.

There are three 1090T/1100T entries on https://docs.google.com/spreadsheets/d/1O2Or6XOETZr3a4gLAuCuYEx8m5X_lv23LdwZiCZpBcM/edit#gid=0 but none are confirmed to be at 150 samples, which is what was used with Zen. One 3.9 GHz 1090T does 143 seconds (sample-size info is missing, but probably 200 samples). One 4 GHz 1100T does 69 seconds with 100 samples.

I'm wondering what the cheapest Zen can do.


----------



## Shogon

Quote:


> Originally Posted by *ciarlatano*
> 
> The funny part of that being.....AMD makes up a large portion of my own investment portfolio right now. Which is all the more reason not to be delusional about it. You won't find me holding if it looks like things are going south. Just like you won't find me selling if all is going well.


There's a difference between being sensible with your holdings and thinking, and showing the exact opposite, as you've seen from some. Enjoy that investment, as it seems like it will continue to rise.


----------



## budgetgamer120

Quote:


> Originally Posted by *Cyrious*
> 
> AMD does use AVX, its integrated into all of the Construction cores, but their floating point units struggle with it, and its made further worse by the fact the construction core's caches were all crap.


Quote:


> Originally Posted by *Tojara*
> 
> AMD can use AVX. Bulldozer struggled with it and Piledriver even more so, but afaik Zen should be able to do AVX in ones cycle. AVX2 will have to be split up.


Quote:


> Originally Posted by *Blameless*
> 
> AMD has access to AVX and Zen has the same AVX, AVX2, and FMA3 instruction sets the newest Intel parts have. No AVX-512 yet, but only Intel's E7 server products and coprocessors have those, so far.
> 
> However, AVX2 performance is one of the worrying unknowns about Zen. We know that Zen cannot do single cycle 256-bit AVX2 instructions, like Intel's parts have been capable of since Haswell, and we know that Zen doesn't have anywhere near the L1/L2 cache bandwidth Intel gave Haswell (and later) to help handle these instructions. Both of these facts can be seen on AMD's own Zen slides.
> 
> What sort of impact this will have remains to be seen. The default Windows builds of Blender don't use AVX at all and x264 (used in handbrake), while built to support AVX/AVX2, seems to barely make use of it.
> 
> Heavy AVX2 use is still fairly rare in most consumer apps, but where it is used we would expect Zen to fall behind (core for core and clock for clock) any Intel architecture Haswell or newer.


Thanks for the detailed response.

I look forward to seeing how the two compare using AVX.


----------



## Hueristic

Quote:


> Originally Posted by *cssorkinman*
> 
> I've seen a half century myself and am also quite anxious to see this product come to market.


Until this year I was short of a full deck as well, but it's all good now [52].









Quote:


> Originally Posted by *Blameless*
> 
> Makes me wish I sat on my 1500 shares, instead of cashing out to gamble it away on those Chinese small caps a few years back!


Should have bought XMR.









I have been running a 1605T since Faildozer (which really just came out before the software capable of taking advantage of its design) and only now am really needing a new CPU; MOO is taking too long between turns.









http://masteroforion.com/

They captured so much of the first game, kept the greatness of the second, and thankfully ignored the crapfest of the third, that I've been finding myself at 4am doing the "one more turn" thing I haven't done in over a decade.


----------



## PostalTwinkie

Quote:


> Originally Posted by *epic1337*
> 
> i didn't say anything about flagships?
> 
> plus a flagship doesn't always mean their strongest chip, its simply their "iconic product" within their lineup, the bearer of the flag so to speak.
> you could find news with regards to a server Zen chip with 32cores out there yet it isn't an AM4 flagship chip.


Sure, if you want to split hairs, but you know that isn't what they are talking about. They are talking about the consumer segment, and they are targeting Intel's "HEDT" platform - which is obvious by their presentation.

Now we might see a 10/20 come later to target the 6950X and that Halo level product, but right out of the gate they have gone after the 6900K.


----------



## SuperZan

I agree, they've been pretty clear thus far about where they're looking to shine relative to Intel, and I think it's smart. Rather than competing with Intel at each and every step, which is just not financially feasible for them, they're mixing and matching mainstream and HEDT to give compelling products at several price points. There are far too many unknowns regarding price, overclocking, certain instruction sets, and ST performance generally, but it looks like they're aiming for the heart of Intel's lineup rather than the extremes where it's more difficult for them to compete.


----------



## epic1337

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Sure, if you want to split hairs, but you know that isn't what they are talking about. They are talking about the consumer segment, and they are targeting Intel's "HEDT" platform - which is obvious by their presentation.
> 
> Now we might see a 10/20 come later to target the 6950X and that Halo level product, but right out of the gate they have gone after the 6900K.


but reserving the >$500 slot for later bigger chips has nothing to do with not releasing the flagship.


----------



## GorillaSceptre

What's Zen+'s purpose then?


----------



## epic1337

Quote:


> Originally Posted by *GorillaSceptre*
> 
> What's Zen+'s purpose then?


it's a refresh for the most part, much like kabylake is to skylake.

but who is this question meant for?


----------



## mcg75

I've been holding off upgrades for a few years now waiting for something interesting to come along.

Hopefully Zen is it.


----------



## Fyrwulf

Quote:


> Originally Posted by *GorillaSceptre*
> 
> What's Zen+'s purpose then?


Zen+ is coming later, once 7nm drops. It's probably going to have some architectural refinements to get the promised 15% IPC gain over Zen.


----------



## IRobot23

Quote:


> Originally Posted by *Fyrwulf*
> 
> Zen+ is coming later, once 7nm drops. It's probably going to have some architectural refinements over Zen to get the promised 15% IPC over Zen.


I think Zen+ will have a 2x 256-bit FPU plus 10-15% IPC improvements.
And probably late 2018 or early 2019.
I also think that they will "rebrand" Zen (Ryzen) at higher clocks next year.


----------



## GorillaSceptre

Quote:


> Originally Posted by *epic1337*
> 
> its a refresh for the most part, much like kabylake is to skylake.
> 
> but whom is this question meant for?


Quote:


> Originally Posted by *Fyrwulf*
> 
> Zen+ is coming later, once 7nm drops. It's probably going to have some architectural refinements over Zen to get the promised 15% IPC over Zen.


Nvm. Was thinking of something completely different.


----------



## pony-tail

Quote:


> Originally Posted by *Hueristic*
> 
> Until this year I was short of a full deck as well, but it's all good now [52].
> 
> 
> 
> 
> 
> 
> 
> 
> Should have bought XMR.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I have been running 1605T since faildozer (which really only came out before the software which was capable of taking advantage of it's design) and only now am really needing a now CPU, MOO is taking too long between turns.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://masteroforion.com/
> 
> They captured so much of the first game and kept the greatness of the second and thankfully ignore the crapfest of the third that I've been finding myself at 4am doing the one more turn thing that I haven't done in over a decade.


I have a couple of years on you at 59


----------



## Tojara

Quote:


> Originally Posted by *Fyrwulf*
> 
> Zen+ is coming later, once 7nm drops. It's probably going to have some architectural refinements over Zen to get the promised 15% IPC over Zen.


There's no official information about Zen+, other than that it's coming after Zen and that it has IPC improvements. The IPC increase could be anything from 5-20%, and that heavily depends on how they choose to spend the transistors and whether it's on the mature 14nm or the new 7nm process.

Quote:


> Originally Posted by *epic1337*
> 
> its a refresh for the most part, much like kabylake is to skylake.
> 
> but whom is this question meant for?


Since it does have an IPC increase (source), and it's possibly on a different process, it's more akin to Skylake vs Haswell, or Haswell vs Sandy Bridge. Kaby Lake did have zero IPC increase.

Quote:


> Originally Posted by *IRobot23*
> 
> I think ZEN+ will have FPU 256bit x2 + IPC improvements 10-15%.
> And probably late 2018 or early 2019.
> I also think that they will "rebrand" ZEN (ryzen) in higher clocks next year


They'll probably just gradually release higher clocked models, if the XFR implementation doesn't make it unnecessary.


----------



## bossie2000

Bulldozer... a design before its time! That's what it was. The world (the software world) was not ready for big multi-CPU tasking. Poor Dozer... Only now that software (games and apps) is more and more threaded does the Dozer start to wake!


----------



## formula m

Quote:


> Originally Posted by *epic1337*
> 
> because SR5 is better off sitting between LGA1151 and LGA2011, that means to say priced less than $400 and has performance close to i7-5820K.
> 
> on a side note, Zen isn't exactly meant to clash head on against intel's enthusiast platform, heck they don't even have a chip to go on against intel's Core i7-6950X.
> so why would they waste their time targeting a chip they're barely on-par with, considering they could utterly dominate the next tier below them?


What..?

The 5820k is a 6-core... the SR5 is a 6-core. Who cares what platform it sits on? The 6900K sits on the same socket.

If the 8-core SR7 can be compared to a 6900, then a SR5 can be compared to a 5820..!


----------



## Blameless

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Now we might see a 10/20 come later to target the 6950X and that Halo level product, but right out of the gate they have gone after the 6900K.


A 10 core Zen would require another core group, which would mean an entirely different die flavor. I find this highly unlikely for the first iteration of Zen on AM4, for exactly the same reason Intel won't just arbitrarily go back and add 6-core SKUs to the LGA-1151 6000 and 7000 series mainstream line up.

Also, if they did add another core group and really wanted to target the super high-end 10 core HEDT parts from Intel, they'd likely jump straight to 12-cores, because each core group is four cores.

The 6950X won't be Intel's flagship HEDT part by the time it's practical to put 10+ cores on AM4.
Quote:


> Originally Posted by *bossie2000*
> 
> Bulldozer... A design before it's time! That's what it was. The world (software world) was not ready for big multi cpu tasking.Poor dozer.... Only now that software(Games and Apps) is more and more threaded it is that the dozer starts to wake !


Bulldozer was mostly a dead end. It had a few niches, but these have declined over time, not increased.


----------



## epic1337

Quote:


> Originally Posted by *formula m*
> 
> What..?
> The 5820k is a 6-core... the SR5 is a 6-core. Who cares what platform it sits on? The 6900K sits on the same socket.
> 
> If the 8-core SR7 can be compared to a 6900, then a SR5 can be compared to a 5820..!


SR7 can be compared to 6900 in blender, but what about other benchmarks or apps?

just because SR7 is 8core doesn't mean its purposefully equal to intel's 8core.
take bulldozer for example, 4M/8C could sometimes out perform intel's 4C/8T, but is it even remotely equal to it?

to begin with, not many expect Zen to reach broadwell IPC from the get go.


----------



## Benny89

Quote:


> Originally Posted by *epic1337*
> 
> SR7 can be compared to 6900 in blender, but what about other benchmarks or apps?
> 
> just because SR7 is 8core doesn't mean its purposefully equal to intel's 8core.
> take bulldozer for example, 4M/8C could sometimes out perform intel's 4C/8T, but is it even remotely equal to it?
> 
> to begin with, not many expects Zen to reach broadwell IPC from the get go.


Woah, guys, calm down. One might say "SR7" is better than or as good as the 6900K in everything; another might say it won't be in other apps/benches.

Nobody has any idea. We have to wait for launch and actual real games/apps benches to see it.

Right now this is all just pure speculation of optimism vs realism vs pessimism.

I see some people here who really want to prove, based on nothing, that Zen > 6900K or 6900K > Zen.

Wait for benches, people.


----------



## Marios145

For me, even if ryzen is 20% behind intel, everything depends on OC headroom and prices.


----------



## formula m

Quote:


> Originally Posted by *epic1337*
> 
> SR7 can be compared to 6900 in blender, but what about other benchmarks or apps?
> 
> just because SR7 is 8core doesn't mean its purposefully equal to intel's 8core.
> take bulldozer for example, 4M/8C could sometimes out perform intel's 4C/8T, but is it even remotely equal to it?
> 
> to begin with, not many expects Zen to reach broadwell IPC from the get go.


Bro, any time "someone" has to bring up bulldozer... it means they are trying to divert attention from the actual topic/question. Zen is not bulldozer.. go look at the block diagrams... m'kay?

40% increase in IPC alone, let alone all the other tech improvement within the ZEN's uarch.

You fail to understand the reason why AMD chose to show Ryzen SR7 against the 6900K... because the SR3 will stack up nicely against the 6700K..! And might be $169..! The SR5 (depending on actual performance) will fit nicely at the 4790K/5820K's level of performance, but at $249?

So, what about other benchmarks or apps..?

Ryzen doesn't need to match Broadwell's IPC; it can be within 2~5% and still be a win for Ryzen. And ironically, we are only talking stock-clock IPC, bcz we know Ryzen is going to OC.

And for those on a x99... or looking to jump on x299..? Or on Z170 and want moAr..?

If AMD's Ryzen SR7 is priced right, it might not leave any decision to make.

I actually want to know more about the motherboards right now, than I do about the rest of Ryzen at this point.


----------



## epic1337

Quote:


> Originally Posted by *formula m*
> 
> Bro, any time "someone" has to bring up bulldozer... it means they are trying to divert attention from the actual topic/question. Zen is not bulldozer.. go look at the block diagrams... m'kay?
> 
> 40% increase in IPC alone, let alone all the other tech improvement within the ZEN's uarch.
> 
> You fail to understand the reason why AMD chose to show Ryzen SR7 against 6900K..., because the SR3 will stack nicely up against the 6700k..! And might be $169 bucks..! The SR5 (depending on actual performance), will fit nicely at the 4790K/5820K;s level of performance, but at $249?
> 
> So, what about other benchmarks or apps..?
> 
> Ryzen doesn't need to match broadwells's IPC, it can be within 2~5% and still be a win for Ryzen. And ironically, we are only talking stock clock IPC, bcz we know Ryzen going to OC.
> 
> And for those on a x99... or looking to jump on x299..? Or on Z170 and want moAr..?
> If AMD's Ryzen SR7 is priced right, it might not leave any decision to make.
> 
> I actually want to know more about the motherboards right now, than I do about the rest of Ryzen at this point.


point is, AMD purposefully cherrypicked bulldozer's benches to make it look competitive.
plus you're already contradicting yourself.

as i've said before, SR5 will sit between LGA1151 and LGA2011, its main target is not i7-5820K but the general boundary between LGA1151 and LGA2011.
it is AMD's solution to user's disappointment about intel mainstream platform's lack of higher end SKUs, effectively bridging the gulf in between the two separate platforms.

you're forgetting the fact that AM4 is AMD's flexible mainstream platform, SR7 is not meant to hit against intel's HEDT straight on, its meant to sit within the bracket but be more lucrative than HEDT.
this means to say its got to be more beneficial to take SR7 than the general lineup of HEDT, we all know that i7-6900K is overpriced as heck so thats out of the question.
so in other words, SR7's general target is intel HEDT's most cost effective chip, which points to either i7-5820K or i7-6800K.


----------



## Asisvenia

If I'm not wrong, there isn't any info about "single-core performance" for Zen, right?


----------



## epic1337

Quote:


> Originally Posted by *Asisvenia*
> 
> If I'm not wrong there is no any info about ''single core performance'' for the Zen, right ?


yup, thats what i've been telling these guys.

they've been taking this preview as a solid proof that Zen would compete against i7-6900K...


----------



## Ding Chavez

But if the 8c16t zen is close to the 6900k 8c16t in blender that indicates IPC is close to the 6900k doesn't it? At least in blender anyway.

Also, how does Blender translate into, say, gaming? What exactly does a good Blender score mean, anything?

So far so good for zen but it's not out yet so I'm not getting on the hypetrain. Although at this point it does look good and might actually shake things up a bit which will be fun.


----------



## formula m

Quote:


> Originally Posted by *epic1337*
> 
> you're already contradicting yourself.
> 
> as i've said before, SR5 will sit between LGA1151 and LGA2011, its main target is not i7-5820K but the general boundary between LGA1151 and LGA2011.
> it is AMD's solution to user's disappointment about intel mainstream platform's lack of higher end SKUs, effectively bridging the gulf in between the two separate platforms.
> 
> you're forgetting the fact that AM4 is AMD's flexible mainstream platform, SR7 is not meant to hit against intel's HEDT straight on, its meant to sit within the bracket but be more lucrative than HEDT.
> this means to say its got to be more beneficial to take SR7 than the general lineup of HEDT, we all know that i7-6900K is overpriced as heck so thats out of the question.
> so in other words, SR7's general target is intel HEDT's most cost effective chip, which points to either i7-5820K or i7-6800K.


There is no contradiction.

I am not placing any restraints on what/where the SR5 will directly compete. Only that we know it is a 6C/12T with IPC equal to Intel's 6th gen.

How these products stack up isn't brain science.

6700K = SR3

6800K = SR5

6900K = SR7

How do you see it..?


----------



## epic1337

Quote:


> Originally Posted by *formula m*
> 
> There is no contradiction.
> I am not placing any restraints on what/where the SR5 will directly compete. Only that we know it is a 6c/12th with IPC equal to Intel's 6th gen.
> 
> How these products stack up, isn't brain science.
> 
> Here:
> 6700K = SR3
> 6800K = SR5
> 6900K = SR7
> 
> How do you see it..?


nope, its this way.

i7-6900K
SR7
i7-6800K
SR5
i7-6700K
SR3
i5-6600K
athlon
i3-6300


----------



## Tojara

Quote:


> Originally Posted by *epic1337*
> 
> you're already contradicting yourself.
> 
> as i've said before, SR5 will sit between LGA1151 and LGA2011, its main target is not i7-5820K but the general boundary between LGA1151 and LGA2011.
> it is AMD's solution to user's disappointment about intel's mainstream platform lack of higher end SKUs.
> 
> you're forgetting the fact that AM4 is AMD's flexible mainstream platform, SR7 is not meant to hit against intel's HEDT straight on, its meant to sit within the bracket but be more lucrative than HEDT.
> this means to say its got to be more beneficial to take SR7 than the general lineup of HEDT, we all know that i7-6900K is overpriced as heck so thats out of the question.
> so in other words, SR7's general target is intel's HEDT most cost effective chip, which points to either i7-5820K or i7-6800K.


SR7 coming in at $500 doesn't seem unreasonable if clocks are 3.6/3.8GHz or above and IPC is close. That's ~$100 above Intel's cheapest (BW-E) hex, and it should have cheaper motherboards. You get two more cores for less than $100.

Quote:


> Originally Posted by *epic1337*
> 
> yup, thats what i've been telling these guys.
> 
> they've been taking this preview as a solid proof that Zen would compete against i7-6900K...


Is there anything wrong with that? It can't be far off if AMD isn't flat out lying: it's on par in one test and slightly ahead in another, and they stress different areas of the CPU. The 6900K results they had are in line with what people are getting. If you do the math, outside a few niche cases the IPC is at least 85% of Skylake's.


----------



## epic1337

Quote:


> Originally Posted by *Tojara*
> 
> Is there anything wrong with that? It can't be far off if AMD isn't flat out lying, if it's on par in one test and slightly ahead in another, that stress different areas of the CPU. The 6900k results they had are in line with what people are getting. If you do the math, outside a few niche cases the IPC is at least 85% of Skylake.


it's because it's only a single benchmark; one does not base everything on a single benchmark.

look at this and tell me whether concluding everything on a single benchmark is a good idea.


----------



## formula m

Quote:


> Originally Posted by *epic1337*
> 
> nope, its this way.
> 
> i7-6900K
> SR7
> i7-6800K
> SR5
> i7-6700K
> SR3
> i5-6600K
> athlon
> i3-6300


OK, so you are essentially arguing semantics at this point.

And, you choose to see it that way^ in your mind because you believe AMD's IPC is less..? Or because each chip won't be able to hold thermals and match that tier's compute power..?


----------



## epic1337

Quote:


> Originally Posted by *formula m*
> 
> OK, so you are essentially arguing semantics at this point.
> 
> And, you choose to see it that^ in your mind, because you believe AMD's IPC is less..? Or because each chip won't be able to hold thermals and match that tiers compute power..?


sigh, because of marketing.

one does not clash with an equal, knowing that you'd split sales between the two parties.
sure you can make your product cheaper at equal performance, but what's the point? you could've just hit the next thing below and say "i'm better".
not only would it look better, it would also be cheaper: two advantageous marketing points which will attract more buyers.


----------



## Blameless

Quote:


> Originally Posted by *Ding Chavez*
> 
> But if the 8c16t zen is close to the 6900k 8c16t in blender that indicates IPC is close to the 6900k doesn't it? At least in blender anyway.
> 
> Also how does blender translate into say gaming, what exactly does a good blender score mean, anything?


Quote:


> Originally Posted by *formula m*
> 
> Only that we know it is a 6c/12th with IPC equal to Intel's 6th gen.


Quote:


> Originally Posted by *Tojara*
> 
> It can't be far off if AMD isn't flat out lying


IPC varies from task to task...it's not a fixed attribute.

There are apps where my Westmeres have exactly the same IPC as my Broadwell-Es. There are also apps where my Broadwell-Es have 60% more IPC than my Westmeres.

We can probably take the figures AMD has shown us at face value, and I am, but when reaching beyond those limited scenarios, we have to extrapolate. There are some extrapolations that cannot be made with anything resembling certainty, but there are others that seem fairly clear, given the other information we have on hand...most of it is somewhere in the middle.
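As a toy illustration of the point above (the counts below are invented purely for demonstration, not measurements from any real chip): IPC is instructions retired divided by cycles taken, so it is a property of a (CPU, workload) pair rather than of the CPU alone.

```python
# Hypothetical instruction/cycle counts for two CPUs on two workloads.
# These numbers are made up to show how two chips can tie on one task
# and differ by 60% on another, as described in the post above.
counts = {
    ("westmere",    "scalar_app"): (1.0e9, 8.0e8),
    ("broadwell_e", "scalar_app"): (1.0e9, 8.0e8),   # identical IPC here
    ("westmere",    "simd_app"):   (1.0e9, 1.0e9),
    ("broadwell_e", "simd_app"):   (1.0e9, 6.25e8),  # 60% higher IPC here
}

# IPC = instructions retired / cycles taken
ipc = {key: instructions / cycles for key, (instructions, cycles) in counts.items()}
```

With these made-up counts the two chips tie on the scalar task but differ by exactly 60% on the SIMD one, which is why a single "IPC" figure extrapolated from one benchmark can mislead.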


----------



## formula m

And for sku stack..?

6900k = $1,100

SR7 = $499

6800k = $440

6700k = $350

SR5 = $289

SR3 = $169


----------



## lolerk52

Here is the Handbrake video test file: https://drive.google.com/file/d/0B0uKOxJkKhQkcmlnaWUzU05QRlk/view?usp=sharing

Convert it to AppleTV3 preset in handbrake and post your result.
Mine was 1 minute 27 seconds on a Core i5 6600K @ 4.4GHz


----------



## Blameless

Quote:


> Originally Posted by *lolerk52*
> 
> Here is the Handbrake video test file: https://drive.google.com/file/d/0B0uKOxJkKhQkcmlnaWUzU05QRlk/view?usp=sharing
> 
> Convert it to AppleTV3 preset in handbrake and post your result.
> Mine was 1 minute 27 seconds on a Core i5 6600K @ 4.4GHz


Ran the test on my primary system the other day, came to about 60 seconds on my 4.3GHz 5820K.

Some more detail:
Quote:


> Originally Posted by *Blameless*
> 
> I just ran the Handbrake test again to compare AVX enabled (default) vs. disabled on my primary system.
> 
> *Default settings:*
> x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2
> ...
> [10:01:02] work: average encoding speed for job is 62.540569 fps
> 
> *After turning off everything more recent than SSE 4.2:*
> x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
> ...
> [10:11:33] work: average encoding speed for job is 61.065166 fps
> 
> As you can see, the x264 build in Handbrake 10.5 is very light on AVX work, only a ~2.5% advantage with the newer instruction sets.
> 
> You can run this test yourself by checking the "use advanced tab" option and pasting this ":asm=MMX2,SSE2Fast,SSSE3,SSE4.2" into the advanced tab's "x264 encoder options" (_after_ everything else, do not delete any of the default already there).


----------



## kd5151

Quote:


> Originally Posted by *lolerk52*
> 
> Here is the Handbrake video test file: https://drive.google.com/file/d/0B0uKOxJkKhQkcmlnaWUzU05QRlk/view?usp=sharing
> 
> Convert it to AppleTV3 preset in handbrake and post your result.
> Mine was 1 minute 27 seconds on a Core i5 6600K @ 4.4GHz


Is this already clipped to 60 seconds? Will test later if so.


----------



## lolerk52

Quote:


> Originally Posted by *kd5151*
> 
> Is this already clipped to 60 seconds? Will test later if so.


Yup


----------



## kd5151

Quote:


> Originally Posted by *lolerk52*
> 
> Yup


Quick answer. Thank you thank you thank YOU!


----------



## CrazyElf

Quote:


> Originally Posted by *The Stilt*
> 
> Someone with some spare time could test how these builds compare between each other on AMD 15h CPUs.
> Both are significantly faster than the current official build, however I would like to know what is their different on AMD CPUs. On Haswell the other is around 5% faster, on AMD the difference might be larger.
> 
> This is the original build I've posted before: https://1drv.ms/u/s!Ag6oE4SOsCmDhFAm03vWlB3s_qeD (password "ryzen", without the quotes.)
> 
> The test build: https://1drv.ms/u/s!Ag6oE4SOsCmDhFJcooVchjDI9SFD (password "SIMD", without the quotes)


Just curious, what is in the test build? The original used AVX and AVX2; what does the test build change?

Quote:


> Originally Posted by *Cyrious*
> 
> Results are in for AVX build. For comparison, the stock non-avx version was also benched. Do note, all 6 runs had some interference with other background processes and the like (approx 1-2% CPU usage).
> 
> 
> Spoiler: Images here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AVX run:
> 200 Samples: 50.80s
> 150 Samples: 36.99s
> 100 Samples: 25.15s
> 
> Stock run:
> 200 Samples: 1:08.97
> 150 Samples: 0:51.20
> 100 Samples: 0:34.23
> 
> Delta between AVX and Stock:
> 200 Samples: 18.7s
> 150 Samples: 14.21s
> 100 Samples: 9.08s


Very interesting.

Let's take a quick look:

200 samples: 68.97s / 50.80s = 1.3577
150 samples: 51.20s / 36.99s = 1.3842
100 samples: 34.23s / 25.15s = 1.3610
This I believe is with a Xeon E5-2690 (Sandy Bridge).

Quote:


> Originally Posted by *Blameless*
> 
> Ok, below is my primary signature system, at it's normal 24/7 settings, in three different versions of (Win x64) Blender, all at 150 render samples.
> 
> 2.77 -- 39.09 seconds:
> http://cdn.overclock.net/a/a3/a3b7616c_RyzenGraphic_bench_150samples_2.77_Win_x64.png
> 
> 2.78 -- 39.55 seconds:
> http://cdn.overclock.net/7/7b/7b925433_RyzenGraphic_bench_150samples_2.78_Win_x64.png
> 
> And something a little different...Blender 2.78 recompiled (courtesy of Stilt) to support AVX2 and other modern instructions -- 27.55 seconds:
> http://cdn.overclock.net/1/10/101df7b0_RyzenGraphic_bench_150samples_2.78AVX2_Win_x64.png
> 
> 2.77 seems to be a bit over 1% faster than 2.78 (I ran the tests many times to confirm the discrepancy was not a fluke).
> 
> The AVX2 recompile is pretty interesting. Firstly, it shows just how unoptimized the official Windows compiles are on recent hardware. Secondly, it makes me wonder how Zen will compare in a more AVX heavy scenario...will likely have to wait until they are out in the wild to test this though.
> Fairly certain that's 150.


I assume this is with a 5820K? Or a 6800K? Not a huge deal, as they both have AVX2.

That gives us, for 150 samples: 39.09 / 27.55 = 1.4189 (Blameless' 2.77 time; with 2.78's 39.55s the ratio is 1.4356).


Taking the median of Cyrious' ratios, we get 1.3610, so without AVX the render took 36.10% longer.
Taking the 2.77 score that Blameless gave us, without the AVX2 recompile the render took 41.89% longer.
This is important because Cyrious has AVX128 but not AVX256 (Sandy Bridge Xeon), while Blameless' tests have AVX256. 41.89 - 36.10 = 5.79%.
That would imply that AVX256 is perhaps just under 5.79% (or thereabouts) faster than AVX128.
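The ratio arithmetic above is easy to reproduce; a quick sketch using the times quoted in this thread (Cyrious' Xeon E5-2690 runs, plus Blameless' 150-sample times for the official 2.77 build vs. the AVX2 recompile):

```python
# Blender render times in seconds, as posted earlier in the thread.
stock_build = {200: 68.97, 150: 51.20, 100: 34.23}  # non-AVX build (SB-E Xeon)
avx_build   = {200: 50.80, 150: 36.99, 100: 25.15}  # The Stilt's AVX build

# How much longer the stock build took at each sample count
ratios = {n: stock_build[n] / avx_build[n] for n in stock_build}
# 200: ~1.358, 150: ~1.384, 100: ~1.361

# The median of the three ratios, as used in the analysis above
median = sorted(ratios.values())[1]

# Blameless' 150-sample runs: official 2.77 build vs. the AVX2 recompile
avx2_ratio = 39.09 / 27.55
# ~1.419, i.e. the official build took ~42% longer than the AVX2 recompile
```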

This is also significant because Zen might do best under AVX1 (it doesn't have a full 256-bit FMAC, so it cannot execute 256-bit ops in one cycle). Not having fast AVX2 might not be as bad a penalty as we had initially anticipated for many "real world" applications. The big gain was adding the AVX instruction extension to SSE, even if only AVX128.

With x265, we would also expect to see AMD deliver full performance under AVX1, but not AVX2.

It seems like there are diminishing returns to the AVX widening. I suspect that if we were to get AVX-512 and software support, the gains would be very limited, perhaps no more than 3-5%. We may not know anytime soon. The latest I heard was that only the Skylake E7 Xeons would get the AVX-512 instructions, and not the LGA 2066 parts.

I strongly suspect that other consumer applications will see similar results.

Also, OT, but take a look at your OCAT thread Blameless when you have the opportunity to. I suspect that AMD's new OCAT tool is just a modification on PresentMon.


----------



## Cyrious

Quote:


> Originally Posted by *CrazyElf*
> 
> -snip-


Yep, it was a SB-E 2690.


----------



## Blameless

Quote:


> Originally Posted by *CrazyElf*
> 
> I assume this is with a 5820K? Or a 6800K? Not a huge deal, as they both have AVX2.


That's on my 5820K; I just tore down my 6800K setup to RMA the X99 OC Formula it was in.


----------



## lolerk52

https://forums.anandtech.com/threads/first-summit-ridge-zen-benchmarks.2482739/page-135#post-38637522

An analysis of Blender and Handbrake code.


----------



## tpi2007

Quote:


> Originally Posted by *lolerk52*
> 
> https://forums.anandtech.com/threads/first-summit-ridge-zen-benchmarks.2482739/page-135#post-38637522
> 
> An analysis of Blender and Handbrake code.


AMD is keeping its cards close to its chest, so this could mean either good or bad things; we'll only know more in January. I'd suggest we go do other stuff in the meantime, like doing something about that gaming backlog.


----------



## flippin_waffles

There's more info in videos that aren't part of the main New Horizon feed.

Infinity fabric and SenseMI from Mark Papermaster

https://www.youtube.com/watch?v=pN8P6jHAqlU

Radeon Instinct with Naples platform + ROCm

https://www.youtube.com/watch?v=s1jvhHJ6xw8


----------



## Aussiejuggalo

So do we think this will be released in Jan like all the rumours have been saying for months or will it be later like Feb or March?


----------



## formula m

Quote:


> Originally Posted by *epic1337*
> 
> sigh, because of marketing.
> 
> one does not clash with someone of equal knowing that you'd split sales between the two parties.
> sure you can make your product cheaper at an equal performance, but whats the point? you could've just hit the next thing below and say "i'm better".
> not only would it look better it would also be cheaper, two advantageous marketing point which will attract more buyers.


Right..!

Those rare individuals who need TOP NOTCH IPC (an ~11% gain) would choose Intel. Everyone else, AMD, bcuz of the cost of diminishing returns. Which is essentially how buyers shop, with a cost/performance ratio in mind. As you can see by my last post, these are not marketed directly against each other; they just happen to be competing for the gamer space.

Secondly, you seem to forget that when Intel releases its new X299 gaming platform in 2017, AMD will also be releasing Naples... w/ 32-core variants..!

So, from the looks of it, the SR5 @ $249 might be the chip/platform for mainstream gamers & enthusiasts, and the $499 SR7 for those who need extreme gaming. No reason to build a new system based on Z170 or X99... they are EOL.

We have 10 months, to wait to hear Intel's answer.


----------



## tpi2007

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> So do we think this will be released in Jan like all the rumours have been saying for months or will it be later like Feb or March?


At least we'll know more (hopefully much more) at CES, which will take place from 5-8 January, including release date.


----------



## epic1337

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> So do we think this will be released in Jan like all the rumours have been saying for months or will it be later like Feb or March?


probably much later than january, CES is on jan 5~8 so unless they simultaneously release and announce Zen, it would be a few months after announcement.

there is also a chance that the release will be staggered, e.g. SR7 first, SR5 and SR3 later on.


----------



## PostalTwinkie

Quote:


> Originally Posted by *tpi2007*
> 
> At least we'll know more (hopefully much more) at CES, which will take place from 5-8 January, including release date.


Isn't CES supposed to be when they at least paper launch in full? Most are expecting at least the initial line-up to be announced with specifications, pricing, and available boards.


----------



## Redwoodz

Quote:


> Originally Posted by *epic1337*
> 
> yup, thats what i've been telling these guys.
> 
> they've been taking this preview as a solid proof that Zen would compete against i7-6900K...


It was proof. A 95w 3.4 GHz Zen matched the 6900K in floating point and integer (Blender & Handbrake). It's never even been close before. Plus, for the first time AMD is not a node behind. The writing is all over the wall: it IS competitive, or even BETTER.

Now that doesn't mean Intel won't sabotage it somehow....but the hardware is there.


----------



## Robenger

Quote:


> Originally Posted by *Redwoodz*
> 
> It was proof. A 95w 3.4 Zen matched 6900k in Floating point and integer (blender & handbrake). That's never even been even close before. Plus first time AMD is not a node behind. Writing is all over the wall it IS competitive or even BETTER.
> 
> Now that doesn't mean Intel won't sabotauge it somehow....but the hardware is there.


Maybe Intel will "update" their compiler?


----------



## Exxlir

Should I go zen or kaby lake ?


----------



## SuperZan

Quote:


> Originally Posted by *Exxlir*
> 
> Should I go zen or kaby lake ?


Different needs, I suppose. If you absolutely require the very best possible single-threaded performance available you'll probably want Skylake/Kaby Lake. If you make good use of MT workloads and also need good ST performance for gaming or other workloads which benefit from such, then whilst the jury's still out you'd probably want to wait and see for yourself what Zen has to offer before making a decision.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Exxlir*
> 
> Should I go zen or kaby lake ?


It is impossible for anyone to answer that since Zen isn't out, and next to nothing is known.


----------



## cssorkinman

Quote:


> Originally Posted by *Robenger*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Redwoodz*
> 
> It was proof. A 95w 3.4 Zen matched 6900k in Floating point and integer (blender & handbrake). That's never even been even close before. Plus first time AMD is not a node behind. Writing is all over the wall it IS competitive or even BETTER.
> 
> Now that doesn't mean Intel won't sabotauge it somehow....but the hardware is there.
> 
> 
> 
> Maybe Intel will "update" their compiler?

I'm sure they will. That, and 3DMark 17 will include at least one feature test that gimps the ever-loving crap out of AMD GPUs and Zen.


----------



## Aussiejuggalo

Quote:


> Originally Posted by *Exxlir*
> 
> Should I go zen or kaby lake ?


Well, considering almost all leaks on Kaby Lake show it to be basically the same as Skylake, you should be asking Skylake or Zen.

Wait till unbiased reviews of the new tech come out before deciding; that's the best way to not get screwed and let down by all the hype.


----------



## CrazyElf

There are still a lot of mysteries here left unsolved.


- Is the performance of AMD's Zen due to excellent single-threaded performance, or is it worse single-threaded performance paired with a really good simultaneous multi-threading implementation?
- How good is AMD's single-threaded performance? Keep in mind, most desktop applications remain single-threaded.
- There is also the matter of branch prediction - apparently Blender isn't very branch-heavy.
- Just how good is AMD's implementation of AVX?
- Can we see further SIMD optimizations for Zen?

Been browsing RWT:
http://www.realworldtech.com/forum/?threadid=163466&curpostid=163466

It's been suggested that AMD's single threaded performance and branch prediction are not as good as they would like us to believe. Perhaps their AVX implementation might also be weak. Blender is an almost textbook example of "embarrassingly parallel" from what I can tell. Checking the Blender forums, they say it scales almost linearly with cores and from what I can tell, they are bang on. There was also the matter of a 980Ti getting the Ryzen demo rendered in 7 seconds.
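That "scales almost linearly with cores" claim is easy to sanity-check with Amdahl's law; a quick sketch (the 99% parallel fraction below is an assumption for illustration, not a measured Blender figure):

```python
def amdahl_speedup(cores, parallel_fraction):
    """Amdahl's law: overall speedup on `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# If 99% of a render parallelizes, 8 cores give ~7.48x - near-linear,
# consistent with what the Blender forums report for tile rendering.
print(round(amdahl_speedup(8, 0.99), 2))  # 7.48
```

The flip side: even a 1% serial fraction caps 16 threads at ~13.9x, which is why an 8c/16t part looking great in Blender says little about branch-heavy desktop workloads.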

Unfortunately, because they have not disclosed benchmarks that would test these, we have no way to know. Another possibility is that they chose low-branch benchmarks for unrelated reasons and branch prediction on Zen is actually awesome. All we can do at this point really is guess.

Oh, and apparently Linus Torvalds is optimistic about Zen as well, if that means anything.

One post from juanrga caught my eye. He claims that Skylake can be 60% faster with SIMD optimizations and Haswell 40% faster. The 40% for Haswell we know to be true thanks to Blameless' tests. That said, the idea that a custom build of Blender will gain 60% on Skylake seems extraordinary, especially considering that Skylake adds no new SIMD instruction sets over Haswell; but given that we saw >40% with AVX2, perhaps it could happen. *Could someone with a Skylake (or Kaby Lake) CPU run The Stilt's custom Blender builds please?*

I'm surprised that they don't add these SIMD optimizations into the default Blender. The 40% performance gain at the very least is huge. I don't know how much that works out to in terms of total joules to render, as AVX2 no doubt uses a lot of power, but the time is impressive.

*Server-side*
So far the rumors are:

- Up to 8 cores will be on a single, monolithic die (Zen).
- Snowy Owl might be compatible with the same socket, and is a 16-core MCM (so two Zen dies glued together?). This is to be a BGA product.
- Venice might be a 4-die MCM? This is to be their monster with 32 cores/64 threads.
- Would love to see an unlocked version of that 32-core/64-thread part.

Charlie from SA has an image: https://semiaccurate.com/2016/12/12/naked-amd-naples-sp3-socket-spotted/



That's what a 4-die MCM will give you. I bet the IOs take a lot of room too. 64 PCIe lanes and 8 channels of RAM are driving it.

On the Intel side, for a comparison, we've got Intel's LGA 3647.





Also a monster. Of course the big difference is that this is expected to be a monolithic die.

The Xeon E5-2699A v4 is 456.12 mm^2 with 24 cores on die (22 enabled) at 2.4 GHz. Scaling that up linearly to 32 cores for Purley would make it 608.16 mm^2. Likely more, though, because of the extra room for the six-channel memory and any other IO.
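The scaling figure above is just a linear area-per-core extrapolation, along these lines (it deliberately ignores uncore/IO, which doesn't scale with core count, so treat it as a floor):

```python
# Scale the 24-core Broadwell-EP die (Xeon E5-2699A v4, 456.12 mm^2)
# linearly to a hypothetical 32-core part at the same area per core.
die_area_24c = 456.12                 # mm^2
scaled_32c = die_area_24c * 32 / 24   # assume area scales with core count
print(round(scaled_32c, 2))  # 608.16 mm^2
```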

Mostly important because server parts are high margin and Intel is extremely dominant. We want AMD Opteron to gain some marketshare.

Quote:


> Originally Posted by *Exxlir*
> 
> Should I go zen or kaby lake ?


As SuperZan noted, single threaded performance will remain on top with Skylake.

However, if you want a better multithreaded monster for your money and "good enough" single threaded performance, then Zen is a better buy. Also consider the platforms as a whole, X370 versus Z270.

At this point, we need release benchmarks, pricing, and motherboard pricing to compare.
Quote:


> Originally Posted by *Blameless*
> 
> That's on my 5820K; I just tore down my 6800K setup to RMA the X99 OC Formula it was in.


Thanks for confirming.
Quote:


> Originally Posted by *Cyrious*
> 
> Yep, it was a SB-E 2690.


Thanks for confirming as well.

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Sure, if you want to split hairs, but you know that isn't what they are talking about. They are talking about the consumer segment, and they are targeting Intel's "HEDT" platform - which is obvious by their presentation.
> 
> Now we might see a 10/20 come later to target the 6950X and that Halo level product, but right out of the gate they have gone after the 6900K.


The dies come in units of 8 cores, so unless they are going to combine two of them in an MCM (which they are doing for the 16-core Snowy Owl) and disable 6 cores, a 10-core part won't happen.



They also have an APU with 16 cores and a GCN (or Vega) based GPU.


----------



## epic1337

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Isn't CES supposed to be when they at least paper launch in full? Most are expecting at least the initial line-up to be announced with specifications, pricing, and available boards.


hopefully yes, but it doesn't seem like it, since they haven't even announced when they'll disclose everything.
so probably only a full detailing without a proper launch.

Quote:


> Originally Posted by *CrazyElf*
> 
> They also have an APU with 16 cores and a GCN (or Vega) based GPU.


wut? are you confusing it with CUs? i mean thats gonna be some HUGE chip.
if its CUs then those 16CUs amounts to 4cpu cores and 12gpu units.


----------



## OCmember

Quote:


> Originally Posted by *Exxlir*
> 
> Should I go zen or kaby lake ?


I will probably be going Zen. Maybe a SR5 or the 7. The _possible_ pricing structure looks really good. 149 for a 4 core, 249 for a 6 core, and 349 for an 8 core.


----------



## Ultracarpet

Quote:


> Originally Posted by *OCmember*
> 
> I will probably be going Zen. Maybe a SR5 or the 7. The _possible_ pricing structure looks really good. 149 for a 4 core, 249 for a 6 core, and 349 for an 8 core.


Those prices be wayyyyyyyyyyyyyyyyyyyyyyyy too low.


----------



## jeffdamann

Quote:


> Originally Posted by *OCmember*
> 
> I will probably be going Zen. Maybe a SR5 or the 7. The _possible_ pricing structure looks really good. 149 for a 4 core, 249 for a 6 core, and 349 for an 8 core.


I've been waiting since Phenom II for something decent from AMD. I'm going for the highest possible SKU and the nicest ASUS mobo, putting my entire system under water with the biggest rads I can get, and going hog wild.

I'm going to overclock the thing as far as it will go. I will then await Zen+, sell my Zen chip when it releases, and again go for the highest SKU. I'm really anticipating Zen+; once they refine Zen it is truly going to be a monster.

I'm also going with 64GB of RAM and the fastest SSD configuration I can get, and will possibly pick up the best Vega card as well.

I can't wait. I'm going to get Zen on launch day and do some reviews for OCN and a ton of benches. It will be a glorious day.


----------



## KarathKasun

Quote:


> Originally Posted by *epic1337*
> 
> hopefully yes, but it doesn't seem like it, since they haven't even announce when they'll disclose everything.
> so probably only a full detailing without a proper launch.
> wut? are you confusing it with CUs? i mean thats gonna be some HUGE chip.
> if its CUs then those 16CUs amounts to 4cpu cores and 12gpu units.


It is a data center chip. 16 CPU cores + some number of GCN shaders.


----------



## iLeakStuff

Everyone should get Zen if it's within 10-20% of Kaby Lake. The real-world difference would be negligible, the CPU should cost a lot less than the overpriced Intel CPUs, and most importantly you'd support AMD, which would send a strong signal to Intel to end their pricing BS on future products.


----------



## hawker-gb

AMD matching or exceeding Intel, with Intel's order-of-magnitude bigger budget.

Intel has robbed people with their prices for so long; I hope that will now come to an end. They even manage to sell overpriced frauds like dual cores.

And I still remember how certain forum "economic" experts predicted doom for AMD just a few months back.

Where are they now?


----------



## tajoh111

Quote:


> Originally Posted by *epic1337*
> 
> point is, AMD purposefully cherrypicked bulldozer's benches to make it look competitive.
> plus you're already contradicting yourself.
> 
> as i've said before, SR5 will sit between LGA1151 and LGA2011, its main target is not i7-5820K but the general boundary between LGA1151 and LGA2011.
> it is AMD's solution to user's disappointment about intel mainstream platform's lack of higher end SKUs, effectively bridging the gulf in between the two separate platforms.
> 
> you're forgetting the fact that AM4 is AMD's flexible mainstream platform, SR7 is not meant to hit against intel's HEDT straight on, its meant to sit within the bracket but be more lucrative than HEDT.
> this means to say its got to be more beneficial to take SR7 than the general lineup of HEDT, we all know that i7-6900K is overpriced as heck so thats out of the question.
> so in other words, SR7's general target is intel HEDT's most cost effective chip, which points to either i7-5820K or i7-6800K.


Yeah, we have to be optimistic while staying cautiously guarded about hyping up Zen too much.

The one thing about Blender is that it was a weak point for Intel, which they noticeably improved with Skylake.

Versus the 4790K (which runs at the same frequency), Skylake is 16.6% faster in Blender. If Skylake had been 16.6% faster in IPC across the board, people would have been happy with the improvement.

http://www.tomshardware.com/reviews/skylake-intel-core-i7-6700k-core-i5-6600k,4252-7.html

The same can be said with handbrake too.

http://proclockers.com/reviews/cpus/intel-core-i7-6700k-cpu-review?nopaging=1

In Handbrake, Skylake is 30% faster than Haswell, but again we know the average difference across workloads is much lower.

It's closer to 5% on average. Meaning that tying Intel in Blender, or beating them by 10% in Handbrake, could still mean being 11 percent behind everywhere else - which would still be pretty good if AMD can sell us a 6-core for around 300 dollars.

I wish AMD had chosen something like Cinebench to show its performance. Cinebench is the gold standard for IPC and threading comparisons.
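The percentages above come straight from completion times: since Blender and Handbrake report time-to-finish, "X% faster" is just the time ratio minus one. A quick sketch with illustrative times (not the reviews' actual numbers):

```python
def percent_faster(baseline_seconds, new_seconds):
    """How much faster the new part is: lower completion time = faster."""
    return (baseline_seconds / new_seconds - 1.0) * 100

# e.g. a render dropping from 140 s to 120 s is ~16.7% faster,
# in the same ballpark as the Skylake-vs-4790K Blender delta quoted above.
print(round(percent_faster(140, 120), 1))  # 16.7
```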


----------



## Arturo.Zise

Quote:


> Originally Posted by *lolerk52*
> 
> Here is the Handbrake video test file: https://drive.google.com/file/d/0B0uKOxJkKhQkcmlnaWUzU05QRlk/view?usp=sharing
> 
> Convert it to AppleTV3 preset in handbrake and post your result.
> Mine was 1 minute 27 seconds on a Core i5 6600K @ 4.4GHz


I just ran this on my 5820k @ 4ghz and got 2min 25sec. Why is my system so slow compared to others? Is it because I'm using a Nightly build and not HB 10.5?


----------



## Wishmaker

Quote:


> Originally Posted by *hawker-gb*
> 
> AMD matching or exeeding Intel with their level of magnitude bigger budget.
> 
> Intel robs people with their prices for so long, i hope that will now come to end.
> They even manage to sell their overpriced frauds like dual cores.
> 
> And i still remeber how certain forum "economic" experts predicts doom for AMD just a few months back.
> 
> Where they are now?


10 years is an eon in IT time. Better late than never, and let us be honest: INTEL stopped with the huge performance jumps 5 years ago. They did what any company would do in their position - cruise control and fill the coffers.

Nobody is robbing anyone. I do not recall INTEL coming to your door with a baseball bat and taking your money, nor them having SEPA rights to your bank account. People got robbed because of their own behaviour: people paid knowing full well the prices were ridiculous.

Nobody is forcing you to buy INTEL, yet everyone complains they get robbed. STOP PAYING THESE PRICES.


----------



## Blameless

Quote:


> Originally Posted by *Redwoodz*
> 
> It was proof. A 95w 3.4 Zen matched 6900k in Floating point and integer (blender & handbrake). That's never even been even close before. Plus first time AMD is not a node behind. Writing is all over the wall it IS competitive or even BETTER.


It was proof that Zen does very well in reference Blender and x264 builds.

The listed TDP of the parts isn't particularly relevant since it's been established that both were consuming very similar amounts of power in the tests run. "Floating point and integer" aren't remotely the most relevant attributes or distinctions of the code executed (and both of these apps use a fair amount of floating point).

There is no doubt that Zen will match or best Intel parts, core for core and clock for clock, in at least a few areas. It's also highly likely that it will be of competitive value in most other areas. However, it's a mistake to take a presentation of what is certainly Zen's most impressive areas and assume it doesn't have any weaknesses.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> It is impossible for anyone to answer that since Zen isn't out, and next to nothing is known.


Impossible to answer, but more because the question was vague than because of what's unknown about Zen's capabilities.
Quote:


> Originally Posted by *hawker-gb*
> 
> And i still remeber how certain forum "economic" experts predicts doom for AMD just a few months back.
> 
> Where they are now?


To be fair, AMD is still in a tenuous economic position. How the Zen and Vega launches go, and how strong long term sales are will determine where they end up.

As we should be well aware, it takes more than a good product to stand up to Intel and NVIDIA...though a good product certainly helps.
Quote:


> Originally Posted by *Wishmaker*
> 
> Nobody is forcing you to buy INTEL yet everyone complains they get robbed. STOP PAYING THESE PRICES and


I've had about a dozen Intel hex core parts, but the most expensive CPU I've ever purchased was the original Athlon 64 3400+ (S754) Clawhammer that I bought in January 2004, which was about 400 dollars at the time.

Been tempted by some more expensive processors since then, but have never bitten, not for my own builds anyway. The gap between the top tier and what's under them, or between Intel's hex cores and octo cores has been too great for me to feel comfortable with.

Depending on how things go, the SR7 could be my new most expensive personal CPU of all time.


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> It was proof that Zen does very well in reference Blender and x264 builds.
> 
> The listed TDP of the parts isn't particularly relevant since it's been established that both were consuming very similar amounts of power in the tests run. "Foating point and integer" aren't remotely the most relevant attributes or distinctions of the code executed (and both of these apps use a fair amount of floating point).
> 
> There is no doubt that Zen will match or best Intel parts, core for core and clock for clock, in at least a few areas. It's also highly likely that it will be of competitive value in most other areas. However, it's a mistake to take a presentation of what is certainly Zen's most impressive areas and assume it doesn't have any weaknesses.
> Impossible to answer, but more because his question was more vague than Zen's capabilities.
> To be fair, AMD is still in a tenuous economic position. How the Zen and Vega launches go, and how strong long term sales are will determine where they end up.
> 
> As we should be well aware, it takes more than a good product to stand up to Intel and NVIDIA...though a good product certainly helps.
> I've had about a dozen Intel hex core parts, but the most expensive CPU I've ever purchased was the original Athlon 64 3400+ (S754) Clawhammer that I bought in January 2004, which was about 400 dollars at the time.
> 
> Been tempted by some more expensive processors since then, but have never bitten, not for my own builds anyway. The gap between the top tier and what's under them, or between Intel's hex cores and octo cores has been too great for me to feel comfortable with.
> 
> Depending on how things go, the SR7 could be my new most expensive personal CPU of all time.


According to the CEO, Zen is a 95w part. Here is a stock 6900K using 136w and an overclocked 6900K using 200w. Zen will never reach over 95w at stock. Give credit where it's due. FX almost never used 125w, but that was used to bash AMD.

http://www.tomshardware.com/reviews/intel-core-i7-broadwell-e-6950x-6900k-6850k-6800k,4587-9.html


----------



## FoeFiddyCuh

I'm ready to retire my 2500k be it Zen or a 6700k from micro center. Being able to seamlessly do 10 things at once is where it's at, so I'm hoping Zen clocks well.


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> According to the CEO. Zen is a 95w part.


Lisa Su was very careful to say "TDP" after any mention of watt figures between the two parts and there is a reason for that. TDP ratings and power consumption are only very loosely related and the test systems in question weren't consuming appreciably different levels of power.
Quote:


> Originally Posted by *budgetgamer120*
> 
> Here is a stock 6900k using 136w


That 136w is measured prior to VRM losses, in what appears to be Prime95 or perhaps OCCT.

Your own link also reveals how little TDP has to do with actual power consumption. Look at the spreads between all the BW-E parts tested there...every single one of those CPUs has a 140w TDP rating.
Quote:


> Originally Posted by *budgetgamer120*
> 
> and an overclocked 6900k using 200w.


Not particularly relevant as OCing, especially beyond a minimal level, dramatically increases power consumption. Indeed, most desktop parts are well beyond peak performance/watt clocks even at stock.

Every system in my signature has a CPU at settings that will cause it to consume more than twice what they will at stock, and all but the 6800K will pull well over 200w, CPU alone. The only reason the 6800K doesn't is because I've been testing it with an H55 which won't cool 200w+.
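A first-order illustration of why overclocking past a minimal level blows up power draw so quickly: CMOS dynamic power scales roughly with frequency times voltage squared. The ratios below are illustrative, not measured figures for any of these chips:

```python
def relative_dynamic_power(freq_ratio, voltage_ratio):
    """First-order CMOS model: P ~ C * V^2 * f, so relative dynamic power
    scales as freq_ratio * voltage_ratio**2 (leakage ignored)."""
    return freq_ratio * voltage_ratio ** 2

# +20% clock needing +15% core voltage is already ~1.59x dynamic power;
# push both a bit further and doubling stock consumption comes fast.
print(round(relative_dynamic_power(1.20, 1.15), 2))  # 1.59
```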
Quote:


> Originally Posted by *budgetgamer120*
> 
> Zen will never reach over 95w at stock.


TDP is there to dictate cooling needs and cap performance to certain levels to assure stock parts don't damage themselves. A radically lower TDP rating with similar manufacturing nodes and similar architectural capabilities either means that the peak performance of the part is less (artificially so or otherwise), or that the parts are of typically lower leakage. Lower leakage parts generally mean they were engineered or binned for low leakage, which is bad news for peak clock speeds.

The only way you can get most stock BW-Es to consume more than ~90w is to utilize very heavy AVX2/FMA3 loads that would almost certainly be retiring far more instructions in a given period of time than an equivalent Zen part. I can get some stock CPUs of modern Intel architectures to consume near their TDP ratings, but only in things like Prime95 or LINPACK...where they are extremely fast.

It's highly unlikely that an average 8c/16t Zen sample is going to be using appreciably less power while doing the same amount of work as a typical 6900K sample. You can quote me on this later when a dozen post-launch reviews show power consumption in normal use to be more or less the same, core for core, at similar speeds.
Quote:


> Originally Posted by *budgetgamer120*
> 
> Give credit where it's due.


I've given credit where it's due, but your understanding of the implication of TDP rating is lacking.
Quote:


> Originally Posted by *budgetgamer120*
> 
> FX almost never used 125w but that was used to bash AMD.


I've had three FX parts (FX-8150, 8350, and a 9590) and they were all much more likely to reach or exceed their rated TDP, in actual power consumption, at stock, than the overwhelming majority of Intel parts. Power consumption was one of the most deserved areas of criticism for the FX series.


----------



## cssorkinman

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> According to the CEO. Zen is a 95w part.
> 
> 
> 
> Lisa Su was very careful to say "TDP" after any mention of watt figures between the two parts and there is a reason for that. TDP ratings and power consumption are only very loosely related and the test systems in question weren't consuming appreciably different levels of power.
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> Here is a stock 6900k using 136w
> 
> 
> That 136w, prior to the VRM losses, in what appears to be Prime95 or perhaps OCCT.
> 
> Your own link also reveals how little TDP has to do with actual power consumption. Look at the spreads between all the BW-E parts tested there...every single one of those CPUs has a 140w TDP rating.
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> and an overclocked 6900k using 200w.
> 
> 
> Not particularly relevant as OCing, especially beyond a minimal level, dramatically increases power consumption. Indeed, most desktop parts are well beyond peak performance/watt clocks even at stock.
> 
> Every system in my signature has a CPU at settings that will cause it to consume more than twice what they will at stock, and all but the 6800K will pull well over 200w, CPU alone. The only reason the 6800K doesn't is because I've been testing it with an H55 which won't cool 200w+.
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> Zen will never reach over 95w at stock.
> 
> 
> TDP is there to dictate cooling needs and cap performance to certain levels to assure stock parts don't damage themselves. A radically lower TDP rating with similar manufacturing nodes and similar architectural capabilities either means that the peak performance of the part is less (artificially so or otherwise), or that the parts are of typically lower leakage. Lower leakage parts generally mean they were engineered or binned for low leakage, which is bad news for peak clock speeds.
> 
> The only way you can get most stock BW-Es to consume more than ~90w is to utilize very heavy AVX2/FMA3 loads that would almost certainly be retiring far more instructions in a given period of time than an equivalent Zen part. I can get some stock CPUs of modern Intel architectures to consume near their TDP ratings, but only in things like Prime95 or LINPACK...where they are extremely fast.
> 
> It's highly unlikely that an average 8c/16t Zen sample is going to be using appreciably less power while doing the same amount of work as a typical 6900K sample. You can quote me on this later when a dozen post-launch reviews show power consumption in normal use to be more or less the same, core for core, at similar speeds.
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> Give credit where it's due.
> 
> 
> I've given credit where it's due, but your understanding of the implication of TDP rating is lacking.
> Quote:
> 
> 
> 
> Originally Posted by *budgetgamer120*
> 
> FX almost never used 125w but that was used to bash AMD.
> 
> 
> I've had three FX parts (FX-8150, 8350, and a 9590) and they were all much more likely to reach or exceed their rated TDP, in actual power consumption, at stock, than the overwhelming majority of Intel parts. Power consumption was one of the most deserved areas of criticism for the FX series.

I'd agree, especially on the 8150 and 9590 - the 8350 might be better if it was a late-batch chip.
My 9370 is otherworldly when it comes to making heat and using power, even in its default stock configuration.
I have a couple of FX 8-cores that can very nearly match my 4790K clock for clock on power consumption (when keeping the FX below its voltage wall), due to being very good undervolting post-batch-1429 chips. Power consumed vs. work completed is a different story, which is dependent on the task and almost always leans in the Intel chips' favor.


----------



## done12many2

Quote:


> Originally Posted by *budgetgamer120*
> 
> According to the CEO, Zen is a 95w part. Here is a stock 6900K using 136w and an overclocked 6900K using 200w. Zen will never reach over 95w at stock. Give credit where it's due. FX almost never used 125w, but that was used to bash AMD.
> 
> http://www.tomshardware.com/reviews/intel-core-i7-broadwell-e-6950x-6900k-6850k-6800k,4587-9.html


You are confused about what TDP actually is.

Anyways, in this video you'll see that under load the total system draw for both is far closer than the 95w vs 140w TDPs would lead you to believe. TDP has nothing to do with the actual amount of power drawn. You'd better believe that if Ryzen is to do anything worthwhile in the area of overclocking, it WILL draw far more than the "95w max" you are suggesting.

If Ryzen is truly a 95w part, then by that logic a 6900K must be a 140w part. I don't see that reflected in the Blender/power-draw demo provided by AMD.


----------



## Marios145

Looks like some people were right: "Infinity Fabric" is not something entirely new, but an evolution of HyperTransport (probably backwards-compatible).
Quote:


> About four years ago, AMD decided to create a superset of Hypertransport to replace separate on-chip interconnects that it used for its CPUs and GPUs. The resulting Infinity fabric debuts in Summit Ridge before April and in AMD's Vega GPUs before July.
> 
> The company declined to give data rates or latency figures for Infinity, which comes only in a coherent version. *However, it said that it is modular and will scale from 30- to 50-GBytes/second versions for notebooks to 512 Gbytes/s and beyond for Vega.*
> 
> AMD does not plan to license the link, *which uses the Hypertransport messaging protocol*. Instead, it will use Infinity both as a network-on-chip and as a clustering link between its GPUs and x86 server SoCs. It supports the open CCIX standard as a link to third-party accelerators such as FPGAs.
> 
> Infinity is agnostic on topologies and will be implemented like a mesh on Vega, said Maurice Steinman, an AMD fellow for client SoC architectures and modeling.
> *It can provide the full bandwidth of any attached DRAM.*


EE Times

There's lots of other stuff in that article. Good read.
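The quoted 30-50 GB/s notebook figure lines up with peak DRAM bandwidth, which is just channels x transfer rate x bus width. A quick check, assuming a typical 2016 notebook config of dual-channel DDR4-2400 (my assumption, not something the article states):

```python
def dram_bandwidth_gbs(channels, megatransfers_per_s, bus_width_bytes=8):
    """Peak theoretical DRAM bandwidth in GB/s (decimal):
    channels * MT/s * bytes per transfer / 1000."""
    return channels * megatransfers_per_s * bus_width_bytes / 1000.0

# Dual-channel DDR4-2400 with a 64-bit (8-byte) bus per channel:
print(dram_bandwidth_gbs(2, 2400))  # 38.4 GB/s, inside the 30-50 GB/s range
```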


----------



## EightDee8D

Some bad news is coming for those who got hyped.


----------



## Wishmaker

Quote:


> Originally Posted by *EightDee8D*
> 
> Some bad news is coming for those who got hyped.


Do tell


----------



## EightDee8D

Quote:


> Originally Posted by *Wishmaker*
> 
> Do tell


Just wait and watch. i can see some popcorn thread incoming.


----------



## Xuper

Please don't! Look at the Anand forums - they're having a nightmare over there!


----------



## sepiashimmer

Quote:


> Originally Posted by *Xuper*
> 
> Please don't! Look at Anand Forum! they got nightmare!


What thread at the Anand forums? I'm looking, but I don't see one there.


----------



## CrazyElf

I'd be very interested to see if the 14 nm LPP process is going to limit overclocks.

Quote:


> Originally Posted by *epic1337*
> 
> wut? are you confusing it with CUs? i mean thats gonna be some HUGE chip.
> if its CUs then those 16CUs amounts to 4cpu cores and 12gpu units.


No, a 16-core APU is the rumor:

http://www.fudzilla.com/news/processors/37588-amd-x86-16-core-zen-apu-to-fight-core-i7

It might end up being an MCM as well, or it could be that the APU dies are a lot smaller.

Quote:


> Originally Posted by *tajoh111*
> 
> Yeah we have to be optimistic while cautiously guarded about hyping up Zen too much.
> 
> The one thing about blender is it's a weakpoint for Intel which they noticeably improved with skylake.
> 
> Vs the 4790k(which is the same frequency), skylake is 16.6% faster. If skylake was 16.6% faster IPC, people would have been happy with the ipc improvement.
> 
> http://www.tomshardware.com/reviews/skylake-intel-core-i7-6700k-core-i5-6600k,4252-7.html
> 
> The same can be said with handbrake too.
> 
> http://proclockers.com/reviews/cpus/intel-core-i7-6700k-cpu-review?nopaging=1
> 
> Skylake is 30% faster than haswell, but again we know in reality the average difference is much lower.
> 
> It's closer to 5% on average. Meaning, tying on blender or beating Intel by 10% on handbrake, could mean 11 percent behind everywhere else which would still be pretty good if AMD can sell us a 6 core for around 300 dollars.
> 
> I wish AMD chose something like cinebench to show it's performance. Cinebench is the gold standard for showing IPC and thread comparisons.


Let's face it, the days of silicon improvement are over. We get single digits, maybe low double digits on a few benchmarks. We've hit the limits of what silicon can do. Things like a memory controller can only be integrated once. The only real innovation I would love is for HBM or HMC to act like an L4 cache on HEDT CPUs somehow.

Maybe if someone finds a way to better take advantage of multithreaded apps that might change. Either that or ASICs (but they have their own drawbacks of course).

Alternatively, unless we see something like graphene take over, we may be done with rapid scaling.

Quote:


> Originally Posted by *Wishmaker*
> 
> 10 years is an eon in IT time. Better late than never and let us be honest, INTEL stopped with the huge performance jumps 5 years ago. They did what any company would do in their position: cruise control and fill the coffers. Nobody is robbing anyone. I do not recall INTEL coming to your door with a baseball bat and taking your money. Neither do they have SEPA rights to your bank account. People got robbed because of their own behaviour: people paid knowing clearly the prices were ridiculous.
> 
> Nobody is forcing you to buy INTEL yet everyone complains they get robbed. STOP PAYING THESE PRICES and


Intel is a monopoly at this point.

They can extract economic rent and price at whatever the market will bear, maximizing their surplus and minimizing consumer surplus.

It's like saying, "don't buy from the local telecom monopoly". But if you don't, you won't have Internet at home or for that matter a mobile data and talk plan.

Basically I feel like you're being an apologist for a monopoly here.

Quote:


> Originally Posted by *hawker-gb*
> 
> AMD matching or exceeding Intel, despite Intel's order-of-magnitude bigger budget.
> 
> Intel has robbed people with their prices for so long; I hope that will now come to an end.
> They even manage to sell their overpriced frauds like dual cores.
> 
> And I still remember how certain forum "economic" experts predicted doom for AMD just a few months back. Where are they now?


Very few people in this world are ever willing to admit they are wrong about anything.

I personally was hoping for a "good enough" single threaded performance and aggressively priced CPU. That and massively improved performance per watt. I may get my wish. We need a lot of third party benchmarks at this point to prove/disprove it.

Quote:


> Originally Posted by *Blameless*
> 
> There is no doubt that Zen will match or best Intel parts, core for core and clock for clock, in at least a few areas. It's also highly likely that it will be of competitive value in most other areas. However, it's a mistake to take a presentation of what is certainly Zen's most impressive areas and assume it doesn't have any weaknesses.
> Impossible to answer, but more because his question was more vague than Zen's capabilities.
> To be fair, AMD is still in a tenuous economic position. How the Zen and Vega launches go, and how strong long term sales are will determine where they end up.


At this point, they NEED a good product for a turnaround. It simply cannot be done without a competitive product. Bulldozer was a total failure in that regard.

They need to find their niches and sell lots of high margin Opterons (and perhaps professional GPUs as well).

But yes, sales still matter and a good product (technically good) doesn't assure sales either. On the GPU front, I like the way AMD has been marketing and it does look like even Polaris, although it certainly did not meet the 2.8x power efficiency claims, did help improve RTG's fortunes. They'll need a lot more than marketing though to move those Zen Opterons in the volume they want, even if it is a good product.

I think that this thread has reached a point where barring further information, we aren't going to get very far.

We know that AMD has improved non-AVX IPC by a lot.
Power consumption too, even in a bad-case scenario, is significantly improved compared to Bulldozer.
Judging by the Blender performance, AMD's SMT seems to be very strong. It kept up with the 6900k after all on a textbook "embarrassingly parallel" bench. Beyond that we don't have any other information. Need more benchmarks.
We don't know how good Zen's branch prediction is because the benchmarks didn't show us much. We need a branch intensive benchmark. On paper, Zen does have the potential to be good, perhaps better than Intel.
We know that AVX and other SIMD optimizations can drastically improve the performance on Intel CPUs for Blender, x265, and other apps.

The only real insight is if someone could run that Blender bench with a custom build on Skylake or Kaby Lake. We know the 40% is true, but could Skylake really get 60% over the public build?

Otherwise we need third party benchmarks. The only other "cool" thing we might see soon though are the motherboards. A platform I'd argue is as important as the performance of the CPU.


----------



## Maintenance Bot

Quote:


> Originally Posted by *EightDee8D*
> 
> Some bad news is coming for those who got hyped.


The 1188 cb score....


----------



## IRobot23

Quote:


> Originally Posted by *EightDee8D*
> 
> Some bad news is coming for those who got hyped.


Because of the new leaked benchmarks? Well, we don't know how well Ryzen will perform in CB15.


----------



## EightDee8D

We don't know a lot of things, but I always said: just focus on the one thing Dr. Su said, "40% more IPC vs. XV (Excavator)", and you won't be disappointed. I guess we will see soon.

In any case, Ryzen will improve a lot of things for AMD.


----------



## Xuper

Quote:


> Originally Posted by *sepiashimmer*
> 
> What thread at Anand Forum? I'm looking but I don't see any there.


Start from there

https://forums.anandtech.com/threads/summit-ridge-zen-benchmarks.2482739/page-138#post-38638517


----------



## IRobot23

Nice rumor. We are going to see many rumors before the final release.


----------



## Gumbi

Isn't Cinebench specifically gimped for AMD? I don't know by how much though. As always, it's a waiting game for real benches to come out.


----------



## Particle

Quote:


> Originally Posted by *Gumbi*
> 
> Isn't Cinebench specifically gimped for AMD? I don't know by how much though. As always, it's a waiting game for real benches to come out.


It depends on what you mean by that. There is no reason to believe that Cinebench is specifically constructed so as to make AMD perform worse as a goal in itself, but that is the practical consequence of them compiling it with Intel's compiler. Using Intel's compiler isn't an unreasonable decision for a software developer, however.
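For anyone curious what "compiling it with Intel's compiler" implies in practice: the reported behavior is a runtime dispatcher keyed on the CPUID vendor string, so a non-Intel CPU can be routed to the baseline code path even when it advertises the same SIMD features. This is only a toy model of that pattern, not Cinebench's or the compiler's actual code:

```python
def select_code_path(vendor: str, features: set) -> str:
    # Toy model of vendor-keyed dispatch: the vectorized path is gated
    # on the vendor string, not just on the reported feature flags.
    if vendor == "GenuineIntel" and "sse2" in features:
        return "vectorized"
    return "baseline"  # non-Intel CPUs with SSE2 still land here

print(select_code_path("GenuineIntel", {"sse2"}))   # vectorized
print(select_code_path("AuthenticAMD", {"sse2"}))   # baseline
```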


----------



## Wishmaker

Quote:


> Originally Posted by *EightDee8D*
> 
> Just wait and watch. i can see some popcorn thread incoming.


We already have the "smaller budget, already better" posts.
Quote:


> Originally Posted by *CrazyElf*
> 
> I'd be very interested to see if the 14 nm LPP process is going to limit overclocks.
> No 16 core APU is the rumor.
> 
> http://www.fudzilla.com/news/processors/37588-amd-x86-16-core-zen-apu-to-fight-core-i7
> 
> 
> 
> It might end up being a MCM as well or it could be that APU dies are a lot smaller.
> Let's face it, the days of silicon improvement are over. We get single digits, maybe low double digits on a few benchmarks. We've hit the limits of what silicon can do. Things like a memory controller can only be integrated once. The only real innovation I would love is for HBM or HMC to act like an L4 cache on HEDT CPUs somehow.
> 
> Maybe if someone finds a way to better take advantage of multithreaded apps that might change. Either that or ASICs (but they have their own drawbacks of course).
> 
> Alternatively, unless we see something like graphene take over, we may be done with rapid scaling.
> Intel is a monopoly at this point.
> 
> They can extract economic rent and price at whatever the market will bear, maximizing their surplus and minimizing consumer surplus.
> 
> It's like saying, "don't buy from the local telecom monopoly". But if you don't, you won't have Internet at home or for that matter a mobile data and talk plan.
> 
> Basically I feel like you're being an apologist for a monopoly here.
> Very few people in this world are ever willing to admit they are wrong about anything.
> 
> I personally was hoping for a "good enough" single threaded performance and aggressively priced CPU. That and massively improved performance per watt. I may get my wish. We need a lot of third party benchmarks at this point to prove/disprove it.
> At this point, they NEED a good product for a turnaround. It simply cannot be done without a competitive product. Bulldozer was a total failure from that regard.
> 
> They need to find their niches and sell lots of high margin Opterons (and perhaps professional GPUs as well).
> 
> But yes, sales still matter and a good product (technically good) doesn't assure sales either. On the GPU front, I like the way AMD has been marketing and it does look like even Polaris, although it certainly did not meet the 2.8x power efficiency claims, did help improve RTG's fortunes. They'll need a lot more than marketing though to move those Zen Opterons in the volume they want, even if it is a good product.
> 
> I think that this thread has reached a point where barring further information, we aren't going to get very far.
> 
> We know that AMD has improved non-AVX IPC by a lot.
> Power consumption too, even in a bad-case scenario, is significantly improved compared to Bulldozer.
> Judging by the Blender performance, AMD's SMT seems to be very strong. It kept up with the 6900k after all on a textbook "embarrassingly parallel" bench. Beyond that we don't have any other information. Need more benchmarks.
> We don't know how good Zen's branch prediction is because the benchmarks didn't show us much. We need a branch intensive benchmark. On paper, Zen does have the potential to be good, perhaps better than Intel.
> We know that AVX and other SIMD optimizations can drastically improve the performance on Intel CPUs for Blender, x265, and other apps.
> 
> The only real insight is if someone could run that Blender bench with a custom build on Skylake or Kaby Lake. We know the 40% is true, but could Skylake really get 60% over the public build?
> 
> Otherwise we need third party benchmarks. The only other "cool" thing we might see soon though are the motherboards. A platform I'd argue is as important as the performance of the CPU.






You give me an example of a telecoms company with a monopoly and then claim that I am being apologetic to INTEL. Fair enough; however, your analogy is so wrong that we do not even have a unit of measurement for it. INTEL is a monopoly and has been for the past 15 years, *however*, you can still have a computer if you do not buy INTEL.

I do not know where you live, nor do I care, but in my area you can buy plenty of AMD chips and the required parts to build a computer. Clicky. There you go: 8 cores at 4 GHz should be enough to run some apps.

Many claim that Intel is robbing them and forcing them to pay. Why don't people note in their 'I am a victim' posts that they are forced to buy INTEL because of the performance aspect? You can do plenty with an 8350 Vishera chip.

So your argument that Intel is a monopoly and we *cannot buy something else* is wrong. If you rephrase and say INTEL is a monopoly and we cannot buy, for the time being, something equally good, then I will cut you some slack.


----------



## Blameless

I wouldn't be especially surprised if the Cinebench and/or the Fritz Chess benchmark rumors/leaks turned out to be accurate.

Fritz involves a lot of complex branch work; if AMD's misprediction penalty is higher than Intel's, this could easily explain the discrepancy.

Cinebench is less clear. It's (a benchmark of) a rendering program, much like Blender, and like the reference Blender builds it doesn't use newer instruction sets. The discrepancy here could be compiler related.


----------



## Ultracarpet

Quote:


> Originally Posted by *Xuper*
> 
> Start from there
> 
> https://forums.anandtech.com/threads/summit-ridge-zen-benchmarks.2482739/page-138#post-38638517


That's honestly pretty believable in my mind. An FX 8-series is mid-500 points at 3.5 GHz in R15... so that rumored score is about 100 points beyond twice the FX 8-core's, at the same or nearly the same clocks.
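Sanity-checking that arithmetic, taking the mid-500s as 550 and the 1188 score mentioned elsewhere in the thread (both rough numbers):

```python
fx_r15 = 550            # rough FX 8-core multi-thread score at ~3.5 GHz
rumored = 1188          # leaked Cinebench R15 score discussed in the thread

doubled = 2 * fx_r15    # 1100 would be exactly twice the FX score
margin = rumored - doubled
print(doubled, margin)  # 1100, 88 -> "about 100 points over twice as fast"
```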


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> Lisa Su was very careful to say "TDP" after any mention of watt figures between the two parts and there is a reason for that. TDP ratings and power consumption are only very loosely related and the test systems in question weren't consuming appreciably different levels of power.
> That 136w, prior to the VRM losses, in what appears to be Prime95 or perhaps OCCT.
> 
> Your own link also reveals how little TDP has to do with actual power consumption. Look at the spreads between all the BW-E parts tested there...every single one of those CPUs has a 140w TDP rating.
> Not particularly relevant as OCing, especially beyond a minimal level, dramatically increases power consumption. Indeed, most desktop parts are well beyond peak performance/watt clocks even at stock.
> 
> Every system in my signature has a CPU at settings that will cause it to consume more than twice what they will at stock, and all but the 6800K will pull well over 200w, CPU alone. The only reason the 6800K doesn't is because I've been testing it with an H55 which won't cool 200w+.
> TDP is there to dictate cooling needs and cap performance to certain levels to assure stock parts don't damage themselves. A radically lower TDP rating with similar manufacturing nodes and similar architectural capabilities either means that the peak performance of the part is less (artificially so or otherwise), or that the parts are of typically lower leakage. Lower leakage parts generally mean they were engineered or binned for low leakage, which is bad news for peak clock speeds.
> 
> The only way you can get most stock BW-Es to consume more than ~90w is to utilize very heavy AVX2/FMA3 loads that would almost certainly be retiring far more instructions in a given period of time than an equivalent Zen part. I can get some stock CPUs of modern Intel architectures to consume near their TDP ratings, but only in things like Prime95 or LINPACK...where they are extremely fast.
> 
> It's highly unlikely that an average 8c/16t Zen sample is going to be using appreciably less power while doing the same amount of work as a typical 6900K sample. You can quote me on this later when a dozen post-launch reviews show power consumption in normal use to be more or less the same, core for core, at similar speeds.
> I've given credit where it's due, but your understanding of the implication of TDP rating is lacking.
> I've had three FX parts (FX-8150, 8350, and a 9590) and they were all much more likely to reach or exceed their rated TDP, in actual power consumption, at stock, than the overwhelming majority of Intel parts. Power consumption was one of the most deserved areas of criticism for the FX series.


It doesn't really matter how you twist it: the 6900K is a 140W chip and Zen is 95W. That's how it was always judged, before AMD had something good going.


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> It doesn't really matter how you twist it. 6900k is a 140w chip and Zen is 95w.


I'm not twisting anything, you are simply wrong.


----------



## Fyrwulf

Quote:


> Originally Posted by *CrazyElf*
> 
> One post from juanrga caught my eye. He claims that Skylake can be 60% faster with SIMD optimizations and Haswell 40% faster. Well the 40% we know thanks to Blameless' tests to be true. That said, the idea that a custom build of Blender will gain 60% with Skylake seems extraordinary, especially considering the idea that there are no new instruction sets on Skylake, but considering we saw >40% with AVX2, perhaps it might happen. *Could someone with a Skylake (or Kaby Lake) CPU run The Stilt's custom Blender builds please?*


Juanrga is a paid shill. He has never, to the best of my knowledge, posted anything that is even neutral towards AMD. Anything that comes from him I automatically discard as garbage.

Quote:


> *Server-side*
> So far the rumors are:
> 
> Up to 8 cores will be on a single, monolithic die (Zen)
> Snowy Owl might be compatible with the same socket, and is a 16-core MCM (so 2 Zens glued together?). This is to be a BGA product.
> Venice might be a 4 chip MCM? This is to be their monster with 32 cores/64 threads.


AMD using BGA packaging for anything but embedded would be nearly unprecedented. It might actually be unprecedented. Traditionally Opteron processors have been LGA.
Quote:


> The Xeon E5 2699A v4 is at 456.12 mm^2 with 24 cores (22 enabled) at 2.4 GHz. So Purley, scaling up would make it 608.16 mm^2. Likely more though because of the extra room for the six channel memory and if there are any other IOs.


I think IBM was the only one with the tech to produce chips larger than 600mm^2. The old IBM chip experimental foundry is now owned by GloFo, though, and that technology wasn't ready for prime time. So if Purley is actually that big, it's an MCM.
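The 608.16 mm² figure in the quote is just linear scaling by core count, which, as the quote itself notes, understates the real size since extra memory channels and I/O don't scale the same way:

```python
known_area, known_cores = 456.12, 24   # quoted E5-2699A v4 die: mm^2, cores on die
target_cores = 32

# Linear core-count scaling -- a lower bound, ignoring uncore growth
scaled_area = known_area * target_cores / known_cores
print(round(scaled_area, 2))  # 608.16 mm^2 before six-channel memory and extra I/O
```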
Quote:


> They come in MCMs of 8, so unless they are going to combine 2 of them as a MCM (they are doing this for a 16 core Snowy Owl) and disable 6 cores, that won't happen.


Okay, first of all, MCM means multi-chip module, which means two or more separate dies on a unified package. Given that Summit Ridge isn't an MCM, that's a negative, Ghostrider. Second, AMD is using a core complex of 4 cores as the basic building block for its processors. Intel uses multiples of two because its basic building block is two cores. Therefore, any potential competitor to Intel's HEDT 10-core chips will probably be 12 cores. Considering that the max TDP for AM4 is 140 watts, it's possible AMD could choose to do that, although I don't think it'll happen before the end of next year, if it happens at all.
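The 12-core conclusion follows from rounding up to whole 4-core complexes; a minimal sketch of that arithmetic (the 4-core block size is from the post above):

```python
import math

def min_cores(target, block=4):
    # Smallest core count built from whole core complexes
    # that meets or exceeds the target.
    return math.ceil(target / block) * block

print(min_cores(10))  # 12: a rival to Intel's 10-core HEDT parts lands on 12 cores
print(min_cores(8))   # 8: Summit Ridge itself is two complexes
```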


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> I'm not twisting anything, you are simply wrong.


I find reviews showing the 6900K using 136W or more. How am I wrong?
And they all say the 6900K is a 140W chip, as stated at its introduction.

Let's change that now that AMD is back in the game: TDP doesn't matter anymore.


----------



## PostalTwinkie

Quote:


> Originally Posted by *budgetgamer120*
> 
> I find reviews showing the 6900K using 136W or more. How am I wrong?
> And they all say the 6900K is a 140W chip, as stated at its introduction.
> 
> Let's change that now that AMD is back in the game: TDP doesn't matter anymore.


TDP =/= Wattage/Power draw.

That means Thermal Design Power isn't the wattage a chip draws, TDP is a thermal measurement.

EDIT:

There also isn't any standard testing methodology for determining TDP; it is left to the various manufacturers. While there is a formula used to get the number, the variables that feed it are far from standardized.

Think of TDP as the processor's equivalent of display contrast ratio ratings: ballpark figures, without standards.

Ehhh....that last part is a bit too broad of a statement about it.
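For reference, one form of the formula that has circulated for AMD's ratings is purely thermal, not electrical: TDP = (max case temperature − intake temperature) / cooler thermal resistance. A sketch with hypothetical inputs (none of these values belong to a specific SKU):

```python
def tdp_watts(t_case_max, t_ambient, theta_ca):
    # Heat a cooler with thermal resistance theta_ca (degC per watt)
    # can move while holding the case at t_case_max with t_ambient intake air.
    return (t_case_max - t_ambient) / theta_ca

# Hypothetical inputs: 62C case limit, 42C intake, 0.21 C/W cooler
print(round(tdp_watts(62, 42, 0.21)))  # ~95 -- a thermal target, not measured draw
```

Note the formula contains no power-consumption term at all, which is the point being made above.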


----------



## SoloCamo

Quote:


> Originally Posted by *budgetgamer120*
> 
> I find reviews showing the 6900K using 136W or more. How am I wrong?
> And they all say the 6900K is a 140W chip, as stated at its introduction.
> 
> Let's change that now that AMD is back in the game: TDP doesn't matter anymore.


TDP is not, and has never been, a true indicator of power usage.


----------



## tpi2007

Those leaked benchmarks remind me of Bulldozer a bit. Back then they put the FX-8150 against a 2500K in the Fritz and Cinebench benchmarks. In the Handbrake one they felt confident enough to show both the 2500K and the 2600K results, where the FX-8150 was able to beat the former and equal the latter.

This time they showed neither the Fritz benchmark nor the Cinebench one, and this leak may tell why. Assuming, of course, it's accurate and not based on a lower-clocked ES sample, for example.

Here is the official Bulldozer comparison video from 2011:




Quote:


> Originally Posted by *budgetgamer120*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Blameless*
> 
> I'm not twisting anything, you are simply wrong.
> 
> 
> 
> I find reviews showing the 6900K using 136W or more. How am I wrong?
> And they all say the 6900K is a 140W chip, as stated at its introduction.
> 
> Let's change that now that AMD is back in the game: TDP doesn't matter anymore.
Click to expand...

TDP matters, but TDP and power consumption are not the same thing. In one of the tests both systems were using very similar amounts of power while doing the same heavy CPU task.

Also, TDP ratings are broad umbrella categories where both Intel and AMD put their CPUs. It gives them margin and lets them account for yield quality, for example, so any isolated example may or may not be representative. It also allows them to have the same specifications for the same platform, so you know what kind of heatsink for the CPU category and power delivery on the motherboard should be in place, so you don't have any headaches when upgrading to a higher spec CPU on the same motherboard and with the same cooling solution later down the road.

Take my current CPU, the i7-3820: it's a quad core, based on a quad-core die, that has the same 130W TDP as the hexacore 3960X, which is based on an octa-core die. It doesn't use anywhere close to 130W, nor does it dissipate that much heat. Even more evident is the case of the 4820K, which is a 22nm quad core, also with a 130W TDP.


----------



## budgetgamer120

Quote:


> Originally Posted by *tpi2007*
> 
> Those leaked benchmarks do remind me of Bulldozer a bit. Back then they put the FX-8150 against a 2500K in the Fritz and Cinebench benchmarks. On the Handbrake one they felt confident to show both the 2500K and the 2600K results, where the FX-8150 was able to beat the former and equal the latter.
> 
> This time they didn't show either the Fritz benchmark nor the Cinebench one. And this leak may tell why. Assuming, of course, it's accurate and not based on a lower clocked ES sample, for example.
> TDP matters, but TDP and power consumption are not the same thing. In one of the tests both systems were using very similar amounts of power while doing the same heavy CPU task.
> 
> Also, TDP ratings are broad umbrella categories where both Intel and AMD put their CPUs. It gives them margin and lets them account for yield quality, for example, so any isolated example may or may not be representative. It also allows them to have the same specifications for the same platform, so you know what kind of heatsink for the CPU category and power delivery on the motherboard should be in place, so you don't have any headaches when upgrading to a higher spec CPU on the same motherboard and with the same cooling solution later down the road.
> 
> Take my current CPU, the i7-3820, it's a quad core, based on a quad core die, that has the same 130w TDP as the hexacore 3960X, which is based on an octa core die. It doesn't use not even close to 130w nor dissipates that much heat. Even more evident is the case of the 4820K, which is a 22nm quad core, also with a 130w TDP.


There are workloads that will make your CPU use 130W; CPU stress software will. That shows the CPU can reach it, like I see in the 6900K reviews. A 95W Zen part will never use that, even in a stress test, while a 6900K will use close to 140W under a stress test.

Otherwise my 85W CPU never uses 85W. I know about the workload thing, but TDP is not some made-up number.


----------



## epic1337

TDP's relation to power consumption is like the relation between a car's redline and its speed: just because the car's redline is 10K RPM doesn't mean it'll always be running at 10K RPM. And yes, you can exceed the TDP, much like a car can exceed the redline.

On a side note, thermal dissipation cannot exceed the wattage consumed.


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> I find reviews showing the 6900K using 136W or more. How am I wrong?


You didn't find a review showing the 6900K using 136w or more. You found a review where the +12v EPS connector was supplied with 136w. You also aren't showing anything but BW-Es in that same scenario, so there is nothing to compare the measurement against; we don't know if Zen, in the same scenario, would pull more or less power.

We have _one_ test where power consumption figures are shown between Zen and BW-E. This is the AMD presentation in the video shown several posts ago in this very thread. The 6900K system has ~5w higher peak power draw and a ~10w lower delta between idle and load, relative to the Zen system. This is extremely close, and the difference could be put down to the motherboard used, the fixed clock speed of the Zen part, or just per-sample variance of the CPUs themselves (not every 6900K uses the same amount of power).
Quote:


> Originally Posted by *budgetgamer120*
> 
> And they all say the 6900K is a 140W chip, as stated at its introduction.


We have two very similar shirts in front of us, from different brands...

I'm saying they are both about 40 inches around the chest.

You say that doesn't matter because the one on the left is a medium, while the one on the right is an extra-large.

I say what one brand calls medium and one calls extra large doesn't matter if they are actually the same size.

You say that doesn't matter because the one on the left is a medium while the one on the right is an extra large...then accuse me of trying to spin things.


----------



## dmasteR

Quote:


> Originally Posted by *Blameless*
> 
> You didn't find a review showing the 6900K using 136w or more. You found a review where the +12v EPS connector was supplied with 136w. You also aren't showing anything but BW-Es in that same scenario, so there is nothing to compare the measurement against; we don't know if Zen, in the same scenario, would pull more or less power.
> 
> We have _one_ test where power consumption figures are shown between Zen and BW-E. This is the AMD presentation in the video shown several posts ago in this very thread. The 6900K system has ~5w higher in peak power draw and a ~10w lower delta between idle and load, relative to the Zen system. This is extremely close, and the difference could be put down to motherboard used, the fixed clock speed of the Zen part, or just per-sample variance of the CPUs themselves (not every 6900K uses the same amount of power).
> We have two very similar shirts in front of us, from different brands...


https://www.youtube.com/watch?v=7yxSFmEOkrA&feature=youtu.be&t=41
Their "Idle": Ryzen 93W vs Intel 106.5W
Their "load": Ryzen 188W vs Intel 191W

For all those who haven't seen the video that Blameless is referring to.

It's nothing new. TDP is calculated differently between Intel and AMD. Why is this even a surprise to anyone?
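Those demo numbers also let you check the peak and idle-to-load comparisons directly (wall power, so whole-system figures):

```python
ryzen_idle, ryzen_load = 93.0, 188.0     # watts at the wall, AMD demo
intel_idle, intel_load = 106.5, 191.0

peak_gap = intel_load - ryzen_load       # 3.0 W higher peak for the 6900K system
# Idle-to-load swing: how much extra power each CPU pulls under load
delta_gap = (ryzen_load - ryzen_idle) - (intel_load - intel_idle)  # 10.5 W bigger swing for Ryzen
print(peak_gap, delta_gap)
```

Essentially a wash at the wall, which is why the TDP labels alone settle nothing.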


----------



## Kuivamaa

Quote:


> Originally Posted by *Xuper*
> 
> Start from there
> 
> https://forums.anandtech.com/threads/summit-ridge-zen-benchmarks.2482739/page-138#post-38638517


This performs roughly like my E5-2650 v2 Xeon (2.6 GHz base, Ivy Bridge, 8C/16T), which seems suspicious. I think it is fake, and this is why:

Look at the CB15 image of leaks:



The score is visible but the CPU ID is cropped. This is of course fishy in itself, but bear with me. The ID characters are not totally cropped; we can see the tops of them. Now, look at this pic of results...



...and compare the tops of the processor IDs to the one from the leak. AMD ones like FX or Opteron can be excluded since they are extremely long and end in a string of lower-case letters, which would have been invisible here. Intel ones are shorter and simpler: they either end in "CPU" in the case of the Core family, or in four numerals if they are Xeons. The leak does not seem to end in "CPU", so by elimination, the leaked CPU ID belongs to a Xeon.


----------



## Mahigan

Intel 6900K power usage:





Looks to me like it is hitting 136 to 141W under full load. That's what the TDP claims and that is what we're seeing. Or am I missing something?


----------



## lolerk52

It depends on workload.
The 140W TDP figure is reached with AVX256, which is a power hog. Under most circumstances it doesn't approach that. Zen doesn't have the same AVX capabilities as Broadwell or Skylake, so its maximum is lower.

Notice how it says "Torture".


----------



## Mahigan

Quote:


> Originally Posted by *lolerk52*
> 
> It depends on workload.
> The 140W TDP is done with AVX256, which is a power hog. Under most circumstances it doesn't approach that. Zen doesn't have the same AVX capabilities as Broadwell or Skylake, so the maximum is lower.
> 
> Notice how it says "Torture".


Yes, I am not stupid. I see where it says "Torture". It doesn't change a thing, though (that's the point of a TDP), because Blender uses AVX too. Handbrake uses AVX. These are the tests conducted by AMD, so the tests had both the Intel and AMD CPUs (Ryzen and the 6900K) using AVX (because pretty much all transcode applications use AVX).

So when you look at any application which maximizes CPU usage, you see 140W+ power usage on the 6900K (at stock). I see similar results with my 3930K.

Have a look: http://www.guru3d.com/articles-pages/core-i7-6950x-6900k-6850k-and-6800k-processor-review,7.html

Of course this doesn't mean that you will be hitting this power usage 24/7. What it does mean is that under full load, the power usage of the 6900K is matching its TDP. We know that RyZen consumes less power under similar circumstances.

How will RyZen deal with Gaming power usage (average power usage)? That remains to be seen but we cannot discount the full load power usage figures.


----------



## lolerk52

Quote:


> Originally Posted by *Mahigan*
> 
> Yes.. I am not stupid. I see where it says "Torture". It doesn't change a thing though (that's the point of a TDP). Because Blender uses... AVX too. Handbrake uses, AVX. These are the tests conducted by AMD. So the Tests conducted by AMD had both the Intel and AMD CPUs (RyZen and the 6900K) using AVX (because pretty much all transcode applications use AVX).
> 
> So when you look at any application which maximizes the CPU usage, you see the +140W power usage on the 6900K (at stock). I see similar results with my 3930K.
> 
> Have a look: http://www.guru3d.com/articles-pages/core-i7-6950x-6900k-6850k-and-6800k-processor-review,7.html
> 
> Of course this doesn't mean that you will be hitting this power usage 24/7. What it does mean is that under full load, the power usage of the 6900K is matching its TDP. We know that RyZen consumes less power under similar circumstances.
> 
> How will RyZen deal with Gaming power usage (average power usage)? That remains to be seen but we cannot discount the full load power usage figures.


In situations where it will draw 140W, it will also be significantly faster.
Blender doesn't normally use AVX256, only AVX128. But there's a build of the testing version compiled with AVX256 support, and it SIGNIFICANTLY speeds things up in exchange for the higher power draw. My 6600K went from 1 minute 20 seconds to a time in the 40's (don't remember exactly).

What I'm saying is, while the TDPs are different, at similar performance levels they appear to be drawing similar amounts of power according to the AMD demo. I expect that AMD's 95W TDP is simply there because they don't have the AVX256 capabilities that Intel has had since Haswell. And it's not surprising, considering AVX256 is very costly in terms of die space and increases power consumption significantly.
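The speedup described above can be put in rough numbers. A minimal sketch, with the caveat that the faster time was only remembered as "in the 40's", so the 45 s figure is an assumed midpoint, not a measured value, and `speedup` is just an illustrative helper:

```python
# Rough illustration of the AVX256 build speedup described above.
# The 45 s figure is an assumed midpoint of "a time in the 40's",
# not a measured value.

def speedup(before_s: float, after_s: float) -> float:
    """Ratio of the slower render time to the faster one."""
    return before_s / after_s

before = 80.0  # 1 minute 20 seconds with the stock (AVX128) build
after = 45.0   # assumed midpoint of the remembered AVX256 result
print(f"~{speedup(before, after):.1f}x faster with the AVX256 build")
```

Even with the fuzzy second number, that's in the same ballpark as the ~40-50% gains reported elsewhere in this thread for AVX-enabled builds.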


----------



## Blameless

Quote:


> Originally Posted by *Mahigan*
> 
> Looks to me like it is hitting 136 to 141W under full load. That's what the TDP claims and that is what we're seeing. Or am I missing something?


You are missing the test methodology. Tom's measured the current going through the EPS +12v lines; this is not what's being delivered to the CPU. You have to go through the board's VRM first, which is going to be ~90% efficient, at best. This isn't horribly far off TDP (127 vs. 140), but that's also not how Intel defines TDP either (pure fast Fourier transforms probably don't qualify as "commercially useful software").

There is also the missing Zen comparison. Zen likely won't draw the same sort of current in recent versions of Prime95 (as it can't do as much AVX 256 work), but without actually testing Zen in such a scenario, we can't say for certain. Neither Intel nor AMD claim their parts will never exceed TDP.
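The EPS-12V caveat is easy to put in numbers. A small sketch assuming the ~90% VRM efficiency mentioned above (the function name and the fixed-efficiency model are illustrative, not part of Tom's actual methodology):

```python
# Sketch of the measurement caveat: power measured on the EPS +12V
# lines includes VRM losses, so the CPU itself receives less.

def cpu_power_from_eps(eps_watts: float, vrm_efficiency: float = 0.90) -> float:
    """Estimate power delivered to the CPU from EPS +12V line power,
    assuming a fixed VRM efficiency (real boards vary with load)."""
    return eps_watts * vrm_efficiency

measured = 140.0  # W at the EPS connector under a torture load
print(f"~{cpu_power_from_eps(measured):.0f} W actually reaching the CPU")
```

In practice VRM efficiency changes with load and temperature, so the delivered figure is an estimate either way.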
Quote:


> Originally Posted by *Mahigan*
> 
> Because Blender uses... AVX too.


Not the default Windows version.

Compile it to use AVX and it will, but it's also ~40% faster than the precompiled binaries available from the Blender site.

http://www.overclock.net/t/1618534/blender-ryzen-scene-benchmark#post_25714916
Quote:


> Originally Posted by *Mahigan*
> 
> Handbrake uses, AVX.


Barely. Disabling AVX only makes it 2-3% slower.

http://www.overclock.net/t/1617227/amd-new-horizon-zen-preview-on-12-13-at-3-pm-cst/1300#post_25716380
Quote:


> Originally Posted by *Mahigan*
> 
> These are the tests conducted by AMD. So the Tests conducted by AMD had both the Intel and AMD CPUs (RyZen and the 6900K) using AVX (because pretty much all transcode applications use AVX).


The tests conducted by AMD (which, again, are not AVX intensive) do not show an appreciable power consumption differential.
Quote:


> Originally Posted by *Mahigan*
> 
> So when you look at any application which maximizes the CPU usage, you see the +140W power usage on the 6900K (at stock). I see similar results with my 3930K.


A loose correlation.

I have 130w TDP parts that hit 130w peak load. I also have 130w TDP parts that won't crack 80w peak load.

The very Tom's review you are pulling images from also features a 140w TDP (the 6800K) part that is barely cracking 105w in Prime95 and another that probably isn't reaching 95w (the 6850K) in addition to some that are coming closer to the TDP rating they've been given.
Quote:


> Originally Posted by *Mahigan*
> 
> We know that RyZen consumes less power under similar circumstances.


No we do not.
Quote:


> Originally Posted by *Mahigan*
> 
> How will RyZen deal with Gaming power usage (average power usage)? That remains to be seen but we cannot discount the full load power usage figures.


You have it entirely backwards. All we have are real-world load figures, which are about the same as the equivalent loads on the closest competing configuration. We do not know Zen's peak load figures.


----------



## tpi2007

Quote:


> Originally Posted by *Mahigan*
> 
> Yes.. I am not stupid. I see where it says "Torture". It doesn't change a thing though (that's the point of a TDP). Because Blender uses... AVX too. Handbrake uses, AVX. These are the tests conducted by AMD. So the Tests conducted by AMD had both the Intel and AMD CPUs (RyZen and the 6900K) using AVX (because pretty much all transcode applications use AVX).
> 
> So when you look at any application which maximizes the CPU usage, you see the +140W power usage on the 6900K (at stock). I see similar results with my 3930K.
> 
> Have a look: http://www.guru3d.com/articles-pages/core-i7-6950x-6900k-6850k-and-6800k-processor-review,7.html
> 
> Of course this doesn't mean that you will be hitting this power usage 24/7. What it does mean is that under full load, the power usage of the 6900K is matching its TDP. *We know that RyZen consumes less power under similar circumstances.*
> 
> How will RyZen deal with Gaming power usage (average power usage)? That remains to be seen but we cannot discount the full load power usage figures.


Actually we don't know that (what I put in bold). Handbrake puts the CPU under full load (or close to it), and so do Blender and Cinebench, but they aren't torture tests like the Prime95 small FFTs or, even more so, the in-place large FFTs test. The power consumption of the CPU is not the same. My overclocked 3820 uses around 122W on the in-place large FFTs Prime95 test, whereas while rendering the Ryzen Blender file it only uses 90W. Both are reported as 100% CPU load, but they are different loads.


----------



## epic1337

take note, intel has their entire HEDT lineup rated at ~140W TDP, regardless of whether it's a 4C/8T chip or a 10C/20T chip.
on this note, it's unlikely for a 4C/8T chip to pull as much power as a 10C/20T chip under full load at the same clock.

this means that TDP bears little relation to the chip's actual power draw.
rather, the TDP rating is used to specify a chip's cooling requirements and platform requirements (VRM handling capacity).


----------



## looniam

so . . . ok, i can't help myself









wouldn't a lower TDP but ~same power consumption mean either the chip is more efficient or is able to handle higher core temps?


----------



## epic1337

Quote:


> Originally Posted by *looniam*
> 
> so . . . ok, i can't help myself
> 
> 
> 
> 
> 
> 
> 
> 
> 
> wouldn't a lower TDP but ~same power consumption mean either the chip is more efficient or is able to handle higher core temps?


that's an impossible scenario.

a processor is like a heating element: the majority of its power consumption is converted into heat at a ratio of almost 1:1.
which means to say, you can't dissipate more heat at the same power consumption or vice versa, otherwise it would break the law of conservation of energy.


----------



## lolerk52

Quote:


> Originally Posted by *epic1337*
> 
> thats an impossible scenario.
> 
> a processor is like a heating element, the majority of it's power consumption is converted into heat at a ratio of almost 1:1.
> which means to say, you can't dissipate more heat at the same power consumption or vice versa, otherwise it's breaking the law of conservation of energy.


The neural net inside Zen went out of control with optimizations and broke the laws of physics. Intel in response is attempting to create a perpetual motion device.


----------



## black96ws6

Quote:


> Originally Posted by *Gumbi*
> 
> Isn't Cinebench specifically gimped for AMD? I don't know by how much though. As always, it's a waiting game for real benches to come out.


No, that's been debunked. Maybe ages ago, when Intel was doing their shady thing before they got caught. Do a search for it.

Even AMD uses it in their presentations these days:


----------



## looniam

Quote:


> Originally Posted by *epic1337*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> so . . . ok, i can't help myself
> 
> 
> 
> 
> 
> 
> 
> 
> 
> wouldn't a lower TDP but ~same power consumption mean either the chip is more efficient or is able to handle higher core temps?
> 
> 
> 
> 
> 
> 
> thats an impossible scenario.
> 
> a processor is like a heating element, the majority of it's power consumption is converted into heat at a ratio of almost 1:1.
> which means to say, you can't dissipate more heat at the same power consumption or vice versa, otherwise it's breaking the law of conservation of energy.
Click to expand...

i guess you didn't see my loaded question









but i didn't say anything about lower consumption than TDP. i guess i ought to have said:

wouldn't a lower TDP but ~same power consumption *of another* mean either the chip is more efficient or is able to handle higher core temps?


----------



## epic1337

Quote:


> Originally Posted by *looniam*
> 
> i guess you didn't see my loaded question
> 
> 
> 
> 
> 
> 
> 
> 
> 
> but i didn't say anything about lower consumption than TDP. i guess i ought to have said:
> 
> wouldn't a lower TDP but ~same power consumption *of another* mean either the chip is more efficient or is able to handle higher core temps?


i don't understand your point. TDP is a power rating; since it's using the same amount of power it would still emit the same amount of heat.
which means to say, it doesn't matter which chip it is, so long as both consume the same amount of power they'll be emitting the same amount of heat.

or if you're asking about TDP ratings themselves, they have little to do with the chip's capability and more to do with the platform requirements attached to it.
e.g. a CPU isn't given a 95W TDP limit because it can only handle 95W, but because motherboards and coolers are normally designed to handle only a 95W CPU.


----------



## black96ws6

So, unpacking those leaks that we should take with a grain of salt:



From the link:

i7-6900K - 24.545
i7-6800K - 19.576
Ryzen 8C/16T - 17.693
i7-6700K - 16.239

i7-7700K would be in a dead heat with Ryzen here. The i7-6900K 8C/16T is 38.6% faster.

At 17693 kN/s this would be slightly better than a 4790K?

For CB comparison:


But of course, we have no idea what the core counts and speeds actually are of this "leak". Erg.


----------



## Blameless

Quote:


> Originally Posted by *looniam*
> 
> wouldn't a lower TDP but ~same power consumption *of another* mean either the chip is more efficient or is able to handle higher core temps?


I'm not sure by what measure of efficiency one part could be considered to be better than the other when both are doing the same work, in roughly the same time, at roughly the same power consumption.

If you are implying that a lower TDP rating could mean that a worse cooler is recommended for the same thermal load, that's a possibility. However, the part could also have a lower temp limit as ICs are more efficient at lower temperatures and Intel, at least, always rates TDP at the maximum TCASE the part is rated for.

TDP really doesn't say a whole lot, even when comparing parts within the same brand. It says next to nothing when the very definitions of TDP are different between AMD and Intel and when the criteria used to assign such figures are largely unknown.


----------



## black96ws6

Here's a 4790k benchmark from a Tom's review for comparison:


Ryzen scored 17693 at stock clocks, whatever those may be, because the leaker didn't post that info!!


----------



## black96ws6

And here is Haswell-E Fritz for comparison. A 5820k (6c/12t) scores a bit better:


----------



## L36

I really hope these leaks are false rumors or just premature hardware not performing at its best. We desperately need some competition in the CPU market.


----------



## looniam

Quote:


> Originally Posted by *epic1337*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> i guess you didn't see my loaded question
> 
> 
> 
> 
> 
> 
> 
> 
> 
> but i didn't say anything about lower consumption than TDP. i guess i ought to have said:
> 
> wouldn't a lower TDP but ~same power consumption *of another* mean either the chip is more efficient or is able to handle higher core temps?
> 
> 
> 
> 
> 
> 
> 
> i don't understand your point, TDP is a unit of energy, since its using the same amount of power then it would still emit the same amount of energy.
> which means to say, it doesn't matter which chip it is, so long as both consume the same amount of power they'll be emitting the same amount of heat.
> 
> or if you're asking about TDP ratings itself, the chip's capability has little to do with it, and more on the attached requirements for it.
> e.g. a 95W TDP limited CPU is not because the CPU can only handle 95W, but the motherboard and coolers are normally designed to only handle a 95W CPU.
Click to expand...

you're telling me stuff i know. i was an engineer in the navy (boiler tech) and have _at least a rudimentary understanding_ of the energy/heat relationship.

so: power consumed (energy) = work (energy used) + heat (energy "wasted").
Quote:


> Originally Posted by *Blameless*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> wouldn't a lower TDP but ~same power consumption *of another* mean either the chip is more efficient or is able to handle higher core temps?
> 
> 
> 
> 
> 
> 
> 
> I'm not sure by what measure of efficiency one part could be considered to be better than the other when both are doing the same work, in roughly the same time, at roughly the same power consumption.
> 
> If you are implying that a lower TDP rating could mean that a worse cooler is recommended for the same thermal load, that's a possibility. *However, the part could also have a lower temp limit as ICs are more efficient at lower temperatures* and Intel, at least, always rates TDP at the maximum TCASE the part is rated for.
> 
> TDP really doesn't say a whole lot, even when comparing parts within the same brand. It says next to nothing when the very definitions of TDP are different between AMD and Intel and when the criteria used to assign such figures are largely unknown.
Click to expand...

ah, you're right there, i forgot. i was inclined to think that a part with a higher operating temperature limit would have a lower TDP since it wouldn't need to dissipate as much heat to operate safely.


----------



## black96ws6

And here is a leaked 7700k result:

35.52, 17049 vs Ryzen's 36.86, 17693.


----------



## tpi2007

For completeness' sake, here is a picture from the Fritz test from the 2011 video I posted above. I don't know if there were changes made to the benchmark that might have affected the scoring since:



Anyway, we don't know if the leak is legitimate or not, and even if we assume it is, we don't know if it's using a 3.4 GHz 8C/16T Ryzen CPU or a lower clocked ES, or even if we are looking at firsthand SR3 results. If this were a 4C/8T SR3 result, it would be serious competition to Kaby Lake.


----------



## black96ws6

Here is an FX-9590 to give another point of reference:


----------



## black96ws6

I just realized something. That Fritz leak is definitely of the 8c/16t Ryzen, because it says "Logical Processors found: 16", and below it, "processors to use (editable dropdown) - 16".

So if the leak is real, we're looking at the flagship results here.


Compare it with the image of the 9590 in the post above.


----------



## 1216

What a waste of a leak. Clearly visible CPU-Z yet not included in the photos. Could at least tell us the frequency...


----------



## Blameless

Quote:


> Originally Posted by *L36*
> 
> I really hope these leaks are false rumors or just premature hardware not performing at its best. We desperately need some competition in the CPU market.


Should they turn out to be correct, it's still hardly a doom and gloom scenario, nor will it mean we won't get competitive parts.

People have to be nuts to expect AMD to not shy away from tests their parts don't do well on, in favor of those they do. No one shows their worst case scenarios in press/marketing events...they gloss over them and put their best foot forward.
Quote:


> Originally Posted by *1216*
> 
> What a waste of a leak. Clearly visible CPU-Z yet not included in the photos. Could at least tell us the frequency...


Would certainly be helpful.


----------



## tpi2007

Quote:


> Originally Posted by *black96ws6*
> 
> I just realized something. That Fritz leak is definitely of the 8c/16t Ryzen. Because it says "Logical Processors found: 16", and below it, "processors to use (editable dropdown) - 16".
> 
> So if the leak is real, we're looking at the flagship results here.
> 
> 
> Compare it with the image of the 9590 in the post above.


If the leak is legitimate, yes, we can rule out SR3 and SR5 CPUs then. We still can't rule out lower clocked Engineering Samples.

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *L36*
> 
> I really hope these leaks are false rumors or just premature hardware not performing at its best. We desperately need some competition in the CPU market.
> 
> 
> 
> Should they turn out to be correct, it's still hardly a doom and gloom scenario, nor will it mean we won't get competitive parts.
> 
> People have to be nuts to expect AMD to not shy away from tests their parts don't do well on, in favor of those they do. No one shows their worst case scenarios in press/marketing events...they gloss over them and put their best foot forward.
Click to expand...

It would still mean that AMD would have produced another very inconsistently performing CPU.


----------



## black96ws6

So, for those that don't know, what Fritz is basically doing is comparing the tested CPU to a P3 @ 1.0 GHz, which scores 480 knodes/sec.

In the example of the 9590 above, that means the Vishera 9590 is almost 33x faster (32.66) and processes 15678 knodes/sec (due to multi-threading and a much higher clock speed).
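That baseline makes the "x faster" figure trivial to reproduce. A quick sketch (the helper name is mine, not Fritz's):

```python
# Fritz reports knodes/sec plus a multiplier relative to the
# P3 @ 1.0 GHz reference machine, which scores 480 knodes/sec.

P3_BASELINE = 480  # knodes/sec

def fritz_multiplier(knodes_per_sec: float) -> float:
    """'x faster than a P3 @ 1.0 GHz' figure for a given score."""
    return knodes_per_sec / P3_BASELINE

print(round(fritz_multiplier(15678), 2))  # FX-9590 -> 32.66
print(round(fritz_multiplier(17693), 2))  # leaked Ryzen -> 36.86
```

The second number matches the 36.86 shown alongside the leaked Ryzen score earlier in the thread.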


----------



## budgetgamer120

Ok guys


----------



## CrazyElf

Quote:


> Originally Posted by *Fyrwulf*
> 
> Juanrga is a paid shill. He has never, to the best of my knowledge, posted anything that is even neutral towards AMD. Anything that comes from him I automatically discard as garbage.
> AMD using BGA packaging for anything but embedded would be nearly unprecedented. It might actually be unprecedented. Traditionally Opteron processors have been LGA.


+ Rep on Juanrga, did not know that.

The 40% though is for real on AVX2 on Haswell. The 60% is where my skepticism comes in. I still would like someone to do The Stilt's Handbrake build on Skylake so that we have assurance though.

Quote:


> Originally Posted by *Fyrwulf*
> 
> I think IBM was the only one with the tech to produce chips larger than 600mm^2. The old IBM chip experimental foundry is now owned by GloFo, though, and that technology wasn't ready for prime time. So if Purley is actually that big, it's an MCM.
> Okay, first of all, MCM means multi-chip module. Which means two or more separate dies on a unified package. Given that Summit Ridge isn't an MCM, that's a negative, Ghostrider. Second, AMD is using a core complex of 4 cores as the basic building block for its processors. Intel uses multiples of two because its basic building block is two cores. Therefor, any potential competitor to to Intel's HEDT 10 core chips will probably be 12 cores. Considering that the max TDP for AM4 is 140 watts, it's possible AMD could choose to do that, although I don't think it'll happen before the end of next year if it happens at all.


Actually, there have been chips larger than 600mm^2.

Intel's Tukwila is an example at 698mm^2.

More recently:

The Haswell Xeon E5-2699 v3 came in at 662mm^2.
http://www.anandtech.com/show/8730/intel-haswellep-xeon-14-core-review-e52695-v3-and-e52697-v3

The Skylake Xeon E5-2699 v5 will no doubt be >600mm^2 too.

There have been GPUs that large too. The Titan X was >600mm^2 for Maxwell, and the HBM2 variant of the Pascal Titan is supposed to be 610mm^2. Fury X was just under 600mm^2 as well.

Quote:


> Originally Posted by *Wishmaker*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> We already have the smaller budget already better posts
> 
> 
> 
> 
> 
> 
> 
> .
> 
> 
> 
> You give me an example where a telecoms company has a monopoly and then claim that I am apologetic to INTEL. Fair enough, however, your analogy is so wrong that we do not even have a measurement unit for it. INTEL is a monopoly and has been for the past 15 years, *however*, you can still have a computer if you do not buy INTEL.
> 
> I do not know where you live, nor I care, but in my area you can buy plenty of AMD chips and the required parts to have a computer. Clicky. There you go 8 cores 4 GHz should be enough to run some apps.
> 
> Many claim that Intel is robbing them and forcing them to pay. Why don't people note in their 'I am a victim' posts that they are forced to buy INTEL because of the performance aspect. You can do plenty with a 8350 Vishera Chip.
> 
> So your argument that Intel is a monopoly and we *cannot buy something else* is wrong. if you rephrase and say INTEL is a monopoly and we cannot buy, for the time being, something as equally good, then I will cut you some slack.


In that case Intel is using their market position to extract economic rent.

If one company offers 2.5G EDGE coverage (AMD with its power-inefficient Bulldozer) and the other offers decent 3G+ coverage but milks the hell out of it (Intel), then I guess we have a closer analogy.

My point though is that Intel is milking us and that with a more competitive product, it might force them to compete. At this point their biggest competition is literally older Intel CPUs. Whether Intel is the only CPU maker or the only one with CPUs worth buying (which hopefully Zen will change), *they are still milking us because of their market position.*

Right now Intel has some of the largest gross margins in the industry. That's coming out of your wallet, directly or indirectly (ex: from the services you use that use Intel server CPUs).

Quote:


> Originally Posted by *Blameless*
> 
> We have two very similar shirts in front of us, from different brands...
> 
> I'm saying they are both about 40 inches around the chest.
> 
> You say that doesn't matter because the one on the left is a medium, while the one on the right is an extra-large.
> 
> I say what one brand calls medium and one calls extra large doesn't matter if they are actually the same size.
> 
> You say that doesn't matter because the one on the left is a medium while the one on the right is an extra large...then accuse me of trying to spin things.


It's a good analogy. +Rep

The problem right now is that we simply have too many unknowns and none of us have Zen chips so we are all speculating, at this point. Real world power consumption is what matters.

This is a 7 year old article, but it might be relevant: http://www.anandtech.com/show/2807/2


Spoiler: Anandtech on how AMD versus Intel measure TDP



Quote:


> TDP or Thermal Design Power is defined by AMD as (from AMD Family 10h Server and Workstation Processor Power and Thermal Data Sheet, Rev 3.04 - June 2009):
> 
> "The maximum power a processor draws for a thermally significant period while running commercially useful software. The constraining conditions for TDP are specified in the notes in the thermal and power tables."
> 
> That seems very similar to Intel's TDP, but in practice AMD's TDP is (almost) equal to the maximum electrical power a CPU can draw (current times voltage). Therefore, AMD's own definition is not accurate for the published TDP results. The irony is that the published Intel TDP numbers are more accurately described by AMD's definition. Intel's engineers measure the power draw of hundreds of commercially available software packages and ignore the "not thermally significant" peaks. All those power measurements are averaged and a small percentage (a buffer) is added. Thus, Intel's TDP is lower than the maximum power draw.
> 
> In a nutshell:
> 
> AMD's ACP uses a "round down" average of power measurements performed with industry standard benchmarks (usually running at 100% CPU load, with the exception of Stream).
> AMD's TDP is close to the electrical maximum a CPU can draw (when it is operating at its maximum voltage).
> Intel's TDP is a "round up" average of power measurements of processor intensive benchmarks.
> 
> If AMD would apply the methodology of Intel to determine TDP they would end up somewhere between ACP and the current "AMD TDP". "There is no substitute for your own power measurements" is a correct but an incredibly lame and unrealistic conclusion.






Unfortunately, we don't know what the real world numbers are. As the Anandtech article notes, no substitute for our own measurements.

I am not sure if AMD has changed how they measure TDP either - I would like some clarification on that front from AMD. Either way it doesn't really say much because we are literally comparing apples to oranges here. If they are understating real world power consumption, it's going to be apparent soon enough when the real world results come out.

Quote:


> Originally Posted by *Mahigan*
> 
> Yes.. I am not stupid. I see where it says "Torture". It doesn't change a thing though (that's the point of a TDP). Because Blender uses... AVX too. Handbrake uses, AVX. These are the tests conducted by AMD. So the Tests conducted by AMD had both the Intel and AMD CPUs (RyZen and the 6900K) using AVX (because pretty much all transcode applications use AVX).
> 
> So when you look at any application which maximizes the CPU usage, you see the +140W power usage on the 6900K (at stock). I see similar results with my 3930K.
> 
> Have a look: http://www.guru3d.com/articles-pages/core-i7-6950x-6900k-6850k-and-6800k-processor-review,7.html
> 
> Of course this doesn't mean that you will be hitting this power usage 24/7. What it does mean is that under full load, the power usage of the 6900K is matching its TDP. We know that RyZen consumes less power under similar circumstances.
> 
> How will RyZen deal with Gaming power usage (average power usage)? That remains to be seen but we cannot discount the full load power usage figures.


I don't think that we are going to know until we get third party reviews and more importantly, a few chips in the hands of OCNers.

Add it to the list of unknowns:

- Actual power consumption while gaming and in day-to-day use, relative to Intel
- Zen's branch prediction (there was some analysis in the Real World Technologies forum, and Blender was not considered a branch-intensive benchmark); I'm giving AMD the benefit of the doubt here and hoping they have good branch prediction, but there's no way to know at this point
- How good their AVX implementation is (remember, we've tested independently here: on The Stilt's third-party build, Haswell-based CPUs get a >40% performance increase, while Sandy Bridge gets just a bit less)
- Single-threaded performance. We assume that their SMT implementation is strong because of the Blender benches (Blender is a near perfect "embarrassingly parallel" app). Either they've got parity with Intel, good single-threaded plus mediocre SMT (I hope this one is true), or really good SMT with single-threaded performance that falls a bit short

If the chip is good, then a lot of OCNers will buy them.

There is a camp here that seems to deeply love Intel and Nvidia, but there's also a larger camp here that just wants the best CPU and GPU for the money.

Quote:


> Originally Posted by *Blameless*
> 
> Should they turn out to be correct, it's still hardly a doom and gloom scenario, nor will it mean we won't get competitive parts.
> 
> People have to be nuts to expect AMD to not shy away from tests their parts don't do well on, in favor of those they do. No one shows their worst case scenarios in press/marketing events...they gloss over them and put their best foot forward.
> Would certainly be helpful.


I feel like we've reached the point where there isn't much more insight to be had. The Skylake Stilt benches would be helpful, but that is it. It would be useful too because we would be able to compare between what a hypothetical 8 core Skylake might look like against a Zen in terms of IPC. Heck, if Zen is good, they won't be able to sell the 8 core version for 5960X prices.


----------



## dmasteR

Quote:


> Originally Posted by *CrazyElf*
> 
> + Rep on Juanrga, did not know that.
> 
> The 40% though is for real on AVX2 on Haswell. The 60% is where my skepticism comes in. I still would like someone to do The Stilt's Handbrake build on Skylake so that we have assurance though.


If someone can link me to it, I'll run it on my 6700K.


----------



## CrazyElf

Quote:


> Originally Posted by *dmasteR*
> 
> If someone can link me to it. I'll run it on my 6700K.


Thanks a lot; appreciate it. +Rep

Quote:


> Originally Posted by *The Stilt*
> 
> Someone with some spare time could test how these builds compare between each other on AMD 15h CPUs.
> Both are significantly faster than the current official build, however I would like to know what is their different on AMD CPUs. On Haswell the other is around 5% faster, on AMD the difference might be larger.
> 
> This is the original build I've posted before: https://1drv.ms/u/s!Ag6oE4SOsCmDhFAm03vWlB3s_qeD (password "ryzen", without the quotes.)
> 
> The test build: https://1drv.ms/u/s!Ag6oE4SOsCmDhFJcooVchjDI9SFD (password "SIMD", without the quotes)


I think it's the bottom build that we need to see tested (the one with the SIMD password). Then we just need to compare with someone else who has used the public build of Handbrake.

Then after that, can you please compare against the AMD Public build?
http://www.pcworld.com/article/3151464/hardware/you-can-find-out-how-your-cpu-compares-to-amds-ryzen-for-free.html

The question is if it is really 60% faster or just comparable to Haswell (which was a bit over 40% faster).

Edit: Skylake on Blender public build, a 6600K


from: https://www.reddit.com/r/Amd/comments/5idz87/ryzen_blender_benchmark_comparison_spreadsheet/

Edit 3:
Public build of Blender at 200:
Quote:


> Originally Posted by *Arctucas*
> 
> 6700K using 2.78


Again at 150.
Quote:


> Originally Posted by *Arctucas*
> 
> Using the links you provided:
> 
> 2.78


So yeah, it looks like something is up with Skylake; it's definitely faster than the IPC alone would suggest.


----------



## Tobiman

How sure are we that it's not some random low clocked Intel Xeon chip?


----------



## epic1337

Quote:


> Originally Posted by *black96ws6*
> 
> Here is an FX-9590 to give another point of reference:


Quote:


> Originally Posted by *black96ws6*
> 
> I just realized something. That Fritz leak is definitely of the 8c/16t Ryzen. Because it says "Logical Processors found: 16", and below it, "processors to use (editable dropdown) - 16".
> 
> So if the leak is real, we're looking at the flagship results here.
> 
> 
> Compare it with the image of the 9590 in the post above.


looks reasonable actually. don't forget that Zen is still an 8-core chip with SMT, meaning the extra 8 threads don't scale 100%.

FX-9590 (4M/8C) @ 5GHz = 15678
SR7 (8C/16T) @ <4GHz = 17693

can anyone submit an i7-6900K Fritz bench for a 1:1 comparison?


----------



## Tobiman

From anantech forums:

Core i7-6900K (Stock)
47.80


----------



## epic1337

Quote:


> Originally Posted by *Tobiman*
> 
> From anantech forums:
> 
> Core i7-6900K (Stock)
> 47.80


thanks, i also found this on Anandtech forums.



fritz bench:
FX-9590 (4M/8C) @ 5GHz = 14952
Zen SR7 (8C/16T) @ <4GHz = 17693
i7-6900K (8C/16T) @ 3.5GHz = 24524

24524 / 17693 = 1.386x
i7-6900K @ 24524 = 100%
Zen SR7 @ 17693 = 72%
either SR7 is clocked around 2.5GHz, or SR7 is just bad at Fritz.
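That ~2.5GHz figure comes from a naive linear-scaling assumption: same per-clock Fritz throughput on both 8C/16T chips and perfect scaling with frequency. Both are oversimplifications (hence the "or SR7 is just bad at Fritz" caveat), but the arithmetic sketch is:

```python
# If two 8C/16T chips had identical per-clock Fritz throughput,
# the leaked score would imply this clock (naive linear model).

def implied_clock(ref_clock_ghz: float, ref_score: float, score: float) -> float:
    """Clock implied by a score, scaling linearly from a reference chip."""
    return ref_clock_ghz * (score / ref_score)

# i7-6900K reference: 3.5 GHz base, 24524 knodes/sec; leaked Zen: 17693
est = implied_clock(3.5, 24524, 17693)
print(f"~{est:.1f} GHz")  # ~2.5 GHz
```

An engineering sample in that clock range would be consistent with the leak without saying anything about per-clock performance.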


----------



## dmasteR

4.6Ghz 6700K running STILT version ( 33:44)



4.6GHz 6700K running Stock Version. (49:98)


----------



## CrazyElf

Quote:


> Originally Posted by *dmasteR*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 4.6Ghz 6700K running STILT version ( 33:44)
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 4.6GHz 6700K running Stock Version. (49:98)


Awesome; Thanks for doing both builds! Really appreciate it. +Rep.


Skylake 49.98 / 33.44 = 1.4946; so without the custom build, the render took 49.46% longer

Then from earlier: http://www.overclock.net/t/1617227/amd-new-horizon-zen-preview-on-12-13-at-3-pm-cst/1400_100#post_25720080

Taking the median of Cyrious, we get 1.3610, so without AVX, the render took 36.10% longer
Taking the score that Blameless gave us for 2.78, with AVX2, the render took 41.89% longer

It looks like the claim that Skylake is 60% faster is not true, but it is close to 50% faster with a custom Blender build on Skylake. That's still huge. Something is definitely up.

Actually that may not bode well for AMD Zen either. Recall it has 2x 128 and 4x 64 FMACs.

Skylake Premium time (49.46% longer time) - Sandy Bridge E Premium Time (36.10% longer) = 13.36%

I'd be prepared to bet that with AVX and Stilt's custom build (the latest version in a few months from now, when both Skylake E and Zen are out), Skylake E will beat AMD's Zen by a significant margin. That's unless Zen has something we don't know about to help performance under AVX. Remember, the design of Zen is closer to AVX1 (it can do AVX2, but once per 2 cycles), so that's how fast we would expect it to be, unless there is missing information (which there very well could be, as we don't have the full picture yet).

Another mystery is why Skylake sees such a huge premium with The Stilt's custom build. No new instruction sets were introduced. There must be SIMD optimizations? Haswell of course brought AVX2, but Skylake (consumer Skylake anyways) brought just Intel MPX (Memory Protection Extensions) and Intel SGX (Software Guard Extensions).
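The "premium" figures above are just percent-longer render times; as a sanity check, using the two 6700K runs quoted earlier (treating the times as seconds):

```python
# Percent-longer render time of the stock Blender build vs The Stilt's
# AVX-optimized build, using dmasteR's 4.6GHz 6700K runs quoted above.

stock_time = 49.98  # stock Blender 2.78 build
stilt_time = 33.44  # The Stilt's custom build

premium = stock_time / stilt_time - 1
print(f"stock build took {premium:.2%} longer")  # ~49.46% longer
```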


----------



## ciarlatano

Quote:


> Originally Posted by *epic1337*
> 
> either SR7 is clocked around 2.5Ghz, or SR7 is just bad at Fritz.


Or there simply isn't a real Fritz benchmark for the SR7...









I'm rolling with that for the moment.


----------



## iLeakStuff




----------



## Blameless

Quote:


> Originally Posted by *tpi2007*
> 
> It would still mean that AMD would have produced another very inconsistent performing CPU.


Well, unless they are wholesale copying Intel (and they aren't), inconsistent performance should be expected.

Different architectures have different strengths and weaknesses. It's natural to use Intel as the baseline, because that's what's relevant now, but if Zen had come first and Intel had just released Ivy Bridge (for example), we'd be seeing just as much inconsistency.

All of this is why the ultra-optimistic predictions of Zen being all-round better than BW-E were always exceedingly far fetched. Differences are a given, and so is AMD showing what they are strongest at.
Quote:


> Originally Posted by *CrazyElf*
> 
> Another mystery is why Skylake sees such a huge premium with The Stilt's custom build. No new instruction sets were introduced. There must be SIMD optimizations? Haswell of course brought AVX2, but Skylake (consumer Skylake anyways) brought nothing.


Skylake is wider. A full extra execution port and all-round deeper queues and buffers, relative to Haswell/Broadwell.

It's not a trivial architecture change, at least not here.


----------



## Tojara

I'm going to throw a few older Intel hexas/octas into the Fritz comparison. The IPC figures are per core, against a 3.4GHz 8C/16T Zen CPU that scored 17 700.
i7-5960X - 8C/16T - 3.3GHz - 23 084 knodes/s - +30% performance - +34% IPC over Zen
i7-3960X - 6C/12T - 3.6GHz - 19 339 knodes/s - +9% performance - +38% IPC
i7-990X - 6C/12T - 3.6GHz - 18 887 knodes/s - +7% performance - +34% IPC

It's pretty apparent that Fritz doesn't scale too well with cores, but IPC being well behind Westmere even with that taken into account seems odd, to say the least. It's actually only 10% faster per clock than Penryn...

If the test really was run at those clock speeds, the performance is way beyond horrible. Meanwhile, the IPC difference in Cinebench would be about 16% in favour of Haswell-E, which is absolutely believable.
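The per-clock normalization behind the 5960X's "+34% IPC" figure above can be sketched like this; it assumes linear clock scaling and identical core/thread counts (8C/16T each), so the per-core terms cancel:

```python
# Per-GHz normalization behind the "+34% IPC" figure, assuming linear
# clock scaling and equal core/thread counts (both chips are 8C/16T).

zen_score, zen_ghz = 17700, 3.4     # alleged Zen ES Fritz score
hsw_score, hsw_ghz = 23084, 3.3     # i7-5960X Fritz score

zen_per_ghz = zen_score / zen_ghz
hsw_per_ghz = hsw_score / hsw_ghz

print(f"Haswell-E per-clock lead: {hsw_per_ghz / zen_per_ghz - 1:.0%}")  # ~34%
```

The same computation with the 6C/12T chips needs an extra per-core division, since Fritz's imperfect thread scaling is exactly what muddies the comparison.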


----------



## Tobiman

Looking more like a low-clocked Intel Xeon.


----------



## tpi2007

Quote:


> Originally Posted by *Blameless*
> 
> Well, unless they are wholesale copying Intel (and they aren't), inconsistent performance should be expected.
> 
> Different architectures have different strengths and weaknesses. It's natural to use Intel as the baseline, because that's what's relevant now, but if Zen had come first and Intel had just released Ivy Bridge (for example), we'd be seeing just as much inconsistency.
> 
> All of this is why the ultra-optimistic predictions of Zen being all-round better than BW-E were always exceedingly far fetched. Differences are a given, and so is AMD showing what they are strongest at.


Of course, the inconsistency I was referring to would always have to be based on what people can buy and use now. If AMD compares itself against the competition in Blender and Handbrake hoping to make a case for certain users, other usage cases will also be relevant to certain users, some of them overlapping and they will look at the strengths and weaknesses of the product and determine how balanced their workflow performance will be with product A and B.


----------



## Kuivamaa

Quote:


> Originally Posted by *Tobiman*
> 
> Looking more like a low-clocked Intel Xeon.


Or an actual low clocked ES sample of Ryzen. How much would a 2.8GHz i7-6900k score in those two benches? Anyone care to try it out?


----------



## Tojara

Quote:


> Originally Posted by *Kuivamaa*
> 
> Or an actual low clocked ES sample of Ryzen. How much would a 2.8GHz i7-6900k score in those two benches? Anyone care to try it out?


Scaled linearly, about 1472*2.8/3.5 = 1 178 in Cinebench R15 and 24524*2.8/3.5 = 19 620 knodes/s in Fritz. About a percent below and 11% above the alleged Zen results.


----------



## FreeElectron

Is there any comparative CPU benchmarking done with open-source software (plus the compiler, if needed) that is ensured not to favor one vendor over the other?

Otherwise how can those "benchmarking" results be trusted after claims like this?

If there is any benchmarking that meets the above criteria, please let me know.


----------



## IRobot23

Shouldn't Zen 1C/2T be faster than Excavator 1M/2C?
I mean, Zen's FPU should clearly be faster than Excavator's module FPU. But how much faster?


----------



## Aussiejuggalo

Anybody seen this? Looks like the AM3+ cooler mounts won't be compatible with AM4.


----------



## epic1337

Quote:


> Originally Posted by *IRobot23*
> 
> Shouldn't Zen 1C/2T be faster than Excavator 1M/2C?
> I mean, Zen's FPU should clearly be faster than Excavator's module FPU. But how much faster?


if we're looking at single-thread or FPU performance, then yes.
but if we're looking at INT multi-thread performance, it's in favor of excavator's full cores.

the difference between 1M/2C and 1C/2T would depend on how well apps scale with virtual cores or hyperthreading.
this is one of those reasons why an FX-8 could sometimes outperform an i7.


----------



## Kuivamaa

Quote:


> Originally Posted by *Tojara*
> 
> Scaled linearly, about 1472*2,8/3,5 = 1 178 in Cinebench R15 and 24524*2,8/3,5 = 19 620 knodes/s in Fritz. About a percent below and 11% above above alleged Zen results.


I figured as much, but I would like to see the actual CPU at that frequency. Earlier in this thread I doubted this result came from a Ryzen because the CPU ID did not match any older AMD processors. Now I believe it is one of the ES units we have seen floating around. As a matter of fact, the last characters here...



...appear to be 31_N (the underscore is presumed), which would mean it is a 3.1GHz sample. Then again, it might be a 34 or a 24. Or simply forged. If 31 is the case, this puts its CB throughput roughly at Ivy Bridge levels. That would be perfectly acceptable (especially since CB is heavily Intel sponsored and biased; I would not expect AMD to get nearly as good SMT scaling as Intel here) if it weren't for that Fritz result. A Ryzen at 3.1GHz scoring 17693 might be passable depending on what a 6900K scores at what turbo frequency. A [email protected] dishing out that score would be rather bad though. Then again, I haven't got much insight into Fritz bench nuances.


----------



## Cyrious

Is this the point where I post my Cinebench R15 score and do a run of Fritz for comparison?


----------



## Kuivamaa

Quote:


> Originally Posted by *Cyrious*
> 
> Is this the point where I post my Cinebench R15 score and do a run of Fritz for comparison?


Please mention the actual turbo frequencies while running these. I will add my own Xeon scores at some point tomorrow.


----------



## Aussiejuggalo

There's a massive chance that isn't even Zen in the new "leaked" benchmarks; it's probably an old Xeon, an old AMD chip, or a downclocked X99 chip. Take those "leaks" with a generous helping of salt.


----------



## Marios145

Let's speculate on fake/unconfirmed leaks








If it's running the test at 3.4GHz, then at 4GHz it gets ~1400, which is really what I was expecting from a 40% IPC increase over Excavator.
If that score is at <3.1GHz, then 4GHz will get 1530+, which... wow.
Since Cinebench is usually a worst-case scenario for AMD, this looks fine to me.

Also, faster ram improves scores in cinebench.


----------



## Cyrious

Quote:


> Originally Posted by *Kuivamaa*
> 
> Please mention the actual turbo frequencies while running these.I will add my own Xeon scores at some point tomorrow.


My chip is more or less locked to 3.3GHz. It won't go faster, and it won't slow down. I think it has something to do with how screwy my CPU/board combo is. RAM is set to 1600, 9-9-9-28.

Anyways, Cinebench R15 right here



I'll post Fritz here shortly.

EDIT: Fritz


----------



## CortexA99

Hello, I'm new here. I've come to clarify the Cinebench R15 and Fritz Chess screenshots of Zen. This forum's registration gave me a headache; the verification code wouldn't display, so I couldn't register for quite a while.









Both the CBR15 and Fritz Chess screenshots are from the Chinese forum Baidu; the original thread has been deleted for unknown reasons, most likely by the poster. I asked friends who read that thread before it got deleted, and they said the poster promised to 'post more benches' after a little while, but then the thread was deleted and the poster disappeared. The poster runs a shop on the Chinese online marketplace Taobao selling rigs, so it is suspicious that a small, ordinary trader could have final silicon or even an ES.

As you can see, he has no CPU-Z screenshot, and the CBR15 and Fritz Chess scores look quite suspicious; they don't even seem to be from the same CPU or the same rig. After I talked to my friends and went over that forum, we decided to regard these screenshots as very likely fake. And the poster has a plausible motive to fake them: helping to sell the hardware in his shop.

You can ask questions here, but I'm not sure I can give you reasonable answers. I'm here to try my best to break the language barrier between the world and China. Thanks a lot.









Please don't trust anything from Baidu; there are so many troll threads and so much wrong information on that forum.


----------



## mouacyk

Ohhhhhhhhhhhhhhhhh!

So the original guy was trying to sell an i7-6900K. It all makes sense now.


----------



## cssorkinman

Anyone have a link to that fritz download?


----------



## Hueristic

Quote:


> Originally Posted by *CortexA99*
> 
> Hello, I'm new here. I come here to just clarify the screenshot of cinebenchR15 and Fritzchess of Zen. This forum registration makes me headache that verification code doesn't display that I can't resigter for quite a while.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Both CBR15 and fritzchess screenshot are from Chinese forum Baidu, the original thread has been deleted for unknown reason. Most likely deleted by poster. I asked my friends which read that thread before got deleted on that forum, they said poster would 'post more bench' after a little while, but then thread deleted and poster disapeear. This poster hold a shop on chinese online shop Taobao and sell rigs, this is suspicious that a little ordinary trader could have final silicon or even ES.
> 
> As you can see he has no CPU-Z screenshot, CBR15 and fritzchess score looks quite suspicious and looks seems to not from a same CPU or even same rig. Afer I talked to my friends and went over on that forum, we decided to regard these screenshot as a fake. very likely to be a fake. And that poster has reasonable intent to make a fake to help selling his toy at his shop.
> 
> You can ask some question here but I'm not sure I can give reasonable answer to you. I'm here to try my best to break language barrier between world and Chinese. Thanks a lot.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Please don't trust any thing from Baidu, there's so many troll thread and wrong information on that forum.


Thanks for signing up and providing this info.







+rep


----------



## Cyrious

Quote:


> Originally Posted by *cssorkinman*
> 
> Anyone have a link to that fritz download?


http://www.jens-hartmann.at/Fritzmarks/

The link is right there.


----------



## Fyrwulf

Quote:


> Originally Posted by *CortexA99*
> 
> Hello, I'm new here. I come here to just clarify the screenshot of cinebenchR15 and Fritzchess of Zen. This forum registration makes me headache that verification code doesn't display that I can't resigter for quite a while.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Both CBR15 and fritzchess screenshot are from Chinese forum Baidu, the original thread has been deleted for unknown reason. Most likely deleted by poster. I asked my friends which read that thread before got deleted on that forum, they said poster would 'post more bench' after a little while, but then thread deleted and poster disapeear. This poster hold a shop on chinese online shop Taobao and sell rigs, this is suspicious that a little ordinary trader could have final silicon or even ES.
> 
> As you can see he has no CPU-Z screenshot, CBR15 and fritzchess score looks quite suspicious and looks seems to not from a same CPU or even same rig. Afer I talked to my friends and went over on that forum, we decided to regard these screenshot as a fake. very likely to be a fake. And that poster has reasonable intent to make a fake to help selling his toy at his shop.
> 
> You can ask some question here but I'm not sure I can give reasonable answer to you. I'm here to try my best to break language barrier between world and Chinese. Thanks a lot.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Please don't trust any thing from Baidu, there's so many troll thread and wrong information on that forum.


Don't sweat the language barrier, man. English is an incredibly hard language to learn since many of the words and a great deal of the grammar structure come from a dead language (Old Norse). I don't think anybody is confused by what you're saying, so your mission is accomplished.


----------



## CortexA99

Quote:


> Originally Posted by *mouacyk*
> 
> Ohhhhhhhhhhhhhhhhh!
> 
> So the original guy was trying to sell an I7-6900K. It all makes sense now.


Haha. I don't know whether that guy has a 6900K to sell or not. But I doubt it; guys who run online shops like that are very unlikely to have anything new from AMD. Unlike the Intel side: before Kaby Lake was announced, there were already plenty of traders on Taobao with i7 7700Ks and i5 7600Ks for sale.

If I remember correctly, very few (almost no) new AMD CPUs (retail or engineering sample) have leaked from Chinese forums first since Bulldozer, which leaked from China nearly half a year before it was released. I guess AMD thinks it's a disaster to have engineering samples widely tested in China, because those guys like breaking NDAs and leaking. LOL.

By the way, I'd like to share the Gigabyte X370 board that leaked from Baidu.


----------



## CortexA99

Quote:


> Originally Posted by *Fyrwulf*
> 
> Don't sweat the language barrier, man. English is an incredibly hard language to learn since many of the words and a great deal of the grammar structure come from a dead language (Old Norse). I don't think anybody is confused by what you're saying, so your mission is accomplished.


Thanks! But so many Chinese don't understand English. so it's my mission to share correct information and eliminate misunderstanding here.


----------



## mouacyk

Quote:


> Originally Posted by *CortexA99*
> 
> Thanks! But so many Chinese don't understand English. so it's my mission to share correct information and eliminate misunderstanding here.


Thank you. Does your sharing come with salt?


----------



## Hueristic

Quote:


> Originally Posted by *mouacyk*
> 
> Thank you. Does your sharing come with salt?


LOL, testing the language barrier already ic.


----------



## Darkpriest667

Quote:


> Originally Posted by *CortexA99*
> 
> Hello, I'm new here. I come here to just clarify the screenshot of cinebenchR15 and Fritzchess of Zen. This forum registration makes me headache that verification code doesn't display that I can't resigter for quite a while.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Both CBR15 and fritzchess screenshot are from Chinese forum Baidu, the original thread has been deleted for unknown reason. Most likely deleted by poster. I asked my friends which read that thread before got deleted on that forum, they said poster would 'post more bench' after a little while, but then thread deleted and poster disapeear. This poster hold a shop on chinese online shop Taobao and sell rigs, this is suspicious that a little ordinary trader could have final silicon or even ES.
> 
> As you can see he has no CPU-Z screenshot, CBR15 and fritzchess score looks quite suspicious and looks seems to not from a same CPU or even same rig. Afer I talked to my friends and went over on that forum, we decided to regard these screenshot as a fake. very likely to be a fake. And that poster has reasonable intent to make a fake to help selling his toy at his shop.
> 
> You can ask some question here but I'm not sure I can give reasonable answer to you. I'm here to try my best to break language barrier between world and Chinese. Thanks a lot.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Please don't trust any thing from Baidu, there's so many troll thread and wrong information on that forum.


Thank you for sharing information with us. Your English is better than our Chinese! Thank you for coming here! +rep


----------



## EightDee8D

Quote:


> Originally Posted by *CortexA99*
> 
> Haha. I don't know whether that guy has 6900k selling or not. But what I doubt is such these guys that hold shop online is very unlikely to have any new thing from AMD. unlike intel side, before kabylake announced there's already so many traders on Taobao having i7 7700k and i5 7600k for sell.
> 
> If I remember correctly. there was very very few(almost none) new AMD CPU (retail or eng-sample) that leaked from chinese forum first, since bulldozer leak from chinese nearly half a year ahead bulldozer released. I guess AMD think it's a disaster to hold eng-sample wildly test in china, because those guys like break NDA and leaking. LOL.
> 
> by the way I would share Gigabyte X370 to you which is leaked from baidu.
> 
> 


HDMI, hmm, is there an onboard GPU on the X370 chipset? Or in the CPU?


----------



## Quantum Reality

It sounds like the WCCF leak is very speculative at best and may be outright misleading at worst.

I realize AMD has been burned badly by the Bulldozer release, but I really do wish there were some early-release engineering samples they could have independently tested to give us a feel for how the CPUs will compare in real-world situations.

If Zen can trade blows with a 6600K, I think they are in business to score a radical upset in the computing industry and present a viable alternative upgrade path for people still using i5 and i7 4xxx parts.


----------



## ozlay

Quote:


> Originally Posted by *EightDee8D*
> 
> Hdmi, hmmm there's onboard gpu on x370 chipset ? or cpu ?


AM4 will also support APUs, not just Zen. Those are basically just ports that won't be used when running a Zen CPU, like putting an Athlon in an FM2+ board. I kinda want a Zen Opteron.


----------



## EightDee8D

Quote:


> Originally Posted by *ozlay*
> 
> AM4 will also support APU's not just ZEN. Basically just ports that won't be used when using zen. Like putting an Athlon on an FM2+. I kinda want an zen opteron.


Ohhh, yeah i forgot about APUs.


----------



## BeepBeep2

Didn't pay much attention to the Fritz Chess score, focused more on the Cinebench R15 score.



I gave this a look over and it seems to be a 3.1/3.5 8 core ES, Dresdenboy came to the same conclusion - probably 2D_31_01A2M88E4__35/31__N

What I posted at Anandtech and Hardforum:
"If it is even close to the CPU we think it is based on the ES OPN, it is an 8-core CPU

The result is still almost Haswell-E level IPC, so Dresdenboy (and I) am not sure if it is fake or not. The part OPN suggests not, but anyone knowing how AMD names their ES parts could theoretically fake that.

Assuming Summit Ridge @ 3.1 GHz clock and not in a boost state (this chip shows 3.5 GHz boost...) and 1188 points, you get around 1350 points @ 3.5 GHz
Haswell-E 8c/16t 5960X @ 3.0 Base and 3.5 GHz Turbo (not sure of boost clock when under Cinebench load, maybe 3.3 GHz?) : ~1350 points

Could be plausible. Broadwell-E does exceptionally well in Cinebench R15, about 10% uplift clock-per-clock over Haswell-E."

If it is fake it is fake, if it is not, it is still somewhat impressive imho.


----------



## tpi2007

Quote:


> Originally Posted by *BeepBeep2*
> 
> 
> 
> 
> 
> 
> Didn't pay much attention to the Fritz Chess score, focused more on the Cinebench R15 score.
> 
> 
> 
> I gave this a look over and it seems to be a 3.1/3.5 8 core ES, Dresdenboy came to the same conclusion - probably 2D_31_01A2M88E4__35/31__N
> 
> What I posted at Anandtech and Hardforum:
> "If it is even close to the CPU we think it is based on the ES OPN, it is an 8-core CPU
> 
> The result is still almost Haswell-E level IPC, so Dresdenboy (and I) am not sure if it is fake or not. The part OPN suggests not, but anyone knowing how AMD names their ES parts could theoretically fake that.
> 
> Assuming Summit Ridge @ 3.1 GHz clock and not in a boost state (this chip shows 3.5 GHz boost...) and 1188 points, you get around 1350 points @ 3.5 GHz
> Haswell-E 8c/16t 5960X @ 3.0 Base and 3.5 GHz Turbo (not sure of boost clock when under Cinebench load, maybe 3.3 GHz?) : ~1350 points
> 
> Could be plausible. Broadwell-E does exceptionally well in Cinebench R15, about 10% uplift clock-per-clock over Haswell-E."
> 
> If it is fake it is fake, if it is not, it is still somewhat impressive imho.


Nice catch! Rep+. I wonder if they did this on purpose and made a little charade where you could more or less match the part number from the few pixels left visible, if you knew what you were looking for.

If the chip is indeed a lower clocked ES, then things are looking quite a bit better for Zen.


----------



## Olivon

Hope the cinebench score is not true.
A five-year-old 3930K can do better with 30 seconds in the BIOS (5100MHz):


----------



## Ashura

Quote:


> Originally Posted by *Olivon*
> 
> Hope the cinebench score is not true.
> A 5 years old 3930K can do better within 30 seconds in BIOS (5100MHz) :


Hmm, even Intel's current-gen sub-$500 CPUs can't match that score at stock (or at 3.1GHz).


----------



## tpi2007

Quote:


> Originally Posted by *Olivon*
> 
> Hope the cinebench score is not true.
> A 5 years old 3930K can do better within 30 seconds in BIOS (5100MHz) :


And how much power does your 32nm SB-E use at 5.1 Ghz and 1.504v? 250w?


----------



## CortexA99

Quote:


> Originally Posted by *BeepBeep2*
> 
> Didn't pay much attention to the Fritz Chess score, focused more on the Cinebench R15 score.
> 
> 
> 
> I gave this a look over and it seems to be a 3.1/3.5 8 core ES, Dresdenboy came to the same conclusion - probably 2D_31_01A2M88E4__35/31__N
> 
> What I posted at Anandtech and Hardforum:
> "If it is even close to the CPU we think it is based on the ES OPN, it is an 8-core CPU
> 
> The result is still almost Haswell-E level IPC, so Dresdenboy (and I) am not sure if it is fake or not. The part OPN suggests not, but anyone knowing how AMD names their ES parts could theoretically fake that.
> 
> Assuming Summit Ridge @ 3.1 GHz clock and not in a boost state (this chip shows 3.5 GHz boost...) and 1188 points, you get around 1350 points @ 3.5 GHz
> Haswell-E 8c/16t 5960X @ 3.0 Base and 3.5 GHz Turbo (not sure of boost clock when under Cinebench load, maybe 3.3 GHz?) : ~1350 points
> 
> Could be plausible. Broadwell-E does exceptionally well in Cinebench R15, about 10% uplift clock-per-clock over Haswell-E."
> 
> If it is fake it is fake, if it is not, it is still somewhat impressive imho.


Great find! I'll share your work to my friends.









I wouldn't be surprised if the Cinebench score is close to reality; Zen performing just a little better than this would fit my expectations for it in Cinebench. And it doesn't contradict what AMD demoed to us using Blender.

The Fritz Chess result looks quite unrealistic though; I don't think it was benched with the same CPU.


----------



## Olivon

Quote:


> Originally Posted by *tpi2007*
> 
> And how much power does your 32nm SB-E use at 5.1 Ghz and 1.504v? 250w?


Yep around it. It's a 32nm chip.


----------



## tpi2007

Quote:


> Originally Posted by *Olivon*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tpi2007*
> 
> And how much power does your 32nm SB-E use at 5.1 Ghz and 1.504v? 250w?
> 
> 
> 
> Yep around it. It's a 32nm chip.

If the above posted result for Zen is indeed from a lower clocked ES + we still don't know what the Turbo bins will be (and if they work while fully loaded) + overclockability, then I'd say that it may turn out reasonable and with much lower power consumption at stock. After all, you are comparing it against a highly overclocked SB-E within at most 200 Mhz of the absolute max 24/7 stable overclock for SB-E and also close to the safe max voltage limit and very high power usage.


----------



## naz2

a leak where they intentionally crop out the system specs. you people will bite anything.

Sad!


----------



## tpi2007

It's called speculation and nobody is biting anything. We're just talking, and that post by BeepBeep2 was interesting.

In any case it's not as if we are putting down money for pre-orders or something. We are just _discussion foruming_ (yeah, I just made that up).


----------



## Olivon

Quote:


> Originally Posted by *tpi2007*
> 
> If the above posted result for Zen is indeed from a lower clocked ES + we still don't know what the Turbo bins will be (and if they work while fully loaded) + overclockability, then I'd say that it may turn out reasonable and with much lower power consumption at stock. After all, you are comparing it against a highly overclocked SB-E within at most 200 Mhz of the absolute max 24/7 stable overclock for SB-E and also close to the safe max voltage limit and very high power usage.


Still, the scores are not impressive for an 8C/16T, and the Fritz benches are even worse.


----------



## BeepBeep2

Quote:


> Originally Posted by *naz2*
> 
> a leak where they intentionally crop out the system specs. you people will bite anything.
> 
> Sad!


I only "bit" for the sake of curiosity, and the fact that I noticed they didn't crop out all of the system specs.









A lot of people are eager to call it fake, a lot of people can't believe the score is that low, and others with Intel machines are going crazy saying "my extremely high overclock on Intel's X-gen CPU easily beats a low-clocked 8-core!"

I suggest everyone just wait.

Also, anyone remember these leaks?


"Zen SR7 Special - CB R15 >1300
Zen SR7 - CB R15 ~1300"

If the leak is running at 3.1 GHz with boost off (shown as 3.5 in the OPN, however) = 1188, then 3.4 GHz with no turbo would be ~1300.


----------



## Kuivamaa

Quote:


> Originally Posted by *Cyrious*
> 
> My chip is more or less locked to 3.3GHz. It won't go faster, and it won't slow down. I think it has something to do with how screwy my CPU/board combo is. RAM is set to 1600, 9-9-9-28.
> 
> Anyways, Cinebench R15 right here
> 
> 
> 
> I'll post Fritz here shortly.
> 
> EDIT: Fritz


And here is my Ivy Xeon. 8C/16T, runs CB15 at a steady 2993MHz.



Ryzen appears to be between Ivy and HW for CB15.

Edit: Without HW monitor running to check the frequency, I get 1066. Still, Ryzen appears to be more than 10% faster for a mere 3% clock advantage. Not bad at all.
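That comparison can be split into the raw score gap, the clock gap, and the per-clock gap (the 3.1GHz Ryzen clock is itself only alleged):

```python
# Split the CB15 comparison into raw score gap, clock gap, and per-clock
# gap, assuming the alleged Ryzen run was at 3.1GHz (unconfirmed).

ryzen_score, ryzen_ghz = 1188, 3.1     # alleged Zen ES
xeon_score,  xeon_ghz  = 1066, 2.993   # Ivy Bridge-EP 8C/16T Xeon

score_gap = ryzen_score / xeon_score - 1                              # ~11.4%
clock_gap = ryzen_ghz / xeon_ghz - 1                                  # ~3.6%
per_clock = (ryzen_score / ryzen_ghz) / (xeon_score / xeon_ghz) - 1   # ~7.6%

print(f"score +{score_gap:.1%}, clock +{clock_gap:.1%}, per-clock +{per_clock:.1%}")
```

So on these (unconfirmed) numbers, the alleged Ryzen would sit roughly 7-8% ahead of Ivy Bridge-EP per clock in CB15.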


----------



## renx

I wonder if we will hear more about Ryzen at CES.


----------



## Shatun-Bear

Seems really good to me.

My 5820K at 4.5Ghz does 1330 cb in Cinebench so an 8-core Zen doing nearly 1200 *at stock* is pretty sweet.


----------



## Pro3ootector

Mine does 1036, still not bad.


----------



## Xuper

Quote:


> Originally Posted by *Pro3ootector*
> 
> 
> 
> Mine does 1036, still not bad.


Running at 2.6GHz?


----------



## Pro3ootector

Quote:


> Originally Posted by *Xuper*
> 
> Running at 2.6GHz?


Turbos to ~ 3.0.

Edit: FX-8350 scores around 630cb, so it would seem legit, as AMD claimed double the performance.


----------



## Blameless

Quote:


> Originally Posted by *CortexA99*
> 
> by the way I would share Gigabyte X370 to you which is leaked from baidu.


6+1 phase power and what looks like ~20 CPU attached PCI-E 3.0 lanes.
Quote:


> Originally Posted by *Kuivamaa*
> 
> Ryzen appears to be between Ivy and HW for CB15.


Adjusted for the likely clock speeds that were apparently used, these results are quite plausible, though obviously still unconfirmed.


----------



## Mahigan

Quote:


> Originally Posted by *Blameless*
> 
> You are missing the test methodology. Tom's measured the current going through the EPS +12v lines, this is not what's being delivered to the CPU. You have to go through the board's VRM first, which is going to be ~90% efficient, at best. This isn't horribly far off TDP (127 vs. 140), but that's also not how Intel defines TDP either (pure Fast-Fourier transforms probably doesn't qualify as "commercially useful software").
> 
> There is also the missing Zen comparison. Zen likely won't draw the same sort of current in recent versions of Prime95 (as it can't do as much AVX 256 work), but without actually testing Zen in such a scenario, we can't say for certain. Neither Intel nor AMD claim their parts will never exceed TDP.
> Not the default Windows version.
> 
> Compile it to use AVX and it will, but it's also ~40% faster than the precompiled binaries available from the Blender site.
> 
> http://www.overclock.net/t/1618534/blender-ryzen-scene-benchmark#post_25714916
> Barely. Disabling AVX only makes it 2-3% slower.
> 
> http://www.overclock.net/t/1617227/amd-new-horizon-zen-preview-on-12-13-at-3-pm-cst/1300#post_25716380
> The tests conducted by AMD (which, again, are not AVX intensive) do not show an appreciable power consumption differential.
> A loose correlation.
> 
> I have 130w TDP parts that hit 130w peak load. I also have 130w TDP parts that won't crack 80w peak load.
> 
> The very Tom's review you are pulling images from also features a 140w TDP (the 6800K) part that is barely cracking 105w in Prime95 and another that probably isn't reaching 95w (the 6850K) in addition to some that are coming closer to the TDP rating they've been given.
> No we do not.
> You have it entirely backwards. All we have are real-world load figures, which are the same as equivalent loads from the closest-configuration competitor part. We do not know peak load figures of Zen.


Logic.

If a 6900K consumes 100W under Blender, which according to you uses very little AVX code, while RyZen uses 93W under the same test, and if AVX code makes Intel CPUs consume a ton of power (as you and others are saying), then adding more AVX code means the 6900K will only consume more power.

We know that RyZen has a power-saving implementation of AVX. That is to say, each core has two FPUs, and those FPUs can merge in order to perform some AVX operations, or operate independently, in parallel, when not processing AVX code.

In other words... RyZen can never consume more power than a 6900K under high workloads. Ever. It is only logical based on the information you, and others, have stated.

The only point of contention is how much power RyZen uses while gaming (i.e., when the CPU is not under relatively heavy load). Here too I cannot see how RyZen could consume more power, seeing as the Blender results show the two are on par when the load isn't too heavy (according to you and others, who claim that AVX use in Blender is minimal even when Blender is recompiled to take advantage of AVX).

Seems to me that, logically, what you and others are claiming is not possible with the currently known information. RyZen (at 3.4GHz and not boosting) cannot consume more power than a 6900K (3.2GHz base, with a 3.7GHz all-core Turbo and a 4GHz single-core Turbo) according to the information we currently have at our disposal.

Also worth mentioning that the TDP Intel assigns tends to be the same figure for a series of CPUs with various core counts, cache sizes, etc. That is true, but the higher up the stack you move, the closer the TDP comes to reflecting the actual full-load (or at least heavy-load) power consumption figures. It has always been this way.

So even if a 6800K never reaches close to its TDP figure in terms of its max power usage... this says nothing about the higher end 6900K and higher CPUs. Those CPUs stress the manufacturing process and power envelope requirements. They tend to always be near their TDP in terms of power usage under heavy loads. It was the same with my 3930K and it is the same with the newer Intel CPUs.

So this whole argument y'all are having appears to be centered more around CPU fandom than around logic and objectivity.


----------



## Bruizer

Quote:


> Originally Posted by *Aussiejuggalo*
> 
> Anybody seen this? looks like the AM3+ cooler mounts wont be compatible with AM4.


Yes. If Be quiet! doesn't release an adapter for the Dark Rock Pro 3 I bought on sale a month ago in preparation, I'll be screwed. One can hope. Then again, maybe there was a reason they were having that ridiculous sale on it for the past month.

Edit: And after doing some reading it looks like Be quiet! may very well release an adapter! Yay!


----------



## Blameless

Quote:


> Originally Posted by *Mahigan*
> 
> In other words... RyZen can never consume more power than a 6900K under high workloads. Ever. It is only logical based on the information you, and others, have stated.


There is a big difference between suspecting that Zen will consume less power at peak loads than BW-E, which _is_ a logical extrapolation, and knowing it will, which has not been demonstrated and is still somewhat in doubt. Thus far, we only have power consumption figures up to moderate loads, which are roughly the same.

While I will be very surprised if a 6900K doesn't consume significantly more power than the fastest Ryzen part in LINPACK, this is less certain than power consumption at lower loads, which have actually been demonstrated.
Quote:


> Originally Posted by *Mahigan*
> 
> The only point of contention is how much power does RyZen use while gaming


I'm not sure how you consider this to be more in contention than peak-load power. Gaming is going to be equal or lower CPU load than an unoptimized Blender and we already know that low to moderate load power consumption between the parts demonstrated is a wash.
Quote:


> Originally Posted by *Mahigan*
> 
> according to you and others who claim that AVX use in Blender is minimal even when Blender is recompiled to take advantage of AVX


Blender uses a lot of AVX when compiled to take advantage of it...just not as much as more dedicated stress tests or mathematical apps. Maybe you meant Handbrake?
Quote:


> Originally Posted by *Mahigan*
> 
> Seems to me that, logically, what you and others are claiming is not possible with the current known information. RyZen (at 3.4GHz and not boosting) cannot consume more power than a 6900K (3.2GHz with a 3.7GHz all core Turbo/Single core Turbo of 4GHz) according to the information we currently have at our disposal.


_The sole test we have shows a lower delta between load and idle on the 6900K than on the Ryzen part._ I've generally glossed over this as exact details of the test configurations have eluded us and because the difference is small compared to the per-sample spread of power consumption with the same model, but it does hint at a slight advantage for the BW-E part, especially now that we are starting to see hints of how sparse X370 boards are likely to be relative to X99.
Quote:


> Originally Posted by *Mahigan*
> 
> Also worth mentioning that the TDP Intel assigns tends to be the same figure for a series of CPUs of various core amounts, cache sizes etc. That is true, but the higher end you move, the closer the TDP ends up designating the Full (if not) heavy core load power consumption figures. It has always been this way.
> 
> So even if a 6800K never reaches close to its TDP figure in terms of its max power usage... this says nothing about the higher end 6900K and higher CPUs. Those CPUs stress the manufacturing process and power envelope requirements. They tend to always be near their TDP in terms of power usage under heavy loads. It was the same with my 3930K and it is the same with the newer Intel CPUs.


You are ignoring the per-sample variance that exists and which dwarfs the trend you note here. If I buy ten 6800Ks I will see a spread of power consumptions within those ten that is larger than the average difference between a 6800K and a 6900K.

TDP figures in general are being given far too much weight.
Quote:


> Originally Posted by *Mahigan*
> 
> So this whole argument ya'll are having appears to be more or less centered around CPU fandom rather than logic and objectivity.


I have no biases in this and you should look at your own interpretation of the data at hand before accusing others.


----------



## mouacyk

@Mahigan - While your past efforts to document Async Compute emulation on Nvidia Maxwell/Pascal are greatly appreciated, I don't understand the need for you to be throwing CPU-fandom accusations in here. I'm equally amazed at the testing, verification, and re-verification of data by those who are interested in finding the truth behind a bunch of marketing talking points.

"At its best, skepticism is a manifestation of curiosity, intelligence, and imagination -- in a word, the best of the human spirit." In all honesty, the skeptics care more about the product than those with shallow, blind belief.


----------



## CrazyElf

Quote:


> Originally Posted by *Mahigan*
> 
> Logic.
> 
> If a 6900K consumes 100W under Blender which, according to you, uses very little AVX code whereas RyZen uses 93W under the same test... and if AVX code makes Intel CPUs consume a ton of power (as you and others are saying) then adding more AVX code means that the 6900K will only consume more power.
> 
> We know that RyZen has a power savings implementation of AVX. That is to say that each core has two FPUs and those FPUs can merge in order to perform some AVX operations or operate separately, in tandem, when not processing AVX code.
> 
> In other words... RyZen can never consume more power than a 6900K under high workloads. Ever. It is only logical based on the information you, and others, have stated.


We saw huge gains with AVX fully enabled. The stock build of Blender uses very little AVX.

In regards to the AVX, see the following:


Spoiler: Results with AVX on a custom compiled build



Quote:


> Originally Posted by *dmasteR*
> 
> 
> 
> 4.6Ghz 6700K running STILT version ( 33:44)
> 
> 
> 
> 4.6GHz 6700K running Stock Version. (49:98)






33:44 with AVX2 on Skylake versus 49:98 without. A difference of almost 50%.
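Treating those times as 33.44 s and 49.98 s (the thread's notation is ambiguous, so that reading is an assumption), the figure is easy to check:

```python
# Blender render times for the same scene on a 4.6 GHz 6700K,
# taken from dmasteR's post above (interpreted as seconds).
time_avx = 33.44    # custom AVX2-enabled build (the Stilt's)
time_stock = 49.98  # stock build, which uses little AVX

# Speedup of the AVX build relative to the stock build.
speedup_pct = (time_stock / time_avx - 1) * 100
print(f"AVX2 build is {speedup_pct:.1f}% faster")  # ~49.5%
```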

There is a way to demonstrate this for yourself as well. Try running this build on your 3930K. Judging by the other Sandy Bridge-E results, you'll get about 35-38% more performance with fully enabled AVX (SB is capable of AVX128, but not AVX256, and newer architectures are wider).

The test build from the Stilt: https://1drv.ms/u/s!Ag6oE4SOsCmDhFJcooVchjDI9SFD (password "SIMD", without the quotes)

The above is a fully enabled AVX build of Blender. Run the same test on your 3930K at 4.5 GHz like before and compare to your results with the build that is publicly available that hardly uses AVX at all.


Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Mahigan*
> 
> Seems that my old score was wrong.
> 
> I set it to 150 samples and...
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 00:36.33
> 
> Not bad... but at 4.5GHz. Which means that Zen @ 3.4GHz is a match for my 3930K @ 4.5Ghz. That's very nice performance indeed.






Same run only with AVX fully on, 150 samples.

On Intel CPUs, enabling AVX needs an extra 0.1V, so the current draw must be huge. Newer Broadwell-E CPUs (and Kaby Lake will offer this as well) actually have a feature called an "AVX negative offset" that lowers your OC multiplier under AVX loads. I wonder if AMD has a similar feature. AVX may use a ton of power, but the performance gain in Blender (as you saw with the ~50% uplift) is not inconsiderable. x264 (Handbrake) makes very limited use of AVX instructions; x265 builds see a ~20% or so gain with AVX2 on Haswell.
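The AVX offset mechanic can be sketched like this (the multiplier and offset values are illustrative, not from any specific board):

```python
BCLK = 100  # MHz, typical base clock on these platforms

def effective_clock(multiplier, avx_offset, avx_active):
    """Core clock in MHz; AVX workloads run at a reduced multiplier."""
    m = multiplier - avx_offset if avx_active else multiplier
    return m * BCLK

# e.g. a 45x overclock with a -3 AVX negative offset:
print(effective_clock(45, 3, avx_active=False))  # 4500 MHz for normal code
print(effective_clock(45, 3, avx_active=True))   # 4200 MHz under AVX load
```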

Quote:


> Originally Posted by *Mahigan*
> 
> The only point of contention is how much power does RyZen use while gaming (meaning when the CPU is not under relative heavy load). This is where I also cannot see how RyZen could consume more power seeing as the Blender results show that when the load isn't too heavy (according to you and others who claim that AVX use in Blender is minimal even when Blender is recompiled to take advantage of AVX).
> 
> Seems to me that, logically, what you and others are claiming is not possible with the current known information. RyZen (at 3.4GHz and not boosting) cannot consume more power than a 6900K (3.2GHz with a 3.7GHz all core Turbo/Single core Turbo of 4GHz) according to the information we currently have at our disposal.
> 
> Also worth mentioning that the TDP Intel assigns tends to be the same figure for a series of CPUs of various core amounts, cache sizes etc. That is true, but the higher end you move, the closer the TDP ends up designating the Full (if not) heavy core load power consumption figures. It has always been this way.
> 
> So even if a 6800K never reaches close to its TDP figure in terms of its max power usage... this says nothing about the higher end 6900K and higher CPUs. Those CPUs stress the manufacturing process and power envelope requirements. They tend to always be near their TDP in terms of power usage under heavy loads. It was the same with my 3930K and it is the same with the newer Intel CPUs.
> 
> So this whole argument ya'll are having appears to be more or less centered around CPU fandom rather than logic and objectivity.


Here was the official demonstration results:

*Idle*
Zen: 93W
Intel: 106W

*Load*
Zen: 187W
Intel: 191W

Assuming this is accurate, that means Zen drew 94W under load and the 6900K drew 85W. That's slightly worse, although Zen in this case was also slightly faster in IPC. Of course, this is one chip sample and one benchmark, so it isn't the be-all and end-all. We are all really just making educated guesses here.
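For what it's worth, the subtraction above is just the delta from idle at the wall, which is a rough proxy at best (PSU and VRM efficiency vary with load, a point debated later in the thread):

```python
# Figures from AMD's demo (total system power at the wall, in watts).
zen = {"idle": 93, "load": 187}
intel = {"idle": 106, "load": 191}

def load_delta(system):
    """Increase over idle; often used as a crude stand-in for CPU draw."""
    return system["load"] - system["idle"]

print(load_delta(zen))    # 94 W
print(load_delta(intel))  # 85 W
```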

We don't have any AVX- or branch-intensive benches from AMD. That's the problem. I think Zen will come out on top on peak draw compared to Broadwell-E (i.e., under AVX2 and the heaviest loads, Broadwell-E will draw more). The unknown is how many total joules Zen will use to render under AVX load, and what the performance gains will be. AVX may take a huge amount of current, but it also makes things much faster (see the Skylake benchmark above with AVX on the Stilt's build versus the public build). The other matter is that a few months after Zen, Intel will release Skylake-E, which will recapture the crown (although I'm sure not at a price-competitive point).

Keep in mind that this is a huge achievement for AMD, because Intel is the only one with a "true" 14nm process; AMD is using, via GlobalFoundries, a 14/20nm hybrid process. If they were on the same node, I'd be willing to bet there's a good chance AMD would win out on load power draw. Still, major props to AMD for designing a chip that is a huge improvement over Bulldozer, especially on a shoestring R&D budget compared to Intel's. It also looks like AMD has a more efficient motherboard than whatever X99 board they were using. That may actually be valuable if it translates into a lower-power mobile PCH (for battery life).

We're not going to know the real numbers until we get these chips in our own hands, I'm afraid. I don't think most games could possibly draw more than Blender, but AVX is slowly making its way into games as well.

Quote:


> Originally Posted by *Blameless*
> 
> Well, unless they are wholesale copying Intel (and they aren't), inconsistent performance should be expected.
> 
> Different architectures have different strengths and weaknesses. It's natural to use Intel as the baseline, because that's what's relevant now, but if Zen had come first and Intel had just released Ivy Bridge (for example), we'd be seeing just as much inconsistency.


At least now AMD has a competitive solution. Far more so than Bulldozer.

I mean back in the K8 days, AMD used to be the reference because they were on top (especially considering the inefficient Prescott build that Intel had that was hardly any better than Northwood). Conroe changed that.

But yes, it will have a different set of strengths and weaknesses.
Quote:


> Originally Posted by *Blameless*
> 
> Skylake is wider. A full extra execution port and all-round deeper queues and buffers, relative to Haswell/Broadwell.
> 
> It's not a trivial architecture change, at least not here.


Very true.

https://en.wikichip.org/wiki/Skylake#Key_changes_from_Broadwell


Spoiler: Wikichip Skylake Differences



Bus/Interface to Chipset

DMI 3.0 (from 2.0)
Skylake S and Skylake H cores, connected by 4-lane DMI 3.0
Skylake Y and Skylake U cores have chipset in the same package (simplified OPIO)
Increases speed from 5.0 GT/s to 8.0 GT/s (~3.93GB/s up from 2GB/s)
Limits motherboard trace length to 7 inches max (down from 8) from the CPU to the chipset

Front End

Larger legacy pipeline delivery (5 µOPs, up from 4)
Larger IDQ delivery (6 µOPs, up from 4)
2.28x larger allocation queue (64/thread, up from 28/thread)
Improved branch prediction unit

Execution Engine

Larger re-order buffer (224 entries, up from 192)
Larger scheduler (97 entries, up from 64)
Larger Integer Register File (180 entries, up from 160)
Larger store buffer (56 entries, up from 42)

Memory

L2$ was changed from 8-way to 4-way set associative

TLBs

ITLB
4 KiB page translations was changed from 4-way to 8-way associative
STLB
4 KiB + 2 MiB page translations was changed from 6-way to 12-way associative



Perhaps AMD will widen Zen+ too in the coming years.

The great thing is that at least they will have a product that is worth building on. I'm hoping we'll see more gains relative to Intel, and AMD moving a few Opterons. They aren't going to break the Xeon monopoly, but they can certainly carve themselves a niche in the server and HPC world that will leave them in good shape. Combined with Vega, they have some solid products, far more competitive than before, which could make this one of the best comebacks in the history of the semiconductor industry.


----------



## Blameless

Just to clarify a few points:

1. I will be _astounded_ if an average 6900K does not consume more power at absolute maximum load than an average example of the fastest Ryzen AM4 part (both at stock, of course).

2. Even though I am 95% certain of the above, I am less certain of the above than I am that power consumption at lesser loads will be very comparable, _because we actually have this being demonstrated in AMD's own test_.

3. Gaming CPU loads, in remotely demanding titles (the kind where you'd rather have a Ryzen or BW-E than an i5), are _way_ closer to a reference Blender build render load than said Blender build is to an absolute maximum load.

Also, a bit of hypothesizing that may seem contradictory at first, but isn't:

- If you push both parts hard enough (talking about load here, not overclocking), you will find BW-E both more power hungry and harder to cool.

- If you push both parts hard enough, you will find BW-E to be generally more efficient.

Meaning that where BW-E is reaching appreciably higher peak power, it's doing the work demanded of it with less total energy consumed.


----------



## Olivon

Idle power consumption is really high on the Intel platform.
Contrary to what AMD said ("no tweaks in BIOS"), they must have disabled C-states in the BIOS.
106W idle for the Intel platform is really high and not representative of reality.

http://www.tomshardware.com/reviews/intel-core-i7-broadwell-e-6950x-6900k-6850k-6800k,4587-9.html


----------



## Blameless

Quote:


> Originally Posted by *Olivon*
> 
> Idle power consumption are really high on the Intel Platform.
> Contrary to what AMD said ("no tweak in BIOS"), they must had disable C-States in BIOS.
> 106W idle for the Intel plaform is really high and not representative of the reality.
> 
> http://www.tomshardware.com/reviews/intel-core-i7-broadwell-e-6950x-6900k-6850k-6800k,4587-9.html


The AMD demo seems to have been total system power, at the wall. ~100w idle is pretty normal for most of my HEDT systems at stock, measured at the wall.

Tom's Hardware isn't measuring total AC power consumption in that review; reference page 2 where they talk about the test system and methodology. The figures they show are the +12v EPS connector's current, multiplied by the voltage of the +12v rail, which should be quite accurate for CPU + VRM power consumption, and is about the most direct way you can measure power without something sitting between the CPU and the socket.
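That methodology can be sketched as follows; the current and efficiency values are illustrative, chosen to mirror the 127W-vs-140W figures quoted earlier in the thread:

```python
def eps_power(current_a, rail_v=12.0):
    """Power through the EPS connector: CPU draw plus VRM losses."""
    return current_a * rail_v

def cpu_power(eps_w, vrm_efficiency=0.90):
    """Estimate the power actually delivered to the CPU,
    assuming a VRM that is ~90% efficient at best."""
    return eps_w * vrm_efficiency

eps = eps_power(11.75)            # ~141 W measured at the connector
print(f"{cpu_power(eps):.0f} W")  # ~127 W into the CPU itself
```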


----------



## Olivon

It seems quite high to me :



http://www.pcper.com/reviews/Processors/Intel-Core-i7-6950X-10-core-Broadwell-E-Review/Power-Consumption-Perf-Dollar-Clos


----------



## Blameless

Is that at the wall? I assume it is, but I can't find specific mention.

Also, do we have full specs on the AMD test systems?

Delta to load looks about the same in both the PCPer and AMD tests.


----------



## epic1337

there's also PSU efficiency to account for when measuring at the wall; does anyone know whether AMD used a Bronze-rated PSU or a Titanium-rated one?
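The point matters because the same DC load reads differently at the wall depending on the PSU. The efficiency numbers below are approximate 80 Plus figures at 50% load, used only for illustration:

```python
# Approximate 80 Plus efficiencies at 50% load (115 V input).
EFFICIENCY = {"bronze": 0.85, "gold": 0.90, "titanium": 0.94}

def wall_power(dc_load_w, rating):
    """AC draw at the wall for a given DC load inside the system."""
    return dc_load_w / EFFICIENCY[rating]

for rating in ("bronze", "titanium"):
    print(f"{rating}: {wall_power(170, rating):.0f} W at the wall")
# The same 170 W system reads ~200 W on a Bronze unit but ~181 W on Titanium.
```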


----------



## Quantum Reality

My main concern about TDP would be the amount of heat that needs to be removed: a higher nominal TDP means more heat must be taken away to keep the CPU within operating parameters.

Also, as regards the AM4 socket mounts, I think this is a step backwards for AMD. Their prime selling point, ever since the 939 -> AM2 incompatibility kerfuffle, has been maintaining a chain of backwards and forwards compatibility across their socket lines, from AM2 -> AM3+, both in number of pins and in mechanical heatsink attachment measurements. To a lesser extent, memory compatibility has been hugely helpful, since the DDR2/DDR3 overlap now parallels the DDR3/DDR4 overlap.


----------



## Olivon

Quote:


> Originally Posted by *Blameless*
> 
> Also, do we have full specs on the AMD test systems?


Yes, we have :



edit: Arf, we don't have the motherboards and PSUs

edit2: Friday we'll have solid information from Doc TB (CPU-Z). He'll give some insider views in his CPC Hardware piece.
For those who remember, he was on point during the Bulldozer launch and, more recently, on Skylake.


----------



## epic1337

amusingly, they used a Titan X instead of their lovely Fury X for BF1,
and in comparison, they're using an RX 480 for Blender and Handbrake.


----------



## Marios145

Well, maybe they chose the high-performance power plan in Windows; that would certainly increase power consumption at idle.


----------



## Colin1204

Quote:


> Originally Posted by *Quantum Reality*
> 
> My main concern about TDP would be the amount of heat needed to take away. As such a higher nominal TDP means more heat must be taken away to keep the CPU within operating parameters.
> 
> Also, as regards the AM4 socket mounts, I think this is a step backwards for AMD. Their prime selling point, since the 939 -> AM2 incompatibility kerfluffle, has been trying to keep a chain of backwards and forwards compatibility all across their socket lines, from AM2 -> AM3+ , both in number of pins and in mechanical heatsink attachment measurements. *To a lesser extent, memory compatibility has been hugely helpful, since the DDR2/DDR3 overlap now parallels the DDR3/DDR4 overlap.*


I loved AMD through the AM2-to-AM3+ days. Being able to drop in any of a high number of possible CPUs from multiple generations, without even having to upgrade my 4GB of DDR2-1066 CL5 at the time, was awesome, as DDR3 prices were too far out of my reach and upgrading both CPU and RAM would have blown my budget. I was then able to buy 4GB of DDR3-1600 on a great deal, and later found a crazy deal on 8GB of DDR3-2133. That, plus being able to install six CPU upgrades on a single mobo, and I call that a huge success for me as a consumer.

But I know we need a socket change; I was hoping the BD arch would have forced one, but we got AM3+. AMD needs a 100% new socket, and then hopefully we can get the same goodness we had before (AM4 to AM4+, then AM5 and AM5+, etc., until a new arch requires a fully new socket). I've always disliked Intel's constant socket changes, even when they weren't required at all.


----------



## Marios145

I'm counting on ASRock to be awesome and release a DDR3/DDR4 mobo


----------



## dagget3450

Oh man, this thread is losing steam. Need more conspiracies and leaked fakes at least. Need more debates about fake benchmarks or evil, lying marketing. This was a juicy thread to read for a week, for sure!!!

Now it's fizzing out!!! Nooooooooo

I do wish they had shown some gaming FPS numbers at 1080p instead of 4K. I also think the choice of Nvidia was smart because of its GPU market share. I think they were trying to appeal to a larger market, or maybe they just wanted to use Nvidia's more CPU-efficient drivers to their advantage? Does it really have an advantage over AMD GPUs at 4K?

Hmmmm


----------



## Kuivamaa

Quote:


> Originally Posted by *epic1337*
> 
> amusingly they used Titan X instead of their lovely Fury X for BF1.
> and in comparison, they're using RX480 instead for blender and handbrake.


The idea is to use a neutral graphics card so there will be no "gee, AMD used a driver hack to gimp the Intel CPU" doubts.


----------



## Marios145

Beginning of AMD presentation: 1:02:00

Michael Clark: +40% IPC is single-thread, before SMT.
^
This statement is at 1:27:40
Get the calculators back out.

EDIT:
Is it just me or does Michael Clark look like Noah Bennet from the Heroes tv show?


----------



## budgetgamer120

Quote:


> Originally Posted by *epic1337*
> 
> amusingly they used Titan X instead of their lovely Fury X for BF1.
> and in comparison, they're using RX480 instead for blender and handbrake.


They used the fastest GPU. Why would they use a Fury X? People would complain about a GPU bottleneck.


----------



## epic1337

Quote:


> Originally Posted by *Kuivamaa*
> 
> The idea is to use neutral graphics card so there will be no "gee ,AMD used driver hack to gimp intel CPU" doubts.


then why use an RX 480 for their other tests?

Quote:


> Originally Posted by *budgetgamer120*
> 
> They used the fastest gpu. Why would they use a FuryX. People would complain about gpu bottleneck


reasonable, yes, but the Fury X isn't that slow either.
Plus, if they wanted to test a CPU-specific bottleneck, they could've simply used low resolutions.


----------



## Kuivamaa

Quote:


> Originally Posted by *epic1337*
> 
> then why use RX480 for their other tests?


They used Titan X where there was a gaming comparison (BF1) between intel and AMD, in other words where it mattered for performance. They used vega when they demonstrated Ryzen alone in Battlefront and RX 480 when they ran bog standard CPU only tests where GPU plays no role and is there only to give signal to monitors. It makes sense in every case.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Marios145*
> 
> Michael Clark: +40% IPC is single-thread, before SMT.
> 
> Get the calculators back out.
> 
> EDIT:
> Is it just me or does Michael Clark look like Noah Bennet from the Heroes tv show?


You mind throwing in a time stamp? That vid is an hour and a half..

Thanks.


----------



## LongtimeLurker

I read somewhere that 25% of the 40% IPC increase is due to better branch prediction?


----------



## GamerusMaximus

Quote:


> Originally Posted by *Quantum Reality*
> 
> My main concern about TDP would be the amount of heat needed to take away. As such a higher nominal TDP means more heat must be taken away to keep the CPU within operating parameters.
> 
> Also, as regards the AM4 socket mounts, I think this is a step backwards for AMD. Their prime selling point, since the 939 -> AM2 incompatibility kerfluffle, has been trying to keep a chain of backwards and forwards compatibility all across their socket lines, from AM2 -> AM3+ , both in number of pins and in mechanical heatsink attachment measurements. To a lesser extent, memory compatibility has been hugely helpful, since the DDR2/DDR3 overlap now parallels the DDR3/DDR4 overlap.


Why would a 95-watt 14nm chip be a concern when we can already properly cool a 140-watt 14nm chip? 95-watt chips have been in use for many, many years without issue.

As for the socket, AMD needs to break compatibility sooner or later. They are changing the design for AM4, for the first time in a decade, which could be a chance for them to improve it. Perhaps make the stock bracket good enough that water and air coolers don't need a 3rd-party bracket. As for RAM, DDR3 is dying; by next year, when Zen is available, DDR3 will be obsolete. DDR4 has also come down in cost to be competitive with DDR3. There's no real reason to support DDR3 over DDR4.


----------



## epic1337

Quote:


> Originally Posted by *Kuivamaa*
> 
> They used Titan X where there was a gaming comparison (BF1) between intel and AMD, in other words where it mattered for performance. They used vega when they demonstrated Ryzen alone in Battlefront and RX 480 when they ran bog standard CPU only tests where GPU plays no role and is there only to give signal to monitors. It makes sense in every case.


sounds like a hassle to me. Aside from the Vega sneak peek, they could've used the Titan X to drive the monitors too, or am I missing something?

on a side note, I tried looking up whether the Fury X scales better with CPU performance; it looks like it does, though there's no comparison with the Titan X Pascal.
http://www.hardocp.com/article/2016/04/19/dx11_vs_dx12_intel_cpu_scaling_gaming_framerate/4

would be better if i could find a Titan X Pascal review with regards to CPU scaling, though i can't seem to find one.


----------



## GorillaSceptre

Quote:


> Originally Posted by *epic1337*
> 
> sounds like a hassle to me, aside from the Vega sneak peak, they could've used Titan X to give signal to monitors too or am i missing something?


The fact that, besides a "neutral test", they might want to show that they also have their own GPUs?


----------



## epic1337

Quote:


> Originally Posted by *GorillaSceptre*
> 
> The fact that besides a "neutral test", they might want to show that they also have their own GPU's?


Vega sounds like a better candidate than Polaris for that; it has that mysteriousness that makes you want to go "IT'S VEGA, QUICK, ASK A QUESTION".


----------



## Removed1

Quote:


> Originally Posted by *epic1337*
> 
> sounds like a hassle to me, aside from the Vega sneak peak, they could've used Titan X to give signal to monitors too or am i missing something?
> 
> on a side note, i tried looking up whether Fury X scales better with CPU performance, looks like it does, though no comparison with Titan X Pascal.
> http://www.hardocp.com/article/2016/04/19/dx11_vs_dx12_intel_cpu_scaling_gaming_framerate/4
> 
> would be better if i could find a Titan X Pascal review with regards to CPU scaling, though i can't seem to find one.


Well, when AMD doesn't need raw horsepower during a test, why would they advertise another brand, especially Nvidia?!

What Kuivamaa said fits really well with the different test situations AMD showed us!
Quote:


> Originally Posted by *Kuivamaa*
> 
> They used Titan X where there was a gaming comparison (BF1) between intel and AMD, in other words where it mattered for performance. They used vega when they demonstrated Ryzen alone in Battlefront and RX 480 when they ran bog standard CPU only tests where GPU plays no role and is there only to give signal to monitors. It makes sense in every case.


After that, unfortunately the Fury X with its 4GB of RAM isn't really the best card to bench at 4K. The GPU's scaling with the CPU doesn't matter in this case; it's more about being free to load **** of textures at 4K.


----------



## Blameless

Quote:


> Originally Posted by *budgetgamer120*
> 
> They used the fastest gpu. Why would they use a FuryX. People would complain about gpu bottleneck


Yes.
Quote:


> Originally Posted by *epic1337*
> 
> reasonable yes, but Fury X isn't that slow either.
> plus if they wanted to test CPU specific bottleneck they could've simply used low resolutions.


More appropriate for a very visual demonstration to use the highest settings possible.

The layman isn't going to be impressed by a low-IQ scene at frame rates too high to distinguish, nor by a high-detail/res scene chugging along on a Fury X (which is nowhere near as fast as a Titan XP).
Quote:


> Originally Posted by *Kuivamaa*
> 
> They used Titan X where there was a gaming comparison (BF1) between intel and AMD, in other words where it mattered for performance. They used vega when they demonstrated Ryzen alone in Battlefront and RX 480 when they ran bog standard CPU only tests where GPU plays no role and is there only to give signal to monitors. It makes sense in every case.


Yes.


----------



## budgetgamer120

Quote:


> Originally Posted by *epic1337*
> 
> then why use RX480 for their other tests?
> reasonable yes, but Fury X isn't that slow either.
> plus if they wanted to test CPU specific bottleneck they could've simply used low resolutions.


Those meaningless 720p tests? No, I'd rather they test at resolutions enthusiasts actually use when evaluating an enthusiast-grade product.


----------






## GorillaSceptre

Quote:


> Originally Posted by *epic1337*
> 
> Vega sounds like a better candidate than Polaris for that, it gives it that mysteriousness that makes you wanna "ITS VEGA, QUICK AS A QUESTION".


The GPU was irrelevant in those tests; I guess they just wanted to plug one of their current GPUs that's actually on the market.


----------



## h2323

This single-thread IPC figure from Clark at HC was on Seeking Alpha and Twitter days after the presentation; I'm surprised this forum missed it. People must have been assuming the worst if they thought it wasn't ST IPC.

Some takeaways people need to accept; I have been following this closely for some time.

AMD is not lying.

It has always been greater than 40% IPC (per AMD conference calls and financial presentations), and the Ryzen presentation signals much better than 40% IPC.

AMD is going the extra mile on transparency compared to any other architecture released in recent memory.

Zen is very good.

AMD's financial issues were largely overplayed in regards to Zen; Radeon suffered for it, not Zen. A large team and the majority of quarterly R&D for nearly 4 years adds up to a lot of money and a well-funded chip.

The key to most AMD "info" is following the financials, financial presentations, and investors page.

Intel has acknowledged Zen as having a "short term" impact in '17... we'll see how short term it is.

Zen is being released across the entire product portfolio (desktop Q1, server Q2, continued server roll-out 2H, notebook APU 2H, followed by desktop APU, embedded, and semi-custom).

Zen+ is hot on Zen's heels.

AMD has less debt, more cash, and a better WSA than they did during the entire Bulldozer era.

I could go on.


----------



## Benny89

If Ryzen hits market in January, I wonder when we can expect Zen+?


----------



## Nenkitsune

Quote:


> Originally Posted by *CrazyElf*
> 
> Here was the official demonstration results:
> 
> *Idle*
> Zen: 93W
> Intel: 106W
> 
> *Load*
> Zen: 187W
> Intel: 191W
> 
> *Assuming this is accurate, that means that Zen drew 94W under load and the 6900K drew 85W.* That's slightly worse, although Zen in this case was also slightly faster in IPC. Of course, this is one chip sample and one benchmark, so that isn't the end-all be-all. We are all really trying to make educated guesses here.


THAT'S NOT HOW POWER CONSUMPTION WORKS. All you're calculating is the DELTA BETWEEN IDLE POWER CONSUMPTION AND LOAD POWER CONSUMPTION.

You're making the assumption that the CPU uses ZERO watts at idle, which couldn't be further from the truth. Assuming the setups are relatively similar (connected devices, etc.), what we're seeing is that Zen used 13W less at idle than Intel, and 4W less at load than Intel.
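The arithmetic being disputed here can be sketched in a few lines. The wall-power figures are the whole-system numbers quoted from the demo earlier in the thread (not independently verified), and the point is that the load-minus-idle delta only isolates the load-induced draw, not total CPU power:

```python
# Wall-power figures (whole-system, watts) as quoted from the demo slides
# earlier in the thread -- assumptions taken from the forum post.
zen_idle, zen_load = 93, 187
intel_idle, intel_load = 106, 191

# The load-minus-idle delta isolates the extra draw the workload causes,
# but it is NOT total CPU power: the CPU (and board) still draw a nonzero
# amount at idle, which the subtraction throws away.
zen_delta = zen_load - zen_idle        # 94 W
intel_delta = intel_load - intel_idle  # 85 W

# Comparing like-for-like instead: the Zen system drew less in both states.
idle_diff = intel_idle - zen_idle      # 13 W less at idle
load_diff = intel_load - zen_load      # 4 W less under load
print(zen_delta, intel_delta, idle_diff, load_diff)  # 94 85 13 4
```

So both readings are "right" about the deltas; they just answer different questions.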


----------



## diggiddi

Quote:


> Originally Posted by *h2323*
> 
> This single-thread IPC figure from Clark at HC was on Seeking Alpha and Twitter days after the presentation; I'm surprised this forum missed it. People must have been assuming the worst if they thought it wasn't ST IPC.
> 
> Some takeaways people need to accept; I have been following this closely for some time.
> 
> AMD is not lying.
> 
> It has always been greater than 40% IPC (per AMD conference calls and financial presentations), and the Ryzen presentation signals much better than 40% IPC.
> 
> AMD is going the extra mile on transparency compared to any other architecture released in recent memory.
> 
> Zen is very good.
> 
> AMD's financial issues were largely overplayed in regards to Zen; Radeon suffered for it, not Zen. A large team and the majority of quarterly R&D for nearly 4 years adds up to a lot of money and a well-funded chip.
> 
> The key to most AMD "info" is following the financials, financial presentations, and investors page.
> 
> Intel has acknowledged Zen as having a "short term" impact in '17... we'll see how short term it is.
> 
> Zen is being released across the entire product portfolio (desktop Q1, server Q2, continued server roll-out 2H, notebook APU 2H, followed by desktop APU, embedded, and semi-custom).
> 
> Zen+ is hot on Zen's heels.
> 
> AMD has less debt, more cash, and a better WSA than they did during the entire Bulldozer era.
> 
> I could go on.


Please do


----------



## Tojara

Quote:


> Originally Posted by *GorillaSceptre*
> 
> The GPU was irrelevant in those tests; I guess they just wanted to plug one of their current GPUs that is actually on the market.


They wanted to show off with the fastest (current) GPU available, with zero hitching during the demo. Vega might either not be fast enough or have unfinished drivers, though its raw horsepower should still be more than enough to run the game at 60+ FPS.
Quote:


> Originally Posted by *Benny89*
> 
> If Ryzen hits market in January, I wonder when we can expect Zen+?


H2 2018 to H1 2019 is a pretty good guesstimate. It'll be interesting to see how good it is, since the second revisions of AMD's CPU architectures have historically been its big hitters.
Quote:


> Originally Posted by *Nenkitsune*
> 
> THAT'S NOT HOW POWER CONSUMPTION WORKS. All you're calculating is the DELTA BETWEEN IDLE POWER CONSUMPTION AND LOAD POWER CONSUMPTION.
> 
> You're making the assumption that the CPU uses ZERO watts at idle, which couldn't be further from the truth. Assuming the setups are relatively similar (connected devices, etc.), what we're seeing is that Zen used 13W less at idle than Intel, and 4W less at load than Intel.


Yep. The idle power for the larger chips isn't insignificant at all.
Quote:


> Originally Posted by *LongtimeLurker*
> 
> I read somewhere that 25% of the 40% increase in IPC is due to better branch prediction?


They've improved it in the last three architecture updates after Bulldozer, so it should be in an acceptable state to begin with, and if I'm not entirely mistaken, what they're using with Zen isn't all that different from what they had in the construction cores. They probably couldn't get much of an increase out of branch prediction alone without massively buffing up the cores. The exact performance increases in such a radical architecture change are hard to attribute to individual factors, since the cores are so different.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Marios145*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Beginning of AMD presentation: 1:02:00
> 
> Michael Clark: +40% IPC is single-thread, before SMT.
> ^
> This statement is at 1:27:40
> Get the calculators back out.
> 
> EDIT:
> Is it just me or does Michael Clark look like Noah Bennet from the Heroes tv show?


Lmao @ the Intel dude's questions at the end.. No idea what they were talking about, but him slinking in like that sure got a bit awkward.









Quote:


> Originally Posted by *Tojara*
> 
> They wanted to show off with the fastest (current) GPU available, with zero hitching during the demo. Vega might either not be fast enough or have unfinished drivers, though its raw horsepower should still be more than enough to run the game at 60+ FPS.


We were talking about the RX480, during the CPU-only tests.


----------



## h2323

Quote:


> Originally Posted by *diggiddi*
> 
> Please do


AMD being misleading is largely a myth, with some unfortunate exceptions (the Fury overclock, and Bulldozer's server prospects). AMD did not have a solid CPU roadmap for years for good reason, and now they do for good reason; people point to this as evidence that they intentionally misled.

Entering the Bulldozer era (late 2011), AMD reorganized and fired its entire management team, had 5 slides at Hot Chips, and did very little press, tech or financial, before the chip's release.

Entering the Zen era (late 2016), AMD is solidifying its current management with stock offerings and more accountability and face time, had its largest Hot Chips presentation and largest crowd, and is doing unprecedented press, tech and financial.

AMD's market cap going into Zen is double what it was going into Bulldozer, on less revenue and a lower margin.

AMD has secured "several" server design wins. (source: the last three earnings calls)

AMD is the only company with a chip on the market that is HSA 1.0 certified. Machine learning and the data center will be using this standard sooner rather than later.

The surviving philosophy from SkyBridge is "cookie cutter" design; they still use the puzzle-piece imagery in presentations to this day. AMD communicates through imagery a great deal.
Zen, Polaris, and Vega are designed to "drop in" (it's more complex than that) and, with minimal effort and cost, function in HSA compliance on an APU for server, desktop, or notebook.

AMD's third "new" semi-custom design win is ARM in the embedded space.

The recent 5-year WSA has a second source written in. AMD is estimating it will pay GF 100 million in penalties per quarter for using the second source starting Q1. This is largely not discussed, but it logically only works if the second source is pumping out serious wafers; there is no other way it makes sense. Why such large volumes from a second source? The bullish scenario leads one to think large orders of Zen, with Samsung as a foundry on top of GlobalFoundries (the same or a similar 14nm node).

Piledriver/Richland/Kaveri/Opteron inventory will need to be written down; the channel is full of it, and they can't sell it for less. These are large, quality chips, and selling them at a loss would actually hurt the company more than just writing the inventory down. Look for this in mid-2017.

The RTG's formation lines up with Zen's high-level design completion. More funding meant better graphics chips, fast.

Zen+ is likely end of 2017 on a superior 14nm node; a solid CPU roadmap will be presented at the Q4 earnings call.


----------



## CortexA99

I have no time to dig through the web, but I don't see any new leaks from that poster on our forums, so I guess this story is just coming to an end.









Anyway, I found the exact version of Fritz Chess that poster used to bench his 'Zen'; it's a version translated into Chinese. There are so many talented programmers in the West, so I believe you might find something if there's anything wrong with this version of Fritz Chess. The link is below:

http://www85.zippyshare.com/v/SuIFyvIR/file.html

Although it's in Chinese, I think it's no problem to distinguish the buttons.

NEWS: Many people and media outlets said that poster was using an E5-2660 for the bench while telling us it was 'Zen'. This story should be put to an end now.


----------



## Exxlir

Can't wait to see full, proper benchmarks after the release of both Kaby Lake and Zen. Sitting here with cash at the ready for the new build; not having a PC really, really sucks, and consoles fall way short. I hope Zen lives up to the mark, but I'm undecided between Intel and AMD due to single-threaded performance. I had an 8350 in my last build before I sold it, and it did well enough, but going forward I want a more reliable chip that will give me large gains in FPS, etc.


----------



## flippin_waffles

Quote:


> Originally Posted by *epic1337*
> 
> amusingly they used Titan X instead of their lovely Fury X for BF1.
> and in comparison, they're using RX480 instead for blender and handbrake.


I don't see what's so amusing about it. It seems like pretty good, smart marketing to me. They have a high-end enthusiast platform coming to market soon; why would they not cater to all potential customers? I'm sure they realized they would get the obligatory 'herp derp' crowd, but I'm sure they are more focused on running a successful business, which means welcoming all gamers to the platform.


----------



## pony-tail

I am pretty sure I will be getting Ryzen and putting it on a basic WC loop (not an AIO, as I will likely run it through a slurry at least a couple of times) to see what I can get out of it.
I had a lot of fun (destructively) testing a Bulldozer; I bought one cheap a while after they were found to be lacking. If Ryzen is a decent chip I will not go that far, but I will give it a good run (not games; I have a 4690K for those).


----------



## BeepBeep2

Quote:


> Originally Posted by *flippin_waffles*
> 
> I don't see what's so amusing about it. It seems like pretty good, smart marketing to me. They have a high-end enthusiast platform coming to market soon; why would they not cater to all potential customers? I'm sure they realized they would get the obligatory 'herp derp' crowd, but I'm sure they are more focused on running a successful business, which means welcoming all gamers to the platform.


I somewhat agree. Vega was ready to game during New Horizon, but they used Titan X GPUs as well.

In the past couple of years AMD used Intel CPUs in demonstrations of their GPUs, probably because they knew (and knew that everyone else knew) that they didn't have a competitive product. The RX 480 wasn't doing any work in the 2D CPU workload demos, and it can't hold a candle to Pascal at the high end, so it makes sense that they used the high end of the discrete GPU market to showcase their HEDT CPU platform against Intel's in gaming.

I commend AMD for using competitors' products in demos like this when they didn't have a comparable in-house solution. It helps show that they are paying attention to their consumers instead of operating in some cynical little bubble.


----------



## JakdMan

Oh snap, things just got real guys









AMD just sent an email recapping everything that's happened thus far. Better yet, they've released RYZEN wallpapers in varying sizes for your use case of choice.









The page is titled the "Digital Fan Kit".

I can only assume it took them a while to render them all, considering....


Spoiler: Warning: Spoiler!


----------



## BeepBeep2

I posted some wallpapers on XS and Anand a day ago, and now this. I got the email too.









The render time for my wallpaper was 30+ minutes at 4K on a 4690K.


----------



## JakdMan

IDK. Computer weirdness.
Quote:


> Originally Posted by *BeepBeep2*
> 
> I posted some wallpapers on XS and Anand a day ago, and now this. I got the email too.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> The render time for my wallpaper was 30+ minutes at 4K on a 4690K.


I'm very much still testing this system before I crank the OCs. I set this thing to 500 samples, 16-bit uncompressed, and it has been rendering since the night of the presentation (with intermittent Chrome, Firefox, Photoshop, and the like), and it hasn't skipped a beat yet, oddly enough. That would be murder on the Core 2 Duo laptop lol.


----------



## Blameless

Quote:


> Originally Posted by *Nenkitsune*
> 
> THAT'S NOT HOW POWER CONSUMPTION WORKS. All you're calculating is the DELTA BETWEEN IDLE POWER CONSUMPTION AND LOAD POWER CONSUMPTION.
> 
> You're making the assumption that the CPU uses ZERO watts at idle, which couldn't be further from the truth. Assuming the setups are relatively similar (connected devices, etc.), what we're seeing is that Zen used 13W less at idle than Intel, and 4W less at load than Intel.


With stock settings and all standard power saving features enabled, most desktop CPUs will consume very similar, and very low, amounts of power at idle.

I'd be willing to bet the bulk of the difference is coming from the motherboards used, not the CPUs. There isn't a 13W difference at pure stock idle between a modern Pentium and a 6950X, because both have one active core at 1200MHz, a dramatically reduced cache clock with excess slices power-gated, low-power states on all I/O, and are running at well under 1V.

HEDT boards tend to be more power-hungry than mainstream platforms. If you are comparing CPU-only power consumption, this needs to be taken into account, which is why looking at the load-to-idle delta is often a better indicator than absolute power consumption.

Since AMD hasn't revealed the boards used, we can only speculate, but I would be very surprised if the average X370 board was as power hungry as the average X99.
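The point about board power as a confound can be illustrated with a toy calculation. All the numbers below are hypothetical, chosen only to show why the load-to-idle delta cancels out a board's fixed draw while absolute wall power does not:

```python
# Hypothetical numbers only: the same CPU measured at the wall on two boards
# with different baseline (rest-of-system) draw.
cpu_idle_w, cpu_load_w = 15, 100          # assumed CPU package power (W)
boards = {"mainstream": 40, "HEDT": 55}   # assumed fixed board/platform draw (W)

for name, board_w in boards.items():
    idle = cpu_idle_w + board_w
    load = cpu_load_w + board_w
    delta = load - idle                   # the board's fixed draw cancels out
    print(name, idle, load, delta)

# Absolute idle and load wall power differ between the two boards, but the
# delta is 85 W in both cases, recovering the CPU's load-induced draw.
```

Of course this assumes the board draw really is constant between idle and load, which is only approximately true in practice.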


----------



## os2wiz

Quote:


> Originally Posted by *h2323*
> 
> AMD being misleading is largely a myth, with some unfortunate exceptions (the Fury overclock, and Bulldozer's server prospects). AMD did not have a solid CPU roadmap for years for good reason, and now they do for good reason; people point to this as evidence that they intentionally misled.
> 
> Entering the Bulldozer era (late 2011), AMD reorganized and fired its entire management team, had 5 slides at Hot Chips, and did very little press, tech or financial, before the chip's release.
> 
> Entering the Zen era (late 2016), AMD is solidifying its current management with stock offerings and more accountability and face time, had its largest Hot Chips presentation and largest crowd, and is doing unprecedented press, tech and financial.
> 
> AMD's market cap going into Zen is double what it was going into Bulldozer, on less revenue and a lower margin.
> 
> AMD has secured "several" server design wins. (source: the last three earnings calls)
> 
> AMD is the only company with a chip on the market that is HSA 1.0 certified. Machine learning and the data center will be using this standard sooner rather than later.
> 
> The surviving philosophy from SkyBridge is "cookie cutter" design; they still use the puzzle-piece imagery in presentations to this day. AMD communicates through imagery a great deal.
> Zen, Polaris, and Vega are designed to "drop in" (it's more complex than that) and, with minimal effort and cost, function in HSA compliance on an APU for server, desktop, or notebook.
> 
> AMD's third "new" semi-custom design win is ARM in the embedded space.
> 
> The recent 5-year WSA has a second source written in. AMD is estimating it will pay GF 100 million in penalties per quarter for using the second source starting Q1. This is largely not discussed, but it logically only works if the second source is pumping out serious wafers; there is no other way it makes sense. Why such large volumes from a second source? The bullish scenario leads one to think large orders of Zen, with Samsung as a foundry on top of GlobalFoundries (the same or a similar 14nm node).
> 
> Piledriver/Richland/Kaveri/Opteron inventory will need to be written down; the channel is full of it, and they can't sell it for less. These are large, quality chips, and selling them at a loss would actually hurt the company more than just writing the inventory down. Look for this in mid-2017.
> 
> The RTG's formation lines up with Zen's high-level design completion. More funding meant better graphics chips, fast.
> 
> Zen+ is likely end of 2017 on a superior 14nm node; a solid CPU roadmap will be presented at the Q4 earnings call.




Where did you get the idea that Zen+ will arrive in late 2017, and, even more incredibly, that it will be on a different 14nm process? GF does NOT have a 14nm process superior to 14LPP; only IBM does. Also, Zen+ is supposed to use a different socket than Zen. If you are implying Samsung has a better implementation of 14LPP than GF, I suspect that may not be true. Samsung has zero experience producing large-core CPUs; virtually all of their 14LPP production goes to small-core CPUs for phones and tablets.


----------



## Fyrwulf

Okay, there's so much nonsense being slung around right now that my world is starting to look brown. To address some of these things point by point:

- Samsung has a bunch of 14nm iterations: 14LPE (Low Power Early), the first-generation 14nm process, which has been phased out; 14LPP (Low Power Plus), a higher-performance iteration in present use; 14LPC (Low Power Cost-reduced), which has the same design rules and power as 14LPP but at a lower cost; and 14LPU, which has the same power envelope and design rules but higher performance.
- If there's a mid-cycle update for Zen, it will be on the 14LPU node, which becomes available in 2Q17.
- Zen+ isn't dropping until 2019, and so far the only leak we've seen relates to Grey Hawk, which is the Zen+ APU.
- Zen+ will use 7nm, which GloFo and Samsung are co-developing.
- Zen+ is going to use the same socket as Zen. Lisa Su made a point of stating that in the original Zen reveal; go back and watch it if you don't believe me.

Everything all clear now? Good.


----------



## Tojara

Quote:


> Originally Posted by *Fyrwulf*
> 
> Okay, there's so much nonsense being slung around right now that my world is starting to look brown. To address some of these things point by point:
> 
> - Samsung has a bunch of 14nm iterations: 14LPE (Low Power Early), the first-generation 14nm process, which has been phased out; 14LPP (Low Power Plus), a higher-performance iteration in present use; 14LPC (Low Power Cost-reduced), which has the same design rules and power as 14LPP but at a lower cost; and 14LPU, which has the same power envelope and design rules but higher performance.
> - If there's a mid-cycle update for Zen, it will be on the 14LPU node, which becomes available in 2Q17.
> - Zen+ isn't dropping until 2019, and so far the only leak we've seen relates to Grey Hawk, which is the Zen+ APU.
> - Zen+ will use 7nm, which GloFo and Samsung are co-developing.
> - Zen+ is going to use the same socket as Zen. Lisa Su made a point of stating that in the original Zen reveal; go back and watch it if you don't believe me.
> 
> Everything all clear now? Good.


You messed up a few things already. There is no given date for Zen+, we know essentially nothing about it, nor are Glofo and Samsung co-developing a 7nm process. They're still doing some co-operation as previously, but it isn't the same process at all.


----------



## Colin1204

Quote:


> Originally Posted by *Tojara*
> 
> You messed up a few things already. There is no given date for Zen+, we know essentially nothing about it, nor are Glofo and Samsung co-developing a 7nm process. They're still doing some co-operation as previously, but it isn't the same process at all.


I'm pretty sure GF and Samsung were working on a 10nm process node, but GF has since updated its roadmap to say it will skip that and move to its own 7nm process instead. And this will be a "true node shrink" versus the "hybrid node shrink" that the 14nm process uses now. Intel, for example, uses the true node shrink approach rather than the hybrid one.

The issue I'm seeing is that Samsung has been talking about EUV for 7nm, and even they don't want to make any claims in public; the info I've seen suggests they might not even get it to market by 2019, so how GF will manage it makes me nervous that it will be heavily delayed.

Some info can be found in an AnandTech article, but you can easily do a Google search for GlobalFoundries 7nm and get quite a few articles talking about it.


----------



## costeakai

My feeling is that APUs will be so formidable that, at a given price tag, they will destroy everything else, including Ryzen, Vega, and Intel's latest CPUs combined with the latest Nvidia GPUs.
The single shared memory space architecture will work so well that the price/performance/TDP ratios will be way ahead. Not the raw performance, but the price/performance will be unbeatable.


----------



## Ultracarpet

Quote:


> Originally Posted by *costeakai*
> 
> My feeling is that APUs will be so formidable that, at a given price tag, they will destroy everything else, including Ryzen, Vega, and Intel's latest CPUs combined with the latest Nvidia GPUs.
> The single shared memory space architecture will work so well that the price/performance/TDP ratios will be way ahead. Not the raw performance, but the price/performance will be unbeatable.


Well, from everything we know, the iGPU in Raven Ridge will be only slightly more capable than an RX 460. So I doubt it will affect much of anything, and it will also probably not be the best price/performance, because you are paying for the form factor and novelty. A good example would be dual-GPU cards: you can almost always buy two individual cards for cheaper than the dual-chip card.

APUs will likely get incremental graphics performance increases and stay just barely capable of playing e-sports-class games for a long time.


----------



## CriticalOne

RX 460 performance in games is very generous for the APUs. I doubt they will come anywhere close; I think the most performance they will provide is around the level of a discrete 250X. The current 512-SP Kaveri GPU, even given high-speed RAM, still falls short of the R7 250.

I'll be shocked if there are more than 640 shader processors on the Raven Ridge APU. Even with DDR4, we don't know how much AMD has improved their memory controllers. They could still be behind Intel in terms of memory performance, and even if AMD gets their IMCs up to Intel's level, it's still not enough.

It depends on how much better Polaris is at color compression than GCN 1.0, and how much AMD has improved their memory interfaces.


----------



## GamerusMaximus

Quote:


> Originally Posted by *Ultracarpet*
> 
> Well, from everything we know, the iGPU in Raven Ridge will be only slightly more capable than an RX 460. So I doubt it will affect much of anything, and it will also probably not be the best price/performance, because you are paying for the form factor and novelty. A good example would be dual-GPU cards: you can almost always buy two individual cards for cheaper than the dual-chip card.
> 
> APUs will likely get incremental graphics performance increases and stay just barely capable of playing e-sports-class games for a long time.


Still, an iGPU around the 460 level would blow all other APUs out of the water by a long shot. While APUs will never be high-end, they are going to push the baseline much higher.
Quote:


> Originally Posted by *CriticalOne*
> 
> RX 460 performance in games is very generous for the APUs. I doubt they will come anywhere close; I think the most performance they will provide is around the level of a discrete 250X. The current 512-SP Kaveri GPU, even given high-speed RAM, still falls short of the R7 250.
> 
> I'll be shocked if there are more than 640 shader processors on the Raven Ridge APU. Even with DDR4, we don't know how much AMD has improved their memory controllers. They could still be behind Intel in terms of memory performance, and even if AMD gets their IMCs up to Intel's level, it's still not enough.
> 
> It depends on how much better Polaris is at color compression than GCN 1.0, and how much AMD has improved their memory interfaces.


OTOH, consider that the mobile 460, which uses the full Polaris 11 die, pulls only 35 watts. You could easily fit that on a 95-watt APU, like what AMD makes now.

AMD is also rumored to be looking into putting HBM on-die, which would help eliminate the need for high-speed external memory.

Combine a highly binned GPU, HBM, and Zen cores, and it isn't unfeasible that AMD could get 460 performance into a 95-watt desktop APU.


----------



## Blameless

If they have a mobile APU with anywhere near RX460 performance that's what will be in my next laptop.


----------



## Fyrwulf

Quote:


> Originally Posted by *Tojara*
> 
> You messed up a few things already. There is no given date for Zen+, we know essentially nothing about it


Search for AMD Grey Hawk. At this point we know only a bit more about Zen than we do about Zen+.
Quote:


> nor are Glofo and Samsung co-developing a 7nm process. They're still doing some co-operation as previously, but it isn't the same process at all.


http://www.eetimes.com/document.asp?doc_id=1330657

They certainly are co-developing a 7nm process. GloFo is developing an immersion lithography 7nm process in addition to EUV in case the latter isn't ready in time, but given GloFo's track record I'd guess EUV will make it to market in time.


----------



## Tojara

Quote:


> Originally Posted by *Fyrwulf*
> 
> Search for AMD Grey Hawk. At this point we know only a bit more about Zen than we do Zen+.
> http://www.eetimes.com/document.asp?doc_id=1330657


We have a fairly robust block diagram of Zen's major elements, while we have no real idea what will be improved in Zen+. Wider FPU? Cache latency or bandwidth? More ALUs/AGUs? Larger uop cache? Better scheduling/queues/branch prediction?
Quote:


> Originally Posted by *Fyrwulf*
> 
> They certainly are co-developing a 7nm process. GloFo is developing an immersion lithography 7nm process in addition to EUV in case the latter isn't ready in time, but given GloFo's track record I'd guess EUV will make it to market in time.


There's been more or less co-operation between them for a while, but with a single article stating that they're sharing a process, I'm more than a bit sceptical. With 14nm it was all over the news, and both Samsung and GloFo mentioned it in their press releases; this time there is no mention of a shared process from either of them, and in fact the EUV timing seems to be off. EETimes has been mostly good, but they've also made some mistakes in the past. You might be correct, but the information available isn't quite enough.


----------



## IRobot23

Does anyone know why AMD didn't move the pins onto the motherboard?


----------



## CriticalOne

Quote:


> Originally Posted by *IRobot23*
> 
> Does anyone know why AMD didn't move the pins onto the motherboard?


AMD cited cost reasons. It must have been more expensive for them to develop an LGA socket for the mainstream platform than to just continue on with PGA.


----------



## IRobot23

When will they launch AM4 for desktop?


----------



## Fyrwulf

Quote:


> Originally Posted by *Tojara*
> 
> We have a fairly robust block diagram of Zen's major elements, while we have no real idea what will be improved in Zen+. Wider FPU? Cache latency or bandwidth? More ALUs/AGUs? Larger uop cache? Better scheduling/queues/branch prediction?


Who knows? We do know it's going to be on 7nm. We do know it will be on the AM4 platform, although most likely with an updated PCH. We do know Lisa Su has promised a 15% IPC improvement over Zen, which almost certainly means some major architectural revisions. We also have a fairly good idea that AMD has been working on Zen+ for a couple of years.
Quote:


> There's been more or less co-operation between them for a while, but with a single article stating that they're sharing a process I'm more than a bit sceptical. With 14nm it was all over the news and both Samsung and Glofo mentioned it in their press release, this time there is no mention of a shared process from either of them, in fact the EUV timing seems to be off. EEtimes has been mostly good but they've also made some mistakes in the past. You might be correct, but the information available isn't quite enough.


The article was written based on a presentation they gave at an industry event. Companies don't do that unless there's active cooperation between them, in any industry.


----------



## PostalTwinkie

Quote:


> Originally Posted by *costeakai*
> 
> My feeling is that APUs will be so formidable that, at a given price tag, they will destroy everything else, including Ryzen, Vega, and Intel's latest CPUs combined with the latest Nvidia GPUs.
> The single shared-memory-space architecture will work so well that the price/performance/TDP ratios will be way ahead. Not the raw performance, but the price/performance will be unbeatable.


Zen+, not Zen, will likely be the release where an APU from AMD can flawlessly handle everything 1080p/60 and below, for under $200. Even now AMD sells APUs that, with a few concessions, handle gaming well and absolutely destroy office productivity. So an APU based on a Zen refinement using HBM is going to dominate a very large segment. It is why mobile is taking over the PC market: cheap devices with enough power to accommodate the majority of user needs.

(Incoming rant)

The broader market is really underestimating AMD's long game with HBM, hUMA, Zen, and open projects: AMD is ultimately working towards a desktop-class, consumer-level SoC. Think something along the lines of a moderate gaming rig today, for ~$200; Intel's Skull Canyon NUC will look like a potato in every regard by comparison. The big concerns before were whether AMD could stay alive long enough to see it through, and whether Zen could succeed.

AMD has survived longer than many predicted, even deferring financial obligations to buy breathing room. One major concern addressed, at least! The other, about Zen, looks to be more than addressed. During the presentation this month, Dr. Su mentioned their 40% IPC increase goal. She only mentioned it briefly, with a smile, but stated they managed to exceed that goal. While brief, that is a massive claim, and one AMD isn't likely to make again without being confident they truly achieved it - simply because of the "last time" they did.

If Zen really is 40%+, then Zen+ based APUs will be true monsters given their package size.


----------



## budgetgamer120

Quote:


> Originally Posted by *IRobot23*
> 
> Does anyone know why AMD didn't move the pins onto the motherboard?


The only thing I hate about my X99 motherboard is that the pins are on it. I hope AMD doesn't adopt this terrible socket design.


----------



## Fyrwulf

Quote:


> Originally Posted by *PostalTwinkie*
> 
> The broader market is really underestimating AMD's long game with HBM, hUMA, Zen, and open projects: AMD is ultimately working towards a desktop-class, consumer-level SoC. Think something along the lines of a moderate gaming rig today, for ~$200; Intel's Skull Canyon NUC will look like a potato in every regard by comparison. The big concerns before were whether AMD could stay alive long enough to see it through, and whether Zen could succeed.


The great irony is that AMD is in effect working towards killing an industry that has kept it alive. Consoles become irrelevant when a $200 PC can overmatch a $400 console. If SteamOS can ever take off, watch out.


----------



## budgetgamer120

Quote:


> Originally Posted by *Fyrwulf*
> 
> The great irony is that AMD is in effect working towards killing an industry that has kept it alive. Consoles become irrelevant when a $200 PC can overmatch a $400 console. If SteamOS can ever take off, watch out.


Consoles will never be irrelevant. Consoles are dedicated gaming machines; a PC isn't, and is thus harder to operate than a console.

There won't be a $200 PC that matches a console with a controller included.


----------



## jeffdamann

Quote:


> Originally Posted by *budgetgamer120*
> 
> Consoles will never be irrelevant. Consoles are dedicated gaming machines; a PC isn't, and is thus harder to operate than a console.
> 
> There won't be a $200 PC that matches a console with a controller included.


There won't be a $200 console that matches a console with a controller included either...


----------



## Raghar

Quote:


> Originally Posted by *IRobot23*
> 
> Does anyone know why AMD didn't move the pins onto the motherboard?


Because they are not crazy.

Pins on the motherboard are a seriously bad idea. Intel managed to get away with it because of its market dominance, and moving the pins onto the board as a cost-saving measure was something everyone was forced to live with. Motherboard pins are MUCH more fragile, harder to test properly, and much more prone to poor contact. They can also lose their springiness, and then the CPU is in trouble.


----------



## Removed1

So on portables and cheap desktops, could AMD offer a combo of a Zen APU + a cheap RX 460 in CrossFire?!








Sure, the GPU in the APU will be much slower due to DDR4, but it would still add some performance!


----------



## Fyrwulf

Quote:


> Originally Posted by *Wimpzilla*
> 
> So on portables and cheap desktops, could AMD offer a combo of a Zen APU + a cheap RX 460 in CrossFire?!
> 
> 
> 
> 
> 
> 
> 
> 
> Sure, the GPU in the APU will be much slower due to DDR4, but it would still add some performance!


We're talking about Zen+ and beyond, where the APU will have on-chip HBM.


----------



## CriticalOne

Quote:


> Originally Posted by *Raghar*
> 
> Because they are not crazy.
> 
> Pins on the motherboard are a seriously bad idea. Intel managed to get away with it because of its market dominance, and moving the pins onto the board as a cost-saving measure was something everyone was forced to live with. Motherboard pins are MUCH more fragile, harder to test properly, and much more prone to poor contact. They can also lose their springiness, and then the CPU is in trouble.


The reason why LGA exists is because it offers greater pin density than PGA designs. This is crucial in servers where the package is already large and PGA would only make it larger.

AMD's new "SP3" server platform is 99.9% likely to feature LGA like its predecessor, Socket G34.


----------



## KarathKasun

Quote:


> Originally Posted by *CriticalOne*
> 
> The reason why LGA exists is because it offers greater pin density than PGA designs. This is crucial in servers where the package is already large and PGA would only make it larger.
> 
> AMD's new "SP3" server platform is 99.9% likely to feature LGA like its predecessor, Socket G34.


Servers tend to not have many CPU swaps done either. They also have much more secure heatsink mounting mechanisms.


----------



## Colin1204

Quote:


> Originally Posted by *CriticalOne*
> 
> The reason why LGA exists is because it offers greater pin density than PGA designs. This is crucial in servers where the package is already large and PGA would only make it larger.
> 
> AMD's new "SP3" server platform is 99.9% likely to feature LGA like its predecessor, Socket G34.


PGA bent pins are nothing. Hell, I've had pins break off, dropped the chip in the socket anyway, and it worked haha. When I bend an LGA pin though, I'm in a bad mood for days, depending on how bad it was and how long it takes to fix.


----------



## Blameless

Takes a lot of CPU swaps before an LGA socket starts having issues. On my oldest/most-used LGA boards, I'm at over 100 CPU installations/removals each, without issue.

I've only bent pins on an LGA socket once where I couldn't repair it fairly simply. I've destroyed more PGA CPUs beyond easy repair. Motherboards also tend to be less expensive than CPUs...if I'm going to lose one, I'd rather lose the board.

Really though, unless you are dropping things in to the socket, or trying to clean it with stuff that can hook the pins, it's very rare to have issues.
Quote:


> Originally Posted by *Colin1204*
> 
> PGA bent pins are nothing. Hell, I've had pins break off, dropped the chip in the socket anyway, and it worked haha.


About 30-40% of the pins on any given CPU are redundant VCC, VSS, or reserved pins. LGA sockets can lose these too.


----------



## Colin1204

Quote:


> Originally Posted by *Blameless*
> 
> About 30-40% of the pins on any given CPU are redundant VCC, VSS, or reserved pins. LGA sockets can lose these too.


I actually did not know this









So you're telling me that as long as nothing was overlapping/touching, I could have just installed it and tried? I remember the first LGA board I ever fixed the pins on was my father-in-law's IB rig, where he slipped and messed up a good 20 or more pins. It took me several hours and more swearing than I care to admit to get them all fixed. I just assumed that when I was putting the pin into the PGA socket it was making contact with the CPU once installed, and that's why it worked. So you're telling me I just got lucky and broke a non-essential pin? Dang...

Edit> darn, this is a news post and I'm getting off topic sorry. Um... Zen looks promising


----------



## KarathKasun

Quote:


> Originally Posted by *Blameless*
> 
> Takes a lot of CPU swaps before an LGA socket starts having issues. On my oldest/most-used LGA boards, I'm at over 100 CPU installations/removals each, without issue.
> 
> I've only bent pins on an LGA socket once where I couldn't repair it fairly simply. I've destroyed more PGA CPUs beyond easy repair. Motherboards also tend to be less expensive than CPUs as well...if I'm going to lose one, I'd rather lose the board.
> 
> Really though, unless you are dropping things in to the socket, or trying to clean it with stuff that can hook the pins, it's very rare to have issues.
> About 30-40% of the pins on any given CPU are redundant VCC, VSS, or reserved pins. LGA sockets can lose these too.


If you are destroying PGA CPUs, you are doing it wrong. I've not killed a single PGA CPU in ~20 years of building rigs, including the non-ZIF socket chips that you have to pry out of the socket.


----------



## tenchimuyo93

As long as it does the job I want it to and stays cheaper than Intel, I'm staying with AMD; if they ever drop the ball again, I'll switch back. Hype or not.


----------



## BinaryDemon

Quote:


> Originally Posted by *budgetgamer120*
> 
> The only thing I hate about my X99 motherboard is that the pins are on it. I hope AMD doesn't adopt this terrible socket design.


IMO, the pins should be on whichever component is cheaper, since there is a greater chance of mechanical failure. Personally I'm glad the pins are on my motherboard rather than my CPU.


----------



## AlphaC

Quote:


> Originally Posted by *Blameless*
> 
> If they have a mobile APU with anywhere near RX460 performance that's what will be in my next laptop.


Well, the RX 460 is 75W, the Radeon RX 480M is 35W, and the pro version, the WX 4100, is under 50W, so we still have a way to go before that happens.

EDIT: Keeping in mind that a mobile Intel Extreme chip only has a 55W TDP budget, you're looking at some sort of compromise on the CPU power budget.


----------



## KarathKasun

Quote:


> Originally Posted by *AlphaC*
> 
> Well, the RX 460 is 75W and the pro version, the WX 4100, is under 50W, so we still have a way to go before that happens.


I've got my RX 470 down to ~80W while still having ~GTX 970 performance. APUs with RX 460 performance should not be difficult.


----------



## epic1337

Quote:


> Originally Posted by *AlphaC*
> 
> Well, the RX 460 is 75W, the Radeon RX 480M is 35W, and the pro version, the WX 4100, is under 50W, so we still have a way to go before that happens.
> 
> EDIT: Keeping in mind that a mobile Intel Extreme chip only has a 55W TDP budget, you're looking at some sort of compromise on the CPU power budget.


The WX 4100 is simply binned with a lower base clock; it also has 1024 SPs instead of the RX 460's 896.

On a side note, Polaris desktop SKUs are clocked too high; when undervolted (and underclocked) they get really efficient.
So effectively, an RX 460 clocked below 1GHz with Vcc tuned to around 0.8V will pull much less than 50W.
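That claim tracks with the classic dynamic-power relation, P ≈ C·V²·f: power falls linearly with clock and with the square of voltage. A rough sketch in Python (the stock operating point below, ~1200MHz at ~1.05V for a 75W board, is an assumption for illustration, not a measured figure):

```python
# Dynamic (switching) power scales roughly linearly with frequency
# and quadratically with supply voltage: P ~ C * V^2 * f.
def scaled_power(p0_watts, f0_mhz, v0, f1_mhz, v1):
    """Estimate power draw after changing clock and core voltage."""
    return p0_watts * (f1_mhz / f0_mhz) * (v1 / v0) ** 2

# Assumed stock point: 75W board power at 1200MHz / 1.05V.
stock_w = scaled_power(75, 1200, 1.05, 1200, 1.05)  # sanity check: 75W
tuned_w = scaled_power(75, 1200, 1.05, 950, 0.80)   # ~950MHz at 0.80V

print(round(tuned_w, 1))  # roughly 34W, comfortably under 50W
```

It ignores static leakage and board overhead, so treat it as a ballpark, but it shows why a modest underclock plus a 0.8V undervolt lands well below 50W.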


----------



## Blameless

Quote:


> Originally Posted by *KarathKasun*
> 
> If you are destroying PGA CPUs, you are doing it wrong.


If you are destroying any CPU or socket, you are doing it wrong. It's just not any easier to do it wrong with an LGA motherboard than a PGA CPU.

I generally don't need to handle boards without the socket cover in place as much as I need to handle CPUs.


----------



## Particle

Quote:


> Originally Posted by *Blameless*
> 
> If you are destroying any CPU or socket, you are doing it wrong. It's just not any easier to do it wrong with an LGA motherboard than a PGA CPU.
> 
> I generally don't need to handle boards without the socket cover in place as much as I need to handle CPUs.


Given that PGA pins are stronger than LGA contacts, I don't think I can agree. Neither will take being abused, but calling them equal isn't in the furtherance of accuracy.


----------



## budgetgamer120

Quote:


> Originally Posted by *Blameless*
> 
> If you are destroying any CPU or socket, you are doing it wrong. It's just not any easier to do it wrong with an LGA motherboard than a PGA CPU.
> 
> I generally don't need to handle boards without the socket cover in place as much as I need to handle CPUs.


I found it easier with the pins on the CPU. Putting the CPU in the X99 socket was a nightmare for me.


----------



## SuperZan

Quote:


> Originally Posted by *Blameless*
> 
> If you are destroying any CPU or socket, you are doing it wrong. It's just not any easier to do it wrong with an LGA motherboard than a PGA CPU.
> 
> I generally don't need to handle boards without the socket cover in place as much as I need to handle CPUs.


I haven't got the steadiest hands (not so much shaky as semi-frequent involuntary tweaks/spasms), which is one of the reasons I chose the science route rather than the surgeon's when I was at uni. For me, PGA CPUs have survived the occasional drop without issue, but doing anything in or around an LGA motherboard notches my heart up a few BPM. If, like me, you can occasionally lose proper hold of a tool or component, that LGA socket just cannot take much by way of impact without causing a massive headache. While it's true that CPUs (usually) cost more than boards, I've not found the proportional cost offset to be satisfactorily weighted against accident-absorption capability when it comes to LGA.

Obviously, I use LGA boards as I've got Intel rigs, but I don't relish the prospect of removing/re-setting them. When possible I walk my SO through what I need him to do, but that's a bit nerve-wracking as well, as he's used to working on BMW's, not smallish PC components. Meanwhile, the one time I bent a PGA pin it took two minutes and a bank card and I was back in business.

I won't dispute that your situation is different and your experiences are thus as well, but I think that in the majority of ways in which components will be handled, LGA is more fragile than PGA. That said, I'm happy to deal with whichever so long as I get my performance (I'd just prefer PGA).


----------



## Blameless

Quote:


> Originally Posted by *Particle*
> 
> Given that PGA pins are stronger than LGA contacts, I don't think I can agree. Neither will take being abused, but calling them equal isn't in the furtherance of accuracy.


I didn't call them equal. I feel fairly strongly that it's easier to damage PGA CPU pins because there are far more ways to do so, not because they are weaker than LGA pins.

Dropping, lapping, delidding, or cleaning CPUs are the only ways I've ever damaged PGA pins. I even went so far as to cut the socket out of a dead S939 board so I could use it as a caddy to reduce the risk of damage while manipulating CPUs. LGA parts don't need or benefit from such protection, and I'm not trying to lap or delid my motherboards, nor are they as easy to drop or as prone to damage if dropped. The only time I've bent LGA socket pins is when cleaning them inappropriately.
Quote:


> Originally Posted by *SuperZan*
> 
> I haven't got the steadiest hands (not so much shaky as semi-frequent involuntary tweaks/spasms), which is one of the reasons I chose the science route rather than the surgeon's when I was at uni. For me, PGA CPU's have survived the occasional drop without issue. but doing anything in or around an LGA motherboard notches my heart up a few BPM's. If, like me, you can occasionally lose proper hold of a tool or component, that LGA socket just cannot take much by way of impact without causing a massive headache.


If I had any significant involuntary movement of my hands, poor hand-eye coordination, or some other relevant impairment, I'd probably feel the same hesitation around LGA sockets.


----------



## Hueristic

OCN, where we can stray off topic and argue that for hours. Lol


----------



## tpi2007

When you're handling an LGA-socket motherboard, make sure either a CPU or the socket cover that comes with the motherboard is installed. I do wonder how much power those little contacts can take versus a PGA solution when overclocking. There are a few cases of burnt contacts, but I'm guessing that since the discussion never became widespread, it has more to do with extreme cases using high voltages and low motherboard quality.

Anyway, put your hand up if you have a Low Insertion Force (LIF) socket chip removal tool!


----------



## costeakai

Quote:


> Originally Posted by *Ultracarpet*
> 
> Well, from everything we know, the iGPU for Raven Ridge will be only slightly more capable than an rx 460. So I doubt it will affect much of anything, and it will also probably not be the best price/performance because you are paying for the form factor, and novelty. A good example would be dual GPU cards. You can almost always buy 2 individual cards for cheaper than the dual chip cards.
> 
> APUs will likely get incremental graphics performance increases and stay just barely capable of playing e-sport class games for a long time.


Good enough for me


----------



## budgetgamer120

Quote:


> Originally Posted by *BinaryDemon*
> 
> IMO, the pins should be one whichever component is cheaper since there is a greater chance of mechanical failure. Personally I'm glad the pins are on my motherboard rather than my CPU.


I put my CPU in wrong so many times. With PGA this never happened.


----------



## Fyrwulf

Quote:


> Originally Posted by *Ultracarpet*
> 
> APUs will likely get incremental graphics performance increases and stay just barely capable of playing e-sport class games for a long time.


It really depends on AMD's priorities. TSMC is claiming that you can get a 35% frequency improvement moving from 16nm to 10nm. I wouldn't be terribly surprised if going from 14nm to 7nm could get a frequency increase of 50% or more. A 16-CU iGPU clocked at 1900MHz is an awfully beefy e-sports GPU.


----------



## Ultracarpet

Quote:


> Originally Posted by *Fyrwulf*
> 
> It really depends on AMD's priorities. TSMC is claiming that you can get a 35% frequency improvement moving from 16nm to 10nm. I wouldn't be terribly surprised if going from 14nm to 7nm could get a frequency increase of 50% or more. A 16-CU iGPU clocked at 1900MHz is an awfully beefy e-sports GPU.


It won't be 4 years from now when that all happens tho


----------



## Fyrwulf

Quote:


> Originally Posted by *Ultracarpet*
> 
> It won't be 4 years from now when that all happens tho


I got curious and started crunching some numbers. If Vega 10 is the rumored 4096 SP monster and it's producing 25 TFLOPS at half-precision, then that means each CU is worth 879 GFLOPS of computing power. In comparison, the CUs in the Polaris 11 are worth 725 GFLOPS. That's a 21% boost just based on architectural changes. That means that if Raven Ridge really does have 16CUs, it'll be capable of 14TFLOPS half precision if it matches a Polaris 11 core clock for clock. AMD would have to do a really freaking severe downclock to reach the depths of the RX 460.
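For anyone who wants to crunch these rumors themselves, GPU peak throughput is conventionally shaders × 2 FLOPs per clock (FMA), doubled again for packed FP16. A quick sketch (the ~1.53GHz clock below is just back-calculated to reproduce the rumored 25 TFLOPS figure, not a confirmed spec):

```python
def peak_tflops(shaders, clock_ghz, fp16_packed=False):
    """Peak throughput: shaders * 2 FLOPs/clock (FMA), x2 if packed FP16."""
    gflops = shaders * 2 * clock_ghz  # clock in GHz -> result in GFLOPS
    if fp16_packed:
        gflops *= 2  # packed half-precision doubles rate per clock
    return gflops / 1000.0  # convert to TFLOPS

# Rumored Vega 10: 4096 SPs; ~1.53GHz reproduces the 25 TFLOPS FP16 rumor.
print(round(peak_tflops(4096, 1.53, fp16_packed=True), 1))  # ~25.1 TFLOPS
print(round(peak_tflops(4096, 1.53), 1))                    # ~12.5 TFLOPS FP32
```

Plugging in other SP counts and clocks gives a quick way to sanity-check any per-CU comparison.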


----------



## mohiuddin

Quote:


> Originally Posted by *Hueristic*
> 
> OCN, where we can stray off topic and argue that for hours. Lol


But the LGA vs. PGA discussion should be welcome here. Most people (including me) have very little idea about it. My previous impression was that LGA is better in every respect, but now I know both of them have their shortcomings.


----------



## Olivon

For those who forgot: AMD uses PGA for desktop and LGA for servers.
LGA is safer for CPU-change operations on professional server motherboards, and that's why AMD uses it there.
Intel is LGA-only on both desktop and servers.


----------



## Ultracarpet

Quote:


> Originally Posted by *Fyrwulf*
> 
> I got curious and started crunching some numbers. If Vega 10 is the rumored 4096 SP monster and it's producing 25 TFLOPS at half-precision, then that means each CU is worth 879 GFLOPS of computing power. In comparison, the CUs in the Polaris 11 are worth 725 GFLOPS. That's a 21% boost just based on architectural changes. That means that if Raven Ridge really does have 16CUs, it'll be capable of 14TFLOPS half precision if it matches a Polaris 11 core clock for clock. AMD would have to do a really freaking severe downclock to reach the depths of the RX 460.


I'm all for a powerful APU, I just doubt it's going to be the monster that everyone wants it to be. Other things to take into consideration are how much VRAM (HBM) they include (it could be hindered by 2GB or something stupid) and what the thermal envelope will allow. I don't have a very technical background, but I feel like, everything being equal, if there is a GPU in a discrete card and one built in alongside a CPU, the discrete one is probably going to be faster. So even if it's slightly bigger, and/or slightly superior in performance per CU, the iGPU probably won't be that far separated from an RX 460.


----------



## Kuivamaa

If I were AMD, I wouldn't set expectations for the APU too high. Sure, the tech is interesting, and it has helped them secure various semi-custom (like consoles) and embedded contracts. But in the PC space I cannot see them getting really appreciated. They could be the basis for decent gaming laptops that don't need heavy cooling, but that market is cornered by Intel. Laptop buyers are divided into two major categories: those that have a cursory understanding of the brands (who will prefer Intel no matter what) and those that will simply follow what the salespeople suggest (which will mostly be Intel again). The number of people buying Intel HD 3000 and HD 4000 based laptops for gaming when they would have been much better served by an APU speaks volumes. I mentioned years ago that AMD should focus on system builders first in order to build brand recognition and leave laptops for later, and they are now doing exactly that.


----------



## Shatun-Bear

Quote:


> Originally Posted by *Kuivamaa*
> 
> *If I was AMD I wouldn't put too high expectations on APU*. Sure the tech is interesting and it has helped them secure various semicustom (like consoles) or embedded contracts. But in the PC space I cannot see them getting really appreciated. They could be the base for decent gaming laptops that do not need heavy cooling but that market is cornered by intel. Laptop buyers are divided in two major categories. Those that have a cursory understanding of the brands (which will prefer intel no matter what) and those that will simply follow what the sales persons suggest (which will mostly be intel again). The amount of people buying intel HD 3000 and HD 4000 based laptops for gaming when they would have been much better served by an APU speaks volumes. I have mentioned years ago that AMD should focus on system builders first in order to build brand recognition and leave laptops for later. And they are now doing exactly that.


I disagree. They're the market leader when it comes to APUs, at least in terms of volume of sales thanks to all the consoles minus the upcoming Nintendo Switch. Sony and Microsoft will continue to use them and buy tens of millions of these AMD chips for their upcoming and future consoles. That's tens of millions of guaranteed sales and income. The cost efficiency that an APU brings over an Intel CPU + Nvidia GPU when we're talking mass orders is huge.

And before someone says 'but Intel's iGPUs have surpassed AMD's APUs in terms of gaming performance', and that nothing would stop the console manufacturers from just getting Intel to supply the processors for their future consoles, I would like to point out that there isn't an Intel iGPU even half as fast as the PS4 Pro's APU, or even the original PS4's. AMD does it best.


----------



## Kuivamaa

Quote:


> Originally Posted by *Shatun-Bear*
> 
> I disagree. They're the market leader when it comes to APUs, at least in terms of volume of sales thanks to all the consoles minus the upcoming Nintendo Switch. Sony and Microsoft will continue to use them and buy tens of millions of these AMD chips for their upcoming and future consoles. That's tens of millions of guaranteed sales and income. The cost efficiency that an APU brings over an Intel CPU + Nvidia GPU when we're talking mass orders is huge.
> 
> And before someone says 'but Intel's iGPUs on their CPUs has surpassed AMD's APU's in terms of gaming performance', and that there won't be anything stopping the console manufacturers just getting Intel to supply the processors for their future consoles, I would like to point out that there isn't an Intel iGPU that is even half as fast as the PS4 Pro's APU, or even the original PS4. AMD does it best.


Consoles aren't APUs per se, though, at least not in the PC-space sense; they are semi-custom solutions with exotic elements (ESRAM for the Xbox, GDDR5 for the PS4). What I argue is that AMD failed to gain any meaningful laptop market share between 2011 and 2014, when they had a big advantage in gaming performance in that particular segment ($400-500). People that wanted gaming performance still chose subpar machines with Intel integrated graphics. Come Zen, AMD will sell more laptop chips, but nowhere near as many as they should based on performance. I don't dispute the performance, rather their ability to penetrate that specific market.


----------



## epic1337

Quote:


> Originally Posted by *Fyrwulf*
> 
> It really depends on AMD's priorities. TSMC is claiming that you can get a 35% frequency improvement moving from 16nm to 10nm. I wouldn't be terribly surprised if going from 14nm to 7nm could get a frequency increase of 50% or more. A 16-CU iGPU clocked at 1900MHz is an awfully beefy e-sports GPU.


Clock-frequency gains have more to do with the general layout of the architecture; take Polaris's less-than-20% increase in max frequency, for example, versus Nvidia's 30% increase.
On that note, paths crossing in the wrong place can induce higher stray capacitance as the node shrinks, which means an immense increase in power draw at higher frequencies.


----------



## B NEGATIVE

It's at this point that I kinda regret buying my 6900K...


----------



## Gilles3000

Quote:


> Originally Posted by *B NEGATIVE*
> 
> It's at this point that I kinda regret buying my 6900K...


Maybe now is the time to sell, before resale prices nosedive. That assumes the SR7 is going to be significantly cheaper, which might not necessarily be the case.


----------



## B NEGATIVE

Quote:


> Originally Posted by *Gilles3000*
> 
> Maybe now is the time to sell, before resale prices nosedive. That assumes the SR7 is going to be significantly cheaper, which might not necessarily be the case.


No point. Any advantage of buying Zen would be lost in the resale.
And all this is moot if manufacturers don't step up with fully featured boards rather than the cast-off designs they used to run with for AMD...

I'm happy, but it's tinged with 'What if...'


----------



## GamerusMaximus

Quote:


> Originally Posted by *Ultracarpet*
> 
> I'm all for a powerful APU, I just doubt it's going to be the monster that everyone wants it to be. Other things to take into consideration are how much VRAM (HBM) they include (it could be hindered by 2GB or something stupid) and what the thermal envelope will allow. I don't have a very technical background, but I feel like, everything being equal, if there is a GPU in a discrete card and one built in alongside a CPU, the discrete one is probably going to be faster. So even if it's slightly bigger, and/or slightly superior in performance per CU, the iGPU probably won't be that far separated from an RX 460.


2GB of HBM would work well. As Intel showed, even a 128MB cache makes a world of difference in GPU performance.

A 2GB HBM cache for the GPU, combined with higher-frequency DDR4, would largely remove the bandwidth bottleneck.
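The bandwidth argument is easy to put numbers on: dual-channel DDR4 feeding both CPU and GPU falls far short of what even a small discrete card gets from GDDR5. A sketch of the standard interface math (peak theoretical figures, not AMD specs for any actual APU):

```python
def dram_bandwidth_gbs(channels, mt_per_s, bus_bits=64):
    """Peak DRAM bandwidth: channels * bus width in bytes * transfers/s."""
    return channels * (bus_bits / 8) * mt_per_s / 1000.0  # GB/s

# Dual-channel DDR4-3200, shared by the APU's CPU cores and iGPU:
print(dram_bandwidth_gbs(2, 3200))        # 51.2 GB/s

# RX 460 for comparison: 128-bit GDDR5 at 7000 MT/s, all for the GPU:
print(dram_bandwidth_gbs(1, 7000, 128))   # 112.0 GB/s
```

With less than half the bandwidth of a low-end discrete card, and that shared with the CPU, an on-package HBM cache is the obvious way to close the gap.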


----------



## Marios145

You people forget that the RX 460 consumes 75W as a whole board; that includes the PCIe interface, RAM, PCB, and VRMs, much of which is manufactured on 90nm+ processes.
Moving all of that onto a single die manufactured at 14nm would provide a big efficiency boost, and remember that all APUs share the IMC between GPU and CPU.


----------



## SoloCamo

Quote:


> Originally Posted by *Kuivamaa*
> 
> If I was AMD I wouldn't put too high expectations on APU. Sure the tech is interesting and it has helped them secure various semicustom (like consoles) or embedded contracts. But in the PC space I cannot see them getting really appreciated. They could be the base for decent gaming laptops that do not need heavy cooling but that market is cornered by intel. Laptop buyers are divided in two major categories. Those that have a cursory understanding of the brands (which will prefer intel no matter what) and those that will simply follow what the sales persons suggest (which will mostly be intel again). The amount of people buying intel HD 3000 and HD 4000 based laptops for gaming when they would have been much better served by an APU speaks volumes. I have mentioned years ago that AMD should focus on system builders first in order to build brand recognition and leave laptops for later. And they are now doing exactly that.


I spent a good deal less than $300 on my laptop and it is ridiculously better than any Intel laptop at this price point for gaming. Unfortunately, you are probably right that the majority of buyers wouldn't know better.

The low-end sub-$400 laptop market could be absolutely owned by AMD if consumers were even slightly aware and if manufacturers ever actually pushed AMD instead of Intel. Looking at Intel-based laptops in this price range is just disgusting.


----------



## GamerusMaximus

Quote:


> Originally Posted by *SoloCamo*
> 
> I spent a good deal less than $300 on my laptop and it is ridiculously better than any Intel laptop at this price point for gaming. Unfortunately, you are probably right that the majority of buyers wouldn't know better.
> 
> The low-end sub-$400 laptop market could be absolutely owned by AMD if consumers were even slightly aware and if manufacturers ever actually pushed AMD instead of Intel. Looking at Intel-based laptops in this price range is just disgusting.


My first laptop was an asus k53ta, the second was a lenovo thinkped e535. Both APU powered, both awesome.

It's saddening that nobody took APUs seriously. Only dell and toshiba made 14 inch APU notebooks, and those were llano. Nobody made a 14 inch 35 watt trinity machine, and kaveri was completely MIA. The few machines that did exist were limited to 15 watt tdps, single channel RAM, ece. And often they had weak dedicated GPUs attached that added nothing and only jacked up the price.

I would love a 35-watt Raven Ridge chip in a 14-inch laptop form factor, like the ThinkPad E series or the Latitudes. I'd pay good money for it too, but nobody seems interested in making one.


----------



## Quantum Reality

Quote:


> Originally Posted by *GamerusMaximus*
> 
> Quote:
> 
> 
> 
> Originally Posted by *SoloCamo*
> 
> I spent a good deal less than $300 on my laptop and it's ridiculously better than any Intel laptop at this price point for gaming. Unfortunately, you're probably right that the majority of buyers wouldn't know better.
> 
> The low-end, sub-$400 laptop market could be absolutely owned by AMD if consumers were even slightly aware and if manufacturers ever actually pushed AMD instead of Intel. The Intel-based laptops in this price range are just disgusting.
> 
> 
> 
> My first laptop was an ASUS K53TA, the second a Lenovo ThinkPad E535. Both APU-powered, both awesome.
> 
> It's saddening that nobody took APUs seriously. Only Dell and Toshiba made 14-inch APU notebooks, and those were Llano. Nobody made a 14-inch, 35-watt Trinity machine, and Kaveri was completely MIA. The few machines that did exist were limited to 15-watt TDPs, single-channel RAM, etc. And they often had weak dedicated GPUs attached that added nothing and only jacked up the price.
> 
> I would love a 35-watt Raven Ridge chip in a 14-inch laptop form factor, like the ThinkPad E series or the Latitudes. I'd pay good money for it too, but nobody seems interested in making one.
Click to expand...

I have to say, until seeing this thread I wasn't seriously considering moving to an AMD laptop, but if an A10-based one is actually decent for gaming I'd consider it. Anyway, this is probably drifting a bit too far off topic, but you're welcome to PM me with suggestions and comments.


----------



## sboub78

ES 8C/16T - 3.15 GHz / 3.30 GHz Turbo









Source: CanardPC Hardware, the best French hardware magazine.


----------



## Colin1204

Quote:


> Originally Posted by *sboub78*
> 
> 
> 
> ES 8C/16T - 3.15 GHz / 3.30 GHz Turbo
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Source: CanardPC Hardware, the best French hardware magazine.


I can't read French, and this is a pic so no translation. Anyone want to sum this up? I'm guessing it says a quad core hits 3.8 GHz with a 4.0 to 4.2 GHz turbo?


----------



## sboub78

He says the 8C/16T CPUs are on the right track, but that AMD must quickly release the 4C/8T CPUs at 3.8/4.2 GHz if it wants to match Kaby Lake's performance, especially in gaming.

Sorry, I'm doing my best to translate.







I hope you will understand.


----------



## Raghar

Quote:


> Originally Posted by *Colin1204*
> 
> I can't read French, and this is a pic so no translation. Anyone want to sum this up? I'm guessing it says a quad core hits 3.8 GHz with a 4.0 to 4.2 GHz turbo?


The report is in a weird language full of difficult words, which I assume might be French.

It says the CPU was limited by its low 3.3 GHz frequency in the tests. It also says it's 1.35x faster than an FX-8370 at the same frequency.
Then it talks about Intel's arrogance and high-end CPU pricing.

Of course, I don't know French. A proper translation would probably be better than my summary.

What I personally think: the CPU drew only 93 W because of this low frequency; if it had to be stable at a higher frequency, the voltage might go up fast.
I also think AMD made a bad move by marketing these CPUs against HEDT. HEDT is next to irrelevant; pitching a decent 8-core as an upgrade to AMD's mainstream CPUs would be smarter.


----------



## sepiashimmer

This is so unfair! Why didn't they release the benchmark in English, which is practically a global language? I'd really like to know who here can read French and translate. Any members from France?


----------



## GorillaSceptre

Quote:


> Originally Posted by *sepiashimmer*
> 
> This is so unfair! Why didn't they release benchmark in English which is almost a global language? I'd really like to know who here can read French and translate? Any members from France?


http://www.overclock.net/t/1619110/cpc-first-unofficial-ryzen-benchmarks/90#post_25730623


----------



## sepiashimmer

Quote:


> Originally Posted by *GorillaSceptre*
> 
> http://www.overclock.net/t/1619110/cpc-first-unofficial-ryzen-benchmarks/90#post_25730623


Thanks


----------



## Fyrwulf

Okay, so they got hold of an underclocked ES without the SenseMI technologies enabled. Why do we care what the bench results are?


----------



## Fancykiller65

Quote:


> Originally Posted by *Fyrwulf*
> 
> Okay, so they got ahold of an underclocked ES without the SenseMI technologies enabled, why do we care what the bench results are?


If you read the article, http://www.overclock.net/t/1619110/cpc-first-unofficial-ryzen-benchmarks/90#post_25730623, these benchmarks, even from an ES without SenseMI and the rest, suggest that AMD's presentations and IPC claims are not too far off (I want to wait for retail chip reviews before saying it's made it) and that the architecture is no slouch. That's really a lot of info when, up to now, we'd only heard rumors and claims from AMD that couldn't be verified by third parties.


----------



## dmasteR

Quote:


> Originally Posted by *Fancykiller65*
> 
> If you read the article, http://www.overclock.net/t/1619110/cpc-first-unofficial-ryzen-benchmarks/90#post_25730623, these benchmarks, even from an ES without SenseMI and the rest, suggest that AMD's presentations and IPC claims are not too far off (I want to wait for retail chip reviews before saying it's made it) and that the architecture is no slouch. That's really a lot of info when, up to now, we'd only heard rumors and claims from AMD that couldn't be verified by third parties.


How do you know SenseMI isn't implemented into the ES Sample?


----------



## Fancykiller65

Quote:


> Originally Posted by *dmasteR*
> 
> How do you know SenseMI isn't implemented into the ES Sample?


I don't know, unfortunately, which is why I'm staying cautious until retail chips are reviewed.


----------



## KarathKasun

AI has arguably been in CPUs since prefetch algorithms came around. SenseMI seems to be just a re-branding of existing tech to make people feel better about a purchase, or to convey a feeling of high technology through buzzwords.

TL;DR: it's just a marketing gimmick.


----------



## TheBloodEagle

It must suck then. Pack it up folks. There's nothing here apparently.


----------



## KarathKasun

Quote:


> Originally Posted by *TheBloodEagle*
> 
> It must suck then. Pack it up folks. There's nothing here apparently.


No, it's just useless marketing jargon. The CPU may be good or even great, but AMD is trying really hard to muddy the waters with its new products. Everything gets some marketing jargon attaching it to AI or machine learning in some way.


----------



## Fyrwulf

Quote:


> Originally Posted by *KarathKasun*
> 
> No, its just useless marketing jargon. The CPU may be good or great, but AMD is trying really hard to muddy the waters with their new products. Everything has some sort of marketing jargon attaching it to AI or machine learning in some way.


Or AMD could have integrated wavefront processor concepts into the architecture to improve branch prediction.


----------



## Fyrwulf

Quote:


> Originally Posted by *dmasteR*
> 
> How do you know SenseMI isn't implemented into the ES Sample?


Because there's no mention in the article of the processor overclocking itself. Since the article was written before the New Horizon event, the author could have spilled the beans and earned major credit by revealing something AMD had kept secret.


----------



## sepiashimmer

I hope AMD doesn't lock overclocking to software that has to run through the operating system; that opens up security issues. For example, someone could hack into a PC and override all the settings, or force settings that would burn the chip.


----------



## eTheBlack

Quote:


> Originally Posted by *sepiashimmer*
> 
> I hope AMD doesn't lock overclocking to software that has to run through the operating system; that opens up security issues. For example, someone could hack into a PC and override all the settings, or force settings that would burn the chip.


That won't happen unless they also hack the firmware of the motherboard/CPU. If you reach 100°C, or whatever the limit is, the system will shut down.


----------



## Yuhfhrh

Quote:


> Originally Posted by *eTheBlack*
> 
> That wont happen, unless they also hack firmware of MB/CPU. If you reach 100C or whatever is limit, system will shut down.


You can set a high system agent voltage on Intel chips, something like 1.6V+, which can cause near instant degradation/death without high temperatures.


----------



## eTheBlack

Quote:


> Originally Posted by *Yuhfhrh*
> 
> You can set a high system agent voltage on Intel chips, something like 1.6V+, which can cause near instant degradation/death without high temperatures.


You can set voltage thru software? Damn, then yeah, I can only hope too


----------



## Yuhfhrh

Quote:


> Originally Posted by *eTheBlack*
> 
> You can set voltage thru software? Damn, then yeah, I can only hope too


That's the reason I don't leave motherboard utilities, like AI Suite, installed on my systems.


----------



## budgetgamer120

Quote:


> Originally Posted by *eTheBlack*
> 
> You can set voltage thru software? Damn, then yeah, I can only hope too


Setting voltage via software is nothing new. It sucks when you set a voltage to auto-load on boot and then get an instant BSOD. Then you have to reformat the PC.


----------



## Hattifnatten

Quote:


> Originally Posted by *budgetgamer120*
> 
> Setting voltage via software is nothing new. It sucks when you set a voltage to auto-load on boot and then get an instant BSOD. Then you have to reformat the PC.


Safe-mode -> Uninstall software and/or delete profile
Reboot


----------



## KarathKasun

Quote:


> Originally Posted by *Fyrwulf*
> 
> Or AMD could have integrated wavefront processor concepts into the architecture to improve branch prediction.


I doubt it. It more likely has to do with intelligent micro-op execution ordering to make the most of SMT. If so, Intel has been doing that for a long time.


----------



## h2323

Quote:


> Originally Posted by *os2wiz*
> 
> Where did you get the idea Zen + will be in late 2017 and even more incredible that it will be on a different 14nm process. GF does NOT have a superior 14nm process to 14 LPP only IBM does. Also Zen+ is supposed to use a different socket than Zen. If you are implying Samsung has a better implementation of 14 LPP than GF , I suspect that may not be true. Samsung has zero experience in producing large core cpus. Virttually all their 14 LPP production goes to small core cpus for phones and tablets.


Where did you get the idea that Zen+ has a different socket? There's zero info on that...

Zen+ has been in the works since Keller was still with them. AMD material shows a strong push and a strong roadmap to be unveiled in Q1.

GF is working with Samsung, and AMD has been paying penalties to "another foundry" every quarter of 2016.

All of the marketing material, the Hot Chips talk, the New Horizon event, and the conference calls point to a "this is just the beginning" theme.

7nm is a ways out; superior, high-yield 14nm processes are ideal for Zen+, and AMD is rumored to have it completed (although I don't believe that).

A new socket for Zen+ makes no sense, and there's no info on that.


----------



## h2323

Quote:


> Originally Posted by *Fyrwulf*
> 
> Okay, there's so much nonsense being slung around right now my world is starting to look brown. To address some of these things point by point:
> 
> 
> Samsung has a bunch of 14nm iterations. 14LPE (Low Power Early), the first generation 14nm process which has been phased out; 14LPP (Low Power Plus), a higher performance iteration in present use; 14LPC (Low Power Cost-reduced), which has the same design rules and power as 14LPP but at a lower cost; and 14LPU, which has the same power envelope and design rules, but a higher performance.
> If there's a mid-cycle update for Zen, it will be on the 14LPU node, which will become available 2Q17.
> Zen+ isn't dropping until 2019 and so far the only leak we've seen relates to Grey Hawk, which is the Zen+ APU.
> Zen+ will use 7nm, which GloFo and Samsung are co-developing.
> Zen+ is going to use the same socket as Zen. Lisa Su made a point of stating that in the original Zen reveal, go back and watch it if you don't believe me.
> Everything all clear now? Good.


2019 would be Zen++


----------



## atomicmew

Quote:


> Originally Posted by *KarathKasun*
> 
> AI has arguably been in CPU's since prefetch algos came around. SenseMI seems to just be a re-branding of already existing tech to make people feel better about a purchase or convey a feeling of high technology through the use of buzzwords.
> 
> TL;DR, its just a marketing gimmick.


It's a pure buzzword. If you know a bit about machine learning, the claim that they used a neural net is suspicious, because neural nets are actually NOT computationally efficient non-linear algorithms.

According to TR, the actual algorithm is just a hashed perceptron, a linear predictive model. If that's true, AMD is using "neural network" very, very loosely, which is disappointing. It's also not doing any learning on the fly whatsoever.
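For anyone curious what a hashed-perceptron predictor actually looks like, here's a toy sketch. This is purely my own illustration, not AMD's unpublished implementation; the table size, history length, and threshold values are made up. The point is that the prediction is just the sign of a dot product between a weight vector (selected by hashing the branch address) and the recent branch history, i.e. exactly the linear model described above.

```python
# Toy sketch of a hashed-perceptron branch predictor (illustrative only;
# not AMD's actual design, whose details are unpublished).
class PerceptronPredictor:
    def __init__(self, num_rows=64, history_len=12, threshold=20):
        self.threshold = threshold
        # one weight vector per table row; index 0 is a bias weight
        self.table = [[0] * (history_len + 1) for _ in range(num_rows)]
        self.history = [1] * history_len  # +1 = taken, -1 = not taken

    def _row(self, pc):
        return pc % len(self.table)  # stand-in for a real hash of the PC

    def predict(self, pc):
        w = self.table[self._row(pc)]
        # linear model: bias plus dot product of weights and history bits
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
        return y >= 0, y

    def update(self, pc, taken):
        pred, y = self.predict(pc)
        t = 1 if taken else -1
        w = self.table[self._row(pc)]
        # train only on a mispredict or a low-confidence correct prediction
        if pred != taken or abs(y) <= self.threshold:
            w[0] += t
            for i, hi in enumerate(self.history):
                w[i + 1] += t * hi
        self.history = self.history[1:] + [t]  # shift in the new outcome
```

A strictly alternating branch, for example, is linearly separable on its history, so this predictor learns it after a handful of updates; no non-linear "neural net" machinery is involved.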


----------



## sepiashimmer

Does anyone here think ThreadRipper splits a single thread into multiple fragments and runs them on many cores? Could that be how it's matching Broadwell IPC?


----------



## cssorkinman

Quote:


> Originally Posted by *sepiashimmer*
> 
> Does anyone here think ThreadRipper splits up a single thread into multiple fragments and runs them on many cores? Could that be how it is matching Broadwell IPC?


Lots of people think AMD showed its performance against Intel using programs that use multiple threads because that's where it's most competitive. If you don the tinfoil tiara, the thought crosses the idle mind that they might not have wanted to reveal something revolutionary about Threadripper.

That's why they didn't release any benchmarks showing single-thread performance (gathered from the far reaches of my imagination).

*Doffing tinfoil tiara*

At any rate, I'm very excited about those MSI boards and the rather unique power connections on the Titanium board.


----------



## Raghar

Quote:


> Originally Posted by *sepiashimmer*
> 
> Does anyone here think ThreadRipper splits up a single thread into multiple fragments and runs them on many cores?


You can't do that, because the communication between cores would kill performance. Software that can do its work asynchronously with high cache coherency, even when heavily multithreaded, uses multiple threads explicitly.
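To illustrate the distinction (my own example, nothing to do with Ryzen specifically): "using multiple threads explicitly" means the programmer partitions the work up front; the CPU never fragments a single instruction stream across cores on its own.

```python
# Minimal sketch of *explicit* multithreading: the programmer divides the
# work into independent chunks; the hardware does not split one thread.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Sum `data` by explicitly splitting it into per-worker chunks."""
    chunk = max(1, (len(data) + workers - 1) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # each chunk is summed on its own thread, then combined once
        return sum(pool.map(sum, parts))

print(parallel_sum(list(range(1000))))  # 499500
```

Here the only cross-thread communication is the final combine; splitting work any finer than that is exactly where inter-core communication costs start to eat the gains.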


----------



## PiOfPie

Quote:


> Originally Posted by *sepiashimmer*
> 
> Does anyone here think ThreadRipper splits up a single thread into multiple fragments and runs them on many cores? Could that be how it is matching Broadwell IPC?


More than likely it's just an extremely efficient design in terms of IPC (at least relative to the Construction cores). Dual-core Cyclone was able to match the IPS of competing quad-core ARM designs like Snapdragon and Exynos back in the day (I don't know if that's still the case). Keller is that good.

Both Intel and AMD have researched reverse multithreading, but nothing's really come of it.


----------



## sepiashimmer

Something about the name "ThreadRipper" is making me think like that, like ripping threads apart.


----------



## TerlChamco

The only thing I care about and see coming is at least a Phenom II-era PC, ten years later. Keller was at DCCA when "hyperthreading" was born; if I remember right he was part of the team that concocted it, so he most likely remembers how to work it in.

And Intel pretty much dropped the ball with Kaby Lake: the only gains made are from clock speed, and the BIOS drops too much voltage on the chip, making them cook at load if you don't know any better.

All AMD has to do now is have a clean launch, and AMD grabs market share by the ton. From what I've seen, motherboard manufacturers are thrilled about this processor.

And from the peek at MSI motherboards I just saw: the A320 Tomahawk is not really shabby looking; the B350 has two M.2 slots, all USB 3.1 headers (no 3.0), about 6-7 fan headers on board, 6 SATA ports, and dual armored PCIe slots; the X370 is about the same, only it has a 12-pin CPU header?

Go to the 8:06 mark for the full pics.

https://youtu.be/_4Lvh8antRk


----------



## lolerk52

Quote:


> Originally Posted by *sepiashimmer*
> 
> Something about the name "ThreadRipper" is making me think like that, like ripping threads apart.


Maybe ripping through threads, like chewing through them quickly.


----------



## Poppinj

Quote:


> Originally Posted by *Hattifnatten*
> 
> Safe-mode -> Uninstall software and/or delete profile
> Reboot


LOL. This. No need to reformat.


----------



## CULLEN

Is there an official release date?


----------



## dmasteR

Quote:


> Originally Posted by *CULLEN*
> 
> Is there an official release date?


Quote:


> Speaking of which, while AMD wouldn't commit to a hard launch date for Ryzen, Hallock did give a glimpse at when not to expect Ryzen, which is anytime this quarter. "When companies say first quarter or first half, people assume that means the very end of that time frame," Hallock said. "The very last day of Q1 is not our trajectory."


Source: http://www.pcworld.com/article/3155109/computers/new-amd-ryzen-details-revealed-overclocking-crossfire-lineup-info-and-more.html

So no exact date, but it sounds like a late February/early March launch.

Also a few slides that I honestly don't know if they've been posted yet, but here you all go!


----------



## cmpxchg8b

Quote:


> Hallock did give a glimpse at when _not_ to expect Ryzen, which is anytime this quarter.


This reads as if Zen is _not_ going to be available in Q1?


----------



## dmasteR

Quote:


> Originally Posted by *cmpxchg8b*
> 
> This read as if Zen is _not_ going to be available in Q1?


They butchered his quote.


----------



## epic1337

Quote:


> Originally Posted by *dmasteR*
> 
> Source: http://www.pcworld.com/article/3155109/computers/new-amd-ryzen-details-revealed-overclocking-crossfire-lineup-info-and-more.html
> 
> So no exact date, but it sounds like a late February/Early March launch.
> 
> Also a few slides that I honestly don't know if they've been posted yet, but here you all go!
> 
> 
> Spoiler: Warning: Spoiler!


What, does this mean Zen + X370 would only have 8+8+8+4 lanes?
Note the PCH only has 8 multiplexed Gen 2.0 lanes, plus the CPU's 8+8 Gen 3.0 lanes for CrossFire support and an additional x4 for an NVMe M.2.


----------



## Quantum Reality

Question: will AMD pull an "Intel" and start blocking overclocking on cheaper motherboards to force people to buy its flagship chipset(s)? If not, that will be a rather big selling point, not so much on technical grounds as on "political" grounds: leaving it to end users to decide how far to push their systems, which is how it should be.


----------



## dmasteR

Quote:


> Originally Posted by *Quantum Reality*
> 
> Question. Will AMD pull an "Intel" and start blocking overclocking on cheaper motherboards to force people to buy their flagship chipset(s)? If not, that will be a rather big selling point not so much on technical grounds per se as "political" grounds in the sense of leaving it to end users to decide how far to push their systems, which is how it should be.


According to what I posted above, X370 and B350 will be able to overclock; the cheaper A320 will not.


----------



## AlphaC

Quote:


> Originally Posted by *Quantum Reality*
> 
> Question. Will AMD pull an "Intel" and start blocking overclocking on cheaper motherboards to force people to buy their flagship chipset(s)? If not, that will be a rather big selling point not so much on technical grounds per se as "political" grounds in the sense of leaving it to end users to decide how far to push their systems, which is how it should be.


AMD is going to allow overclocking on the mainstream B350, similar to the old 990X chipset (so X370 would be analogous to 990FX).

Don't expect top OC performance on B350 though; preliminary boards have what looks to be a VRM on par with cheap H170 or low-end Z170 boards.

A320 was mainly made for the A12 APUs.


----------



## tashcz

2 SATA ports? What?


----------



## krabs

The midrange board looks like a winner... most users don't run dual graphics cards.


----------



## epic1337

Quote:


> Originally Posted by *dmasteR*
> 
> According to what I posted above. X370 and B350 will be able to overclock, however the cheaper A320 will not be able to.


A320 boards wouldn't have the power handling to take on OCed chips anyway.
Most of the ones I've seen use 3-phase VRMs, far from sufficient for handling overclocked 4-core chips.


----------



## JakdMan

Quote:


> Originally Posted by *krabs*
> 
> The midrange board looks like a winner ... most users don't use double graphic card.


The way some of you rag on it, you'd think they should just stop making anything for the high end and power users.


----------



## VeritronX

AMD let linus see the hardware that ran the test they showed off: https://www.youtube.com/watch?v=vMfNz2SXVLk

The 6900K system was only running in dual channel... not that it makes any difference, as we've seen from the ASRock ITX X99 board.

Also, Australian PC hardware seller PC Case Gear had Ryzen systems on display at CES, so they've already had the chip here in Aus and will hopefully hard launch. =D


----------



## pony-tail

Quote:


> Originally Posted by *VeritronX*
> 
> AMD let linus see the hardware that ran the test they showed off: https://www.youtube.com/watch?v=vMfNz2SXVLk
> 
> The 6900K system was only running in dual channel... not that it makes any difference, as we've seen from the ASRock ITX X99 board.
> 
> Also, Australian PC hardware seller PC Case Gear had Ryzen systems on display at CES, so they've already had the chip here in Aus and will hopefully hard launch. =D


It's usual for Australia to get its stuff last!
If PC Case Gear has them, I doubt they're far from shipping!


----------



## TerlChamco

AMD is name-cloning, in a way...

X370 = Z270, with an extra connector on some boards for better power stability
B350/X350 = Z250
A320 = lower end
X300 = ITX/mITX and is overclockable

Don't be surprised if you see the Radeon X80 matching the 1080.


----------



## h2323

Told ya, 7nm is a ways out. Papermaster confirmed today: Zen+ is 14nm.

You'll see Zen+ this year.


----------



## Woundingchaney

Well, this thread has grown quite a bit since I last saw it. If someone doesn't mind summarizing for me: do we have any indication yet of performance relative to Intel's i7 lineup?

Thank you


----------



## Gilles3000

Quote:


> Originally Posted by *Woundingchaney*
> 
> Well this thread has grown quite a bit since I have last seen it. If someone doesnt mind summarizing for me, I was wondering if we have any indication of performance metrics in regards to Intels i7 lineup as of yet.
> 
> Thank you


Basically, what we know so far is:
The 8-core Ryzen part seems to be competitive with the 6900K in certain multi-threaded applications, and keeps up in certain games too. All demos so far were done with no boost, or unfinished boost.
All Ryzen CPUs will be overclockable with the X370, B350, and X300 chipsets.

Aaand that's about it.


----------



## magnek

Quote:


> Originally Posted by *Gilles3000*
> 
> Basically, what we know so far is:
> The 8-core Ryzen part seems to be competitive with the 6900K in certain multi-threaded applications, and keeps up in certain games too. All demos so far were done with no boost, or unfinished boost.
> All Ryzen CPUs will be overclockable with the X370, B350, and X300 chipsets.
> 
> Aaand that's about it.


pfffffffffffffffffffffffffffffffffffffffffffffft

*The real, factual summary*

8-core Ryzen is kicking the 6900K's ass even as an ES chip with no/low turbo. The retail product will be a monster AND it'll go for <$500, which means Ryzen will be an Intel killer. The south AMD will ryze again thanks to Ryzen.


----------



## Silent Scone

Not sure if parody?


----------



## magnek

We need a [sarcasm] tag.


----------



## Silent Scone

You were convincing!


----------



## cmpxchg8b

Quote:


> Originally Posted by *magnek*
> 
> *The real, factual summary*


You forgot 5 GHz on air. Get your facts straight!


----------



## ciarlatano

Quote:


> Originally Posted by *cmpxchg8b*
> 
> You forgot 5 GHz on air. Get your facts straight!


I thought it did 5 GHz easy with the included cooler.....


----------



## cssorkinman

Quote:


> Originally Posted by *ciarlatano*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cmpxchg8b*
> 
> You forgot 5 GHz on air. Get your facts straight!
> 
> 
> 
> I thought it did 5GHZ easy with the included cooler.....
Click to expand...

Vishera did 5.2 GHz on the stock cooler.


Spoiler: Warning: Spoiler!


Probably not a good idea to put much of a load on it, though.


----------



## Fyrwulf

Quote:


> Originally Posted by *h2323*
> 
> Told ya, 7nm is a ways out. Papermaster confirmed today: Zen+ is 14nm.


No he didn't. He said the AM4 socket and chipsets will last four years.


----------



## Pro3ootector

Looking at the Nintendo Switch, I think of a laptop with an integrated 16nm Xbox One APU
and a Windows 10 OS. Gosh, that thing would be overkill. And the mobility. But I guess Windows is still too buggy for such a new horizon.


----------



## IRobot23

Quote:


> Originally Posted by *Fyrwulf*
> 
> No he didn't. He said the AM4 socket and chipsets will last four years.


Maybe even AM4... you might see AM4+ in 2019. But I think what he meant was that a new architecture will come after four years; for those four years we'll only see upgrades of the Zen core.

With tock, tock, tock = big upgrades = 10-20% IPC improvement each

But in 2021 we could see a different architecture succeed Zen.
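For scale, three compounding "tocks" in that 10-20% range would put the final core well ahead of the original Zen. This is my own back-of-the-envelope arithmetic on the post's hypothetical figures, not anything from an AMD roadmap:

```python
# Compound three generational IPC steps of +10% and +20% respectively
# (hypothetical figures from the post, not AMD roadmap numbers).
low = 1.10 ** 3   # three +10% tocks
high = 1.20 ** 3  # three +20% tocks
print(f"cumulative IPC gain: {low:.3f}x to {high:.3f}x")  # 1.331x to 1.728x
```

So even the pessimistic end of that range compounds to roughly a third more IPC over the life of the socket.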


----------



## TerlChamco

Process, architecture, optimize has always been what AMD does with chips.
Intel's tick-tock moved to the same thing:
P.A.O.


----------



## Fyrwulf

Quote:


> Originally Posted by *TerlChamco*
> 
> Process, architecture, optimize has always been what AMD does with chips.
> Intel's tick-tock moved to the same thing:
> P.A.O.


K7, K8, and K10 were about as similar to each other as Nehalem and Skylake.


----------



## TerlChamco

Quote:


> Originally Posted by *Fyrwulf*
> 
> K7, K8, and K10 were about as similar to each other as Nehalem and Skylake.


That was 20 years ago;
we're talking modern terminology.
Athlon II / Phenom / Phenom II / Thuban were all tweaks of the Athlon core.


----------



## Fyrwulf

Quote:


> Originally Posted by *TerlChamco*
> 
> 20 years ago
> we are talking modern terminology.
> athlon II /Phenom/phenom II /thuban all tweaks of athlon core


And you're missing my point. Those revisions were fairly significant departures from each other compared to what Intel has done each generation since Nehalem. Also, K7 wasn't even around 20 years ago.


----------



## TerlChamco

Quote:


> Originally Posted by *Fyrwulf*
> 
> And you're missing my point. Those revisions were fairly significant departures from each other compared to what Intel has done each generation since Nehalem. Also, K7 wasn't even around 20 years ago.


The basic Athlon K8 core ran for eight years, with tweaks along its life span.
Same thing with Zambezi:
Bulldozer (32nm), Piledriver, Steamroller, Excavator (28nm), five years total, all with major tweaks over the basic core design's life; the last is completely different from the first.
Now we have the basic Zen core design:
Ryzen, then Zen+, or whatever they name it.
The Intel Core design has run six years; this will be its seventh.


----------



## dagget3450

I have a feeling this will get shoes thrown at me. Just like the 1080 threads around its release, this massive number of Zen threads is a bit too much. I wish there was a way to mash them together.

Too many threads to keep track of; maybe I need a multithreaded CPU for it.


----------



## Hueristic

Quote:


> Originally Posted by *dagget3450*
> 
> I have a feeling this will get shoes thrown at me. Just like the 1080 threads around its release, this massive number of Zen threads is a bit too much. I wish there was a way to mash them together.
> 
> Too many threads to keep track of; maybe I need a multithreaded CPU for it.


Or maybe, if a thread has had no activity in 8 hours and you don't like it, you shouldn't BUMP it?

Whaddaya think?

FYI Mods do merge threads.


----------



## dagget3450

Quote:


> Originally Posted by *Hueristic*
> 
> Or maybe if a thread has had no activity in 8 Hours and you don't like it then You shouldn't BUMP it?
> 
> Wadda ya think?
> 
> FYI Mods do merge threads.


Maybe. I guess this was the original thread for Zen, with official but cloudy information from AMD. What I was referring to was the number of threads now popping up for individual pieces of unofficial information that are mostly rumors at best.


----------



## h2323

Quote:


> Originally Posted by *Fyrwulf*
> 
> No he didn't. He said the AM4 socket and chipsets will last four years.


Nope....Tock Tock Tock...Zen Zen+ Zen++ all 14nm. You'll see Zen + fast, this year, very late.


----------



## cirov

Will there be a 7700K price drop after Zen is out? What do you think?


----------



## OutlawII

Quote:


> Originally Posted by *cirov*
> 
> Will there be 7700k price drop after zen is out what do you think?


Probably not; there still won't be any competition.


----------



## oxidized

why keep bumping this thread


----------



## os2wiz

Quote:


> Originally Posted by *h2323*
> 
> Nope....Tock Tock Tock...Zen Zen+ Zen++ all 14nm. You'll see Zen + fast, this year, very late.




Zen+ is not due out until 2019 and is supposed to be on 7nm, so where are you getting this change to the AMD roadmap? There has been zero word about Zen++; this sounds like rumor creation on your part, or complete speculation by some rumor-mongering website you visited.


----------



## TerlChamco

Quote:


> Originally Posted by *os2wiz*
> 
> Zen+ is not due out until 2019 and is supposed to be on 7nm, so where are you getting this change to the AMD roadmap? There has been zero word about Zen++; this sounds like rumor creation on your part, or complete speculation by some rumor-mongering website you visited.


14nm will be a long node; 7nm will be 2019-2020.
10nm is supposedly a bad node that offers no real power efficiency or speed gains, so it's being skipped in favor of 7nm.


----------



## Tojara

Quote:


> Originally Posted by *os2wiz*
> 
> Zen+ is not due out until 2019 and is supposed to be on 7nm, so where are you getting this change to the AMD roadmap? There has been zero word about Zen++; this sounds like rumor creation on your part, or complete speculation by some rumor-mongering website you visited.


One of AMD's top-level CPU guys said that Zen is going tock-tock-tock, so I find it incredibly unlikely that Zen+ would only arrive in 2019. That doesn't mean the later iterations are still on 14nm, only that each iteration brings an architectural change.


----------



## TerlChamco

Quote:


> Originally Posted by *Tojara*
> 
> One of AMD's top level CPU guys said that Zen is going tock-tock-tock, so I find it incredibly unlikely that Zen+ would only be 2019. That does not mean that the last ones are still on 14nm, only that each iteration had an architectural change.


This. Since 14nm is going to be a long node, Zen+ will be on 14nm (a refresh with optimizations);
7nm will likely be Zen++.


----------



## Mahigan

I can't wait to replace this AMD FX 8350 rig with a RyZen powered behemoth












----------



## dagget3450

Quote:


> Originally Posted by *Mahigan*
> 
> I can't wait to replace this AMD FX 8350 rig with a RyZen powered behemoth
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 


I've been wondering about Ryzen's chances as an AMD product. It reminds me of a video I saw about how AMD/ATI had better GPUs than NVIDIA in both price and performance, yet NVIDIA still outsold them 3 or 4 to 1. I have a feeling that may repeat with Zen...

my objective proof:


Source HWbot main page survey

Almost 67%, give or take, won't buy Zen at all, or will only consider it if benchmarks are good (which I'm sure means it would need to be faster than Intel's current offerings). This is among enthusiasts and overclockers, a small community, I know. Unless the price is really good, I suspect people will still read a lower price as a sign of an inferior product. It would really suck if it comes out of the gate strong in price and performance and gets overlooked simply due to branding. Time will tell, I guess.


----------



## dmasteR

Quote:


> Originally Posted by *dagget3450*
> 
> I was wondering about Ryzen's chance as an AMD product. Its got me wondering if it will be like the video i saw about how AMD/ATI had better gpus than nvidia both in price and performance and nvidia still outsold them 3 or 4 to 1. Well i have a feeling it may repeat with Zen...
> 
> my objective proof:
> 
> 
> Source HWbot main page survey
> 
> Almost 67% give or take won't buy zen at all, or may consider it if benchmarks are good. (that i am sure means it would need to be faster than Intel current offerings) This is among enthusiasts and overclockers a small community i know. I mean unless the price is really good to help sell it, i suspect people will still attribute a lower price to some sort of inferior product. Really sucks if it comes out the gate strong in price and performance and gets overlooked simply due to branding. So time will tell i guess.


Did you even read the Poll's Title?

It says specifically on *Launch Day*.

That video you mention also completely ignores the 4000-series drivers, which IMO were awful. Sure, the performance was better than NVIDIA's, but the drivers were terrible.

I owned a 4870, and I remember using OMEGA drivers just to be able to use my GPU. Those were not official AMD drivers; they were third-party drivers that fixed everything wrong with the AMD ones.


----------



## dagget3450

Quote:


> Originally Posted by *dmasteR*
> 
> Did you even read the Poll's Title?
> 
> It says specifically on *Launch Day*.
> 
> That video that you mention completely forgets about the 4000 series drivers as well which IMO were completely awful. Sure the performance was better than NVIDIA, but the drivers were the terrible.
> 
> I owned a 4870, and remember using OMEGA drivers to actually be able to use my GPU. Which were not AMD official drivers, they were 3rd party drivers that fixed everything wrong about the AMD drivers.


I'm willing to bet that on HWBot far more people buy CPUs on launch day for the sole purpose of grabbing first-place ranks. What I was really getting at is the stigma attached to AMD around delivery/trust/branding. Right now performance is unknown, so that could skew the poll numbers as well. I think even if AMD comes out with a better-performing, better-priced CPU than Intel, it still has a huge hump to get over in branding/sales/marketing.

I didn't even vote, and I'm also not sure what Zen brings yet. I'd be gaming more than anything, or benching, so I'm super interested in its gaming performance/IPC/single-thread. I consider myself more of an AMD loyalist, but even I am stuck on Intel CPUs unless Zen breaks the mold.


----------



## dmasteR

Quote:


> Originally Posted by *dagget3450*
> 
> i'm willing to bet on hwbot way more people buy cpus on launch day for the sole purpose of getting first ranks. I guess what i was looking at is the stigma attached to AMD on delivering/trust/branding. Right now performance is unknown so that could skew those poll numbers as well. I think if AMD comes out with a better performing and priced cpu over intel it still has a huge hump to get over with branding/sales/marketing.
> 
> I myself didn't even vote and i also am not sure what Zen brings yet. I would be gaming more than anything or benching. So i am super interested in it's gaming performance/ipc/single thread - I consider myself more an AMD loyalists but even i am stuck on intel cpus unless Zen breaks the mold.


AMD needs a presence in servers; that's where they will really make their money. The way things are looking, AMD should have no problem with that once RyZen launches.

Companies buying these by the truckload don't care whether it's Intel- or AMD-branded.


----------



## dagget3450

Quote:


> Originally Posted by *dmasteR*
> 
> AMD needs their presence in Servers, that's where AMD will really make their money. With the way things are looking, AMD should have no problem with this once RyZen launches.
> 
> Companies buying these by the truck load do not care if it's Intel or AMD branded.


Did I miss something? I thought the launch was for the desktop enthusiast segment first?


----------



## dmasteR

Quote:


> Originally Posted by *dagget3450*
> 
> Did i miss something? I thought the launch is for the desktop enthusiasts section first?


It is indeed, but desktop isn't where AMD really needs a presence, as desktop margins are much smaller.


----------



## bigjdubb

I am really curious where the price will end up for the 8C/16T chip. I really want one, but I doubt I'd spend $600-$700 on a processor at this point; my 4790K seems to get the job done just fine. It seems like I'd get more fun out of an extra Vega card for that money.


----------



## DarkRadeon7000

I am curious whether it will beat the 7700K, as my use case is gaming only.


----------



## mohiuddin

Quote:


> Originally Posted by *DarkRadeon7000*
> 
> I am curious whether it will beat the 7700k as my use case is only gaming


Maybe not in today's games. In the future, I think higher-core-count CPUs (beyond 8 threads) will shine. Games have already started to utilize more than 6 cores EFFECTIVELY quite often.


----------



## IRobot23

Quote:


> Originally Posted by *dagget3450*
> 
> I was wondering about Ryzen's chance as an AMD product. Its got me wondering if it will be like the video i saw about how AMD/ATI had better gpus than nvidia both in price and performance and nvidia still outsold them 3 or 4 to 1. Well i have a feeling it may repeat with Zen...
> 
> my objective proof:
> 
> 
> Source HWbot main page survey
> 
> Almost 67% give or take won't buy zen at all, or may consider it if benchmarks are good. (that i am sure means it would need to be faster than Intel current offerings) This is among enthusiasts and overclockers a small community i know. I mean unless the price is really good to help sell it, i suspect people will still attribute a lower price to some sort of inferior product. Really sucks if it comes out the gate strong in price and performance and gets overlooked simply due to branding. So time will tell i guess.


Why would there be an 8C/8T? And where is the 6C/12T? Are you saying that if you cut one core out of a CCX, the rest would all still work just because they share the L3 cache?


----------



## bigjdubb

Quote:


> Originally Posted by *IRobot23*
> 
> Why would there be 8C/8T?


For the same reason there are 4c/4t even though 2c/4t processors are in the same lineup.


----------



## IRobot23

Quote:


> Originally Posted by *bigjdubb*
> 
> For the same reason there are 4c/4t even though 2c/4t processors are in the same lineup.


Just NOOO!

*8C : 6C = 133%* (almost SMT)
6C/12T ~ 8C/8T

*4C : 2C = 200%* (SMT adds around 25-35%)

*4C : 3C = 133%* (almost SMT)
4C/4T ~ 3C/6T


----------



## bigjdubb

Quote:


> Originally Posted by *IRobot23*
> 
> Just NOOO!
> 
> *8C : 6C = 133%* (almost SMT)
> 6C/12T ~ 8C/8T
> 
> *4C : 2C = 200%* (SMT adds around 25-35%)
> 
> *4C : 3C = 133%* (almost SMT)
> 4C/4T ~ 3C/6T


When I made my comment there was nothing in your post about 6-core processors.

Either way, there is nothing wrong with having options available. Maybe some 8-core dies don't make the grade with SMT enabled?


----------



## budgetgamer120

I would like to see another 3c cpu


----------



## Hueristic

Quote:


> Originally Posted by *budgetgamer120*
> 
> I would like to see another _Unlockable_ 3c cpu


FTFY


----------



## Tojara

Quote:


> Originally Posted by *bigjdubb*
> 
> When I made my comment there was nothing in your post about 6 core processors.
> 
> Either way, there is nothing wrong with having options available. Maybe some 8 cores don't make the grade with SMT enabled?


The number of such chips should be a minimal fraction of defective dies, as SMT takes up little area. A 6C/12T still has a perf/W advantage over an 8C/8T and is far more likely to be what a defective die ends up as, while 4C/4T and 4C/8T would be a good use for a die with several defects.


----------



## epic1337

Quote:


> Originally Posted by *IRobot23*
> 
> Just NOOO!
> 
> *8C : 6C = 133%* (almost SMT)
> 6C/12T ~ 8C/8T
> 
> *4C : 2C = 200%* (SMT adds around 25-35%)
> 
> *4C : 3C = 133%* (almost SMT)
> 4C/4T ~ 3C/6T


SMT doesn't work that way, though.

Under light loads SMT can scale nearly 100%, so a 3C/6T is much closer to 150%;
but under heavy loads SMT sometimes doesn't scale so well, closer to 110%.

The key point here is resource sharing: if the application stresses only one type of execution resource, SMT provides no advantage.
In fact SMT can even scale negatively, e.g. games getting lower FPS with SMT enabled.


----------



## IRobot23

Quote:


> Originally Posted by *epic1337*
> 
> SMT doesn't work this way though.
> 
> in light loads SMT scales nearly 100%, so a 3C/6T is much closer to 150%.
> but in heavy loads SMT sometimes doesn't scale so well, much closer to 110%.
> 
> the key point here is resource sharing, if the application requires the usage of only one type of resource then SMT provides no advantage.
> in fact SMT could even end up having a negative scaling, e.g. games gets lower FPS when SMT is enabled.


SMT does work that way, if the program utilizes it. It's like gaming: you can lean on IPC or on MT.


----------



## epic1337

Quote:


> Originally Posted by *IRobot23*
> 
> SMT works that way, if program utilize that. Its like gaming, you can take advantage of IPC or MT.


Yes and no. SMT gives "bonus situational performance," but a true core gives 100% reliable performance.

Saying "6C/12T = 8C/8T" is far from ideal; it doesn't even represent the general performance each can give.
E.g., in light workloads with good scaling, a 6C/12T is 30%-50% faster than an 8C/8T, but in heavy workloads with poor scaling, the 6C/12T is 10%-25% slower.

So it will depend on the workload, but in most cases the SMT-enabled processor will be favored due to the sheer number of threads available,
especially in multi-tasking: more threads = better load spreading = less per-core overhead.
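The workload dependence above can be put into a back-of-envelope sketch. The SMT factors below are illustrative free parameters, not measurements; the point is only that the 6C/12T vs 8C/8T ranking flips with the per-core SMT gain:

```python
# Toy model of aggregate throughput in "single-core units".
# smt_factor is the per-core multiplier with both sibling threads busy
# (an assumed number, not measured data).

def effective_cores(cores: int, smt: bool, smt_factor: float) -> float:
    """Rough aggregate throughput relative to one plain core."""
    return cores * (smt_factor if smt else 1.0)

# Light load: SMT shares resources well; heavy load: ports are contended.
for label, factor in [("light", 1.5), ("heavy", 1.1)]:
    six_twelve = effective_cores(6, True, factor)    # 6C/12T
    eight_plain = effective_cores(8, False, factor)  # 8C/8T
    print(f"{label}: 6C/12T ~ {six_twelve:.1f} vs 8C/8T ~ {eight_plain:.1f}")
```

With these assumed factors, the 6C/12T comes out ahead under the light load and behind under the heavy one, matching the direction (if not the exact percentages) argued above.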


----------



## budgetgamer120

Quote:


> Originally Posted by *epic1337*
> 
> yes and no, SMT comes with "bonus performance", but a true core gives a 100% reliable performance.
> 
> saying that "6C/12T = 8C/8T" is far from being "ideal", it doesn't even represent the general performance it can give.
> e.g. in light workloads with good scaling 6C/12T is 30%~50% faster than 8C/8T, but heavy workloads with poor scaling 6C/12T is 10%~25% slower than 8C/8T.
> 
> so it will depend on the workload, but in most cases, SMT enabled processor will be in favor due to sheer number of threads available.
> specially when it comes to multi-tasking, more threads = better load spreading = less core overhead.


Under good scaling and a heavy workload, a 6-core/12-thread matches an 8-core.

Under good threading with a light workload, 8 real cores seem to come out on top.



http://www.pcworld.com/article/3039552/hardware/tested-how-many-cpu-cores-you-really-need-for-directx-12-gaming.html

I will take the higher thread count over cores when core count is 6 or more


----------



## epic1337

Ashes does "scale well," but it has a thread limit; you can see that between 6C/6T vs 6C/12T, and 8C/8T vs 8C/16T.


----------



## finalheaven

Quote:


> Originally Posted by *budgetgamer120*
> 
> I will take the higher thread count over cores when core count is 6 or more


You may be waiting a long time if this rumor is true: http://www.hardocp.com/news/2017/01/28/rumor_no_sixcore_amd_ryzen_cpus_at_launch

Not sure if this is good news or bad news (if it's true). I would think AMD needs to launch $350-priced processors and can't just go with $500. Does this mean 8 cores for $350?


----------



## budgetgamer120

Quote:


> Originally Posted by *finalheaven*
> 
> You may be waiting a long time if this rumor is true: http://www.hardocp.com/news/2017/01/28/rumor_no_sixcore_amd_ryzen_cpus_at_launch
> 
> Not sure if this is good news or bad news (if it's true). I would think that AMD would need to launch $350 priced processors and can't just go with $500. Does this mean 8 cores for $350?


8 core could be $350.


----------



## epic1337

Quote:


> Originally Posted by *budgetgamer120*
> 
> 8 core could be $350.


Which 8-core, with SMT or without? Though regardless of which, an 8-core without SMT @ $350 is still a decent bargain.

Someone posted a while back something like this:
Zen 4C/8T vs Intel 2C/4T
Zen 8C/8T vs Intel 4C/4T
Zen 8C/16T vs Intel 4C/8T

So if we price-match along those lines, we get something like this:
$150~$200 Zen 4C/8T
$250~$300 Zen 8C/8T
$350~$400 Zen 8C/16T


----------



## budgetgamer120

Quote:


> Originally Posted by *epic1337*
> 
> which 8core? +SMT or -SMT?
> though regardless of which, 8core -SMT @ $350 is still a decent bargain.


No SMT. I will be going for 16 threads, seeing as I have 12 now.

If AMD's parts aren't priced well, I will go for a 10- or 14-core Xeon.


----------



## Mahigan

Quote:


> Originally Posted by *budgetgamer120*
> 
> No SMT. I will be going for 16 threads seeing as i hsve 12 now.
> 
> If AMD stuff isn't priced well I will go for a xeon 10 core or 14 core.


I'm thinking the same thing.


----------



## IRobot23

Quote:


> Originally Posted by *epic1337*
> 
> which 8core? +SMT or -SMT?
> though regardless of which, 8core -SMT @ $350 is still a decent bargain.
> 
> someone posted a while back about something like this:
> Zen 4C/8T vs intel 2C/4T
> Zen 8C/8T vs intel 4C/4T
> Zen 8C/16T vs intel 4C/8T
> 
> so if we price match it like that as well, then we get a result of something like this:
> $150~$200 Zen 4C/8T
> $250~$300 Zen 8C/8T
> $350~$400 Zen 8C/16T


So an old sample had problems with SMT, and now we will see a full line of 8C parts without SMT?
If AMD does that, Intel will drop prices to match them, but 8C/16T will be around $500.

Stop dreaming about 8C costing $350; AMD is not a charity.
*AM4*
SR3 - 4C/8T (maybe even 4C/4T, cheaper variants) = $100-250 (battling i5/i7)
SR5 - 6C/12T = $280-350 (battling the i7-6800K)
SR7 - 8C/16T = $450-550 (battling the i7-6900K)

Even if performance is on par with Skylake, AMD cannot price AM4 CPUs the same way Intel prices its LGA 2011 CPUs.
LGA 2011 offers much more for the performance/enthusiast buyer (up to 10 cores, quad-channel memory, more lanes).

SR will ship without an iGPU; iGPU-disabled APUs (Excavator-based Athlons) will cover the $50-100 range.


----------



## epic1337

what is this guy's problem?









On a side note, "charity" and "fair price" are two different things, and the complete opposite of both is "price gouging," which Intel is clearly doing.
Proof: https://ark.intel.com/products/92986/Intel-Xeon-Processor-E5-2620-v4-20M-Cache-2_10-GHz
Edit: the E5-2620 v4 is *definitely NOT* a charity processor.


----------



## Quantum Reality

It seems that depending on price, a 4C/8T part would be pretty ample for my purposes.


----------



## Kuivamaa

An 8C/8T chip is a great alternative to a 6C/12T one. It should be identical in ST and a bit faster in MT loads.


----------



## budgetgamer120

Quote:


> Originally Posted by *Kuivamaa*
> 
> A 8C/8T chip is a great alternative over a 6C/12T one. It should be identical in ST and a bit faster in MT loads.


Quote:


> Originally Posted by *budgetgamer120*
> 
> In good scaling and heavy workload 6core 12 thread matches 8core.
> 
> 
> 
> In good threading light workload 8 cores seems to be on top.
> 
> 
> 
> http://www.pcworld.com/article/3039552/hardware/tested-how-many-cpu-cores-you-really-need-for-directx-12-gaming.html
> 
> I will take the higher thread count over cores when core count is 6 or more


Not always


----------



## Tojara

Quote:


> Originally Posted by *Kuivamaa*
> 
> A 8C/8T chip is a great alternative over a 6C/12T one. It should be identical in ST and a bit faster in MT loads.


Actually slightly worse in both, while also having higher power consumption. Granted, in many cases the 8C might get on top due to poor SMT scaling.


----------



## Kuivamaa

Quote:


> Originally Posted by *Tojara*
> 
> Actually slightly worse in both, while also having higher power consumption. Granted, in many cases the 8C might get on top due to poor SMT scaling.


For a 6C/12T to beat an 8C/8T of the same architecture in MT workloads, you need SMT scaling above 33% per hyperthread, which is extraordinary; that's a tall order even for Cinebench, which is famous for working perfectly with Intel's SMT. The 8C will also most likely race to idle faster than the 6/12.
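The 33% figure is just core-count arithmetic: 6 cores each gaining a fraction s from SMT match 8 plain cores when 6(1 + s) = 8, i.e. s = 1/3. A quick sketch of that parity condition (the only assumption here):

```python
# SMT uplift per core needed for an N-core SMT part to match an M-core
# plain part of the same architecture: N * (1 + s) = M  =>  s = M/N - 1.

def smt_gain_needed(cores_smt: int, cores_plain: int) -> float:
    """Fractional per-core SMT gain required for throughput parity."""
    return cores_plain / cores_smt - 1.0

print(f"6C/12T vs 8C/8T parity needs {smt_gain_needed(6, 8):.1%} SMT gain")
# The ~25% HT scaling cited elsewhere in this thread falls short of that bar.
```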


----------



## Marios145

Are you seriously arguing over whether 2 real, fully complete cores with dedicated caches are better than 6 imaginary ones?


----------



## epic1337

Quote:


> Originally Posted by *Marios145*
> 
> Are you seriously arguing whether 2 real, fully complete cores with dedicated caches are better than 6 imaginary?


It highly depends on the application.
If the application has a thread cap of 8-10, the two extra real cores will win, because it's 6+2 vs 8, or 6+4 vs 8.
And if the application stresses a single type of resource, SMT offers little to no extra performance.


----------



## renx

Hopefully we get to know line-up and prices in the next couple of weeks.


----------



## budgetgamer120

Quote:


> Originally Posted by *Marios145*
> 
> Are you seriously arguing whether 2 real, fully complete cores with dedicated caches are better than 6 imaginary?


Depends... in the screenshot above, the 6-core beats the 8-core in Cinebench by a few points.


----------



## Kuivamaa

Quote:


> Originally Posted by *epic1337*
> 
> it highly depends on the application.
> if the application has a thread cap of 8~10 then the 2 extra cores will win, because its 6+2 vs 8 or 6+4 vs 8.
> or if the application uses the same type of resource then SMT would offer few to no performance.


That's the thing. Even if the app uses 12 threads, it would take superb SMT scaling for a 6C/12T to overcome the 2-core deficit against an 8C/8T, let alone beat it. That's the case with Intel's HT at least; we will soon see what Zen is like. The main point is that 8C/8T is a great option for the consumer if priced appropriately, and I don't see it coexisting with 6C/12T in the Zen stack. It will be either/or.


----------



## epic1337

Quote:


> Originally Posted by *Kuivamaa*
> 
> That's the thing. Even if the app uses 12 threads, it would take superb SMT scaling for 6C/12T to marginally beat the 2C deficit vs 8C/8T. That's the case with intel's HT at least, we will soon see what Zen is like. The main point is that 8C/8T is a great option for the consumer if priced appropriately. And I don't see it coexisting with 6C/12T on Zen stack. It will be either or.


It's Intel's core design that made SMT somewhat "weak": they designed their pipeline with unique ports.
If they had made redundant ports for the most-demanded resources, SMT could utilize one port while the other is in use.
And since these ports would be on the same pipeline, even with SMT disabled the main thread could use both redundant ports to some degree, which theoretically increases per-core IPC.

In fact, they could build a super-wide core configuration and use 4-way SMT to utilize all those resources.
The effective result is a core roughly 2x larger that provides up to 4x more performance.
On a side note, Intel's Xeon Phi processors have 4-way SMT: https://www-ssl.intel.com/content/www/us/en/processors/xeon/xeon-phi-detail.html

As for 6C/12T, it would be somewhat slower than 8C/8T at the same clock, so it should be possible to slot it in at a lower position, i.e. cheaper.
AMD and Intel already sell "binned" chips with the same core config but lower clocks at much lower prices, and you can often OC them past the more expensive chips,
e.g. the ($142) FX-8320E @ 4.5GHz *vs* the ($195) FX-8350 @ 4.2GHz. So I don't see much issue if it's core-binned instead.


----------



## Blameless

Quote:


> Originally Posted by *Kuivamaa*
> 
> In order for 6C/12T to beat 8C/8T of the same architecture in MT workloads , you need SMT scaling of above 33% per hyperthread which is extraordinary- that is a tall order even for cinebench which is famous for working perfectly with intel's SMT. 8C will most likely race to idle faster vs 6/12.


Scaling is typically less than 33%, but Cinebench is hardly an SMT poster child.
Quote:


> Originally Posted by *Marios145*
> 
> Are you seriously arguing whether 2 real, fully complete cores with dedicated caches are better than 6 imaginary?


Logical cores provided by SMT are hardly "imaginary".
Quote:


> Originally Posted by *epic1337*
> 
> if they had made redundant ports


That would partially defeat the purpose of SMT...to recoup performance where possible and to improve performance per watt, without large increases in transistor count or complexity. Just widening the core is a poor use of transistors because we are already well into diminishing returns with ILP.

Designing an architecture around SMT that depends on SMT to extract the bulk of aggregate performance is what IBM has been doing with POWER, but this probably isn't practical for mainstream platforms. Intel's (and AMD's) cores need to be near ideal for everything from ultrabooks to upper mid-range servers. Giant, ultra-wide, cores don't scale low enough.


----------



## Kuivamaa

Quote:


> Originally Posted by *epic1337*
> 
> its intel's core design that made SMT somewhat "weak", they designed their pipeline to have unique ports.
> if they had made redundant ports to support the more demanded resources then SMT would be able to utilize the other port while one is in use.
> of course since these ports are on the same pipeline even if SMT is disabled the main thread can utilize both redundant ports to some degree, this theoretically increases per core IPC.
> 
> 
> 
> in fact they could make a super-wide core configuration and use 4-way SMT to utilize all that resources.
> the effective result is a core thats roughly 2times larger yet provides up to 4times more performance.
> on a side note, intel's Xeon Phi processors has a 4-way SMT, https://www-ssl.intel.com/content/www/us/en/processors/xeon/xeon-phi-detail.html
> 
> as for 6C/12T, it would be quite slower than 8C/8T at the same clock, so it should be possible to place it at a lower position, so cheaper?
> since AMD and Intel sells "binned" chips that has the same core config but lower clocked yet much cheaper in price, in fact you can OC them to exceed the more expensive chips.
> e.g. ($142) FX-8320e @ 4.5Ghz *vs* ($195) FX-8350 @ 4.2Ghz, so i don't see much issue if its core-binned instead.


On an academic basis this is a nice and worthwhile discussion: how wide can an SMT-enabled core be, and how many logical threads can it muster, before it is simply better to invest that die area in more cores instead, not as wide, but cores nonetheless? But for now we have a very specific landscape: x86-64, with 2 logical threads per physical core.
Quote:


> Originally Posted by *Blameless*
> 
> Scaling is typically less than 33%, but Cinebench is hardly an SMT poster child.


With Haswell and onwards, from what I have seen, HT offers around 25% scaling in CB, which is excellent. There are corner cases that do more, but CB is quite the poster child without being an outlier.


----------



## Marios145

Quote:


> Originally Posted by *Blameless*
> 
> Logical cores provided by SMT are hardly "imaginary".


They might be real threads but they're imaginary cores.


----------



## Raghar

Quote:


> Originally Posted by *Blameless*
> 
> Scaling is typically less than 33%, but Cinebench is hardly an SMT poster child.
> Logical cores provided by SMT are hardly "imaginary".


Lookie what I found.

















My old benchmarks, which show that the i5-6600K is an exact replacement for my Ivy-E on the RIV BE, with identical RAM speeds even. It also includes my old E7200 benchmark for comparison.

From what I've seen, Cinebench had peak scaling with HT; everything else I used (that needed CPU speed) scaled less. HT had one benefit, however: the CPU switched threads faster than W7 does. I wonder whether that advantage would disappear if W7 allowed a reduced timeslice.

Of course, anything that lets the CPU use more of its compute resources also increases power consumption, so using HT draws more power. In fact, even when HT provides no benefit, it would still use more power, because power is consumed by the schedulers and cache transfers.

128 MB of L3 cache, or perhaps a fully associative L2 cache, might provide larger benefits.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> That would partially defeat the purpose of SMT...to recoup performance where possible and to improve performance per watt, without large increases in transistor count or complexity. Just widening the core is a poor use of transistors because we are already well into diminishing returns with ILP.
> 
> Designing an architecture around SMT that depends on SMT to extract the bulk of aggregate performance is what IBM has been doing with POWER, but this probably isn't practical for mainstream platforms. Intel's (and AMD's) cores need to be near ideal for everything from ultrabooks to upper mid-range servers. Giant, ultra-wide, cores don't scale low enough.


I don't think so. SMT needs roughly 5% more transistors to provide its current performance benefit.
Adding 10%-20% more transistors on top of that to widen the core's pipeline would make SMT's gains reliable.
Furthermore, it would allow more threads per core without choking it (e.g. 3-way and 4-way), increasing each core's parallelism.

This would mean each core ends up with much higher performance per transistor;
in other words, you need fewer cores to provide the same performance.

Imagine a 1C/4T processor that is at the very least on par with current 2C/4T processors.
It theoretically scales well: effectively 1C/4T, 2C/8T, 4C/16T and 6C/24T.
Varying the SMT width (2-way, 3-way, 4-way) makes it scalable further still.

And now back to my point: a slightly wider core can make even 2-way SMT more reliable,
so depending on how AMD designed their cores, their SMT could potentially perform much better than Intel's.
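To make the transistor-budget argument concrete, here is a toy perf-per-transistor comparison. The inputs are the assumptions stated above (5% extra transistors for 2-way SMT, ~20% more for a wider pipeline, ~25% MT uplift from SMT, a speculative 2x for a wide 4-way core), not measured silicon data:

```python
# Toy perf-per-transistor model; all inputs are this post's assumptions,
# not measurements of any real core.

def perf_per_transistor(relative_perf: float, relative_transistors: float) -> float:
    return relative_perf / relative_transistors

plain = perf_per_transistor(1.00, 1.00)  # baseline core, SMT off
smt2 = perf_per_transistor(1.25, 1.05)   # +25% MT perf for +5% transistors
wide4 = perf_per_transistor(2.00, 1.25)  # hypothetical wide 4-way SMT core:
                                         # +25% transistors, assumed 2x MT perf

print(f"plain: {plain:.2f}  2-way SMT: {smt2:.2f}  wide 4-way SMT: {wide4:.2f}")
```

Whether a wide 4-way core actually reaches 2x MT throughput is exactly the open question raised elsewhere in this thread about diminishing ILP returns.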

Quote:


> Originally Posted by *Marios145*
> 
> They might be real threads but they're imaginary cores.


That's like saying Cerberus' second and third heads are just imaginary.


----------



## Blameless

Quote:


> Originally Posted by *Kuivamaa*
> 
> With haswell and onwards, from what I have seen, HT offers around 25% scaling in CB which is excellent. There are corner cases that do more but CB is quite the poster child without being an outlier.


Blender gets roughly 50% scaling and is probably as broadly used as Cinema4D, if not more so.
Quote:


> Originally Posted by *Marios145*
> 
> They might be real threads but they're imaginary cores.


That's a meaningless statement.
Quote:


> Originally Posted by *Raghar*
> 
> From what I seen Cinebench had peak scaling with HT.


Cinebench/Cinema 4D has decidedly average SMT/HT scaling among heavily threaded apps.

Blender, 7-zip, and plenty of other programs scale better.
Quote:


> Originally Posted by *epic1337*
> 
> and now lets go back to my point, a slightly wider core can make even 2-way SMT more reliable.
> so depending on how AMD designed their cores their SMT could potentially perform much better than intel's.


If things were remotely this clear cut, that's precisely what AMD and Intel would have done.


----------



## motoray

It feels like an eternity waiting for proper news.


----------



## yraith

I am thinking the same thing...

So, after much debate without official word, how many of you guys are going to buy Ryzen?


----------



## Hueristic

Quote:


> Originally Posted by *yraith*
> 
> I am thinking the same thing...
> 
> So, after much debate without official word, how many of you guys are going to buy Ryzen?


Nothing is set in stone until release and reputable reviews. That said, I'm very hopeful, but if they fail again I'll be going Intel (typing that left a bad taste in my mouth), unless Intel's CPUs are still bugged.


----------



## budgetgamer120

Quote:


> Originally Posted by *yraith*
> 
> I am thinking the same thing...
> 
> So, after much debate without official word, how many of you guys are going to buy Ryzen?


Price of the platform will decide whether I leave my X99 setup. So far I'm about 80% sold.


----------



## EightDee8D

Quote:


> Originally Posted by *yraith*
> 
> I am thinking the same thing...
> 
> So, after much debate without official word, how many of you guys are going to buy Ryzen?


Getting a 4C one as soon as it comes out, no matter how it performs (it's going to be faster than this i3 anyway). Then I'll get whatever high-end APU comes after it.









No new Intel/NVIDIA for me as long as I don't need anything that only they can provide.


----------



## teh-yeti

Quote:


> Originally Posted by *motoray*
> 
> It feels like an eternity waiting for proper news.


The only reason I still check this thread.


----------



## ryan92084

Quote:


> Originally Posted by *budgetgamer120*
> 
> Price of the platform will decide if I leave my x99 setup. So far I'm sold 80%


I'm in the same boat. As long as AMD doesn't royally screw up the pricing, I'll be going Ryzen. An all-too-real possibility, IMO.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> If things were remotely this clear cut, that's precisely what AMD and Intel would have done.


That's actually what they have been doing ever since Haswell.



And it explains some of Haswell's considerable multi-thread performance increase,
e.g. ST = +9.2% whereas MT = +12.5%.


----------



## motoray

Quote:


> Originally Posted by *ryan92084*
> 
> I'm in the same boat. As long as AMD doesn't royally screw up the pricing I'll be going ryzen. An all too real possibility IMO.


Same here. It's really annoying having to sift through pages of people arguing about BS while waiting for facts. Everyone is just going stir-crazy.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> thats actually what they did ever since with Haswell.


Yes, they've been getting slightly wider with each tock.
Quote:


> Originally Posted by *epic1337*
> 
> and it explains some of the points regarding Haswell's considerable multi-thread performance increase.
> e.g. ST = +9.2% where as MT = +12.5%


I'm curious as to how much of that differential came from the widening of the core and from other improvements between Ivy and Haswell. Haswell, for example, has double the L1 & L1 to L2 cache bandwidth, which has proven to be significant in many scenarios.

Regardless, very little of the widening of the core has been explicitly for the purpose of improving SMT. It's clearly been a motivation, but substantial SMT specific hardware hasn't been added to Intel parts since HT was introduced.

The problem I see with going all in on SMT is that the ILP needed to make it useful can't then be scaled back to allow small cores with the same design.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> Yes, they've been getting slightly wider with each tock.
> I'm curious as to how much of that differential came from the widening of the core and from other improvements between Ivy and Haswell. Haswell, for example, has double the L1 & L1 to L2 cache bandwidth, which has proven to be significant in many scenarios.


Because widening the core, increasing cache bandwidth, and improving the front-end in general affect both core performance and SMT performance.
On the other hand, SMT scales better with wider cores, so the overall result is a two-fold increase for multi-thread performance compared to single-thread.

Quote:


> Originally Posted by *Blameless*
> 
> The problem I see with going all in on SMT is that the ILP needed to make it useful can't then be scaled back to allow small cores with the same design.


They don't have to, do they? Their smallest offering is a 2C/2T chip at a low clock speed,
which means they could just fall back to making a 1C/2T chip to match it, which is theoretically a smaller implementation.

On the other hand, they could scale SMT up to 4-way so it becomes a 1C/4T chip, which is much more effective than 2C/2T in multi-threaded workloads.
That is to say, a 1C/4T chip would be roughly smaller than a 2C/2T chip, yet could theoretically reach the performance of a 2C/4T i3.

But then again, why would you want an inferior chip? They could effectively push 2C/2T chips out of the market for good.
Intel also has its Atom architecture to provide an even smaller implementation, so there's no issue in that regard.


----------



## TerlChamco

Quote:


> Originally Posted by *yraith*
> 
> I am thinking the same thing...
> 
> So, after much debate without official word, how many of you guys are going to buy Ryzen?


We are at least guaranteed a Phenom II era CPU, totally acceptable in my book, but looking at just the Blender test and the encoding test we may get a surprise and have something on par with Skylake or better.
I will not support Intel, so yes, no matter what, I will be getting Ryzen.


----------



## budgetgamer120

Quote:


> Originally Posted by *epic1337*
> 
> because widening the core and increasing cache bandwidth, also improving the front-end in general, affects both core performance and SMT performance.
> but on the other hand, SMT scales better with wider cores so the overall result is a two-fold increase for multi-thread performance compared to single-thread.
> they don't have to, don't they? their smallest offering is a 2C/2T chip at a low clock speed.
> which means they could just fall back to making a 1C/2T chip to match that, which is theoretically a smaller implementation.
> 
> on the other hand, they could scale SMT up to 4-way so it becomes a 1C/4T chip, which is much more effective than 2C/2T in multi-threaded workloads.
> this means to say that a 1C/4T chip would have a roughly smaller size than a 2C/2T chip, but can theoretically reach the performance of a 2C/4T i3 chip.
> 
> but then again, why would you want an inferior chip? they could effectively push out 2C/2T chips out of the market for good.
> intel also have their ATOM architecture to provide for an even smaller implementation, so theres no issue with regards to this.


Yeah... I'll wait to see a 1C/4T chip match an i3, or a Pentium even.


----------



## epic1337

Quote:


> Originally Posted by *budgetgamer120*
> 
> Yeah... I'll wait to see 1c 4t chip match an i3 our Pentium even..


It should, theoretically at least; rather, the question is how "wide" it needs to be.
If the core is wide enough, 1C/4T would be able to sustain all four threads at the same level as our current 2C/4T i3s.

Now if we scale it up a bit, how would a 2C/8T processor compare to our current lineups?
Obviously it should be much faster than 2C/4T, and technically even 4C/4T, since i3s already come close to it.
That puts it directly between 4C/4T and 4C/8T, which for a 2-core is quite amazing.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> they don't have to, don't they? their smallest offering is a 2C/2T chip at a low clock speed.


Cores that aren't needed can be shut off. It's much harder to power gate logic within a single processor core.

I'm not convinced that a core wide enough to scale usefully with four or more threads will also be small/power efficient enough to be competitive in the same areas the lowest power Broadwell/Skylake/Kaby Lake parts see themselves in.

Intel has been trying to scale down its mainstream architectures to fill roles Atom used to be needed for. Making extremely wide cores would make this more difficult.
Quote:


> Originally Posted by *epic1337*
> 
> it should, theoretically at least, rather the question is whether how "wide" does it need to be.
> if the core is wide enough, 1C/4T would be able to sustain all 4threads at the same level as our current 2C/4T i3s.


IBM already has POWER8 processors with eight-way SMT that can be competitive with other architectures of much greater physical core counts. So yes, it's clear that with a wide enough core, SMT can be used to extract quite a bit of performance...but how wide is too wide for the lower end of the power spectrum?

POWER9 will retain 8-way SMT, but will also introduce a 4-way SMT variant, likely because 8-way alone isn't proving flexible enough in some scenarios.


----------



## TerlChamco

Quote:


> Originally Posted by *Blameless*
> 
> Cores that aren't needed can be shut off. It's much harder to power gate logic within a single processor core.
> 
> I'm not convinced that a core wide enough to scale usefully with four or more threads will also be small/power efficient enough to be competitive in the same areas the lowest power Broadwell/Skylake/Kaby Lake parts see themselves in.
> 
> Intel has been trying to scale down it's mainstream architectures to fill roles Atom used to be needed for. This would be made more difficult by making extremely wide cores.
> IBM already has POWER8 processors with eight way SMT that can be competitive with other architectures of much greater physical core counts. So yes, it's clear that with a wide enough core, SMT can be used to extract quite a bit of performance...but how wide is too wide for the lower end of the power spectrum?
> 
> POWER9 will retain 8-way SMT, but will also introduce a 4-way SMT variant, likely because 8-way only isn't proving flexible enough in some scenarios.


I remember reading something a while back that this is where AMD is heading: multiple threads on a wide core design with SMT (3-4 threads per core).


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> Cores that aren't needed can be shut off. It's much harder to power gate logic within a single processor core.
> 
> I'm not convinced that a core wide enough to scale usefully with four or more threads will also be small/power efficient enough to be competitive in the same areas the lowest power Broadwell/Skylake/Kaby Lake parts see themselves in.
> 
> Intel has been trying to scale down it's mainstream architectures to fill roles Atom used to be needed for. This would be made more difficult by making extremely wide cores.
> IBM already has POWER8 processors with eight way SMT that can be competitive with other architectures of much greater physical core counts. So yes, it's clear that with a wide enough core, SMT can be used to extract quite a bit of performance...but how wide is too wide for the lower end of the power spectrum?
> 
> POWER9 will retain 8-way SMT, but will also introduce a 4-way SMT variant, likely because 8-way only isn't proving flexible enough in some scenarios.


If we take POWER8 and POWER9 as reference then yes, 4-way and 8-way SMT aren't power efficient, and Intel as well admits that Hyper-Threading isn't power efficient.

For one, as you mentioned, they don't respond well to power gating; furthermore, power gating or even downclocking a core affects a whole cluster of threads.
So while they could be made power efficient, that comes at the cost of performance; they're best used for maximizing performance per transistor and overall ILP.

On the other hand, POWER8 and POWER9 power efficiency could be attributed to their own design shortcomings, i.e. IBM hasn't made them as power efficient as Intel's parts.
This could be highlighted by running a POWER8 or POWER9 core in single-thread mode and comparing it to Intel's performance per watt.
As a side note, IBM's POWER8 and POWER9 have a much lower IPC per thread than Intel's, which suggests their general core performance is lower.

The reason IBM offers 4-way SMT on POWER9 is "workload alignment": either Linux doesn't like super-wide cores, or SMT4 provides better control.
In either case, the performance difference between SMT8 and SMT4 is small enough not to heavily affect overall performance.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> intel as well admits that hyperthreading isn't power efficient.


Intel admits nothing of the sort, nor would they, as it's demonstrably false. SMT was reintroduced in Nehalem because it met the criteria for features that increased performance by at least 2% for every 1% of power budget they cost.

SMT, in general, gives very good performance per watt.

It's not efficiency that keeps 4+ way SMT out of low-power parts, it's the minimum power the part can scale down to. An ultra-wide core with the execution resources to scale to that many logical cores couldn't reasonably fit in a ~2W power envelope, not while still achieving good single-threaded performance, even if its performance per watt at higher power levels were superior.
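That "2% performance per 1% power" criterion is simple enough to express as a quick check (a sketch; the example feature numbers below are made up for illustration, not Intel's figures):

```python
def meets_nehalem_rule(perf_gain_pct: float, power_cost_pct: float) -> bool:
    """A feature qualifies if it adds at least 2% performance
    for every 1% of power budget it costs."""
    if power_cost_pct <= 0:
        return perf_gain_pct > 0  # free performance always qualifies
    return perf_gain_pct / power_cost_pct >= 2.0

# Hypothetical examples:
print(meets_nehalem_rule(15.0, 5.0))  # SMT-like: +15% perf for +5% power -> True
print(meets_nehalem_rule(3.0, 4.0))   # marginal widening -> False
```

By that yardstick, a feature like SMT, which costs little die area and power relative to its multi-thread gains, easily clears the bar.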
Quote:


> Originally Posted by *epic1337*
> 
> on the other hand, both POWER8 and POWER9 power efficiency could be attributed to it's own shortcoming by design, e.g. IBM hadn't made it as power efficient as intel's.
> this could be high-lightened by running a POWER8 or POWER9 core in single-thread mode and comparing it to intel's performance per watt.


POWER8 and 9 are pretty efficient, all things considered.

Running them in single thread mode produces poor performance per watt because of how wide they are. ILP (instruction level parallelism) has diminishing returns, and the amount of ILP needed to make 4 or 8-way SMT practical is never going to be used efficiently by any single thread.

This is the entire reason why there are trade-offs to huge cores with tons of SMT.
Quote:


> Originally Posted by *epic1337*
> 
> IBM's POWER8 and POWER9 has a much lower IPC per thread compared to intel's, this indicates that it's general core performance is lower


That's not at all what it indicates.

The general core performance is higher; it's just so much wider, and so oriented toward using SMT to make use of that width, that you simply cannot compare single-threaded workloads on an even footing.

http://www.anandtech.com/show/9567/the-power-8-review-challenging-the-intel-xeon-/9

If Intel widened their cores to accommodate high SMT multipliers, single-threaded performance per watt would go down, and raw single-threaded IPC would not significantly improve. If clocks had to be reduced, single-threaded performance might even decrease, because they are well into the point of diminishing returns on ILP with their current architectures.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> Intel admits nothing of the sort, nor would they, as it's demonstrably false. SMT was reintroduced in Nehalem because it met the critera for features that increased performance at least 2% for every 1% in power budget it cost.
> 
> SMT, in general, gives very good performance per watt.


That's what they implied with Atom's Silvermont architecture: they dropped Hyper-Threading in favor of a thin, short pipeline to maximize efficiency and single-thread performance.
Quote:


> In Silvermont, Kuttanna said, most of those operations can be handled in a fused fashion, so the architecture can be built without wide, power-hungry pipelines and still achieve good performance. "Using the fused nature of these instructions," he said, "we're able to extract the power efficiency out of our pipeline - and that's one of our key innovations in this out-of-order execution pipeline."
> 
> Kuttanna said, "we had hyperthreading on the cores. We had four threads operating on two physical cores. With Silvermont we dropped the support for hyperthreading because it wasn't the right trade-off for us with Silvermont where we were ... going after higher single-thread [instructions per clock cycle (IPC)]."


Quote:


> Originally Posted by *Blameless*
> 
> It's not efficiency that keeps 4+ way SMT out of low power parts, it's the minimum power the part can scale down to. An ultra-wide core with the execution resources to scale to that many logical cores couldn't be reasonably fit in a ~2w power envelope, not while still achieving good single threaded performance, even if it's performance per watt at higher power levels was superior.


They could do it with a modular structure.

Following POWER9's design, they could split their core SKUs in two: what is currently the Atom+Core split would become SMT2 (mainstream) + SMT4 (HEDT) SKUs.

So it scales in both pipeline width and thread count, since the pipeline itself is modular in the POWER9 architecture.

We'd then get a lineup that looks like this:
2C/4T SMT2 (hypothetical: 2 cores = 100% each, 2 SMT threads = 40% each, total ILP = 280%)
4C/8T SMT2 (hypothetical: 4 cores = 100% each, 4 SMT threads = 40% each, total ILP = 560%)
4C/16T SMT4 (hypothetical: 4 cores = 100% each, 12 SMT threads = 26.67% each, total ILP = 720%)
6C/24T SMT4 (hypothetical: 6 cores = 100% each, 18 SMT threads = 26.67% each, total ILP = 1080%)
8C/32T SMT4 (hypothetical: 8 cores = 100% each, 24 SMT threads = 26.67% each, total ILP = 1440%)

I took the POWER8 review as reference, comparing SMT1 to SMT2 and SMT4 results:
average SMT2 performance was 140% of single-thread performance;
average SMT4 performance was 180% of single-thread performance.

Though this brings up a point: can they mix SMT2 and SMT4 on the same core structure?
That would be like a big.LITTLE design, although based on ILP instead of IPC.
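The hypothetical lineup above can be sanity-checked with a short script. The scaling factors (100% per physical core, 40% per extra SMT2 sibling thread, 26.67% per extra SMT4 sibling thread) are the assumptions stated in the post, not measured figures:

```python
# Sketch of the hypothetical aggregate-throughput ("total ILP") figures above.
# Assumptions taken from the post: each physical core contributes 100%,
# each extra SMT2 sibling thread 40%, each extra SMT4 sibling thread 26.67%.

def total_ilp(cores: int, threads: int, sibling_gain: float) -> float:
    """Aggregate throughput relative to one core at 100%."""
    extra_threads = threads - cores  # threads beyond one per core
    return cores * 100 + extra_threads * sibling_gain

lineup = [
    ("2C/4T  SMT2", 2, 4, 40.0),
    ("4C/8T  SMT2", 4, 8, 40.0),
    ("4C/16T SMT4", 4, 16, 26.67),
    ("6C/24T SMT4", 6, 24, 26.67),
    ("8C/32T SMT4", 8, 32, 26.67),
]

for name, cores, threads, gain in lineup:
    print(f"{name}: {round(total_ilp(cores, threads, gain))}%")
# -> 280%, 560%, 720%, 1080%, 1440%
```

The numbers match the post's table, which only shows that the table is internally consistent with its own assumptions, not that real silicon would scale that way.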


----------



## Raghar

https://www.techpowerup.com/forums/attachments/ht-png.49520/
Found some funny 7-zip benchmark. Well, it shows slowdowns in certain areas.
http://www.anandtech.com/show/9567/the-power-8-review-challenging-the-intel-xeon-/17
And there is a cute little table.
Running 7-zip (Integer): Intel 300-350 W, POWER8 780-800 W.
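Normalizing that as performance per watt makes the gap concrete (a sketch; the 7-zip throughput scores here are placeholders for illustration, not the review's actual numbers):

```python
def perf_per_watt(throughput: float, watts: float) -> float:
    """Normalize a benchmark score by wall power draw."""
    return throughput / watts

# System power figures from the table above; the MIPS scores are
# hypothetical placeholders, not measured results.
intel = perf_per_watt(100_000, 325)   # midpoint of 300-350 W
power8 = perf_per_watt(160_000, 790)  # midpoint of 780-800 W
print(f"Intel: {intel:.0f} MIPS/W, POWER8: {power8:.0f} MIPS/W")
```

Even if the wider chip wins on raw throughput, roughly double the draw under load can still lose on energy cost over the machine's lifetime.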

Well, the EU decided to commit energy suicide*1, and the behavior of certain countries has pushed energy prices to highs nobody believed possible. Thus double the energy consumption under load is quite concerning. (It's actually one of the reasons I switched from Ivy-E to an i5-6600K; for me it's just temporary until 14 nm workstation CPUs and Sky-X are available.) Considering the sky-high cost of POWER8, adding energy bills into the equation makes it even worse. The main reason Intel dominates the market is affordability.

edit:

*1 I just recently talked with my relative about news from the nuclear power plant he built, and he talked about how laws allowed solar energy to fetch beautiful prices of $0.593 per kWh, while buying energy from normal sources costs $0.033 per kWh. Well, the older second nuclear power plant is falling apart, and at the beautiful price of $0.033 it's completely impossible to afford to build a new reactor core as a replacement. The price of energy for the end user is high, of course; right-wing politicians wanted competition, and instead of one reliable company that handled it during socialism, with people who had passion, it's multiple companies each with its own overhead. Considering what democratic politicians did, it looks nearly like serfdom.

I guess electricity is cheap in Iceland, people are nice in Iceland, and Iceland will not protest against a few large data centers that would prefer cold weather and cheap electricity. Sucks to be the EU, but what can you do. Bad energy policy takes at least 15 years to reverse.


----------



## 7850K

Quote:


> Originally Posted by *TerlChamco*
> 
> we are at least guaranteed a Phenom II era cpu, totally acceptable in my book


what?


----------



## TerlChamco

Quote:


> Originally Posted by *7850K*
> 
> what?


Either AMD has caught up or has dramatically closed the gap. Unlike a lot of people who like to overhype, I am looking more at reality... you obviously don't understand what era means.
A Phenom II era CPU would put us at no more than 5-7% slower than intel currant.


----------



## Robenger

Quote:


> Originally Posted by *TerlChamco*
> 
> Either AMD has caught up or extremely closed the gap, unlike a lot of people who like to over hype....I am looking more at reality...........you obviously don't understand what era means.
> A phenom II era cpu would put us at no more than 5-7 % slower than intel *currant*.


----------



## TerlChamco

Quote:


> Originally Posted by *Robenger*


Obviously some in here need to read a dictionary.

a long and distinct period of history with a particular feature or characteristic.
"his death marked the end of an era"
synonyms: epoch, age, period, phase, time, span, eon; generation
"the Roosevelt era"

a system of chronology dating from a particular noteworthy event.
"the dawn of the Christian era"
synonyms: epoch, age, period, phase, time, span, eon; generation
"the Roosevelt era"


----------



## SuperZan

Quote:


> Originally Posted by *TerlChamco*
> 
> obviously some in here need to read a dictionary.
> 
> a long and distinct period of history with a particular feature or characteristic.
> "his death marked the end of an era"
> synonyms: epoch, age, period, phase, time, span, eon; generation
> "the Roosevelt era"
> 
> a system of chronology dating from a particular noteworthy event.
> "the dawn of the Christian era"
> synonyms: epoch, age, period, phase, time, span, eon; generation
> "the Roosevelt era"


http://www.dictionary.com/browse/currant

noun
1.
a small seedless raisin, produced chiefly in California and in the Levant, and used in cookery and confectionery.
2.
the small, edible, acid, round fruit or berry of certain wild or cultivated shrubs of the genus Ribes.
3.
the shrub itself.
4.
any of various similar fruits or shrubs.

Can be confused Expand
currant, current (see synonym study at current )
In short, the joke made sense because of the spelling.


----------



## TerlChamco

Quote:


> Originally Posted by *SuperZan*
> 
> http://www.dictionary.com/browse/currant
> 
> noun
> 
> 1.
> 
> a small seedless raisin, produced chiefly in California and in the Levant, and used in cookery and confectionery.
> 
> 2.
> 
> the small, edible, acid, round fruit or berry of certain wild or cultivated shrubs of the genus Ribes.
> 
> 3.
> 
> the shrub itself.
> 
> 4.
> 
> any of various similar fruits or shrubs.
> Can be confused Expand
> 
> currant, current (see synonym study at current )
> In short, the joke made sense because of the spelling.


then it fits perfectly since Kaby is a lemon


----------



## Niobium

Quote:


> Originally Posted by *7850K*
> 
> what?


Ah yes, Phenom II, the miracle AMD CPU that didn't get completely owned by the i7 920 only because of the additional X58+DDR3 costs. Then Lynnfield happened and it did anyway.


----------



## Blameless

Quote:


> Originally Posted by *TerlChamco*
> 
> I remember reading something a while back, that this is where AMD is heading, multiple threads on an wide core design with smt(3-4 threads per core)


I'm sure both AMD and Intel are headed this way.

I'm equally sure it's not going to be practical for mainstream architectures that need to scale down to the low single digit watts of power consumption, if not further, until 10nm or below.
Quote:


> Originally Posted by *Niobium*
> 
> Ah yes Phenom II, the miracle AMD CPU that didnt got completely owned by the i7 920 only because of the X58+DDR3 additional costs. Then Lynnfield happen and did anyway.


Phenom II was solid, but I agree, it's often enormously overrated.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> Phenom II was solid, but I agree, it's often enormously overrated.


*shrugs* Could've been better if they hadn't abandoned it,
although there was the issue of clock speed, e.g. I've never seen them reach 4.5GHz on air.

One notable thing about Phenom II is that it actually uses fewer transistors than the Bulldozer uarch:

Deneb X4 = 45nm @ 258mm^2 ~ 758M transistors
Thuban X6 = 45nm @ 346mm^2 ~ 904M transistors
hypothetical X8 = 45nm @ 434mm^2 ~ 1050M transistors
BD/PD X8 = 32nm @ 315mm^2 ~ 1200M transistors


----------



## Dhoulmagus

Quote:


> Originally Posted by *Blameless*
> 
> I'm sure both AMD and Intel are headed this way.
> 
> I'm equally sure it's not going to be practical for mainstream architectures that need to scale down to the low single digit watts of power consumption, if not further, until 10nm or below.
> Phenom II was solid, but I agree, it's often enormously overrated.


The Phenom II became solid in late 2009/early 2010 after the price came down. People forget that around launch the 955 BE was around $250 while the i7-920 was $295. I got mine in early 2010 from Newegg for about $139; at that point it was the best bang-for-the-buck chip around. When the Phenom II was new, the i7-920 was the champion in performance per dollar.

I still have my 955 BE running, but she's past her shelf life for modern games; still smooth sailing in the OS with an SSD though. I might add that the C2 revision I have can barely get past 3.6 without suicide voltage, so the chip really didn't shine until C3 came around, short of golden samples. C3 steppings could blow past 4GHz and gave nice performance. It still ran all games just fine in its time.


----------



## bigjdubb

Have you guys seen this: http://wccftech.com/amd-ryzen-am4-motherboard-details-features/



I like the looks of that motherboard.


----------



## TerlChamco

Quote:


> Originally Posted by *bigjdubb*
> 
> Have you guys seen this: http://wccftech.com/amd-ryzen-am4-motherboard-details-features/
> 
> 
> 
> I like the looks of that motherboard.


I am hoping for a Maximus IX Extreme-type motherboard for AM4, with a water block that covers the CPU and power phases... sexy.


----------



## budgetgamer120

Quote:


> Originally Posted by *bigjdubb*
> 
> Have you guys seen this: http://wccftech.com/amd-ryzen-am4-motherboard-details-features/
> 
> 
> 
> I like the looks of that motherboard.


Incoming $500 motherboards.


----------



## epic1337

Quote:


> Originally Posted by *bigjdubb*
> 
> I like the looks of that motherboard.


You mean the cleaning nightmares? I hate cleaning heatsinks with weird shapes.
If the heatsinks are already bad enough, I can't imagine how much worse those shrouds would be.


----------



## kd5151

First time seeing Asus AM4 mobos!?!?!?


----------



## Kuivamaa

Quote:


> Originally Posted by *budgetgamer120*
> 
> In coming $500 motherboards


I expect them to come in around the Z270 price range, not X99. The CHVI should be $250-300 tops.


----------



## kd5151

Quote:


> Originally Posted by *bigjdubb*
> 
> That would make me happy, if they keep that monochromatic color scheme.
> Probably $400 + range for the top end ROG packed with useless stuff model.
> I just blow it all out with the electronics vac.


!!! Metro DataVac. Never go back to compressed air again! Keeps my PC and other things dust free. One of my best investments.


----------



## Aussiejuggalo

Asus really didn't skimp on that board; looks like they also expect Zen to be pretty good. Me likey. Really liking that board too because it's not stupid black and red, FINALLY!!!

Hope to see more mATX boards for launch; need me a Gene-equivalent board day 1, thanks.


----------



## TerlChamco

Quote:


> Originally Posted by *budgetgamer120*
> 
> In coming $500 motherboards


Found it. Totally worth $500 but highly doubtful; thinking $350. The base board is probably $250, then add the water block for another $25-50 (business agreement price), then add on the luxury tax.

Edit: should have added that EK is making these blocks for Asus, and they were announcing AM4 readiness for water blocks; I have not seen any other water block company even announce readiness yet.


----------



## kd5151

Quote:


> Originally Posted by *bigjdubb*
> 
> I use the Metro data vac pro 3/4 hp here at home and I have the toner vac at work along with the electric duster. It seems expensive at first but if you clean dust often enough it pays for itself fairly quickly compared to canned air. The biggest benefit to me is being able to keep blowing air without having to stop and and wait for the can to warm up.


You also don't have to worry about condensation from the can of compressed air.


----------



## bitsum

I would love to see some benchmarks using ThreadRacer, which I am updating shortly to provide more power. Since Ryzen has full SMT, it *should* shine here. It will be interesting to see your results with the current version, as I can compare them to CPUs going back a long way (since the algorithm hasn't changed). In the next update to ThreadRacer I'll offer different types of loads to show the performance difference between them on paired-core systems. For some types of loads you will really see a performance hit on the paired cores, depending on your CPU type.

NOTE: ThreadRacer is 100% freeware. I will most likely open-source it at some point.

I wrote a very quick article here about how to know if you should disable HyperThreading (or use Process Lasso's paired-core avoidance) here.

Basically, by putting a load on 2 adjacent cores and then a load on 1 single core, you can compare the difference in performance between the cores. The results may surprise some! For previous-generation AMD processors, you would see a hit when using adjacent cores, due to the computational units shared by those adjacent cores, but this is not true of Ryzen, or so I hear.

Here is an Intel i7-6650 on a Microsoft Surface -- where the power plans are *locked down*, a separate issue I feel compelled to toss in.
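The per-core comparison described above reduces to normalizing each core's iteration count against the fastest core (a sketch with hypothetical numbers; ThreadRacer's actual internals aren't shown here):

```python
def pct_of_max(iterations_per_core: list[int]) -> list[float]:
    """Express each core's iteration count as a percentage of the
    fastest core; a paired-core penalty shows up as a low percentage."""
    best = max(iterations_per_core)
    return [round(100 * n / best, 1) for n in iterations_per_core]

# Hypothetical CMT-style result: cores 0 and 1 share execution units and
# both slow down; core 2 runs alone at full speed.
print(pct_of_max([620_000, 615_000, 1_000_000]))  # -> [62.0, 61.5, 100.0]
```

On a chip without shared units between siblings, the loaded cores would all sit near 100% of each other instead.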


----------



## cssorkinman

Quote:


> Originally Posted by *bitsum*
> 
> I would love to see some benchmarks using ThreadRacer, which I am updating shortly to provide us with more power. Since Ryzen has full SMP, it *should* shine here. It will be interesting to see your results of the current product, as I can compare that to CPUs going back a long ways (since the algorithm hasn't changed). The next update to ThreadRacer I'll offer different types of loads to show the difference in performance between them on paired core systems. For some types of loads you will really see a performance hit on the paired cores, depending on your CPU type.
> 
> NOTE: ThreadRacer is 100% freeware. I will most likely open-source it at some point.
> 
> I wrote a very quick article here about how to know if you should disable HyperThreading (or use Process Lasso's paired-core avoidance) here.
> 
> Basically, by putting a load on 2 adjacent cores and then a load on 1 single core, you can compare the difference in performance between the cores. The results may be surprising to some! For previous generation AMD processors, you would see a hit when using adjacent cores, due to the shared computational units by those adjacent cores, but this is not true of Ryzen, or so I hear.
> 
> Here is an Intel i7-6650 on a Microsoft Surface -- where the power plans are *locked down*, a separate issue I feel compelled to toss in.


The program gave an error when I opened it but it ran anyway. fwiw


----------



## bitsum

What was the error? Do you have CryptoPrevent installed by chance? That software has particular rules about how one can use the temporary folders.

Thanks for posting your results!

One of the bad things with the current version is that the iteration count isn't tied to time, so it's hard to get a good performance measurement other than the graphs. In that respect, running it on ALL cores really yields no information, but running it as you did in the second test shows the disparity in core performance, of which the Windows CPU scheduler is aware and thus tries to avoid scheduling to paired cores on applicable CPUs.

I'll work on that, so that we have an X/sec figure instead of total iterations. Now that I've brought this out of mothballs, it seems obvious. I will try to get to it this week; easy change.

However, the core graphs and % of max performing core should be relatively static once you start the app and it continues, therefore this figure *is* something that we can reliably compare processors against.


----------



## Dawn of War

Has AMD released, or does anyone have, a diagram/timeline of when the next generation of Ryzen will be out, like I see Intel do all the time? I'm still running Haswell but wouldn't mind switching to AMD if it's worth it in a year or two, if/when they release an even newer architecture.


----------



## Tojara

Quote:


> Originally Posted by *Dawn of War*
> 
> Has AMD released or does anyone have a diagram/listing of when the next generation of Ryzen will be out like I see Intel do all the time? I'll still running haswell but wouldn't mind switching to AMD if its worth it in a year or two if/when they release an even newer architecture.


This is the newest roadmap:

7nm should be in mass production by H2 2018, so the timeframe for Zen 2 should be somewhere between H2 2018 and H1 2019, if the third iteration is out by the end of 2020. GloFo's 7nm also seems like a massive improvement over 14nm LPP.


----------



## Dawn of War

Awesome, thanks for the info. So a wait til late 2018 it is.


----------

