# [Adoredtv] Ryzen 3000, Radeon 3000 Series LEAKS - It's Game On!



## ILoveHighDPI

3850X will be mine.

This may be OCN but at the very least I'm betting most people can appreciate taking the lottery out of buying a fast CPU.

Only four weeks until CES. If the base R9 3800X turns out to be true I'll have to hold my breath for another five months.


----------



## AlphaC

$130 for 6 cores / 12 threads with a 4GHz turbo... that's crazy. I would have expected $150ish.

The ~$180 8c/16t with a 4.4GHz turbo isn't as amazing, though; the R7 2700X already turbos to 4.35GHz. The $230 4.8GHz-turbo variant looks to be the chip to buy.

~$300 for 12 cores / 24 threads is another highlight, especially with 5GHz.

~$450-500 for 16 cores / 32 threads is roughly TR 1950X pricing with a large speed bump to ~5GHz.
---> You need a B550 or X570 motherboard for Ryzen 9

The main competitor is the 8c/16t APU with a 20CU iGPU for $200. You can't beat that on value; even an i5 already costs you $200ish.

---------------

GPU-wise:

RX 3060 4GB GDDR6 (Navi 12), 75W, RX 580 performance for $130 vs GTX 2050 / GTX 1060 <--- a major OEM win for all weak-PSU desktops

RX 3070 8GB GDDR6 (Navi 12), 120W, Vega 56 performance for $200 vs GTX 2060 / GTX 1070

RX 3080 8GB GDDR6 (Navi 10), 150W, Vega 64 +15% for $250 vs GTX 2070 / GTX 1080


----------



## ILoveHighDPI

Who would have guessed the 9900K could possibly lose half its value within six months of release.

Four weeks, then we'll know.


----------



## Aussiejuggalo

If true, AMD could take a good chunk of the market, seeing as Intel has crap all and might not have crap all till 2020.


But... AMD needs to kick the motherboard manufacturers in the ass. We have a handful of semi-decent boards, and most of them are the stupidly expensive high-end ATX ones; mATX and ITX don't really have much. Sure, some have nice features, but the power delivery, and the cooling of that power delivery, on most boards is laughable.


----------



## epic1337

AlphaC said:


> ~$300 for 12 cores / 24 threads is another highlight, especially with 5GHz.
> 
> 
> The main competitor is the 8c/16t with IGPU that has 20CU for $200. You can't beat that on value. Even an i5 costs you $200ish already.
> 
> ---------------
> 
> GPU-wise: RX 3070 8GB GDDR6 (Navi 12), 120W, Vega 56 performance for $200 vs GTX 2060 / GTX 1070


 
my interest is piqued.
good CPU with a good clock speed, but i doubt it'd be all-core 5GHz.
the APU has the potential to be completely stand-alone, if the 20CU IGP can at least match their RX 560.


----------



## Imouto

Aussiejuggalo said:


> But... AMD need to kick the motherboard manufactures in the ass, we have a handful of semi decent boards, most of them are the stupidly expensive high end ATX, mATX and ITX don't really have much, sure some have nice features but the power delivery and cooling of that power delivery on most boards is laughable.


What AMD needs is to make reference motherboards.


----------



## epic1337

Imouto said:


> What AMD needs is to make reference motherboards.


that or some form of standardization.
e.g. budget mATX should have its x16 GPU slot on the 2nd row, with the 1st row either occupied by an x4, an x1, or nothing at all.

an x4 would allow an M.2 adapter card or 10GbE card, and an x1 would allow a sound card, wifi card, or gigabit NIC to be placed without blocking the GPU's intake.
meanwhile, having nothing on the 1st row would allow large heatsinks to fit with plenty of clearance to spare.


----------



## paulerxx

AMD!
AMD!
AMD!
AMD!
AMD!
AMD!

I wonder what the IPC increase will be on top of all the other cherries.


and boy do I want one of these R9 3080s!


----------



## LancerVI

This is truly exciting. If true, I'm going all in on an all AMD build and finally going to get a freesync monitor.

I cannot wait for CES!!!!


----------



## Aussiejuggalo

Imouto said:


> What AMD needs is to make reference motherboards.





epic1337 said:


> that or some form of standardization.
> e.g. budget MATX should have it's x16 GPU slot on the 2nd row, with the 1st row either occupied by an x4, x1 or none at all.
> 
> x4 would allow for an M.2 adapter card or 10GbE card, x1 would allow for a sound card, wifi card, or gigabit card to be placed without blocking the GPU's intake.
> while not having anything on the 1st row would allow large heatsinks to fit with plenty of clearance to spare.



Yeah, AMD needs to make some kind of standard that the manufacturers have to stick to. The current mATX boards are a mess. I know mATX isn't super popular, but still, most have horrible VRM cooling and power delivery, and bad layouts in general, which makes it hard for those of us who can't / don't want to have ATX or ITX to find a semi-decent board. Even if they don't make a rock-solid standard, the one thing I wish they would force is putting the M.2 *above* the first PCIe slot instead of directly under it. I cooked my first M.2 because of that, and I now have to use a PCIe adapter card for my M.2 so I don't cook it again.

Another thing AMD should push for is server boards. If they can get Supermicro to put out even 1 or 2 dedicated AM4 server boards with onboard graphics chips (as is standard on a proper server board), AMD could start to take over the home server and DIY NAS market. It could mean big money for them, because I know a lot of people are sick of the overpriced, underperforming Xeons. Gimme that R7 3700 or R9 3800 with 32-64GB of ECC RAM for both my NAS and server and I'd be very happy.


----------



## ryan92084

Until it's officially announced I won't hope for a core count bump this gen. I'll take those clock speeds, though.

Edit: Not even Reddit believed the "leak" discussed in the beginning of the video https://www.reddit.com/r/Amd/comments/a2jfu9/email_with_amd_2019_cpu_lineup_colleague_messaged/


----------



## epic1337

Aussiejuggalo said:


> Yeah AMD need to make some kind of standard that the manufactures have to stick to, the current mATX boards are a mess, I know mATX isn't super popular but still, most have horrible VRM cooling and power delivery, bad layouts in general, makes it hard for those of us who can't / don't want to have ATX or ITX to find a semi decent board. Even if they don't make a rock solid standard the one thing I wish they would force is putting the M.2 *above* the first PCI-e slot instead of directly under it, I cooked my first M.2 because of that, I now have to use a PCI-e adaptor card for my M.2 so I don't cook it again.
> 
> Another thing AMD should push for is server boards, if they can get Supermicro to put out even 1 or 2 dedicated AM4 server boards with onboard graphic chips (as is standard on a proper server board) AMD could start to take over the home server and DIY NAS market, could mean big money for them because I know a lot of people are sick of the overpriced under performing Xeons. Gimmie that R7 3700 or R9 3800 with 32 - 64GB of ECC RAM for both my NAS and server and I'd be very happy.



from what i remember, there's not enough space between the CPU socket and the 1st row PCIe, at least not for boards that use an x16 on the 1st row.


however if they do it like this...


----------



## LancerVI

ryan92084 said:


> Until it's officially announced i won't hope for a core count bump this gen. I'll take those clock speeds though.



Agreed. The core counts were gravy. I want the clock speed. If the core counts in this leak aren't accurate but the clock speeds are (or close), with a nice lil bump in IPC, I'd be very happy.


----------



## Aussiejuggalo

epic1337 said:


> from what i remember, theres not enough space between the CPU socket and 1st row PCIe, at least not for those that uses an x16 on the 1st row.
> 
> 
> however if they do it like this...



That's pretty much what I meant. Having an M.2 directly under the GPU can cause too much heat buildup and cause damage, but if it's above the GPU (closer to the CPU) or near the chipset then it's not too bad. Right under is just utterly stupid unless you watercool the GPU.


Really, it's not even an AMD thing, it's more of a general motherboard thing, but I've noticed it's more AMD boards than Intel boards because the AMD ones are made cheaper.


----------



## Jspinks020

Well, the 2600X is like a slower 8700K... in some stuff, yeah, it's probably threaded like that and close... and for that deal. Make 3rd gen a little faster.


----------



## kd5151

wait for CES 2019. plzz.


----------



## Firehawk

I didn't see anything that was too far-fetched, except maybe the clock speeds.

AMD has already shown chiplets on their EPYC line, so we know they have that, and it just makes sense to use it on Ryzen too. I can definitely see them binning the chiplets for clock speed and core count. They're already binning current Ryzen chips, as seen in Ryzen Master, which shows you your best cores. Clock speeds are almost guaranteed to get a bump from the die shrink and improved processes.

One thing that did concern me was his speculation that an x500-series board will be required for the high-end parts. I bought a CH7 precisely because of its ridiculous VRM, in anticipation of upgrading to a 16-core in the future. I don't want to be limited arbitrarily just because some manufacturers are cheaping out on the VRM. Though it might be handled similarly to the 9900K, where the TDP of the chip gets limited if the VRM on the board sucks.

I'm also curious if the core and the chiplets will have independent clock control. Could make for a more interesting overclocking situation.


----------



## AlphaC

Aussiejuggalo said:


> If true AMD could take a good chunk of the market seeing Intel has crap all and might not have crap all till 2020.
> 
> 
> But... AMD need to kick the motherboard manufactures in the ass, we have a handful of semi decent boards, most of them are the stupidly expensive high end ATX, mATX and ITX don't really have much, sure some have nice features but the power delivery and cooling of that power delivery on most boards is laughable.



The B450M Mortar is more or less the B450 Pro Carbon in mATX form; however, it is cut down in terms of features and the MOSFETs are slightly worse.


ITX-wise, I think the best AM4 board is the ASUS X370 STRIX ITX with 6x 50A Infineon power blocks. I'd like to see an ASRock Phantom ITX board like the Z370/Z390 one, with 5x ISL99227B 60A power stages.


As far as midrange goes, the STRIX X370/X470 and the older Taichi are typically under $200. The X470 Pro Carbon gets away with using more MOSFETs than the lower boards, but it does have decent performance. The Prime X470 Pro could have had a better heatsink; it suffers the same problem as the X370 Gaming K7, in that the 6x 40A IR3553 power stages aren't cooled well enough even though they are capable.



For B550, if the rumor is true, there definitely has to be more robust power delivery. I'd expect at least 8x 60A power stages to be on par with what was required for X299 and X399.


----------



## doom26464

I'll keep some salt handy, but those clocks look a bit on the high side. A lot of cores, though, at still-competitive pricing. Intel may have to take a back seat next year, but time will tell.

7nm yields could be working out well for AMD, who knows. CES will have some buzz for once this year, maybe.


----------



## ZealotKi11er

I think AMD will push the lead with Ryzen 9. It's a given; they can't leave it on the table. That rumoured 10-core Intel is there for a reason. Can't wait to finally upgrade my 3770K. I have jumped from 1 core to 4 cores, and now 16C.


----------



## The Pook

Aussiejuggalo said:


> That's pretty much what I meant, having an M.2 directly under the GPU can cause to much heat build up and cause damage but if it's above the GPU (closer to the CPU) or near the chipset then it's not to bad but right under is just utterly stupid unless you watercool the GPU.



If your build has *no* airflow, sure... but that's not the fault of the M.2's location, that's a bad build problem. My NVMe is just fine under my 1080 Ti, despite being in a case with a terribly restrictive front panel that's further restricted by a front-mounted radiator.



doom26464 said:


> Ill keep some salt handy, but those clocks look a bit on the high side. Alot of cores though at still competitive pricing, Intel may have to take a back seat for next year but time will tell.
> 
> 7nm yields could be working out good for AMD who knows, CES will have some buzz for once this year maybe.



Big ol' grain of salt, especially since no source was given and the variation on the same SKU's specs is >300MHz between some of them.

If you believed the leaks on the 2000 series, they were supposed to be 4.5 and OC to 5.0 too.


----------



## SoloCamo

This is the cpu I've been waiting for... can't wait until we get confirmation. Until then, salt is on hand.


----------



## epic1337

i don't really need to upgrade anytime soon, but with this kind of lineup i'm looking forward to what's coming 1-2 years from now. :wheee:


----------



## guttheslayer

NOW WHO SAID THAT 16C/32T WON'T COME TO MAINSTREAM! WHO?

Oh, I remember, it was os2wiz, pony-tail, and chakku. Looks like it's time for them to eat their words.

Anyway, it looks like AMD now offers superior performance over Intel *at half the price, or twice the cores at the same price*. A wonderful 2019 indeed.


----------



## ILoveHighDPI

With chiplets coming to mainstream CPUs, this could also be big news for consoles.
There are going to be _tons_ of low-binned chips readily available to slap into mass-produced boxes.
For the first time ever, we could see consoles actually share real desktop hardware.

I still expect 8 cores in the PS5, but now I'd guess that's using two binned 6-core chiplets.


----------



## guttheslayer

ILoveHighDPI said:


> With Chiplets coming to mainstream CPUs this could also be big news for consoles.
> There's going to be _tons_ of low binned chips readily available to slap into mass produced boxes.
> For the first time ever we could see consoles actually share real desktop hardware.
> 
> I still expect 8 cores on PS5 but now I'd guess that's using two binned 6 core chiplets.


The real reason the mass-produced box can offer PC levels of performance is not the number of cores (4 or 6 is already sufficient); it's the awesome IPC and clock speed that Zen 2 brings to the consoles.


----------



## lightsout

Hopefully these are close to reality and we have a huge winner. Loving AMD these days.


----------



## SoloCamo

lightsout said:


> Hopefully we are close to these and we have a huge winner. Loving AMD these days.


Just wish their GPUs would return too. But I'll take one competitive front over none.


----------



## LancerVI

SoloCamo said:


> Just wish their GPUs would return too. But I'll take one competitive front over none.



I'll take Vega 64 +15% for $300. I'd probably swap out my 1080 Ti for that and get me a Freesync monitor.


----------



## white owl

Who's glad they don't have a 9900k? Lol


----------



## Raghar

ryan92084 said:


> Until it's officially announced i won't hope for a core count bump this gen. I'll take those clock speeds though.
> 
> Edit: Not even Reddit believed the "leak" discussed in the beginning of the video https://www.reddit.com/r/Amd/comments/a2jfu9/email_with_amd_2019_cpu_lineup_colleague_messaged/


6-core dies are sensible: either an 8-core with 2 disabled, or a 6-core at full yield. AMD is gluing all Ryzen chips together, so gluing 6-core dies instead of 4-core ones is quite likely.
Intel would have to compete on price or lose the gaming market.


----------



## ILoveHighDPI

guttheslayer said:


> The real reason the mass-produced box can offer PC levels of performance is not the number of cores (4 or 6 is already sufficient); it's the awesome IPC and clock speed that Zen 2 brings to the consoles.


Right, but we have always known that anything would be better than the Jaguar cores they chose in 2013; that was a massive mistake from the beginning.
And depending on what workload you're looking at, the current Jaguar chips can behave more like a hyperthreaded quad core than a non-threaded octa-core.
16 threads will be a big deal.

I actually don't expect the new consoles to go much over 3GHz either; these still need to be power-efficient designs. 3.2GHz would be 2x the clock of the base PS4, but I have a hard time believing they would even go that far.

So we're probably looking at a ~6x gain, in roughly equal portions from IPC, clock, and core count.
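The rough math behind that estimate can be sketched out; every number below is an illustrative assumption, not a confirmed spec:

```python
# Back-of-the-envelope sketch of the ~6x claim; all figures are
# assumptions for illustration only.
jaguar_clock = 1.6   # GHz, base PS4 Jaguar cores
zen2_clock = 3.0     # GHz, guessed console-friendly Zen 2 clock
clock_gain = zen2_clock / jaguar_clock   # ~1.9x
ipc_gain = 1.8                           # guessed Zen 2 vs Jaguar IPC
thread_gain = 16 / 8                     # 16 threads vs 8 weak cores
total = clock_gain * ipc_gain * thread_gain
print(f"~{total:.1f}x overall")  # roughly the 6x figure
```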


----------



## pony-tail

I am going to sit in a corner and sulk!
I just bought a new motherboard and CPU for my gaming rig (only a couple of days ago): an X470 AORUS GAMING 7 WIFI and an R5 2600X.
It will get slaughtered by this new stuff. With any luck, this being AMD and not Intel, I may be able to re-use the mobo.
The video says all but the R9 will be backwards compatible, but we will see.


----------



## white owl

guttheslayer said:


> The real reason the mass-produced box can offer PC levels of performance is not the number of cores (4 or 6 is already sufficient); it's the awesome IPC and clock speed that Zen 2 brings to the consoles.


I agree with you on the surface, but I really feel like GPUs are tapped out as far as massive strides in IPC and clock speed go. The only place left is either more GPU cores (or even bigger dies) or simply offloading more things to the CPU. I really think the CPU is underutilized in games because consoles don't have loads of cores to work with anyway. Once consoles get more cores (even if they're low-clocked), I think devs will learn to really utilize them and we'll see the scaling we've always wanted. Plus, the PS5 will use Vulkan while Xbox will use DX12, so I can really see things changing when all the pieces come together.
Hell, CPUs are tapped out too as far as speed goes; the only way to make the situation better is by using more of them, in my mind.


----------



## sargatanas

Also glad quad cores are finally obsolete in this lineup. After 10 years of Intel milking quad cores, it is finally over!


----------



## Wishmaker

Yay!
2019 looks like a good year to build a PC rig!


----------



## ILoveHighDPI

Wishmaker said:


> Yay!
> 2019 looks like a good year to build a pc rig!


I haven't felt this excited for PC hardware since 2010.


----------



## Imglidinhere

Genuinely curious, and somewhat excited, to know if the Ryzen 3000 CPUs are truly going to be the core-count kings from here on. It would be interesting if Ryzen 3 then went toe to toe against Core i5s on value proposition... Like... there is no way Intel can compete.


----------



## Wishmaker

ILoveHighDPI said:


> I haven't felt this excited for PC hardware since 2010.




We have 5 machines in my household which are in desperate need of attention. I might go all-out AMD if the price and motherboards are top notch!






----------



## Newbie2009

Looks like I'll be upgrading in 2019.


----------



## delboy67

paulerxx said:


> AMD!
> AMD!
> AMD!
> AMD!
> AMD!
> AMD!
> 
> I wonder what the IPC increase will be on top of all the other cherries.
> 
> 
> and boy do I want one of these R9 3080s!


13% from AMD themselves at the EPYC reveal. I haven't been this excited about hardware in a long time.


----------



## ryboto

ZealotKi11er said:


> I think AMD will push the lead with Ryzen 9. Its a given. They can't leave it on the table. That rumour 10 Core Intel is there for a reason. Can't wait to finally upgrade my 3770K. I have jumped from 1 Core to 4 Core and now 16C.


I'm with ya! My 3570K wants to retire!


----------



## parityboy

guttheslayer said:


> NOW WHO SAID THAT 16C/32T WONT COME TO MAINSTREAM! WHO?


Me. 

I think 16C may be too much for AM4, or at least heavily crippled compared to Threadripper. Chiplets + I/O (with heavy caching) may alleviate some of it, but with dual-channel memory the CPU is going to be memory-starved. Apart from Prime95, how many desktop workloads are out there that require huge multi-core CPU performance but not decent memory bandwidth? I can see 16C on AM5, because DDR5 will have the bandwidth for it to be worthwhile. In light of Intel's rumoured 10-core, plus the fact that the 1950X is the best-selling Threadripper, I think AM4 _may_ see 12C, but not 16C.
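The bandwidth worry sketches out like this (dual-channel DDR4-3200 is an assumed configuration here, and peak theoretical bandwidth ignores real-world losses):

```python
# Rough per-core bandwidth arithmetic for a dual-channel AM4 setup.
mt_per_s = 3200          # DDR4-3200: million transfers per second (assumed)
bytes_per_transfer = 8   # one 64-bit channel moves 8 bytes per transfer
channels = 2             # dual channel on AM4
peak_gb_s = mt_per_s * bytes_per_transfer * channels / 1000  # 51.2 GB/s
for cores in (8, 12, 16):
    print(cores, "cores:", round(peak_gb_s / cores, 1), "GB/s per core")
```

The same peak pool spread over twice the cores halves the per-core share, which is the crux of the memory-starvation argument.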




ZealotKi11er said:


> I think AMD will push the lead with Ryzen 9. Its a given. They can't leave it on the table. That rumour 10 Core Intel is there for a reason. Can't wait to finally upgrade my 3770K. I have jumped from 1 Core to 4 Core and now 16C.


Quick question for those who may remember: when the original Threadripper rumours started, was it ever referred to as Ryzen 9?


----------



## Kaltenbrunner

I think the 7nm gaming GPUs aren't rumoured till after Computex in the summer, so screw waiting that long.


----------



## Frugal

The most exciting part (if this leak is accurate) is that the 4-core CCX was replaced with 3 chiplets:
a 4-core
a 6-core
an 8-core.


----------



## PwrSuprUsr

A 5GHz boost clock means these things will probably actually run at an all-core 5GHz if properly cooled. This is huge for overclocking and for squeezing Intel. The biggest issue with AMD's Ryzen and Threadripper chips wasn't the IPC or the cores, but the boost clock and overall overclockability. When you can push 5.0GHz on an 8-core chip from Intel but top out at 4.4GHz on AMD, it's a big difference in performance.
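For a rough sense of that gap (the small IPC edge assumed for Zen+ below is an illustrative number, just to show how little it offsets the clock deficit):

```python
# Quick arithmetic on the all-core clock gap; the AMD IPC edge is an
# assumed figure for illustration, not a measured one.
intel_clock = 5.0   # GHz, typical 9900K all-core OC
amd_clock = 4.4     # GHz, typical 2700X all-core OC
clock_advantage = intel_clock / amd_clock - 1   # ~13.6% for Intel
ipc_edge_amd = 0.03                             # assumed slight AMD IPC edge
net = (1 + clock_advantage) / (1 + ipc_edge_amd) - 1
print(f"clock gap ~{clock_advantage:.1%}, net ~{net:.1%}")
```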


----------



## airisom2

Adored's sources have been correct in the past (RTX, EPYC 2 core count, chiplets, etc.), so this is probably true as well. Pretty sure he said that interposers will be Zen 3 too.

PCIe 4.0 with 64 lanes, 64 cores, nice clocks and IPC, quad-channel DDR4, a large IHS to dissipate heat: Threadripper 2 looks like the new X79. If they found a way to swap NUMA latencies for UMA using the chiplets and I/O die, then TR's gonna be a long-lasting platform.


----------



## ToTheSun!

First time I've been excited while watching an adoredtv video. Even his accent couldn't take away from the hype.


----------



## Particle

Frugal said:


> The most exciting (if this leak is accurate) is that the 4-core CCX was replaced with 3 chiplets:
> a 4-core
> a 6-core
> an 8-core.


There wouldn't be much reason to spin three dies for that. We already know the 7nm 8C die is absolutely tiny, based on the pictures of the Rome sample we were shown.

I would expect that parts calling for 4, 6, or 12 cores would still use 8C dies, but with cores disabled.
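A toy defect-yield model shows why salvaging one 8C die design beats spinning separate masks; the die size, defect density, and core-area fraction below are made-up numbers for illustration:

```python
import math
from math import comb

# Toy Poisson defect-yield sketch of the salvage argument: design one 8C
# die and sell parts with bad cores disabled. All inputs are assumptions.
die_area_cm2 = 0.8      # hypothetical ~80 mm^2 chiplet
defects_per_cm2 = 1.0   # hypothetical defect density for a young node
core_fraction = 0.5     # assume half the die area is core logic
lam_per_core = die_area_cm2 * defects_per_cm2 * core_fraction / 8
p_good = math.exp(-lam_per_core)   # chance a single core is defect-free

def at_least(k_min, n=8, p=p_good):
    """Probability that at least k_min of n cores are defect-free."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

print("full 8C parts:", round(at_least(8), 2))
print("sellable as 6C or better:", round(at_least(6), 2))
```

Even with a pessimistic defect density, nearly every die yields at least 6 good cores, which is why one die plus harvesting covers the whole stack.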


----------



## guttheslayer

ryan92084 said:


> Until it's officially announced i won't hope for a core count bump this gen. I'll take those clock speeds though.
> 
> Edit: Not even Reddit believed the "leak" discussed in the beginning of the video https://www.reddit.com/r/Amd/comments/a2jfu9/email_with_amd_2019_cpu_lineup_colleague_messaged/


I am almost 95% sure the core bump will happen. Just sit back and watch 16C/32T come to the mainstream AM4 socket at a lower price than the 9900K currently.

It doesn't take much to see that a 2-chiplet design is more than enough to fit inside the AM4 socket, and they can do it anytime because the chiplets are cheap enough to produce.


----------



## tajoh111

guttheslayer said:


> NOW WHO SAID THAT 16C/32T WONT COME TO MAINSTREAM! WHO?
> 
> 
> Oh I remember it was os2wiz, pony-tail, and chakku, looks like its time for them to eat their word.
> 
> 
> 
> Anyway it look like AMD offer superior performance now than Intel *at half the price, or twice the cores at same price*. A wonderful 2019 indeed.


Don't believe the optimistic rumors until products hit. You should learn from your confidence in the Vega 20 rumors: what gets the most hits and excitement as a rumor does not reflect what actually comes out.

Without any competition from Intel, since their plans are so fubared, AMD would price a 16-core 5.1GHz chip at $700+ because they would have the outright performance crown, and they could sell cut-down and lesser chips at $500.

When your competition is in a position where it cannot counter in the near future, that is the time to raise prices, which AMD will, because it is simply profit maximization, and there will be supply issues at 7nm since TSMC is the only source of it at the moment. Higher prices along with top-end performance also increase your brand value.

Intel cannot release 5GHz 16-core chips (without a 400W TDP) and won't be able to till 10nm. If AMD is in a position where Intel cannot counter for a year and they have the performance crown, they will rightfully attempt to raise pricing to increase their margin and ASP.

Having your product sell out because you priced it too low and your supply is insufficient is not smart business. It also leaves less room for price drops and updates when refreshing your product in the future.

Considering the rumors seem to already be debunked by the original source, this looks more like an attempt by someone to raise AMD's future profile and keep the stock propped up.


----------



## ejb222

tajoh111 said:


> Don't believe the optimistic rumors until products hit, you should learn from your confidence in the Vega 20 rumors that what gets the most hits and excitement in rumors does not reflect in reality what does come out.
> 
> Without any competition from Intel since their plans are so fubared, AMD would price a 16 core 5.1 ghz chip at 700 dollars + because they would have the outright performance crown and can sell cut down and lesser chips at 500.
> 
> When your competition is in a position where it cannot counter in the near future, this is the time when you raise prices which AMD will because it simply profit maximization and there will be supply issues at 7nm since TSMC is the only source of it at the moment. Higher prices along with top end performance also increases your brand value.
> 
> Intel cannot release 5ghz 16 core chips(without a 400 watt tdp) and won't be able to till 10nm. If AMD is in such a position where intel cannot counter for a year and they have the performance crown, they will rightfully attempt to increase pricing to increase their margin and ASP.
> 
> Have your product sell out because your priced your product too low and your supply is insufficient is not smart business. It also leaves less room for price drops and updates in terms of refreshing your product in the future.
> 
> Considering the rumors seem to already be debunk in the original source, this is more like an attempt by someone to raise AMD's future profile to keep the stock propped up.


Or keep the prices a bit lower than expected in order to put pressure on Intel before they release a competitive chip. Imagine if Ryzen 9 were $500 and Intel released a similar chip 8 months to a year later at a higher price? That would be bad for Intel. They would have to engineer their products around the existing market price and throw out their current price-tier habits. Intel used to have the advantage of setting the market with earlier, higher-performance releases... now AMD is in that position, and can literally influence the margins Intel will be operating at. Do you see Intel charging less than AMD for a similar-performing part released at a later date, especially in the top consumer tier or enthusiast parts?


----------



## Slaughtahouse

Let's build up the hype like every year with AMD!

Let's wait to see the cost of the CPUs and the GPUs.

We can safely assume CPUs will get a decent bump in performance, but GPUs?

It all depends on cost. Mining is down, so possibly they will undercut Nvidia, but corporate AMD is terrible at knowing how to position its own products.


----------



## rluker5

I wonder whether a separate I/O chiplet would hurt performance more or less than inter-CCX latency does. The time a CPU takes to do something includes communication.
I guess we will see when it comes out.


----------



## EniGma1987

I saw these yesterday too, and I was laughing as I read the articles. Higher core counts, lower TDP, a massive clock speed boost, and much lower prices: it's basically just a wish list of the most amazing thing for each category of CPU. It is way too good to be true. 32 threads, at 5.1GHz, with lower TDP, and at lower prices than the Threadripper lineup. Ya, ok.






GPU specs are actually about what I would expect from Navi, but the prices seem just a bit low. I would have expected those to come with $300, $220, and $150 price tags.
Now if AMD would just go and make a Navi 20 die with double everything Navi 10 has, except for the memory controller, which could just move up to 384-bit. Then we would finally have a top-performing GPU from AMD that could compete with the current 2080 Ti, but with a $650 price tag.


----------



## tajoh111

ejb222 said:


> Or keep the prices a bit lower than expected in order to put pressure on intel before they release a competitive chip. Imagine if ryzen 9 was 500 and intel released a similar chip 8months to a year later at higher price? That would be bad for intel. They would have to engineer their products on existing market price and throw out their current price tier habits. Intel had the advantage before in that they could set the market with earlier higher performance releases...now amd is in that position. Literally can influence the margins intel will be operating at. Do you see intel charging less than amd for a similar performing part released at a later date...especially in the top consumer tier or enthusiast parts?


That's grasping at straws at best.

If AMD has the performance crown and Intel has little to no chance of taking it back, they will not value-price their product at release in anticipation of a release from Intel.

If Intel comes out with a product with similar performance, they will price it similarly to AMD at the very least, and more expensive in all likelihood, because of their brand. They have done it before and will do it again, because they know the Intel brand carries a lot of weight with consumers and businesses. If their product performs worse than AMD's, they will have to price it below AMD, and this will boost AMD's mindshare as reviews reflect it. Of course, with Intel's arrogance, I would not be surprised to see them price it higher than AMD, expecting the Intel brand to carry them. If Intel's products outperform AMD's, they will price them higher than AMD, as they always have; and if they don't, AMD has the option of a price drop, which is much easier when your pricing starts out higher.

More importantly, AMD cares more about its own margins than Intel's. Trying to influence Intel's margins by pricing their products lower isn't smart business. It's emotional and vindictive while being self-destructive: they would literally be lowering their profits just to screw with Intel. The big guys will sometimes do this to put a smaller business out of business so they can get a monopoly. But a company that badly needs to raise its profits, like AMD, needs to strike opportunistically to gain as much profit and mindshare as possible. Giving a low price when they don't need to, because the risk from Intel is so low, is simply bad business.

Not pricing low also helps AMD avoid a price war with Intel. Keeping market pricing high is mutually beneficial for Intel and AMD, as it can tremendously raise profits, as the memory market has shown. As I have illustrated, low pricing does AMD no favors, given the little-to-no risk of higher pricing and the guaranteed upside, to varying degrees, of raising its average selling price.


----------



## mouacyk

l.o.l.

Given the horrible pricing of everything by the market leaders, though, I too wish for these to become reality, at least in some parts.


----------



## Dimaggio1103

So who will be giving the eulogy at Intel's funeral?

24 threads at 5GHz... jaw-dropping if true. I was happy with first-gen Ryzen; this blows my mind.


----------



## epic1337

tajoh111 said:


> If AMD has the performance crown and Intel has little to no chance to take it back, they will not value-price their product upon release in anticipation of a release from Intel.



it's true that baselessly pricing their chips dirt cheap isn't a good idea, as this'll just cut into their margins.
but baselessly pricing their chips at exorbitant prices just because they perform great isn't a good idea either, especially when they should be prioritizing expanding their market share.

imho i like the way they're currently pricing Zen processors: at a level comfortable for both consumers and AMD.



for example, they could sell a 12C/24T @ 4GHz processor on AM4 for $500 right now and it'd look like a bargain compared to TR4, yet it's not so cheap that it'd eat into 8C/16T or TR4 sales.


----------



## CDub07

I was with the video right up til the Ryzen 16C/32T. I don't see AMD eating into its own Threadripper turf. The 12C/24T is a bit of a stretch, but AMD has to fight back with something new and shiny. I think core speeds will be king over increased core count.


----------



## doom26464

Salt, guys, remember salt.

This should not be taken as an official launch; it's nothing more than speculated rumour.

I have no doubt AMD will be in a strong position come the 30xx series, but at the same time foolish wishful thinking will lead to disappointment. Those clocks, at that many cores, at that price, need a reality check.

16 cores on mainstream is pretty overkill. Sure, it is doable, but nothing will come close to using that many cores for a while in our current ecosystem outside of a few special use cases.


----------



## Slaughtahouse

doom26464 said:


> Salt, guys, remember salt.
> 
> This should not be taken as an official launch; it's nothing more than speculated rumour.
> 
> I have no doubt AMD will be in a strong position come the 30xx series, but at the same time foolish wishful thinking will lead to disappointment. Those clocks, at that many cores, at that price, need a reality check.
> 
> 16 cores on mainstream is pretty overkill. Sure, it is doable, but nothing will come close to using that many cores for a while in our current ecosystem outside of a few special use cases.


Logic and salt are not present in the OCN News section.

It's all hype, and then complaints when the product releases.

Rinse & repeat every hardware cycle.


----------



## CDub07

doom26464 said:


> Salt, guys, remember salt.
> 
> This should not be taken as an official launch; it's nothing more than speculated rumour.
> 
> I have no doubt AMD will be in a strong position come the 30xx series, but at the same time foolish wishful thinking will lead to disappointment. Those clocks, at that many cores, at that price, need a reality check.
> 
> 16 cores on mainstream is pretty overkill. Sure, it is doable, but nothing will come close to using that many cores for a while in our current ecosystem outside of a few special use cases.



That's my problem now. I don't need anything. I was dead set on upgrading to a 2700X, but after looking at the numbers it's not worth the money coming from 1st-gen Ryzen. I don't really care about increased core count at the moment; higher clock speeds and IPC are where AMD really needs to make a big push.


----------



## mouacyk

This hype is exploiting the unchecked expectations of a die shrink from 14nm to 7nm. A 50% die shrink is significant, and IF it affects the CCX connections, it may bring the reduction in CCX latency that is badly needed. Unless Intel was extremely lazy, their shrink from 22nm to 14nm didn't gain much besides ~5% IPC and a physically smaller die.


----------



## doritos93

CDub07 said:


> That's my problem now. I don't need anything. I was dead set on upgrading to a 2700X, but after looking at the numbers it's not worth the money coming from 1st-gen Ryzen. I don't really care about increased core count at the moment; higher clock speeds and IPC are where AMD really needs to make a big push.


I don't care about clock speed improvements because we pretty much know we're at the limits. A couple hundred MHz more isn't revolutionary

What I do want is improvement in multicore processing as well as new software development patterns that can leverage 200 logical CPUs if necessary. That's forward thinking IMO

AMD has been pushing this idea since Phenom


----------



## Shiftstealth

AMD doubled up on Intel's core count with the 1800X. I find it hard to believe they'd do it again with the 3800X, but man, if they did I'd be real salty about my 9900K.


----------



## cooljaguar

Zen 2 isn't just a 7nm port of Zen 1 guys, the chiplet design makes ramping up core counts easier than ever before. The only sketchy parts about this leak are the prices and the clock speeds, I don't think we're going to see 5GHz on Ryzen until 7nm+ and even then I only see that being realistic on the highest binned chips.

16C/32T on Ryzen doesn't conflict with Threadripper because it lacks quad-channel memory and the extra PCIe lanes. Plus there's no doubt the Threadripper lineup will get its own core count increase; I wouldn't be surprised if TR now starts at 24C/48T.


----------



## bigjdubb

Dammit. I will come back when AMD starts making comparisons to GTX2080ti performance.


----------



## cooljaguar

bigjdubb said:


> Dammit. I will come back when AMD starts making comparisons to GTX2080ti performance.


Think bigger. AMD needs to leapfrog Nvidia on performance. Matching them isn't enough to regain relevance in the market, hopefully that rumored 2020 GPU delivers.


----------



## EniGma1987

cooljaguar said:


> 16C/32T on Ryzen doesn't conflict with Threadripper because of the lack of quad channel memory and PCIE lanes.





I am actually very interested in seeing whether AMD produces two IO dies for different product segments, or uses one die as they have done with the CPU dies of the past. Either way you are right: we won't get more than dual channel or more PCIe lanes than we have now; the socket would have to change for that to happen. But it would be such a waste to use the same IO die on Epyc as on these AM4 Ryzens. Since the IO die has the memory controllers, some cache, and the PCIe lanes, we know the Epyc die has 8 channels of DRAM controllers and a bunch of PCIe. Would AMD really waste so many transistors by using the same die on these AM4 parts and cutting out 6 of the memory channels, probably half the cache, and 75% of the PCIe lanes too?

I would think it would make more sense to produce a second IO die with only a pair of memory controllers, only 24 PCIe lanes for AM4 use, and half the cache, since it won't need to connect to 8 core dies. That alone would be nearly 50% or more of the die size in savings, meaning they could get about 2x as many IO dies per wafer, and that's a lot of savings.




For anyone not convinced on die savings, take a look at the current Ryzen+ core:
(attached die shot: amd_zen_octa-core_die_shot_(annotated).jpg)




Look at the size of those two memory controllers compared to the size of a CCX itself. Now imagine quadrupling the space those memory controllers take up; that's how much room they would occupy in the IO die if the same one is used across all product lines. The same goes for the PCIe lanes at the top left corner: quadruple the size of those too. You have to ask, is AMD really going to use up 115mm² of die only to disable 75% of it? That's a HUGE waste of money.
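As a sanity check on the "2x as many IO dies per wafer" claim above, here is the classic first-order dies-per-wafer approximation. The 115mm² figure is the post's estimate, and the ~60mm² trimmed die is a hypothetical assumption for illustration, not a confirmed number:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: wafer area / die area, minus an
    edge-loss term for partial dies lost at the wafer rim."""
    r = wafer_diameter_mm / 2
    gross = math.pi * r * r / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Hypothetical areas: full Epyc-style IO die (~115mm², from the post)
# vs a trimmed AM4 version with 2 memory channels, 24 PCIe lanes,
# and half the cache (~60mm², assumed for illustration).
full = dies_per_wafer(300, 115)   # ~550 candidate dies on a 300mm wafer
trimmed = dies_per_wafer(300, 60) # ~1090 candidate dies
print(full, trimmed, trimmed / full)
```

With these assumed areas the trimmed die yields roughly 2x as many dies per wafer, in line with the post's estimate; real yield would widen the gap further, since smaller dies also yield better.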


----------



## reqq

tajoh111 said:


> Without any competition from Intel since their plans are so fubared, AMD would price a 16 core 5.1 ghz chip at 700 dollars + because they would have the outright performance crown and can sell cut down and lesser chips at 500.


Is it that simple though? Doesn't consumer mindset dictate whether they buy something? AMD doesn't have the same strong fanbase as Intel, and AMD wants to grow theirs. And what about how many people will buy in certain price brackets? Maybe they've done research showing, for example, that twice as many people are willing to pay for a $500 CPU as for a $700 one.


----------



## Frugal

cooljaguar said:


> Zen 2 isn't just a 7nm port of Zen 1 guys, the chiplet design makes ramping up core counts easier than ever before


This.
The chiplet and the independent I/O controller are as important, if not more, than the 7nm shrink. It makes an already versatile architecture even more so.


----------



## tajoh111

epic1337 said:


> its true that baselessly pricing their chips at dirt cheap price isn't a good idea, as this'll just cut into their margins.
> but baselessly pricing their chips at exorbitant prices just because it performs great isn't a good idea either, specially when they should be prioritizing in expanding their market share.
> 
> imho i like the way they're currently pricing Zen processors, priced at a comfortable level for both the consumers and AMD.
> 
> 
> 
> for example, they could sell a 12C/24T @ 4Ghz processor in AM4 for $500 right now and it'd look like a bargain sale when compared to TR4, yet its not dirt cheap that it'd eat into 8C/16T or TR4 sales.


Exorbitant pricing is something like $1000+, and that's where Threadripper comes home to roost, because it's impossible for Intel to counter AMD in this space unless Intel also goes chiplet or multi-die. AMD will be incredibly well positioned to usurp Intel here.

I think something like $799 for the 16-core 5GHz Ryzen 9 top SKU makes sense, $650-699 for the same core count at lower speeds, and $579-599 for the 12-core is completely acceptable and will get a good response from consumers. Considering how well the 9900K is selling at $530, which is actually $600-700 at real street pricing, there is certainly a market for this type of CPU pricing.

A 12-core, 24-thread chip at 5GHz for $599 will still be a bargain, because Ryzen motherboards are really cheap compared to Threadripper ones, and 5GHz plus, let's say, an 8-10% IPC gain would put it at 9960X performance, a $1,700 processor at the moment. So even at $600, AMD's 12-core will be very attractive, as will the 16-core Ryzen 2 at $800 vs the $2,100 9980XE. This is still aggressive pricing considering the market.

AMD can look like the heroes while vastly increasing their margins because of how poor a value Intel's top processors are. AMD doesn't need to bring 16-core processors down to $500, because a 12-core at $550 will sell just as well: in both cases these chips will sell out, so the 12-core at $550 is the more profitable move, since you can provide more volume while selling the perfect 16-core chips at a premium, increasing margins further. With Intel in no position to counter, there is no downside to pricing like this.

With Ryzen 1 and Ryzen+, Intel was in a position to counter because they had the node advantage and a significant IPC advantage, and had kept core counts so low that they had room to increase them. They also had some power envelope to burn through.
As a result, AMD had to price their chips aggressively against Intel's, particularly since Intel's 4 cores were still competitive with AMD's 8 cores in some benchmarks, and better in games. With AMD on 7nm in 2019, Intel will be screwed until 2020, and they need a lot of things to go right, which likely won't happen because they will be on the xxxxLake architecture until probably late 2020/2021. This means AMD can dictate pricing at the high end, absorbing more profit than ever before, with Intel basically unable to do anything but take it.

Not aggressively pricing their 16-core processors also leaves room for more refreshes and updates on 7nm, which will be a very long-lived node. One last thing: 7nm is not a cheap process; the wafers cost 2x as much. Combine that with the assembly complexity of what is effectively a tri-die design on an interposer, and you have something likely more expensive to build than the outgoing product. So if you're a business, do you absorb this cost or pass it on to consumers when you have no competition? We both know the answer.

As consumers, most people are selfish and don't care about AMD's long-term needs. This is why everyone gets hyped up on these big jumps in price-to-performance rumors and why they generate so many hits.

But for the long term, AMD needs to raise their profits. You might want $500 pricing for a 16-core processor, similar to last gen, but look at AMD's net profit over recent quarters: $102 million, $116 million, $81 million and $61 million. That's pathetic for a company generating close to $1.5-2 billion in revenue every quarter, and it shows. AMD is cash-starved: the company cannot fund their CPU and GPU R&D at the same time. Even with all the Ryzen money and the success of the video card market due to mining, AMD's R&D expenditure has only grown from about $250 million per quarter on average in 2016, to $290 million in 2017, to about $355 million in 2018. When you look at the net profits, you see why it has not grown more. This doesn't even match their 2012 spending, never mind the increased R&D cost of 7nm vs 28nm (it's at least 6x as much):

https://www.extremetech.com/computing/272096-3nm-process-node

This is not enough profit going into 7nm, with the big increase in R&D cost this node brings, especially against the expenditures of Nvidia and Intel. I don't want a cash-starved AMD that will be in trouble again once Intel finally does respond (similar to Core 2 Duo vs Athlon 64 X2) because it has nothing in the pipeline and no backup plan. I want an AMD that can afford to own its own campus again, have multiple R&D teams, fully back projects without abandoning them halfway because they're too expensive (AMD's ARM plans, Mantle, AMD Imageon graphics), and bring GPU development back to Markham. On top of that, it needs to generate enough net profit to build a cash pile to weather another storm and afford acquisitions in the future. That means billions more in revenue, and it won't come from volume because of wafer supply limitations during 2019. It has to come from increasing ASP, which can vastly increase revenue, as Nvidia's growth has shown.



reqq said:


> Is it that simple though? Doesn't consumer mindset dictate whether they buy something? AMD doesn't have the same strong fanbase as Intel, and AMD wants to grow theirs. And what about how many people will buy in certain price brackets? Maybe they've done research showing, for example, that twice as many people are willing to pay for a $500 CPU as for a $700 one.


You need to look at Energizer vs Duracell. Energizer was way behind Duracell in sales, and their volume and profits did not grow until they raised their price. Why? Because consumers perceived Energizer as the inferior product; it was priced lower, so they assumed it was a value product.

When Energizer raised their pricing, sales increased because consumers saw the brands in a more comparable light, and the products actually were comparable in performance.

With 7nm Ryzen, pricing their top consumer product at $500 simply reaffirms that they are the budget brand that has to give its products away for them to sell.

At pricing levels like $800, AMD is slowly inching their perception back to what it was in the Athlon days.

A very long time ago, before Athlon, AMD was the value brand because their products performed worse than Intel's, and their pricing reflected this. Once Athlon hit and they knew they had a hit on their hands, because they beat Intel and Intel could not immediately counter, they raised pricing. Volume, marketshare and profits all grew during this time. It reached the point where AMD's pricing was pretty much in line with Intel's during the Athlon 64 years. Those were AMD's golden years: R&D spending was way higher considering inflation, and it was concentrated on CPUs. Their marketshare was very high compared to now (and would have been higher without Intel's bribery interference). They reached that point by capitalizing on their product when it was better than Intel's, even without taking price-to-performance into account.

Ryzen 2 has that same chance to increase profits, marketshare and brand value at the same time.

BTW, being able to sell twice as much means nothing if you cannot supply twice as much; the mining market is an illustration of this. Price the product too low and you create a supply bottleneck where the retailers or the AIB partners scalp all the profit. AMD will be supply-limited because of demand from the server market eating into Ryzen chips, along with Qualcomm, consoles, Huawei, MediaTek, Nvidia, and Apple all using TSMC's 7nm process.

If AMD is able to release a 16-core chip with a 10% IPC gain at 5GHz, it will be the fastest consumer chip on the market, beating out the $2,100 9980XE. At $500, demand will outstrip supply, which will lead to retailers scalping the chips, leaving AMD no more marketshare than if they had released it at $800, because supply is the same in both cases, and at $800 it still sells out since it's the fastest consumer chip on the market and beats Intel's fastest chip that costs $2,100. On the other hand, if AMD releases a 12-core 5GHz chip at $550, AMD can gain more marketshare at the ~$500 price point, call it the bottom tier of the enthusiast segment, because using cut-down chips means higher supply at that tier. You don't think they would sell well? Their performance competition is Intel's $1,600-1,700 chips, and with the cost of an X299 motherboard and quad-channel memory, you're looking at more than triple the cost of ownership.


----------



## ejb222

reqq said:


> Is it that simple though? Doesn't consumer mindset dictate whether they buy something? AMD doesn't have the same strong fanbase as Intel, and AMD wants to grow theirs. And what about how many people will buy in certain price brackets? Maybe they've done research showing, for example, that twice as many people are willing to pay for a $500 CPU as for a $700 one.


That's what I meant earlier. You already know Intel's pricing habits year after year. AMD can price chips that will outperform Intel's next offering slightly below Intel's typical price points. Then Intel is pressured to either rise to the occasion or engineer something as powerful at under their usual margins, all the while AMD is probably above their usual margins. And AMD's pricing doesn't begin to look like Intel-style gouging, even with no competition.


----------



## criminal

R7 3700x for me please.


----------



## mouacyk

Frugal said:


> This.
> The chiplet and the independent I/O controller are as important, if not more, than the 7nm shrink. It makes an already versatile architecture even more so.


Not sure how that addresses the CCX latency issues.


----------



## figuretti

This post from Kyle... OMG

https://hardforum.com/threads/adore...on-3000-series-leaks.1973015/#post-1043970615



> There is a whole lot of reality in that video. A lot. There is a little wrong, but not a lot.


----------



## EniGma1987

figuretti said:


> This post from Kyle... OMG
> 
> https://hardforum.com/threads/adore...on-3000-series-leaks.1973015/#post-1043970615





Is this "Kyle" someone who would actually have real, legit information from under NDA?


----------



## ACleverName

Imglidinhere said:


> Genuinely curious and somewhat excited to know if the Ryzen 3 CPUs are truly going to be the core-count kings from here on. Would be an interesting thing if Ryzen 3 would then go toe to toe against Core i5s on a value proposition... Like... there is no way that Intel can compete.


What if they whip out the Elmer's glue and get real with their pricing?


----------



## mouacyk

EniGma1987 said:


> Is this "Kyle" someone who would actually have real, legit information from under NDA?


AND they put it on their Front Page News... lol. At least OCN has better sense to keep it as what it is.


----------



## ToTheSun!

mouacyk said:


> Not sure how that addresses the CCX latency issues.


Supposedly, communication is no longer tied to IF in the new paradigm. That's just what I read, though; I wouldn't know.


----------



## cooljaguar

mouacyk said:


> Not sure how that addresses the CCX latency issues.


I'm sure AMD found some way to address the latency issue during Zen 2's R&D, otherwise they wouldn't have gone with a chiplet design.


----------



## mouacyk

ToTheSun! said:


> Supposedly, communication is no longer tied to IF in the new paradigm. That's just what I read, though; I wouldn't know.


Right, most indications are that the chiplet design moves the communications off-die to a separate I/O die. It will be interesting to see how that kind of design reduces memory and communication latency compared to CCX. Intuition suggests this (network-style) design incurs more latency, but it will be even more modular than CCX was. Perhaps chiplets will not be used for desktop SKUs, only server SKUs, where AMD currently has, and wants to maintain, the advantage in core count.


----------



## AlphaC

figuretti said:


> This post from Kyle... OMG
> 
> https://hardforum.com/threads/adore...on-3000-series-leaks.1973015/#post-1043970615


A "little wrong" could easily be the claim of 12- and 16-core Ryzen chips on AM4. The clocks on the rest aren't that far-fetched even if they seem high, since the turbo clock could be pushed to the wall (5GHz is about a 15% increase over 4.35GHz).

A conservative estimate would be +10-15% in clock speeds, with an 8-core flagship Ryzen 7 3700X at ~$350-400 to counter the i7-9700K and i9-9900K. That would put it at 4.1-4.3GHz listed all-core boost clocks and 4.8-5GHz turbo/XFR. With PBO I'd expect 4.4-4.6GHz all-core using the 10-15% figure.

AMD's slides for 7nm claim +25% performance at the same power, or 0.5x power at the same performance, vs 14nm. The IO die is still 14nm, and uncore on Ryzen 5 and Ryzen 7 is typically around 20W of the total. So unless clocks are held relatively constant to the 65W Ryzen 7 2700 or the TR 1950X (14nm), I don't think 16 cores in 135W is happening with 4.3GHz all-core base clocks, although it is plausible with 12 cores.


Ryzen 5 2400G = 3.6GHz base, 3.9GHz turbo in 65W with 11CUs --> -50% power would allow for an 8-core 20CU part such as the rumored Ryzen 5 3600G

Ryzen 5 1600 = 3.2GHz base, 3.6GHz turbo in 65W --> +25% = 4GHz base and 4.5GHz turbo for 6 cores, which is more than the Ryzen 3 3300X rumor
Ryzen 5 2600 (12nm) = 3.4GHz base, 3.9GHz turbo in 65W
Ryzen 5 1600X = 3.6GHz base, 4.0GHz turbo (+100MHz XFR) in 95W --> +25% = 4.5GHz base and 5GHz turbo for 6 cores
Ryzen 5 2600X (12nm) = 3.6GHz base, 4.2GHz turbo in 95W
Ryzen 7 1700 = 3.0GHz base, 3.7GHz turbo (+50MHz XFR) in 65W --> +25% = 3.75GHz base and 4.6GHz turbo for 8 cores, which is more than the Ryzen 5 3600 rumor
Ryzen 7 2700 (12nm) = 3.2GHz base, 4.1GHz turbo in 65W
Ryzen 7 1700X = 3.4GHz base, 3.8GHz turbo (+100MHz XFR) in 95W --> +25% = 4.25GHz base and 4.75GHz turbo for 8 cores, which matches the Ryzen 5 3600X rumor as far as turbo
Ryzen 7 1800X = 3.6GHz base, 4.0GHz turbo (+100MHz XFR) in 95W --> +25% = 4.5GHz base and 5GHz turbo for 8 cores
Ryzen 7 2700X (12nm) = 3.7GHz base, 4.3GHz turbo in 105W for 8 cores
TR 1900X = 3.8GHz base, 4.0GHz turbo (+100MHz XFR) in 180W --> -50% power for the same performance would be 90W for 8 cores
TR 1920X = 3.5GHz base, 4.0GHz turbo (+100MHz XFR) in 180W --> -50% power for the same performance would not yield a higher clock unless cores are cut vs 12 cores
TR 2920X (12nm) = 3.5GHz base, 4.3GHz turbo in 180W for 12 cores
TR 1950X = 3.4GHz base, 4.0GHz turbo (+100MHz XFR) in 180W --> -50% power for the same performance would not yield a higher clock unless cores are cut vs 16 cores; 135W is 25% less power
TR 2950X (12nm) = 3.5GHz base, 4.4GHz turbo in 180W for 16 cores
TR 2970WX (12nm) = 3.0GHz base, 4.2GHz turbo in 250W for 24 cores
TR 2990WX (12nm) = 3.0GHz base, 4.2GHz turbo in 250W for 32 cores
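The per-SKU projections in the list above all apply the same rule, so as a rough sketch (the +25%-at-iso-power factor is from AMD's 7nm marketing slides; assuming, as the list does, that the gain is taken entirely as clock speed):

```python
# Apply AMD's claimed 7nm scaling (+25% performance at the same power)
# to known 14nm base/turbo clocks, as done line-by-line above.
PERF_GAIN = 1.25  # AMD slide claim: +25% perf at iso-power vs 14nm

def scaled_clocks(base_ghz: float, turbo_ghz: float) -> tuple[float, float]:
    """Naive projection: take the whole perf gain as clock speed."""
    return round(base_ghz * PERF_GAIN, 2), round(turbo_ghz * PERF_GAIN, 2)

# e.g. Ryzen 5 1600 (3.2/3.6GHz, 65W) -> projected 4.0/4.5GHz at 65W
print(scaled_clocks(3.2, 3.6))  # (4.0, 4.5)
# Ryzen 7 1800X (3.6/4.0GHz, 95W) -> projected 4.5/5.0GHz at 95W
print(scaled_clocks(3.6, 4.0))  # (4.5, 5.0)
```

In practice frequency does not scale linearly with the quoted performance figure, so these projections are best read as upper bounds.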

Forbes article summarized the video : https://www.forbes.com/sites/antony...series-up-to-16-cores-and-5-1ghz-frequencies/

It's highly likely the "Ryzen 9" 16-core parts are actually for the TR4 socket. Each core when overclocked typically uses 20-22W, plus another ~20W for Ryzen 5 / Ryzen 7 uncore. That's easily 350W, which would explain the need for additional power connectors beyond the 8-pin CPU power, whereas a 12-core would still be under ~300W.
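The power-budget estimate in that paragraph works out as follows (the 20-22W/core and ~20W uncore values are the post's rules of thumb, not measured figures):

```python
# Rule-of-thumb overclocked power budget from the post above:
# per-core draw when overclocked, plus a fixed uncore allowance.
WATTS_PER_CORE_OC = 21  # midpoint of the post's 20-22W/core estimate
UNCORE_W = 20           # typical Ryzen 5 / Ryzen 7 uncore share

def oc_power_estimate(cores: int) -> int:
    """Total package power estimate for an all-core overclock."""
    return cores * WATTS_PER_CORE_OC + UNCORE_W

print(oc_power_estimate(16))  # 356 -> "easily 350W" for 16 cores
print(oc_power_estimate(12))  # 272 -> still under ~300W for 12 cores
```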


----------



## cooljaguar

mouacyk said:


> Right, most indication is that the chiplet design is moving the communications off-die to a separate I/O die. It will be interesting to see how that kind of design reduces the memory and communications latency compared to CCX. Intuition would suggest that this (network) design incurs more latency but will be even more modular than CCX was. Perhaps, chiplets will not be used for desktop skus but only server skus where AMD currently has the advantage in core count.


The whole point of Zen is scalability and ease of manufacturing. Not using chiplets for desktop completely destroys that strategy. Designing and manufacturing two chips instead of one would be way more expensive, especially on 7nm. The desktop chip would have to be significantly larger to accommodate I/O and anything else AMD tucked away on the controller chip.


----------



## Frugal

mouacyk said:


> Perhaps, chiplets will not be used for desktop skus but only server skus


That makes no sense at all.
As cooljaguar said above, it's all about scalability; it's been so since Zen first released, and it'll be even more so with chiplets.
I'm curious to see how modular they can make it by pulling physical blocks out of the core dies, like the memory controller. If it is like in the video (and previous ones) and they can play around with the number of cores, the size of the I/O chip, GPU CUs... then Intel has a serious problem.


----------



## ltpdttcdft

The sources for this are much more credible than what usually gets posted here; we shall see if AMD can execute.
With SSD prices dropping, a flood of cheap midrange graphics cards, and AMD dropping this "soon"(tm), 2019 might just be the first year since Sandy Bridge when an upgrade is actually worth it.
If only DDR4 prices would also come back to normal...

Mountain of salt for now.


----------



## guttheslayer

tajoh111 said:


> guttheslayer said:
> 
> 
> 
> NOW WHO SAID THAT 16C/32T WONT COME TO MAINSTREAM! WHO?
> 
> 
> Oh I remember it was os2wiz, pony-tail, and chakku, looks like it's time for them to eat their words.
> 
> 
> 
> Anyway it looks like AMD now offers superior performance over Intel *at half the price, or twice the cores at the same price*. A wonderful 2019 indeed.
> 
> 
> 
> Don't believe the optimistic rumors until products hit. You should learn from your confidence in the Vega 20 rumors that what gets the most hits and excitement as a rumor does not reflect what actually comes out.
> 
> Without any competition from Intel, since their plans are so fubared, AMD would price a 16-core 5.1GHz chip at $700+ because they would have the outright performance crown, and can sell cut-down and lesser chips at $500.
> 
> When your competition is in a position where it cannot counter in the near future, this is the time to raise prices, which AMD will, because it is simply profit maximization, and there will be supply issues at 7nm since TSMC is the only source of it at the moment. Higher prices along with top-end performance also increase your brand value.
> 
> Intel cannot release 5GHz 16-core chips (without a 400-watt TDP) and won't be able to until 10nm. If AMD is in such a position where Intel cannot counter for a year and they have the performance crown, they will rightfully attempt to raise prices to increase their margin and ASP.
> 
> Having your product sell out because you priced it too low and your supply is insufficient is not smart business. It also leaves less room for future price drops and product refreshes.
> 
> Considering the rumors seem to already be debunked at the original source, this is more like an attempt by someone to raise AMD's future profile to keep the stock propped up.

The Vega expectation I had didn't come from AdoredTV; granted, it was a miscalculation on my part about the 7nm cell AMD was using.

I am not backing down from expecting nothing less than 16C/32T. It is just a mix of 2 chiplets. Don't forget it's still another 7 months away, and by then we are into mid 2019.

I don't believe the 5.1GHz clock, tbh. But 16C/32T is pretty much expected, and easy for them as well.


----------



## Jarhead

Kinda glad I lost my job earlier this year. I would have bought a 2700X/470. Now I am going to go with a 3700X (3850?)/570 and probably two of the RX 3080s. Maybe I'll get really lucky and LG/Dell/Samsung/whoever will produce a 43in 4K 120Hz monitor with HDR, FreeSync, and low input lag.


----------



## rv8000

If any of this is remotely true, wow!

Until then, temper your hype people.


----------



## epic1337

Jarhead said:


> Maybe I'll get really lucky and LG/Dell/Samsung/whoever will produce a 43in 4K 120Hz monitor with HDR, FreeSync, and low input lag.



i'm not sure about 43" but Acer has a good one on 27".
https://www.anandtech.com/show/1329...xv273k-monitor-4kp144-displayhdr-400-freesync


----------



## xzamples

Way too good to be true IMHO


----------



## mouacyk

Frugal said:


> That makes no sense at all.
> as cooljaguar said above it's all about scalability, it's been so since Zen first released and it'll be more so with chiplets.
> I'm curious to see how modular they can make it by taking physical stuff from the cores, like memory controller. If it is like in the video (and previous ones) and they can play around with number of cores, size of the I/O chip, GPU CUs... then Intel has a serious problem.


Why doesn't it make sense? This is precisely why Intel owned both markets for the longest time: they had a low-latency, mid-core-count SKU for desktop and a high-latency, high-core-count one for servers. Most people here on OCN would actually be more interested in AMD returning to dominate the desktop side of things, where latency matters most. The problem with the current Ryzen refresh is still the high memory latency compared to Intel.


----------



## Jarhead

epic1337 said:


> i'm not sure about 43" but Acer has a good one on 27".
> https://www.anandtech.com/show/1329...xv273k-monitor-4kp144-displayhdr-400-freesync


Thanks for trying, but no, I switched over to a 45in 1080p TV ten years ago and I am not going back. I had to quickly replace it with a Sceptre 32in 1080p TV six months ago ($120). This year I found the 43in monitors that look like a perfect replacement, except that there's no point not going with a cheaper TV if I'm going to be limited to 60FPS anyway.

LG 43in 4K 60hz-$699
https://www.lg.com/us/monitors/lg-43UD79-B-4k-uhd-led-monitor 

Dell 43in 4k 60hz-$949
https://www.dell.com/en-us/work/sho...17q/apd/210-ahsq/monitors-monitor-accessories 

TCL 43in 60Hz with limited HDR TV - $279.99
https://www.amazon.com/dp/B07DK5PZF...&pd_rd_r=189c9b73-f901-11e8-8cd5-474558e042bf 

If I'm stuck at 60Hz anyway, I'll just go with the $280 option. What pisses me off is that Zisworks already did a 39in 4K display at 120Hz native, and they used a five-year-old panel to do it. TWO YEARS AGO. This is not a technology problem.





I do more non-gaming than gaming, and I don't own a TV, so my desktop PC has to do everything. This is perfect for that, especially since I am going to become a content creator next year.


----------



## epic1337

Jarhead said:


> Thanks for trying, but no, I switched over to a 45in 1080p TV ten years ago and I am not going back. I had to quickly replace it with a Sceptre 32in 1080p TV six months ago ($120). This year I found 43in monitors that look like a perfect replacement, except that there's no point paying monitor prices over a cheaper TV if I'm going to be limited to 60FPS anyway.
> 
> LG 43in 4K 60hz-$699
> https://www.lg.com/us/monitors/lg-43UD79-B-4k-uhd-led-monitor
> 
> Dell 43in 4k 60hz-$949
> https://www.dell.com/en-us/work/sho...17q/apd/210-ahsq/monitors-monitor-accessories
> 
> TCL 43in 60hz TV with limited HDR - $279.99
> https://www.amazon.com/dp/B07DK5PZF...&pd_rd_r=189c9b73-f901-11e8-8cd5-474558e042bf
> 
> If I'm stuck at 60hz anyway, I'll just go with the $280 option. What pisses me off is that Zisworks has already done a 39in 4K display at 120hz native, and they used a five year old panel to do it. TWO YEARS AGO. This is not a technology problem.



mhmm, from what i can see most manufacturers are prioritizing TVs, or otherwise HDR features, over high refresh rates.
the technical limitation of the interconnects may be one of the issues, but this really isn't a problem considering that two cables can be ganged together.

as for sizes, monitors rarely exceed 40". take LG's 4K/5K monitor list for example: only one out of 22 is 43", so you'd probably be hard-pressed to find an ideal monitor.
TVs on the other hand often exceed 40", even going as large as 80" behemoths, yet it is rare for a TV to actually have a high refresh rate beyond interpolated input.
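and for what it's worth, the two-cable point checks out on paper. A quick back-of-the-envelope sketch (the link rates and 8b/10b coding overheads are standard published figures; blanking overhead is ignored, so real requirements run a bit higher):

```python
# Rough sanity check on the cable math; blanking overhead is ignored,
# so real-world requirements are slightly higher than shown.

def video_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed pixel data rate in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

need = video_gbps(3840, 2160, 120, 24)  # 4K @ 120 Hz, 8-bit RGB
hdmi20 = 18.0 * 8 / 10                  # HDMI 2.0: 18 Gbps raw, 8b/10b coding
dp14 = 32.4 * 8 / 10                    # DisplayPort 1.4 HBR3: 32.4 Gbps raw

print(f"4K120 needs  ~{need:.1f} Gbps")    # ~23.9
print(f"one HDMI 2.0 ~{hdmi20:.1f} Gbps")  # ~14.4, hence ganging two cables
print(f"one DP 1.4   ~{dp14:.1f} Gbps")    # ~25.9, a single cable just fits
```

so a single HDMI 2.0 link genuinely can't carry 4K120, which is exactly why the Zisworks kit takes two inputs.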


----------



## Biorganic

Look at AMD, BACK IN THE GAME!!!!

Probably going to build a full AMD rig as soon as 7 nm drops! This old 2600k is about played out! 8-P 

It is nice to see real competition again.


----------



## Jarhead

epic1337 said:


> mhmm, from what i can see most manufacturers are prioritizing TVs, or otherwise HDR features, over high refresh rates.
> the technical limitation of the interconnects may be one of the issues, but this really isn't a problem considering that two cables can be ganged together.
> 
> as for sizes, monitors rarely exceed 40". take LG's 4K/5K monitor list for example: only one out of 22 is 43", so you'd probably be hard-pressed to find an ideal monitor.
> TVs on the other hand often exceed 40", even going as large as 80" behemoths, yet it is rare for a TV to actually have a high refresh rate beyond interpolated input.


I don't want 5K or any of the screwy ultra wide resolutions. 4K is going to be the new broadcast standard, so that's what I'm going with because that's what everything is going to be based around. 1080p has been the standard for ten years, and 4K is going to take that spot in the next year or two.

As for Freesync, it costs nothing in terms of licensing or hardware, and thus should just be how monitors/TVs leave the factory now, which is what Samsung is doing. I would take the 39in size if it came with Freesync and HDR because 32in hasn't been that bad of a downgrade, but it is smallish for watching a movie. The 45in was good for that, and oddly enough, also really great for open world games. The downside is that normal computer desks are not deep enough for proper viewing. You need a desk with three feet of depth rather than two.

And like you say, 4K @ 120hz is so technically challenging that the end user will have to plug in TWO HDMI cables. How will anybody ever manage it?

... and I just realized that's why they don't make them. A focus group came back and said that people are too stupid to plug in two cables, so instead of just including two cables in the box with the monitor and clear printed instructions with big pictures that show two cables, they just don't make one.


----------



## ILoveHighDPI

xzamples said:


> Way too good to be true IMHO


No, this is exactly what you should expect to see when an industry goes through a decade of virtual monopoly running the show, growing huge profit margins over time, and then suddenly a competitor gets a break.
Back in the 90's this sort of thing was on an 18-month cycle.

Moore's Law is dying but we've always known the lack of competition from 2010 onward was hurting progress in modern hardware.


----------



## Majin SSJ Eric

tajoh111 said:


> Don't believe the optimistic rumors until products hit, you should learn from your confidence in the Vega 20 rumors that what gets the most hits and excitement in rumors does not reflect in reality what does come out.
> 
> Without any competition from Intel since their plans are so fubared, AMD would price a 16 core 5.1 ghz chip at 700 dollars + because they would have the outright performance crown and can sell cut down and lesser chips at 500.
> 
> When your competition is in a position where it cannot counter in the near future, this is the time when you raise prices which AMD will because it simply profit maximization and there will be supply issues at 7nm since TSMC is the only source of it at the moment. Higher prices along with top end performance also increases your brand value.
> 
> Intel cannot release 5ghz 16 core chips(without a 400 watt tdp) and won't be able to till 10nm. If AMD is in such a position where intel cannot counter for a year and they have the performance crown, they will rightfully attempt to increase pricing to increase their margin and ASP.
> 
> Have your product sell out because your priced your product too low and your supply is insufficient is not smart business. It also leaves less room for price drops and updates in terms of refreshing your product in the future.
> 
> Considering the rumors seem to already be debunk in the original source, this is more like an attempt by someone to raise AMD's future profile to keep the stock propped up.


Can always count on you to try and downplay any and all hype for AMD every single time. 

I agree that this is clearly just a rumor and that nothing is confirmed, but just looking at the data I don't really see anything that is ridiculously out of the realm of possibility. TSMC has promised up to a 25% increase in clocks on their 7nm node, and AMD already has equal or better IPC than Intel right now. I would expect at least a modest IPC increase (and probably more than modest) out of Zen 2, so yeah, I don't think this is totally pie-in-the-sky hype-mongering at all. I mean, what do YOU predict out of Zen 2 on 7nm??? Do you really think clocks and IPC are going to remain the same or go DOWN???

As far as price is concerned, AMD is still in a very precarious position in terms of mindshare and marketshare so I would expect them to price Zen 2 aggressively just as they have priced all of Ryzen and TR chips aggressively thus far. Perhaps the 3850X (if this rumored performance is true) would go for more than $500, but the pricing of TR really doesn't give AMD much wiggle room in pricing for their mainstream chips. They certainly are not going to start pricing all of their mainstream chips significantly higher than their direct competitors from Intel...


----------



## guttheslayer

ILoveHighDPI said:


> No, this is exactly what you should expect to see when an industry goes through a decade of virtual monopoly running the show, growing huge profit margins over time, and then suddenly a competitor gets a break.
> Back in the 90's this sort of thing was on an 18-month cycle.
> 
> Moore's Law is dying but we've always known the lack of competition from 2010 onward was hurting progress in modern hardware.


This. This is what happens when everyone keeps telling others something is impossible. Nothing is impossible.

It used to be 100% more performance every 18 months. Granted, Moore's Law is diminishing, but that is no excuse for the near-zero improvement we have seen for the past 7 years. Why is it even surprising that 16C/32T is coming to mainstream in mid 2019? Back in the Pentium 4 days Intel claimed they could do 20-30 core Nehalem CPUs by 2008. We are already 10 years late, my friends.
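For scale, here is what compounding the old cadence would have delivered (just the arithmetic, not a measurement):

```python
# The old cadence: double the performance every 18 months.
years = 7.0                      # the "near-zero improvement" window above
doublings = years / 1.5          # one doubling per 18 months
print(f"{2 ** doublings:.0f}x")  # ~25x, had that pace held
```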


----------



## ibb27

Kyle Bennett from HardOCP about the AdoredTV leaks:


> There is a whole lot of reality in that video. A lot. There is a little wrong, but not a lot.


I think the CPU configurations are close to reality. I'm ready for a new PC next year and a return to the RED camp. :thumb: 
Don't know what to think about Navi parts and perf.


----------



## guttheslayer

ibb27 said:


> Kyle Bennett from HardOCP about the AdoredTV leaks:
> 
> 
> I think CPU configurations are close to reality, I'm ready for new PC next year and return to RED camp. :thumb:
> Don't know what to think about Navi parts and perf.


So basically brace yourselves for 16C/32T coming in at close to 5GHz, priced lower than the current 9900K.


I kinda pity those who bought the 9900K.


----------



## Defoler

Feels like AMD are intentionally trying to make people think they are buying the next nvidia cards.


----------



## ltpdttcdft

_Nvidia has GTX 10xx, AMD has RX 5xx_
*Nvidia tries to impose GPP forcing board manufacturers' gaming brands to be Nvidia-only claiming to prevent "confusion"
Nvidia creates RTX brand*
_Nvidia has RTX 20xx, AMD has RX 5xx_
*AMD increases numbering*
_Nvidia has RTX 20xx, AMD has RX 30xx_


Defoler said:


> Feels like AMD are intentionally trying to make people think they are buying the next nvidia cards.


----------



## figuretti

ltpdttcdft said:


> _Nvidia has GTX 10xx, AMD has RX 5xx_
> *Nvidia tries to impose GPP forcing board manufacturers' gaming brands to be Nvidia-only claiming to prevent "confusion"
> Nvidia creates RTX brand*
> _Nvidia has RTX 20xx, AMD has RX 5xx_
> *AMD increases numbering*
> _Nvidia has RTX 20xx, AMD has RX 30xx_


Just imagine trying to search for an RTX card next year on Amazon, Newegg or eBay when the 3000 series lands (if the naming is correct)

This is a big middle finger from AMD to Nvidia... and at the same time it creates an association between both of their new CPU & GPU families


----------



## pony-tail

Good if true! ... but too much hype train and too little salt.
Too early for me to get all excited about it.
Jim (AdoredTV) is not always right, but more often than not.


Anyways, I just bought an AMD X470 board; I'd hate to have to buy again next year.


----------



## reqq

tajoh111 said:


> You need to look at Energizer vs duracell. Energizer was way behind Duracell in terms of sales and their volume and profits did not grow until they raised their price. Why? because consumers perceived Energizers as the inferiors product because it was priced lower and assumed it was a value product.


Yeah, this makes sense when you think about it; I'm definitely a victim of this haha.



tajoh111 said:


> If AMD is able to release a 16 core chip with a 10% IPC at 5GHZ, it will be the fastest consumer chip on the market beating out the $2100 9980xe. At $500, demand will outstrip supply which will lead to retailers scalping the chips leading to them gaining no more marketshare than if they released it at $800 because supply in both cases is the same and at $800 still sells out because its the fastest consumer chip on the market and beats out Intels fastest chip that cost $2100 dollars. On the other hand, if AMD releases a 12core 5ghz chip at $550, AMD is able to gain more marketshare at the 500 dollar price point which we will call the bottom tier of the enthusiast segment because they are able to supply this 500 dollar segment with more volume because using cutdown chips means higher supply. You don't think they will sell that well? It's performance competition are 1600-1700 Intel chips and with the cost of the X299 MB, 4 channel memory your looking at more than triple the cost of ownership.


But if this is to work, an $800 Ryzen, doesn't Threadripper need a) an increase in price, or b) entry-level versions with fewer cores? You think entry-level Zen 2 Threadripper won't have 16 cores?


----------



## Shatun-Bear

What I find incredible is that the only CPUs Intel can counter these with will be yet another 14nm+++++ Skylake refresh: that old, long-in-the-tooth architecture, hot and power hungry by comparison, up against TSMC's 7nm. It's crazy. What's more, where do they go after the 9900K? That thing is pushed to the limit, and Skylake is not meant to run 4.7-5GHz with that many cores. It's pushed way past the optimum frequency curve for the arch.


----------



## Clocknut

Remember guys, this chip was designed under the assumption that Intel's 10nm would not screw up.

If you looked ahead from the i7-6700K era without knowing Intel would screw up 10nm, you would have expected Intel's 10nm to reach 5.5GHz easily (an extra 500MHz), with the die shrink letting them add cores without growing the die. I think Zen 2 was designed to deal with 5.5GHz Intel chips plus extra Intel cores, with even more cores on AMD's side.

We also have extra TSMC capacity dumped by phone chip makers due to slumping phone sales. I am guessing TSMC will gladly give their better silicon to AMD because no one else can use it at the moment (even Nvidia is on 12nm).

Seems to me a couple of good things lined up to let AMD release Zen 2 early with better silicon.


----------



## guttheslayer

Shatun-Bear said:


> It's crazy. What's more where do they go after the 9900K? That thing is pushed to the limit and Skylake is not meant to run 4.7-5ghz with that many cores.



Comet Lake-S with 10C, reusing the same architecture since 2016, and the same node as well. 14nm+++++++++++++++


----------



## Scotty99

No way any of this is true lol. 6c/12t $99.99 chips at the bottom of the lineup with 16c/32t 5.1GHz at the top for $499.99? Ya, no.

How this got taken seriously by any "leak" site is beyond me; do people still have functioning brains... hello??

Edit: While all of the CPUs are unrealistic in terms of cores, price and clockspeed, the 3700X in particular is just lol'able. Do you guys actually, legitimately think AMD has not only gained ~600MHz (at minimum) but is going to sell that chip with 4 more cores for the same price as the 2700X? Like, you actually believe that lol?


----------



## urb4n

So lads, I have a question. Currently running an i7-6600, 16GB 2400MHz (2x8GB), ASRock H110M motherboard, GTX 1060, 512GB SSD.

I was going to buy a Gigabyte Z390 Elite, i7 9700K and 16GB of AORUS 3200MHz during these Christmas holidays.

Should I wait for the Ryzen 3XXX series?

Any idea on what socket the Ryzen 3 series will be using? So at least i buy something for myself (was thinking a water cooling system I can use on my current system as well as on the Ryzen 3 series in 5 months)


----------



## figuretti

urb4n said:


> Any idea on what socket the Ryzen 3 series will be using? So at least i buy something for myself (was thinking a water cooling system I can use on my current system as well as on the Ryzen 3 series in 5 months)


All the AMD Ryzen 1000 & 2000 series work on socket AM4 (the 3000 series will too)... if you buy today a mobo with socket AM4, an X470 chipset, and a Ryzen 2600 or below, then when the new processors come, with just a BIOS update you could swap the old chip for a new one... hell, I'm waiting with my B350 motherboard (from the first-gen Ryzen series)... if this chipset supports at least the 12-core version, I'll be very happy


----------



## Blze001

My god, can you imagine if AMD gets their GPU shop up to the same level as their CPU one? We might actually have competition across the hardware workspace.

Then the only thing standing between us and an absolutely amazing pc building world would be the rampant price fixing by Samsung and Micron! Haha.... *sighs*


----------



## NightAntilli

Scotty99 said:


> No way any of this is true lol. 6c12t 99.99 chips at the bottom of the lineup with 16c32 5.1ghz at the top for 499.99, ya no.
> 
> How this got taken serious by any "leak" site is beyond me, do people still have functioning brains....hello??
> 
> Edit: While all of the CPU's are unrealistic in terms of cores price and clockspeed the 3700x in particular is just lol'able. Do you guys actually legitimately think amd has not only gained ~600mhz (at minimum) and are going to sell that chip with 4 more cores for the same price as the 2700x? Like you actually believe that lol?


It makes perfect sense considering their chiplet design. They will be producing a single CPU chiplet only, which are the 8c/16t chiplets. Some of them will have defects, and those will be downgraded to 6c/12t. Additionally, they will be binned to fit the whole product line. The 12c/24t are two chiplets with defects. The 16c/32t will be two fully functional chiplets.
Add the fact that one chiplet will be around 75 mm2, and the prices are more than reasonable.
For gaming, the best will probably be the Ryzen 5 3600X, because it is a single chiplet and there will be no increased latency caused by communication between chiplets.
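The salvage logic is easy to sketch. A toy Poisson-yield model (the defect density is my assumption, and the 75mm2 figure is just the rumour above, nothing AMD has published):

```python
import math
import random

# Illustrative numbers only: rumoured chiplet size, assumed defect density.
DIE_MM2 = 75.0      # rumoured Zen 2 chiplet area
D0 = 0.2            # assumed defects per cm^2

def p_core_ok():
    """Poisson yield: chance one core's slice of the die is defect-free."""
    core_mm2 = DIE_MM2 / 8
    return math.exp(-D0 * core_mm2 / 100.0)

def bin_chiplet(rng):
    """Count good cores on an 8-core die and map it to a salvage bin."""
    good = sum(rng.random() < p_core_ok() for _ in range(8))
    if good == 8:
        return "8c"      # perfect die: 8c part, or half of a 16c
    if good >= 6:
        return "6c"      # fuse two cores off: 6c part, or half of a 12c
    return "scrap"

rng = random.Random(0)
bins = [bin_chiplet(rng) for _ in range(100_000)]
for sku in ("8c", "6c", "scrap"):
    print(sku, round(bins.count(sku) / len(bins), 3))
```

With these made-up numbers most dies come out perfect, a healthy minority become 6c salvage parts, and almost nothing is scrapped, which is the whole economic argument for one chiplet feeding the entire stack.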


----------



## Scotty99

NightAntilli said:


> It makes perfect sense considering their chiplet design. They will be producing a single CPU chiplet only, which are the 8c/16t chiplets. Some of them will have defects, and those will be downgraded to 6c/12t. Additionally, they will be binned to fit the whole product line. The 12c/24t are two chiplets with defects. The 16c/32t will be two fully functional chiplets.
> Add the fact that one chiplet will be around 75 mm2, and the prices are more than reasonable.
> For gaming, the best will probably be the Ryzen 5 3600X, because it is a single chiplet and there will be no increased latency caused by communication between chiplets.


No, it makes zero sense lol.

I'm not hating here, btw; this would be amazing, there is just literally no chance any of this is true. The biggest tipoff to people should be the lack of non-hyperthreaded SKUs (aside from the massive clockspeed increase that isn't happening as well).


----------



## ryboto

Scotty99 said:


> No way any of this is true lol. 6c12t 99.99 chips at the bottom of the lineup with 16c32 5.1ghz at the top for 499.99, ya no.
> 
> How this got taken serious by any "leak" site is beyond me, do people still have functioning brains....hello??
> 
> Edit: While all of the CPU's are unrealistic in terms of cores price and clockspeed the 3700x in particular is just lol'able. Do you guys actually legitimately think amd has not only gained ~600mhz (at minimum) they are going to sell that chip with 4 more cores for the same price as the 2700x? Like you actually believe that lol?


It's mostly because Adored has a reputation now for being almost spot on with his speculation. He only releases these 'leaks' in videos if he's been able to vet them to his standard. Because of that reputation people are quick to trust that he's leaking mostly correct information. In the past he's leaked information that turned out to be nearly 100% correct well ahead of official announcements. He was talking chiplets maybe 6 or 7 months ago I think.


----------



## Vesimas

Let's say the Navi leaks are true: would a CrossFire of two 3080s for $500 be near a single $1400 2080 Ti?


----------



## Scotty99

ryboto said:


> It's mostly because Adored has a reputation now for being almost spot on with his speculation. He only releases these 'leaks' in videos if he's been able to vet them to his standard. His sources have let him be pretty spot on in the past, so he's developed a reputation for leaking mostly correct information well ahead of official announcements. He was talking chiplets maybe 6 or 7 months ago I think.


Well, to that I would say people need to engage their brains a bit and look at specifics.

If this were all true it would literally mean AMD leapfrogged Intel; that clearly is not, and will not be, the case.


----------



## ryboto

Scotty99 said:


> Well, to that I would say people need to engage their brains a bit and look at specifics.
> 
> If this were all true it would literally mean AMD leapfrogged Intel; that clearly is not, and will not be, the case.


I would say that your statement screams of bias, but why is it unbelievable? AMD is on a more advanced node, and they're moving to smaller dies for the most complex parts of the chips, increasing yields and allowing them to bin differently. They'll produce more working dies per wafer than monolithic designs, and if TSMC's process is working efficiently, it means they've got a lot of options for speed binning once they bin for TR parts.

It's not too good to be true, but the leaks would mean AMD reached the best possible outcome with the Ryzen 2 design and 7nm node as compared to the current generation Ryzen. The Navi stuff is probably a stretch, as they have only just recently had R&D resources to push the RTG into competitive mode.


----------



## Scotty99

ryboto said:


> I would say that your statement screams of bias, but why is it unbelievable? AMD is on a more advanced node, and they're moving to smaller die for the most complex parts of the chips, increase yields and allowing them to bin differently. They'll produce more working die per wafer vs monolithic designs, and if TSMC's process is working efficiently, it means they've got a lot of options for speed binning once they bin for TR parts.
> 
> It's not too good to be true, but the leaks would mean AMD reached the best possible outcome with the Ryzen 2 design and 7nm node as compared to the current generation Ryzen. The Navi stuff is probably a stretch, as they have only just recently had R&D resources to push the RTG into competitive mode.


Why it is believable to anyone on here is the real perplexing part.

Why would AMD start off the lineup with a 6c/12t chip? I could imagine them foregoing a 4c chip maybe, but that chip would absolutely not have SMT. They also are not going to be hitting 5 or 5.1GHz boost clocks on 7nm when Intel, who has 1000x more resources and smarter people working for them, cannot do the same.

This is so clearly faked it actually pains me that so many of you believe it; it's good to be optimistic, but you also have to be realistic.


----------



## doritos93

I wish they would bring back Hybrid CrossFire for their APUs. I'm sure it'd work better today than it did back when they first tried it.


----------



## ibb27

Scotty99 said:


> ...
> If this were all true it would literally mean AMD leapfrogged Intel; that clearly is not, and will not be, the case.


They leapfrogged them in the server space with 64-core Rome, where Intel was untouchable.  Dude, you live in the past.


----------



## NightAntilli

Scotty99 said:


> No, it makes zero sense lol.
> 
> Im not hating here btw this would be amazing, there is just literally no chance any of this is true. The biggest tipoff to people should be the lack of non hyperthreaded sku's (aside from the massive clockspeed increase that isnt happening as well).


If the defects affect the HT sections of the CPU, why would they release an 8c/8t CPU rather than a 6c/12t? 6c/12t seems a lot better for marketing purposes, at least to me. 

As for clock speed... Your judgment is based on what? We're dropping from 14nm and 12nm to 7nm, and we have an improved architecture. Not to mention these are chiplets rather than monolithic dies, which completely changes the game. You're stuck in the old ways.


----------



## ryboto

Scotty99 said:


> Why it is believable to anyone on here is the real perplexing part.
> 
> Why would amd start off the lineup with a 6c12t chip? I could imagine them foregoing a 4c chip maybe, but that chip would absolutely not have SMT. They also are not going to be hitting 5 or 5.1ghz boost clocks on 7nm when intel who has 1000x more resources and smarter people working for them cannot do the same.
> 
> This is so clearly faked it actually pains me that so many of you believe it, its good to be optimistic but you also have to be realistic.


Well, it's speculation; someone's educated speculation. Still, the fact is, AMD has moved the most complex logic to smaller dies. They can bin more easily and yield faster parts more easily than Intel can with monolithic parts. They'll bin the highest perf/watt, fully working dies for Epyc chips, then the next bin might be for high-speed desktop parts.

We also don't know the cost of a single Ryzen 2 die. If it's as small as the speculation suggests, combining two with failed cores to make a low end part could indeed up the core count for entry level. *COULD*. 

Still, I agree, 6c/12t at entry level seems like it might be a stretch. It depends on core failure rate between CCX's, and which part range sells the most volume. 

Again, Adored generally doesn't just throw info out if someone sends it to him. He has a process to vet this information, most likely via other industry sources.


----------



## Scotty99

See, I don't watch AMD fanboy channels like that; he only exists to cater to people who, for one odd reason or another, have chosen a massive company to be cheerleaders for. It's just weird. I owned AMD chips all the way up to Sandy Bridge; my first PC had an Athlon XP 1600 in like 2001. I just don't believe for a second that, given how far AMD was behind just two years ago, they have now achieved on 7nm what Intel still hasn't proven able to do. The core count and pricing are also not believable; there are a bunch of B350 boards that can't even support a 2700X, and you want me to believe AMD is going to release a 16c 135W part that will be incompatible with the majority of AM4 boards on the market, and for under 500 bucks lol (cutting the 2950X price nearly in half!)?

Cmon now guys lol.


----------



## ryboto

Scotty99 said:


> See i dont watch AMD fanboy channels like that, he only exists to cater to people that have for one odd reason or another chosen a massive company to be cheerleaders for, its just weird. I owned AMD chips all the way up to sandy bridge my first PC had a athlon xp 1600 in like 2001. I just dont believe for a second that for how far amd was behind just two years ago did they now achieve on 7nm what intel still hasnt proven to do. The core count and pricing are also again not believable, there are a bunch of b350 boards that cant even support a 2700x and you want me to believe AMD is going to release a 16c 135w part that will be incompatible with the majority of AM4 boards on the market, and for under 500 bucks lol (cutting the 2950x price nearly in half!)?
> 
> Cmon now guys lol.


If it were a fanboy channel, he wouldn't be making speculation based on facts; he'd just speculate wildly, he'd never be correct, and we'd never believe a word he says.

In fact, your claim is untrue. He's leaked and speculated regarding AMD, Intel and Nvidia, and he's been right. He also has a quite healthy understanding of the architectures at play and has several technical videos on the big three. His reputation is why we aren't discounting him... yet. If he gets this wildly wrong, we'll start raising an eyebrow when he talks future tech with regard to leaks.


----------



## Alex132

@Scotty99


You'll just be beating your head against the wall here, there's no real point in trying to convince people to not drink the Koolaid.


----------



## NightAntilli

Scotty99 said:


> Why it is believable to anyone on here is the real perplexing part.
> 
> Why would amd start off the lineup with a 6c12t chip? I could imagine them foregoing a 4c chip maybe, but that chip would absolutely not have SMT.


Depends on yields, obviously. The smaller the chip, the lower the chance of defects. Yields might be so good for the ~75mm2 chiplet that they don't produce many chiplets where they'd have to fuse off part of the die.
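That relationship is exponential, not linear. A quick sketch under an assumed Poisson defect model (the defect density is made up for illustration):

```python
import math

# Poisson defect model; D0 is an assumed defect density, purely illustrative.
D0 = 0.2  # defects per cm^2

def perfect_die_yield(area_mm2):
    """Fraction of dies expected to come out with zero defects."""
    return math.exp(-D0 * area_mm2 / 100.0)

for area in (75, 150, 300):
    print(f"{area:>3} mm2 die: {perfect_die_yield(area):.1%} defect-free")
```

With these assumed numbers a 75mm2 die is defect-free roughly 86% of the time, versus about 55% for a 300mm2 monolith, which is exactly why small chiplets rarely need cores fused off.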



Scotty99 said:


> They also are not going to be hitting 5 or 5.1ghz boost clocks on 7nm when intel who has 1000x more resources and smarter people working for them cannot do the same.


Nice bias you have there. In all your posts you ooze bias towards Intel. You have this idea in your mind that Intel is superior. Yet TSMC got 7nm to work, and Intel is still stuck trying to fix their 10nm. You cannot simply compare Intel to AMD, because AMD does not produce its own chips. Compare the 10nm Intel process and the 7nm TSMC process, and you'll easily see why Intel has fallen behind.



Scotty99 said:


> This is so clearly faked it actually pains me that so many of you believe it, its good to be optimistic but you also have to be realistic.


Let me guess. Realistic means Intel always being better than AMD, right? AMD beating Intel with less resources is nothing new...
https://arstechnica.com/information...all-of-amd-how-an-underdog-stuck-it-to-intel/



Scotty99 said:


> See i dont watch AMD fanboy channels like that,


And there goes your credibility. He was one of the few so-called "AMD fanboys" that said Vega would be unable to compete and would be a failure. What a fanboy he is, right? I guess you're unaware that your opinion on something you didn't watch is moot.



Scotty99 said:


> he only exists to cater to people that have for one odd reason or another chosen a massive company to be cheerleaders for, its just weird. I owned AMD chips all the way up to sandy bridge my first PC had a athlon xp 1600 in like 2001.


Sure you did...



Scotty99 said:


> I just don't believe for a second that, given how far AMD was behind just two years ago, they have now achieved on 7nm what Intel still hasn't proven able to do.


There we are again with the Intel bias. 



Scotty99 said:


> The core count and pricing are also again not believable,


Only because you don't understand the change that chiplets bring to the table. All you have been giving are empty statements.



Scotty99 said:


> there are a bunch of b350 boards that cant even support a 2700x and you want me to believe AMD is going to release a 16c 135w part that will be incompatible with the majority of AM4 boards on the market, and for under 500 bucks lol (cutting the 2950x price nearly in half!)?
> 
> Cmon now guys lol.


Actually, yes. The Ryzen 9 parts will most likely require an X570 chipset, to keep the ignorant from putting them in a 4+1 phase motherboard. The rest will probably work fine on current AM4 motherboards.


----------



## amd955be5670

Interesting:
RTX 2070/GTX 1080 performance at $150.
5GHz parts. This will be big if AMD can finally dethrone Intel in single-threaded perf once again, like in the Athlon days.

Sad:
They still cannot beat a GTX 1080 Ti. It's going to be two years old by the time we get RX 3000 parts. Imagine GTX 1080 Ti perf for $250 or even $300. That would be huge. It would be the 4K killer for High-Ultra settings.


----------



## kingduqc

Alex132 said:


> @Scotty99
> 
> 
> You'll just be beating your head against the wall here, there's no real point in trying to convince people to not drink the Koolaid.


Koolaid? What's so improbable? They will be using the same chiplets for their whole stack, in massive quantities, and binning them for consumer products. They are small, cost next to nothing, and can be reused across their whole offering, from consoles to servers. We'll see in a month; meanwhile I'll grab some AMD stock to make a quick buck come CES.


----------



## Scotty99

Alex132 said:


> @Scotty99
> 
> 
> You'll just be beating your head against the wall here, there's no real point in trying to convince people to not drink the Koolaid.


You right.

I forget how unreasonable these threads get, and apparently me saying that more money buys smarter people = Intel bias. Nutty people in threads like these lol.


----------



## NightAntilli

Alex132 said:


> @Scotty99
> 
> 
> You'll just be beating your head against the wall here, there's no real point in trying to convince people to not drink the Koolaid.


What do you have to say about HardOCP, a completely separate entity, basically confirming it?


----------



## Defoler

ltpdttcdft said:


> _Nvidia has GTX 10xx, AMD has RX 5xx_
> *Nvidia tries to impose GPP forcing board manufacturers' gaming brands to be Nvidia-only claiming to prevent "confusion"
> Nvidia creates RTX brand*
> _Nvidia has RTX 20xx, AMD has RX 5xx_
> *AMD increases numbering*
> _Nvidia has RTX 20xx, AMD has RX 30xx_


Everything you wrote is wrong or irrelevant.

Nvidia wanted to be associated with a brand, like ROG, and didn't want that brand to include AMD. They wanted that when someone says ROG, people think Nvidia, not AMD or Nvidia.

Using 3080 vs 2080 makes it confusing. If you see an RTX 2080 and an RX 3080, some people will think the RX 3080 is newer and hence better, if they don't check what is what.
AMD could have continued with a 6xx series; instead they jumped to 30xx. When Nvidia brings out its next generation, will it also be 30xx? Will there be confusion over which 30xx someone is referring to without adding RTX or RX, and maybe even with it?

It does look like AMD is intentionally trying to create confusion.
Of course, that all depends on whether it's true. This whole thing could be one huge fake.


----------



## EniGma1987

tajoh111 said:


> You need to look at Energizer vs duracell. Energizer was way behind Duracell in terms of sales and their volume and profits did not grow until they raised their price. Why? because consumers perceived Energizers as the inferiors product because it was priced lower and assumed it was a value product.
> 
> When Energizer raised their pricing, sales increased because consumers saw the brands in more comparable light and the products were actually comparable in performance.



That may or may not be true, but regardless, Energizer actually is the inferior product. Multiple times I have seen brand-new Energizers that read as fully charged spontaneously die after 10 minutes of use. There is a reason certain industries have an unofficial ban on anything but Duracell.








Shatun-Bear said:


> What I find incredible is the fact that the only CPUs Intel can counter these with will be yet another 14nm+++++ Skylake refresh, that old, long in the tooth architecture, hot and power hungry by comparison, to compete with TSMC's 7nm! It's crazy. What's more where do they go after the 9900K? That thing is pushed to the limit and Skylake is not meant to run 4.7-5ghz with that many cores. It's pushed way past optimum frequency curve for the arch.



Intel is having trouble with 10nm, that's for sure. 14nm first became available in late 2014, I believe? That's roughly 5 years now that they have been stuck on what is effectively the same node. They are making the best of 14nm that they can, and it actually is an extremely good process. As for the arch, Intel probably just doesn't want to dump tens of millions into architecture modifications with sharply diminishing returns when they are already working hard, putting the entirety of their CPU design budget into their next-gen architecture. That one has the most promise, since Core i is already maxed out in capability. Oh, and let's not forget that Intel's next-gen design is being spearheaded by the guy who was one of the main designers of these Ryzen chips.


----------



## gopackersjt

Scotty99 said:


> No way any of this is true lol. 6c12t 99.99 chips at the bottom of the lineup with 16c32 5.1ghz at the top for 499.99, ya no.
> 
> How this got taken serious by any "leak" site is beyond me, do people still have functioning brains....hello??
> 
> Edit: While all of the CPU's are unrealistic in terms of cores price and clockspeed the 3700x in particular is just lol'able. Do you guys actually legitimately think amd has not only gained ~600mhz (at minimum) they are going to sell that chip with 4 more cores for the same price as the 2700x? Like you actually believe that lol?



While the pricing is a bit absurd, the technology side of things is not at all impossible. Zen was designed and implemented to be cheap to produce. Their yields have been ridiculously high so far, and they've been refining the process for about two years now. If a chiplet starts at 8C, then there's no reason that 6C, 8C, and then 8C + 4C and 8C + 8C configurations are impossible. Any failed 8C die will be turned into a 6C part, or go into one of the 12C SKUs as the 4C part. Look at how the new EPYC CPUs are set up; this is all in line with that. As far as clock speeds go, this is a new architecture on a brand-new process that we know nothing about, other than that it's the high-performance variant. Also, remember the jump in clock speed from Bulldozer to Piledriver? AMD needs mindshare, and stomping on Intel and their last-gen parts is a good way to show that they're back on their feet.
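The salvage logic described above (failed 8C dies becoming 6C parts, or the 4C half of a 12C SKU) can be sketched as a simple binning function. This is purely illustrative; the tiers and the idea that 12C = 8C + 4C come from the rumor being discussed, not from anything AMD has confirmed.

```python
# Hypothetical sketch of how a single 8-core chiplet design could feed a
# whole product stack via binning. Tier names and thresholds are assumptions
# for illustration only.

def bin_chiplet(good_cores: int, hits_high_clock: bool) -> str:
    """Assign a fabricated 8C chiplet to a salvage tier based on test results."""
    if good_cores == 8:
        # A fully working die; the best clockers could pair up for 16C parts.
        return "8C tier (or half of a 16C part)" if hits_high_clock else "8C tier"
    if good_cores >= 6:
        return "6C tier"       # partially defective die, two cores fused off
    if good_cores >= 4:
        return "4C salvage"    # pairs with a full 8C die for a 12C SKU, or Athlon
    return "scrap"

assert bin_chiplet(8, False) == "8C tier"
assert bin_chiplet(6, False) == "6C tier"
assert bin_chiplet(4, False) == "4C salvage"
```

Because every salvage tier maps to a sellable SKU, almost no silicon is wasted, which is the core of the "cheap to produce" argument.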


----------



## keikei

AdoredTV was on the money in regards to RTX. I hope he's right on these.


----------



## Alex132

NightAntilli said:


> What do you have to say about HOCP, a completely separate entity, basically confirming it?


Some friends of mine made a fake article about AMD CPU leaks and got several news outlets to run it.


@*CynicalUnicorn* 



https://www.overclock.net/forum/10-...o-apu-spotted-features-fm3-20nm-afterall.html
https://wccftech.com/amds-carrizo-apu-a10-8890k-cpuz-hexacore-20nm/
https://www.eteknix.com/leaked-cpu-z-images-reveal-amd-carrizo-apu-a10-8890k-specs/


etc.


----------



## ZealotKi11er

Forget $250. It's 7nm, so more expensive than 14nm, and it's GDDR6 vs GDDR5, which was years old, and the RTX 2070 is $499 with 8GB of GDDR6. I suspect $300-400 if the performance is in line with the RTX 2070. Also, for the people saying AMD still can't beat the 1080 Ti: well, Nvidia did beat it, for $500 more lol.


----------



## andrews2547

NightAntilli said:


> What do you have to say about HOCP, a completely separate entity, basically confirming it?



Unless it's coming directly from AMD or an article reporting on what AMD said, it's not confirmed.


https://www.overclock.net/a-troubling-trend/


----------



## Scotty99

gopackersjt said:


> While the pricing is a bit absurd, the technology side of things is not at all impossible. The way that Zen is designed and implemented was designed to be done in a cheap way. Their yields have been ridiculous high so far, and they've been refining the process for about two years now. If a chiplet starts at 8C, then there's no reason that 6C, 8C, and then 8C + 4C and 8C + 8C is impossible. Any failed 8C yields will be turned into a 6C part, or go into one of the 12C SKU's as the 4C part. Look into how the new EPYC CPU's are setup, this is all in line. As far as clock speeds go, this is a new architecture, and a brand new process that we know nothing about, other than it' the high performance variant. Also, remember the jump in clockspeed from Piledriver to Bulldozer? AMD needs mindshare, and stomping out Intel and their last gen parts are good way to show that they're back on their feet.


Let's assume the pricing is obviously wrong.

1. Answer me: where did the 4c/8t chips go, or a 6c/6t to start off the Ryzen 2 lineup? Is this AMD just being generous, giving people 6c/12t CPUs for a benjamin? Remember, that's MSRP; Ryzen 5s go on sale all the time, and if this were a real leak you would be seeing this 6c/12t CPU on Newegg for 70 bucks. It's just not possible.
2. Clock speeds. How do you expect me to believe AMD went from 4.3 max turbo on their highest-end Ryzen 1 part to 5.1 with Ryzen 2? Has Intel ever made a leap this big?

These leaks are garbage and anyone who believes them should have their head examined.


----------



## Cyclonic

Scotty99 said:


> Lets assume pricing is obviously wrong.
> 
> 1. Answer me where did 4c/8t chips go, or 6c6t to start off the ryzen 2 lineup? Is this amd just being generous giving people 6c12t cpu's for a benjamin? Remember thats MSRP, ryzen 5's go on sale all the time and if this was a real leak you would be seeing this 6c12t cpu on newegg for 70 bucks, its just not possible.
> 2. Clockspeeds. How do you expect me to believe amd went from 4.3 max turbo on their highest end ryzen 1 part to 5.1 with ryzen 2? Has intel ever made a leap this big?
> 
> These leaks are garbage and anyone who believes them should have their head examined.


And what if they turn out to be true?  Will you go to the doctor and let him examine your head?  And please livestream it for us


----------



## cooljaguar

Scotty99 said:


> Lets assume pricing is obviously wrong.
> 
> 1. Answer me where did 4c/8t chips go, or 6c6t to start off the ryzen 2 lineup? Is this amd just being generous giving people 6c12t cpu's for a benjamin? Remember thats MSRP, ryzen 5's go on sale all the time and if this was a real leak you would be seeing this 6c12t cpu on newegg for 70 bucks, its just not possible.
> 2. Clockspeeds. How do you expect me to believe amd went from 4.3 max turbo on their highest end ryzen 1 part to 5.1 with ryzen 2? Has intel ever made a leap this big?
> 
> These leaks are garbage and anyone who believes them should have their head examined.


1. It's called chiplets; you've obviously done zero research on Zen 2. This isn't your traditional CPU generation, we're looking at one of the biggest game changers in over a decade.
2. No. But then again, they also haven't before switched from making desktop CPUs on a mobile-focused node to a high-performance-focused node. Zen 2 isn't shackled by 14nm LPP.


----------



## gopackersjt

Scotty99 said:


> Lets assume pricing is obviously wrong.
> 
> 1. Answer me where did 4c/8t chips go, or 6c6t to start off the ryzen 2 lineup? Is this amd just being generous giving people 6c12t cpu's for a benjamin? Remember thats MSRP, ryzen 5's go on sale all the time and if this was a real leak you would be seeing this 6c12t cpu on newegg for 70 bucks, its just not possible.
> 2. Clockspeeds. How do you expect me to believe amd went from 4.3 max turbo on their highest end ryzen 1 part to 5.1 with ryzen 2? Has intel ever made a leap this big?
> 
> These leaks are garbage and anyone who believes them should have their head examined.


1. Zen 1 used 4C CCXs. That meant an individual die had 4 cores, and you needed another CCX to create a 6C or 8C CPU. They now have a base of 8 cores per chiplet, so depending on yields, they would make 6C CPUs out of partially defective chiplets, and use salvaged 4C chiplets alongside a full 8C chiplet to create the higher-clocking 12C parts. The 4C chiplets that don't clock high enough will probably become the Athlon CPUs (just like the 200GE right now is a failed 4C with only 2 cores enabled).

2. Intel did it going from the i7 6700K to the i7 8086K. They even added 2 more cores in the process.

I'm not taking these leaks as anything official, but these claims (maybe aside from price) are not that unrealistic. Not to mention that 5.1GHz would probably only be on one core, similar to how XFR only gets a single core to 4.35GHz right now.


----------



## EniGma1987

gopackersjt said:


> Also, remember the jump in clockspeed from Piledriver to Bulldozer? AMD needs mindshare, and stomping out Intel and their last gen parts are good way to show that they're back on their feet.


Ironic, isn't it, that Piledriver gained a nice speedup because it fixed the L1 -> L2 cache issue that Bulldozer launched with, and now the thing holding Zen back in clock speed is an L2 cache issue yet again. As this Zen 2 arch does have real, significant tweaks, you are right that it is entirely possible AMD fixed the clock wall and could gain a good bit of MHz. The process node alone will only give a 3-4% speed improvement to the design, so it really just depends on whether AMD fixed their architecture issue, and with it whether we get that extra ~10% speed jump from the arch or not.
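Taking the post's own numbers at face value, the compounded gain is easy to sanity-check. The 4.35 GHz baseline is the 2700X's single-core boost; the node and architecture percentages are the poster's estimates, not confirmed figures:

```python
# Back-of-envelope check of the clock-gain claim, using the poster's
# assumed percentages (not official AMD numbers).
base_boost_ghz = 4.35   # Ryzen 7 2700X single-core XFR boost
node_gain = 1.04        # ~3-4% from the 7nm process alone (estimate)
arch_gain = 1.10        # ~10% if the L2 cache clock wall is fixed (estimate)

projected = base_boost_ghz * node_gain * arch_gain
print(f"{projected:.2f} GHz")  # ≈ 4.98 GHz
```

Compounding both estimates lands right around the leaked ~5 GHz boost figures, which is why the argument hinges on whether the architecture fix is real.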








Scotty99 said:


> 2. Clockspeeds. How do you expect me to believe amd went from 4.3 max turbo on their highest end ryzen 1 part to 5.1 with ryzen 2? Has intel ever made a leap this big?



Yes? Original Haswell to Skylake was big: a 4.4-4.5GHz average OC to a 5GHz average OC. I was skipping Broadwell simply because it was not generally available to the market, but if you want to include it, Broadwell clocked around 4.4GHz too, so Broadwell to Skylake was a big jump as well. Also Nehalem, with an average OC of 4GHz, to Sandy Bridge, averaging a 4.7GHz OC. People with golden-sample SB CPUs were hitting 4.9-5.0GHz too.


----------



## Scotty99

gopackersjt said:


> 1. Zen 1 used 4C CCX's. This meant that an individual die had 4C's, and you needed to add another CCX to create a 6C or 8C CPU. They have a base of 8C's per chiplet now, so depending on yields, they would make 6C CPU's out of the bad chiplets, and make use the good 4C chiplets to create the higher clocking 12C parts. The 4C chiplets that don't clock high enough will probably become the Athlon CPU's (just like the 200GE is a failed 4C right now with only 2C's).
> 
> 2. Intel did it going from the i7 6700k over to the i7 8086k. They even added 2 more cores in the process of doing so.
> 
> I'm not taking these leaks as anything official, but these claims (maybe aside from price) are not that unrealistic. Not to mention that 5.1Ghz would probably only be on one core, similar to how XFR only get's a single core 4.35Ghz right now.


Chiplets schniplets, man, I get it; it still does not convince me that the LOWEST END Ryzen part is going to be 6c/12t. Also, when talking about turbo speed I'm obviously talking about single-core turbo (as that is how they are all rated). You conveniently skipped two CPUs in your 6700K-to-8086K comparison, and even that barely matches the gap this leak claims AMD has made in one generation.

It's bogus, and I'm sure of it.


----------



## NightAntilli

andrews2547 said:


> Unless it's coming directly from AMD or an article reporting on what AMD said, it's not confirmed.
> 
> 
> https://www.overclock.net/a-troubling-trend/


Maybe. But Kyle from HOCP is known to have insider information. And people have leaked things to AdoredTV multiple times in the past. He predicted Vega being bad, he predicted Zen being good, he predicted chiplets, he predicted RTX, and now, we have this. His track record is quite solid lately.

Is it confirmed? No. But if two independent sources basically agree on the information, and Kyle from HOCP says it's mostly correct with some minor inaccuracies, what reason do we have to think this is completely bogus? 

The main inaccuracy I can think of is the pricing. And maybe, some of the 8c/16t CPUs are actually two 4c/8t chips, rather than one 8c/16t chiplet with a dummy chiplet like Jim from AdoredTV mentioned.


----------



## Scotty99

Cyclonic said:


> And what if they turn out to be true?  Will you go to the doctor and let him examine your head?  And please livestream it for us


If these leaks are accurate down to pricing, I'll do better than that: I'll take a dump in my cat's litter box on stream:
https://www.twitch.tv/araxxis


----------



## EniGma1987

Scotty99 said:


> if these leaks are accurate down to pricing ill do better than that, ill take a dump in my cats litter box on stream:





Why would anyone want to see that? I'd rather you just livestreamed going in for a mental evaluation. lol


----------



## cooljaguar

Scotty99 said:


> Chiplets schniplets man i get it, still does not convince me that the LOWEST END ryzen part is going to be 6c/12t


Then you need to stop posting and go look up how big of a deal chiplets are. 6c/12t for the lowest end Ryzen part is easily doable now. Chiplets that are so defective that only 4c are functional will likely be used for the budget Athlon lineup, or possibly on the 8c/16t Ryzen SKUs in a 2 chiplet layout.


----------



## Cyclonic

Scotty99 said:


> if these leaks are accurate down to pricing ill do better than that, ill take a dump in my cats litter box on stream:
> https://www.twitch.tv/araxxis


Let's hope so then. The leaked prices, though, are what a retailer pays when buying 1,000 units, just like Intel's price tables. So don't back out because of that.


----------



## Scotty99

cooljaguar said:


> Then you need to stop posting and go look up how big of a deal chiplets are. 6c/12t for the lowest end Ryzen part is easily doable now. Chiplets that are so defective that only 4c are functional will likely be used for Athlon branding instead, or possibly on the 8c/16t Ryzen SKUs in a 2 chiplet layout.


Oh, I'm not debating that; I'm saying the entry-level Ryzen part will not have SMT. I posted two pages back that this was the first thing that stood out to me: every single chip on that list has SMT.


----------



## gopackersjt

Scotty99 said:


> Chiplets schniplets man i get it, still does not convince me that the LOWEST END ryzen part is going to be 6c/12t. Also when talking about turbo speed im obviously talking about single core turbo (as that is how they are all rated). You conveniently skipped two CPU's in your 6700k to 8086k comparison, and that barely manages to reach the gap this leak is claiming AMD has done in one.
> 
> Its bogus, and im sure of it.


I'm using logic, and you're ignoring the facts of how Zen actually works, but ok... and I didn't skip anything. The i7 7700K was a 6700K with a DRM feature added, and the i7 6700K, 7700K, and 8700K are the same architecture, socket, and process as the i7 8086K. Literally nothing changed between them except the software walls that Intel put up to segment the motherboard market.


----------



## NightAntilli

Scotty99 said:


> if these leaks are accurate down to pricing ill do better than that, ill take a dump in my cats litter box on stream:
> https://www.twitch.tv/araxxis


Down to pricing? Nice cop-out. The pricing is the thing most likely to change, considering the launch is probably at least 3 months out. It already changed compared to a previous reddit leak, so... Yeah.

But... Are you implying that these leaks are actually reasonable except for the pricing? Because your attitude was completely different just a few posts earlier.


----------



## Scotty99

gopackersjt said:


> I'm using logic, and you're ignoring the facts of how Zen actually works, but ok... and I didn't skip anything... an i7 7700k was the 6700k with a DRM feature in it, and the i7 6700k, 7700k, and 8700k, are the same architecture, socket and process as the i7 8086k. Literally nothing changed between them except the software walls that Intel put up to segment the motherboard market.


You are now simply trolling. A 2700X is just an 1800X too, I suppose?

Ugh, forums, man. Even when you are right, you lose.


----------



## mouacyk

NightAntilli said:


> Maybe. But Kyle from HOCP is known to have insider information. And people have leaked things to AdoredTV multiple times in the past. He predicted Vega being bad, he predicted Zen being good, he predicted chiplets, he predicted RTX, and now, we have this. His track record is quite solid lately.
> 
> Is it confirmed? No. But if two independent sources basically agree on the information, and Kyle from HOCP says it's mostly correct with some minor inaccuracies, *what reason do we have to think this is completely bogus? *
> 
> The main inaccuracy I can think of is the pricing. And maybe, some of the 8c/16t CPUs are actually two 4c/8t chips, rather than one 8c/16t chiplet with a dummy chiplet like Jim from AdoredTV mentioned.


Every. Until it's not bogus. As the "troubling trend" article shows, leaks should be questioned and researched further, not, least of all, upheld and defended simply out of wishful anguishing for the triumphant comeback of everyone's favorite underdog.


----------



## Alex132

gopackersjt said:


> Literally nothing changed between them except the software walls that Intel put up to segment the motherboard market.


Yeah my 2500k is basically half a 9900k. Heck, it's all the same at the _core_


----------



## gopackersjt

Scotty99 said:


> You are now simply trolling, a 2700x is just a 1800x too i suppose?
> 
> Ugh forums man, even when you are right you lose.


No, it's not. It's a different process, supports higher-clocked memory, has different cache configurations, and also has lower latency between the CCXs.

At any rate, this is getting off topic. I'm done lol


----------



## NightAntilli

mouacyk said:


> Every. Until it's not bogus. As the "troubling trend" article reveals, leaks should be questioned, researched further, and, not least of all, upheld/defended simply for the wishful anguishing triumphant comeback of everyone's favorite underdog.


You sure highlight what is convenient to your views. But whatever... To what you say here, I answer: even if the last couple of leaks from this specific source have all been solid? Even if we have a whole line-up with cores, threads, pricing, and naming, rather than one invented CPU like in your reference article?

And I'll add this to the mix... Go back to 2015-2016 and say that AMD will come up with an 8C/16T mainstream CPU for around $300. How many would have believed it? How many would have said it was too good to be true? And yet, that is what happened.

I understand the argument for skepticism regarding leaks. But I also recognize dismissive attitudes and denial.

I am not defending the leaks as such, but there is nothing unreasonable about them.


----------



## doritos93

NightAntilli said:


> You sure highlight what is convenient to your views. But whatever... To what you say here I answer; Even if the last couple of leaks of this specific source have all been solid? Even if we have a whole line-up with cores, threads, pricing, and naming, rather than one invented CPU leak like your reference article?
> 
> And I'll add this to the mix... Go back to 2015- 2016 and say that AMD will come up with an 8C/16T mainstream CPU for around $300. How many believed it? How many said it was too good to be true? And yet, that is what happened.
> 
> I understand the argument for skepticism regarding leaks. But I also understand dismissive attitudes and denial.
> 
> I am not defending the leaks as such, but there is nothing unreasonable about them.


You're dealing with a special case here. The guy trolled the Ryzen owners thread for weeks about how people make the 1700 sound better than it is, how AMD will never be faster than Intel, and so on and so forth. People would be having a legit technical discussion, and then this guy pops up: "Here's a video ...."

I've lost count of the times I've seen him laughed out of news threads too. I don't understand how the mods don't pick up on this stuff. Just look at his post history.


----------



## ozlay

Scotty99 said:


> Oh im not debating that, im saying the entry level ryzen part will not have SMT. I posted two pages back that was the first thing that stood out to me, every single chip on that list has SMT.


Doesn't the entire current lineup have SMT?


----------



## Scotty99

ozlay said:


> Doesn't the entire current lineup have SMT?


The 2200G replaced the Ryzen 3 1200; neither had SMT. They also each retail around 100 bucks.

We aren't getting a 6c/12t Ryzen 3 for 99 bucks lol.


----------



## guttheslayer

From any other YouTuber I would have discarded these so-called leaks straight away.

But AdoredTV has been pretty accurate so far, as far as I remember. The RTX one was almost spot on; I remember he even mentioned a Titan RTX (way before we even knew the RTX moniker existed) that was +15% faster than the Titan V and cost close to $3,000, and it came out almost exactly like that.


Not just him: the retired-engineer guy who conceptualized the block diagram for Zen 2 in October (and turned out to be >95% spot on for EPYC Rome, even down to the chiplet size in mm) explained how an AM4 16C/32T part could be wired together.


These two sources combined, sharing an almost identical view of a 16C/32T part inbound for Zen 2, made me believe 16C/32T in a mainstream socket is not a dream. It is REAL.


----------



## cooljaguar

Scotty99 said:


> 2200g replaced the ryzen 3 1200, both had no SMT. They also each retail around 100 bucks.
> 
> We arent getting a 6c12t ryzen 3 for 99 bucks lol.


The R3 1200 and 2200G were both comparatively large Zen 1 monolithic dies. The rumored 6c/12t 3300 would be a chiplet less than half that size. $99 is easily doable.

I would've liked to see how you responded to the initial Ryzen leaks... "We arent getting a 8c16t ryzen 7 for 300 bucks lol."


----------



## guttheslayer

cooljaguar said:


> R3 1200 and 2200G were both large Zen 1 monolithic dies. The rumored 6c/12t 3300 would be a chiplet that's less than half the size. $99 is easily doable.
> 
> I would've liked to see how you responded to the initial Ryzen leaks.. "We arent getting a 8c16t ryzen 7 for 300 bucks lol."


That 8C/16T, going by the lineup, will be an R5-series part. That's a 33% core bump over the first two generations of Ryzen R5, which is not entirely far-fetched. So you should expect a $2XX price range.


----------



## Scotty99

The real leak:

Athlons:
4c+4c8t for 70-100 bucks

Ryzen 3:
6c12t for 150-220

Ryzen 5:
8c16t for 270-330

Ryzen 7:
12c24t for 429-499

Adjust the clock speeds down a few hundred MHz, and the 16c parts are Threadripper.

Now that's believable and more akin to what we will be seeing next year.


----------



## GHADthc

Some damage control going on in this thread...


----------



## Quantium40

It's going to be interesting to see which side of this thread eats weird things a few months from now.

Me, I hope AMD demolishes Intel with the next lineup. Everyone should; it's the best timeline.

However, as optimistic as I want to be, I am too smart to be making any claims this early.


----------



## tajoh111

ZealotKi11er said:


> Forget $250. Its 7nm so more exp than 14nm, its G6 vs G5 which was years old and RTX 2070 is $499 with 8GB of G6. I suspect $300-400 if the performance is in line with RTX 2070. Also for people saying AMD still cant beat 1080 Ti, well Nvidia did beat it for $500 more lol.


These are more reasonable expectations.

Memory is more expensive than it used to be, and I haven't read any articles about the price of GDDR5 (and presumably GDDR6) going down. Price lists also show GDDR6 costing more than GDDR5. Combine that with the excessive cost of 7nm wafers and there's little chance this product comes in at $249.

Polaris was priced low because the Nvidia GTX 1070 was 50% faster than it and had non-Founders Edition pricing of $379.99. Although that $379.99 price was initially nonexistent, it was still the benchmark AMD needed to price against, because that pricing would come, and the GTX 1060 would eventually launch, predictably, at $249-300 in accordance with the 1070 pricing. This was lower than AMD likely wanted to price their product, but they had no choice.

You can tell the RX 480/470/580/570 are low-margin products from the subsequent pricing of the RX 580/570/590 (the RX 580 launched before the mining spike, and RX 480s were $150 at the time). Typically rebrands or refreshes have come with price drops along with better performance; the 280X, 270X, 390X, and R9 380/385 show this trend. This was not the case with Polaris.

Polaris 20 came out at the same pricing, and Polaris 30 came out at higher pricing. It is clear AMD did not like pricing Polaris this low, but its hand was forced by the performance delta with Pascal.

If AMD were in a more advantageous situation, we would probably see pricing more along the lines of Pitcairn's.

The 7870 launched at $350 and was a smaller die than Polaris: 213mm² vs Polaris's 232mm². Remember, additionally, that 14nm FinFET wafers are dramatically more expensive than 28nm wafers, almost double, and you certainly have a low-margin product when you combine that with the 256-bit bus and the overbuilt PCB. Where AMD was able to make an extra $100+ on the 7870 because of the $350 selling price, the RX 480 was a more expensive product to make but was priced at $240, meaning AMD was making $110 less per card on top of the additional wafer cost.

Polaris was not priced under $300 because AMD wanted market penetration; they had to price it that way because the 1070 was significantly faster and, combined with the Nvidia brand, would rightfully get the nod for consumer purchases. So rather than being reactive, they were proactive and launched at a lower price to gain goodwill. The RX 480/470 would have naturally fallen to that pricing anyway, if we go by the price drops of the 7870/7850/7950/7970 shortly after Kepler's launch.

If the RX 3080 comes with RTX 2070 performance, they will not price their products lower than Polaris 30 when their costs are going up and their performance is more competitive with the market. I think something like $350 for RTX 2070 performance and roughly $250-279 for Vega 64 performance from the cut-down part is more likely. That will still sell and give AMD a good profit. Also, unless Nvidia does some price drops, the RTX 2070 will keep selling out, which means it is at the optimum price.

AMD's GPU lineup dragged down AMD's margins and profits; this is clear from the unimpressive net profits even with the success of Ryzen. If AMD is in a position to take more market share and increase their margins, they will take it. They are not a charity, and have shown this in the past. With public opinion of Nvidia at an all-time low from what I have seen, they don't need to value-price their product.
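The die-economics argument above can be rough-sketched with the classic dies-per-wafer approximation. Only the die sizes (213mm² Pitcairn, 232mm² Polaris 10) come from the post; the wafer prices and yield are illustrative assumptions based on the "almost double" wafer-cost claim:

```python
import math

# Rough cost-per-die comparison behind the margin argument.
# Wafer prices and yield are assumed figures for illustration.

def dies_per_wafer(die_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic approximation: gross area divided by die area, minus edge loss."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_mm2 - math.pi * d / math.sqrt(2 * die_mm2))

def cost_per_die(die_mm2: float, wafer_cost_usd: float, yield_rate: float = 0.8) -> float:
    return wafer_cost_usd / (dies_per_wafer(die_mm2) * yield_rate)

# Assumed wafer prices: ~$3k for 28nm, ~$6k for 14nm FinFET ("almost double").
print(round(cost_per_die(213, 3000), 2))  # Pitcairn (7870) on 28nm
print(round(cost_per_die(232, 6000), 2))  # Polaris 10 (RX 480) on 14nm
```

Even with these crude assumptions, the slightly bigger die on the far pricier node costs well over twice as much per chip, while selling for $110 less, which is the squeeze the post describes.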



mouacyk said:


> Every. Until it's not bogus. As the "troubling trend" article reveals, leaks should be questioned, researched further, and, not least of all, upheld/defended simply for the wishful anguishing triumphant comeback of everyone's favorite underdog.


It's okay to have faith in AMD's CPU department, because they have been delivering since Ryzen. Although I suspect pricing won't be this aggressive, the performance expectations are possible.

However, AMD's GPU division, since the move to Shanghai, has pumped out nothing but underperforming chips, well below expectations, and the latest, Vega 20, seems the worst of all, coincidentally their first 7nm design.

Imagine if Nvidia's next flagship came out on a new node with, at best, only 20% better performance at the same power (AMD's performance does not scale linearly with compute throughput). Even the failed (according to opinion here) RTX series manages 30-40% without the aid of a new node. And this is why Vega 20 is not launching in the consumer space, not because Navi is too good.

Look at the pre-hype expectations for every single card since the Shanghai team took charge, starting with the R9 285, and find the lowest expectation in each thread; that is the one that came true. Polaris and Vega were the very worst for this. Polaris went from GTX 980 Ti +15% down to around GTX 970. Which expectation turned out to be accurate? Vega went from 15-20 percent better than a GTX 1080 Ti down to GTX 1080 performance. Again, which turned out to be the more accurate prediction? The latter was particularly bad because it came with record power consumption and was mega late. Vega 20 repeated this, with expectations starting from at least 21 TFLOPS and running up to 30 TFLOPS of FP32. Nowhere did anyone expect something like 14.7 TFLOPS (I was the closest at 15-16 TFLOPS) at the same power.

Now rumors paint basically the same TFLOP count (which would result in Vega +15%) at half the power from pure architecture (Vega 20, I stress, is already built on 7nm and uses HBM2, which is more efficient than GDDR6). I have a hard time believing this, especially since Navi is still GCN (and hopefully the last of it).


----------



## miklkit

Scotty99 should be ignored. He spent most of 2017 trying to shoot down Ryzen 1 and is just an argument looking for a place to happen.


The 3600X looks like the one for me. Will it fit on an X370 board?


----------



## keikei

I don't quite understand the new naming scheme. I see the GPU division trying to get some of the CPU lineup's glow with the 3xxx numbering, but it also closely matches Nvidia's lineup. RTX 2070 vs RX 3070?


----------



## Scotty99

miklkit said:


> Stotty99 should be ignored. He spent most of 2017 trying to shoot down Ryzen1 and is only an argument looking for a place to happen.
> 
> 
> The 3600X looks like the one for me. Will it fit on an X370 board?


Ryzen 1 and ryzen 2 are quite terrible gaming cpu's. Ryzen 1 loses to overclocked ivy bridge, ryzen 2 loses to overclocked haswell. These new ryzen chips will likely lose to overclocked 6700k's which will be four years old by the time they launch.

You can ignore me or think whatever you want, but I'm not an idiot. Abrasive, sure, but I know what I'm talking about, and these leaks are bogus.


----------



## NightAntilli

keikei said:


> I don't quite understand the new naming scheme. I see the gpu division trying to get some of the cpu glow with the 3xxx, but this also closely matches nvidias lineup. RTX 2070 vs RX 3070?


It's most likely deliberate; a sort of "in your face" to nVidia over GPP. Honestly, I thought RTX was too similar to RX already. Maybe nVidia wanted to make the RX cards obscure; RX can be seen as an inferior RTX if one knows nothing about GPUs. But now AMD stings back by naming its cards 3000 against 2000. And when nVidia comes with the successor to its current cards, naming them RTX 3000 will make things extremely confusing, which is probably an advantage for AMD, because nVidia has too much mind share.

Everyone does this btw... Intel with its B360 over AMD's B350 is another example. This business is quite dirty in terms of naming schemes.


----------



## ku4eto

Scotty99 said:


> Ryzen 1 and ryzen 2 are quite terrible gaming cpu's. Ryzen 1 loses to overclocked ivy bridge, ryzen 2 loses to overclocked haswell. These new ryzen chips will likely lose to overclocked 6700k's which will be four years old by the time they launch.
> 
> You can ignore me or think whatever you want but im not an idiot, abrasive sure but i know what im talking about and these leaks are bogus.


Oh boy, here you go baiting again.

Oh boy, here i go black listing again.


----------



## miklkit

Scotty99 said:


> Ryzen 1 and ryzen 2 are quite terrible gaming cpu's. Ryzen 1 loses to overclocked ivy bridge, ryzen 2 loses to overclocked haswell. These new ryzen chips will likely lose to overclocked 6700k's which will be four years old by the time they launch.
> 
> You can ignore me or think whatever you want but im not an idiot, abrasive sure but i know what im talking about and these leaks are bogus.



Thank you for proving my point once again.


----------



## Scotty99

ku4eto said:


> Oh boy, here you go baiting again.
> 
> Oh boy, here i go black listing again.


Baiting? I'm literally stating facts.

A max-overclocked 3770K will beat a max-overclocked 1800X in 9/10 games on the market.
A max-overclocked 4790K will beat a max-overclocked 2700X in 9/10 games on the market.
A max-overclocked 6700K will likely beat a max-overclocked 3700X in 9/10 games on the market.

The 1/10 is being generous; most games cannot use more than 4c/8t CPUs.

This isn't exactly AMD's fault; most games are optimized for Intel chips, which traditionally have maxed out at 4c/8t.

We still aren't at the point where those extra threads matter, although DX12 is finally starting to come around (WoW gets its DX12 multithreading patch next week). Hopefully more games follow.


----------



## Kpjoslee

NightAntilli said:


> It's most likely deliberate. Likely a sort of "in your face" to nVidia's GPP. Honestly, I thought RTX was too similar to RX already. Maybe nVidia wanted to make the RX cards obscure. RX can be seen as an inferior RTX, if one knows nothing of GPUs. But now AMD stings back with naming their cards 3000 over 2000. And if nVidia comes with its successor to the current cards, naming them RTX 3000 will make things extremely confusing, which is probably an advantage for AMD, because nVidia has too much mind share.
> 
> Everyone does this btw... Intel with its B360 over AMD's B350 is another example. This business is quite dirty in terms of naming schemes.


Nah, let's be honest here. Intel was forced to use B360 because AMD intercepted the number. Nvidia has no reason to bother with AMD's naming scheme.
AMD does have reasons to do so, since they are heavy underdogs and lack the brand presence of the others, but I am not going to defend it as an industry-wide practice when it was pretty much AMD's doing 99% of the time.


----------



## NightAntilli

Scotty99 said:


> *Ryzen 1 and ryzen 2 are quite terrible gaming cpu's.* Ryzen 1 loses to overclocked ivy bridge, ryzen 2 loses to overclocked haswell. These new ryzen chips will likely lose to overclocked 6700k's which will be four years old by the time they launch.
> 
> You can ignore me or think whatever you want but im not an idiot, abrasive sure but i know what im talking about and these leaks are bogus.


If beating an i5 7600K at the same price point qualifies as terrible, sure.


----------



## ZealotKi11er

Scotty99 said:


> Ryzen 1 and ryzen 2 are quite terrible gaming cpu's. Ryzen 1 loses to overclocked ivy bridge, ryzen 2 loses to overclocked haswell. These new ryzen chips will likely lose to overclocked 6700k's which will be four years old by the time they launch.
> 
> You can ignore me or think whatever you want but im not an idiot, abrasive sure but i know what im talking about and these leaks are bogus.


Almost 2 years since Ryzen 1 came out, and a lot more games are starting to use more than 4C. Also, the 6700K is the same architecture as the 9900K.


----------



## EniGma1987

keikei said:


> I don't quite understand the new naming scheme. I see the gpu division trying to get some of the cpu glow with the 3xxx, but this also closely matches nvidias lineup. RTX 2070 vs RX 3070?


Nvidia is the one to blame for that. They are the ones who changed to RTX model names when AMD had already been on RX model names for years.


----------



## Scotty99

Does this forum have a facepalm emote?

I contended that older i7s beat current Ryzen CPUs when overclocked, and someone links a benchmark of an i5? Those extra 4 threads matter, and that's why older i7s can not only keep up with but beat Ryzen in most games, thanks to their higher clock speeds when overclocked. It's all about optimization, and we are a far cry from even a third of the games on the market being able to use more than 4c/8t efficiently.



EniGma1987 said:


> Nvidia is the one to blame for that. They are the ones who changed to RTX when AMD has been on RX names or years now.


You gotta be loopy to actually think this. Nvidia couldn't give a flying you-know-what about AMD; the cards are named RTX because the main selling point is that they have ray-tracing hardware....

Is this forum ok? Are you guys doing alright lol?


----------



## Alex132

Scotty99 said:


> You gotta be loopy to actually think this, nvidia couldnt give a flying you know what about AMD the cards are named RTX because the main selling point of the cards is they have ray tracing hardware....
> 
> Is this forum ok? Are you guys doing alrite lol?



Nah, it's clearly evil Nvidia trying to trick people into buying RTX cards when they thought they were buying AMD RX cards.


----------



## CynicalUnicorn

Alex132 said:


> Some friends of mine made a fake article about AMD CPU leaks and got several news outlets to take it.
> 
> 
> @*CynicalUnicorn*
> 
> 
> 
> https://www.overclock.net/forum/10-...o-apu-spotted-features-fm3-20nm-afterall.html
> https://wccftech.com/amds-carrizo-apu-a10-8890k-cpuz-hexacore-20nm/
> https://www.eteknix.com/leaked-cpu-z-images-reveal-amd-carrizo-apu-a10-8890k-specs/
> 
> 
> etc.
> 
> 
> *snip*


lol yeah the best part about that was that some Russian site picked up on it and said "by the way _we_ have a source with EXCLUSIVE BENCHMARKS for you all!"

And to this day I'm still salty that WCCF called it wrong because of the amount of L2 cache. I was meticulous with finer details like that, and the halved L2 size versus the rest of the Bulldozer family had been confirmed earlier in the month. But WCCF is the same outlet that published a 25nm Phenom IV leak on purpose, so I can't say I'm surprised.





andrews2547 said:


> Unless it's coming directly from AMD or an article reporting on what AMD said, it's not confirmed.
> 
> 
> https://www.overclock.net/a-troubling-trend/


Kyle from [H]ardOCP posted my 10-core Comet Lake leak a couple weeks ago. I found a die shot of the 8700K on Wikichip, went into Paint, duplicated the middle two cores twice, and stretched out a few other bits of the die. This was not hard work. I did it while cramming Pop-Tarts into my face before class, and I wasn't even late!

Seriously, guys, people will just lie on the Internet. When I do it, it's because I think it's funny. When ad-supported websites do it, well...


----------



## Jarhead

amd955be5670 said:


> Interest:
> RTX2070/1080 performance at 150$.
> 5Ghz parts. This will be big if AMD can finally dethrone intel in single threaded perf once again like the Athlon Days.
> 
> Sad:
> They still cannot beat a GTX1080Ti. Its going to be 2 years old by the time we get RX3000 parts. Imagine GTX1080Ti perf. for 250$ or even 300$. That would be huge. It would be the 4k Killer for High-Ultra settings.


Yeah, but who cares when you'll be able to buy three of them for the price of one 1080 Ti? Multi-GPU is about to get interesting again just due to the price points of these cards. And once the new PeasantStation is announced as Vulkan-powered, the handwriting is on the wall for Vulkan as a major player in PC gaming, and AMD has already shown Vulkan can be set up to scale almost linearly with multi-GPU setups. Their Vega 64/1080-level card is going to be a whopping $250, and you're only gonna need two of them for 100 FPS at 4K. So I'll take two, and they will be plugged into a 3850X / X570 board / 64GB of DDR4 / 1TB NVMe, driving whatever 4K 39in/43in display can get me native 120Hz, Freesync, and HDR.


----------



## Kpjoslee

Jarhead said:


> Yeah, but who cares when you'll be able to buy three of them for the same price as a 1080ti? Multi-gpu is about to get interesting again just due to the price points of these cards. And once the new PeasantStation is announced as Vulkan-powered the handwriting is on the wall for Vulkan as a major player in the PC gaming world, which AMD has already shown can be a set up to scale almost linearly with multi-gpu setups. Their Vega64/1080 level card is going to be a whopping $250 and you're only gonna need two of them for 100FPS at 4K. So I'll take two, and they will be plugged into a 3850X/570 board/ 64gb of DDR4/ 1TB NVME running a whatever 4K 39in/43in display can get me native 120hz, Freesync, and HDR.
> 
> https://www.youtube.com/watch?v=LxEbirNAsbA


You are way too optimistic about the future of multi-GPU, when it has been gradually getting worse for past 5 years in terms of support from developers. It doesn't matter if the future is Vulkan or DX12 when developers don't bother about multi-GPU setups.


----------



## TrueForm

Well, AMD showing off their 64-core Epyc CPU was a hint that they would increase core counts on their lower-end SKUs too. As Lisa said, she likes to WIN!


----------



## rluker5

Jarhead said:


> Yeah, but who cares when you'll be able to buy three of them for the same price as a 1080ti? Multi-gpu is about to get interesting again just due to the price points of these cards. And once the new PeasantStation is announced as Vulkan-powered the handwriting is on the wall for Vulkan as a major player in the PC gaming world, which AMD has already shown can be a set up to scale almost linearly with multi-gpu setups. Their Vega64/1080 level card is going to be a whopping $250 and you're only gonna need two of them for 100FPS at 4K. So I'll take two, and they will be plugged into a 3850X/570 board/ 64gb of DDR4/ 1TB NVME running a whatever 4K 39in/43in display can get me native 120hz, Freesync, and HDR.
> 
> https://www.youtube.com/watch?v=LxEbirNAsbA


Dang, now I can't claim Vulkan and mGPU are mutually exclusive anymore. It's about time; complaining about that didn't make me happy.

As for a $250 Vega 64 + 15%, 150W card: if one is released, only the eBay resellers will see that price before they sell it back to us for $700. Not that AMD couldn't put out that performance at 30% fewer watts than the Vega LC; we just aren't going to see that price in this market. That is a better deal than old miner cards on eBay.

And I'm going to go out on a limb here and predict that whatever CPU configuration AMD puts out, it will still be slower than an unlocked i7 for basically all games while just gaming, and there will be a chorus against ultra-high-frame-rate gaming at the speed of gaming monitors whenever that is brought up. But a lot of cores are good for other things that (other than gaming benchmarks that test something other than gaming performance in their CPU test) I have no interest in.


----------



## Amw86

Damn it, I was just about to build a 9900K system.


----------



## Jarhead

rluker5 said:


> Dang, now I can't claim Vulcan and mgpu are mutually exclusive anymore. It is about time. Complaining about that didn't make me happy.
> 
> As for a $250 V64+15% perf, 150w card: if one is released, only the ebay resellers will see that price before they sell it back to us for $700. Not that AMD couldn't put out that performance at 30% less watts than Vega lc, we just aren't going to see that price in this market. That is a better deal than old miners on Ebay.
> 
> And I'm going to go out on a limb here and predict that whatever cpu configuration AMD puts out, it will still be slower than an unlocked i7 for basically all games while just gaming and there will be a chorus against ultra mega frame rate gaming at the speed of gaming monitors whenever that is brought up. But a lot of cores are good for other things that (other than gaming benchmarks that test something other than gaming performance on their cpu test) I have no interest in.


No, more cores haven't been good for games... but the only way they can get 4K/60fps on the consoles is better multi-core scaling, which means that's finally gonna happen. Multi-GPU is the way forward for PC gaming because there's no way in hell anybody's going to keep plunking down $1000 for an NVIDIA monster card that doesn't really beat two $250 AMD ones. That means game devs are going to start supporting it, because they won't have a choice, which is why game devs do anything.


----------



## rluker5

Jarhead said:


> No, more cores haven't been good for games... but the only way to they can get 4K/60fps on the consoles is better multi-core scaling. Which means that's finally gonna happen. Multi-GPU is the way forward for PC gaming because there's no way in hell anybodies going to keep plunking down $1000 for a NVIDIA monster card that doesn't really beat two $250 AMD ones. That means game devs are going to start supporting it, because they won't have a choice, which is why game devs do anything.


I don't see a viable way to keep improving total CPU performance without better core scaling either. I just don't see games scaling a lot better yet. And while I object to Nvidia keeping the price/fps ratio the same with the release of Turing, that pricing is still out there and is probably why the 590 costs so much compared to the 580. AMD buyers can blame Nvidia for raising the accepted market prices.

The market is pretty saturated with capable 1080p GPUs, so hopefully people just buy AMD's cheaper 580, which is just as good as Nvidia's current alternative, and hold off on Nvidia's expensive cards until the ray-tracing/overpricing/resolution-plus-framerate-cap mess gets sorted out.

MGPU in Shadow of the Tomb Raider is really nice too.


----------



## LancerVI

I think it right and proper to be skeptical of this leak. Hell, Jim at AdoredTV said himself that "you should grab yourself a pinch of salt before watching further", because what he was about to discuss was an amalgamation of multiple leaks that he investigated and came up with his "theory", for lack of a better term.

But I agree with Jim that it is not far-fetched to believe Zen 2, or Ryzen 3XXX or whatever, has about 10% gains in general over the previous generation. That's not far-fetched at all considering the information we do have.

5.1GHz might be a stretch, and we don't know if that was all-core or single-core boost. But a Ryzen 2700X at +10% would put it in the 4.6-4.7GHz range. I don't know what is so unbelievable about that, and it's only good news that 7nm seems to have worked out well for AMD.

I have no clue about the GPU side of this argument, and neither does anyone else. Navi is a completely new architecture, is it not? We have no basis for comparison, while on the CPU side we already know a lot about what Zen is. The wildcards are chiplets and the move to 7nm, as I understand it.

Time, of course, will tell.


----------



## mouacyk

@LancerVI but you're wrong, because Jim has been right more than once in the past. Then there's Kyle who called out NVidia's shady GPP so the leak has to be real.


----------



## ILoveHighDPI

keikei said:


> I don't quite understand the new naming scheme. I see the gpu division trying to get some of the cpu glow with the 3xxx, but this also closely matches nvidias lineup. RTX 2070 vs RX 3070?


It's cut-throat marketing. Nvidia jumped first, and now AMD gets to use numbers that make it look like they're a generation ahead, which as far as I'm concerned isn't that far from the truth when Nvidia is crippling its GPUs with useless tech.
Good job AMD.


----------



## SuperZan

To be fair, I don't see many people taking this as gospel. I see lots of people saying that we shouldn't discredit the speculation out of hand as unreasonable. There's a significant difference between those two things. It's speculation, so *of course* it's all up in the air, but it's also entirely plausible that he's got at least a sense of the plot here. Jim's record with leaks and speculation simply makes these rumours more interesting than, for example, WCCFtech reporting 12GHz 56-core CPUs that also toast bread and day-trade for you when idle.


----------



## delboy67

I don't see what is so unbelievable about this leak. Epyc has already been shown, and we were always buying broken Opterons/Xeons anyway.
Clock speeds in the leak are up 10-15%; 12nm to 7nm easily accounts for that.
IPC up 10-13%, from AMD IIRC, although with no gaming loads shown yet.
The base SKU has 6 cores in the leak; that is a HALF-defective chip with 3 working cores on each 7nm chiplet. Why would they bother with 4 cores if they can make and price a 6-core that way?
SMT on every SKU in the leak: they can salvage half-broken chips as above.
The prices in the leak could be right. Only some of the CPU is shrunk; the 7nm parts are tiny and, as above, can be half-defective and still be used. The big chip which holds the I/O is good old 14nm.

It's not really far-fetched at all, IMO. I hope there is some unofficial unlocking of cores and no locked multipliers, but maybe that's asking a little too much.


----------



## LancerVI

mouacyk said:


> @LancerVI but you're wrong, because Jim has been right more than once in the past. Then there's Kyle who called out NVidia's shady GPP so the leak has to be real.



I do like Jim with AdoredTV. He's probably my favorite tech youtuber as he's thoughtful and even handed, IMHO. I'm not saying he's the "Sage of Silicon", but he's nailed a lot of things down correctly.


----------



## LancerVI

delboy67 said:


> I dont see what is so unbelieveable about this leak....Epyc has already been shown, we were always buying broken opterons/xeons
> Clock speeds in leak up 10-15%, 12nm to 7nm easily accounts for that
> Ipc up 10-13%, from amd iirc although no gaming loads yet
> Base sku has 6cores in leak, that is a HALF defective chip with 3 cores on each chiplet which is the 7nm part, why would they bother with 4 cores if they can make and price a 6 core that way
> SMT on every sku in leak, they can salvage half broken chips as above
> Prices in the leak could be right, only some of the cpu is shrunk, the 7nm parts are tiny and as above can be half defective and still be used. The big chip which holds the i/o is good old 14nm.
> 
> Its not really far fetched at all imo, I hope there is some unofficial unlocking of cores and no locked mutipliers but maybe thats asking a little to much.


I agree. I think it wise always to be skeptical of leaks, but I don't see what is so outrageous about this one when it comes to the specs.


----------



## dantoddd

Too good to be true.


----------



## rv8000

SuperZan said:


> To be fair, I don't see many people taking this as gospel. I see lots of people saying that we shouldn't discredit the speculation out of hand as unreasonable. There's a significant difference between those two things. It's speculation, so *of course* it's all up in the air, but it's also entirely plausible that he's got at least a sense of the plot here. Jim's record with leaks and speculation simply makes these rumours more interesting than, for example, WCCFtech reporting 12GHz 56 cores CPU's that also toast bread and day trade for you when idle.


Would you be so kind as to point me in the direction of this 12GHz 56-core toaster?

The most unreasonable thing I see is the price points. If they hit their estimated IPC gains and potentially reach clock speeds of 4.5-4.8GHz, there should be little difference between these and comparable Intel CPUs at the same core/thread counts. I don't think they could provide enough stock not to shoot themselves in the foot at those prices.


----------



## deepor

LancerVI said:


> I agree. I think it wise always to be skeptical of leaks, but I don't see what is so outrageous about this one when it comes to the specs.


For me it's just the clock speeds that sound suspicious. Something like 5GHz on a totally new process seems crazy.

But the general layout of the different CPUs makes total sense. AMD using one large I/O chiplet plus two of the same 8-core chiplets as on that Rome server CPU they already showed seems like a pretty smart thing to do. With those 8-core chiplets they would be able to concentrate on how to best manufacture just one type of 7nm wafer for literally every single product they offer. The physical size of the chiplets is also tiny so those are a good thing to try to get to work on a brand new process. And because it's a new process, it makes sense that they would have a bunch of crappy chiplets where a lot of the cores are broken so offering 3+3, 4+4 and 6+6 Ryzen CPUs makes sense.
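The salvage logic above (3+3, 4+4 and 6+6 parts from partly defective 8-core chiplets) can be sketched with a toy binomial model. The per-core yield `p` here is a made-up illustrative number, not anything AMD or TSMC has published:

```python
from math import comb

# Toy salvage model for an 8-core chiplet: each core is independently
# "good" with probability p, and a chiplet with at least k good cores
# can be binned and sold as a k-core part.
def at_least(n: int, k: int, p: float) -> float:
    """Probability that at least k of n cores are functional."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9  # assumed per-core yield on a new process (illustrative)
for cores in (8, 6, 4):
    print(f"sellable as a {cores}-core chiplet: {at_least(8, cores, p):.3f}")
```

Even with a mediocre per-core yield, fully working 8-core chiplets are a minority while nearly every chiplet is salvageable as a 4- or 6-core part, which is exactly why a 3+3 base SKU would make sense.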


----------



## ToTheSun!

SuperZan said:


> WCCFtech reporting 12GHz 56 cores CPU's that also toast bread and day trade for you when idle.


When are those rumored to release?


----------



## SuperZan

ToTheSun! said:


> When are those rumored to release?



This coming Saint Quispin's Feast.  I'm in for three!


----------



## makkara

Seems believable that this could be close to real.


----------



## Jarhead

rluker5 said:


> I don't see a viable way to keep improving total cpu performance without better core scaling either. I just don't see them scaling a lot better in games yet. And while I object to Nvidia keeping the price/fps ratio the same with the release of Turing, that is still out there and probably why the 590 costs so much compared to the 580. AMD buyers can blame Nvidia for raising the accepted market prices.
> 
> The market is pretty saturated with capable 1080p gpus so hopefully people just buy AMD's cheaper 580 that is just as good as Nvidia's current alternative and hold off on Nvidia's expensive ones until the raytracing/overpricing/resolution+framerate cap mess gets sorted out.
> 
> MGPU in Shadow of the Tomb Raider is really nice too.


I'm not talking about now, I'm talking five years from now. The game devs are going to be eking out all the performance they can from the PS5 and Xbox, and we all know they only support features found in the consoles for the first five years of a new generation, until the hardware starts holding things back really noticeably on the PC side. AMD is going to be providing the GPU/CPU for both consoles, again. There's a chance that AMD could provide a GPU design that forces mGPU on the market via this approach and makes dual GPU the default, which is where AMD wants to move the industry. This would explain why AMD has continued to support it when the consumer side of the industry wants nothing to do with it.

I think this is actually a good breakdown and speculation: (embedded video)

We also don't know what Intel's upcoming GPU is going to bring to the table.


----------



## dantoddd

rv8000 said:


> Would you be so kind as to point me in the direction of this 12ghz 56c toaster?
> 
> The most unreasonable thing I see is the price points. By hitting their estimated ipc gains and potentially hitting clock speeds of 4.5-4.8, there should be little difference between comparable intel cpus @ the same c/t counts. I don't think they could provide enough stock to not shoot themselves in the foot at those prices.


The price is what I find difficult to believe, unless they're really trying to undercut Intel and pull people away from Intel.


----------



## AlphaC

The counterpoint:
https://www.extremetech.com/computi...mds-ryzen-3000-series-are-too-good-to-be-true


> First, it’s highly unlikely that AMD would kill off its entire product family below the six-core + SMT space or that it would stop using SMT as a feature differentiation in its products. SMT is useful to both AMD and Intel for the same reason — it allows both companies to offer a significant performance uplift as an incentive to customers to buy higher-performing parts, while costing virtually nothing in terms of die size or OS support, since all modern operating systems robustly support the feature. AMD already offers SMT on most of its Ryzen CPUs, but leaving it off the lowest-end models is an up-sell technique.
> Second, the closest CPU to the claimed “Ryzen 3 3300X” is the current Ryzen 5 2600X (3.6GHz base, 4.2GHz boost), at $240. The chances that AMD slashes its equivalent CPU pricing by 54 percent on the basis of 7nm improvements is nil. AMD has spent the year emphasizing to investors that its margins should continue to improve over time. The absolute worst way to make that happen is to take a chainsaw to your own product pricing. If you wanted to see a meteoric leap in AMD’s price/performance ratio, the company already delivered it back in 2017.



I'd be very skeptical of the >12-core parts. Anything matching the i9-9900K (~160W of power for 4.7GHz all-core non-AVX) without using 200W would already be a decent step up from 4-4.2GHz all-core.


----------



## guttheslayer

AlphaC said:


> The counterpoint:
> https://www.extremetech.com/computi...mds-ryzen-3000-series-are-too-good-to-be-true
> 
> 
> 
> I'd be very skeptical of the > 12 core parts. Anything matching the i9-9900k (~160W of power for 4.7GHz all core non AVX) without using 200W is already a decent step from 4-4.2GHz all core.


The counter-point to counterpoint:

AMD wants to crush Intel completely and dominate the CPU market, where they have been losing out for the past 10 years. It's all about getting market share. Don't forget Intel is still a tough giant to defeat despite the failing advancement of their entire lineup.


I am still 95% confident 16C/32T is incoming for the AM4 socket. Brace for the real CPU war coming in 2019.


----------



## Jarhead

guttheslayer said:


> The counter-point to counterpoint:
> 
> AMD want to crush Intel completely and dominate the market for CPU which they have been losing out for the past 10 years. Its all about getting the market shares. Don't forget Intel is still a tough giant to defeat despite their failing advancement on their entire lineup.
> 
> 
> I am still 95% confident 16C/32T is incoming for AM4 socket. Brace for the real CPU war coming 2019.


Exactly. The engineering decisions are made years in advance. The last time things swung this way, AMD had Athlons beating the NetBurst Pentiums, but the whole time Intel's engineers were toiling away on Core. Ten years later AMD blindsided Intel with Zen; this year Intel suffers a manufacturing setback while AMD refines Zen, and next year Zen 2 will be competitive with whatever Intel throws at it, not only in core count but in IPC, and for about 60% of the cost. Even if the IPC gains don't outright beat the Intel chips (they probably will), the price-to-performance will be such that AMD beats them handily. Intel only has so much room to respond based on what they started working on years ago. So enjoy this (I know I am), because Intel will eventually find its footing again within the next few years.

Remember that Intel is supposed to be wading into the discrete GPU market in 2020, which means it's very likely that new hardware is going to be part of the company's overall market strategy.


----------



## Majin SSJ Eric

Scotty99 said:


> No way any of this is true lol. 6c12t 99.99 chips at the bottom of the lineup with 16c32 5.1ghz at the top for 499.99, ya no.
> 
> How this got taken serious by any "leak" site is beyond me, do people still have functioning brains....hello??
> 
> Edit: While all of the CPU's are unrealistic in terms of cores price and clockspeed the 3700x in particular is just lol'able. Do you guys actually legitimately think amd has not only gained ~600mhz (at minimum) they are going to sell that chip with 4 more cores for the same price as the 2700x? Like you actually believe that lol?


I have a feeling you are going to be so butthurt when Zen 2 comes out.


----------



## Majin SSJ Eric

ryboto said:


> I would say that your statement screams of bias, but why is it unbelievable? AMD is on a more advanced node, and they're moving to smaller die for the most complex parts of the chips, increase yields and allowing them to bin differently. They'll produce more working die per wafer vs monolithic designs, and if TSMC's process is working efficiently, it means they've got a lot of options for speed binning once they bin for TR parts.
> 
> It's not too good to be true, but the leaks would mean AMD reached the best possible outcome with the Ryzen 2 design and 7nm node as compared to the current generation Ryzen. The Navi stuff is probably a stretch, as they have only just recently had R&D resources to push the RTG into competitive mode.


Yeah, the Navi stuff is the only thing that sticks out to me as unrealistic; all the CPU stuff seems plausible (if more of a "best case scenario" kind of thing). As I said above, they are moving from 12nm to 7nm on a new Zen architecture (not just a minor update like Ryzen 2 was), so I'd honestly be surprised if they couldn't manage at least a 5% clock speed increase, and just that would get them over 4.5 GHz. If anything, I'd be a little surprised if they didn't get to a 10% increase or more, and that would get Zen 2 chips up over 4.7-4.8 GHz. Hell, the 5 GHz rumor itself assumes just a 17% increase, and TSMC has been talking about up to a 25% increase on their new 7nm node. So no, not impossible at all.

That said, the pricing of these chips is where things might not pan out, as Intel is still pricing their flagship mainstream processor (9900K) well over $500, so AMD has some room to price these Zen 2 chips higher if they want to. The only problem then becomes: where do they price entry-level TR? Do they all get massive price increases to maintain separation from regular Ryzen 3? I dunno, but the speed and core increases are (to me) the MOST believable part of this particular rumor, which (to be fair) is still most definitely a rumor.
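The uplift arithmetic above is easy to check against the 2700X's ~4.35 GHz top single-core boost (all of these are, of course, speculative projections, not measured clocks):

```python
# Projected boost clocks from the 2700X's ~4.35 GHz single-core boost
# under the percentage uplifts discussed above.
base_ghz = 4.35
for pct in (5, 10, 17, 25):
    print(f"+{pct}% -> {base_ghz * (1 + pct / 100):.2f} GHz")
```

A 5% uplift already clears 4.5 GHz, 10% lands near 4.8 GHz, and the rumored ~5.1 GHz needs roughly 17%, well inside TSMC's claimed up-to-25% speed gain.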


----------



## NightAntilli

AlphaC said:


> The counterpoint:
> https://www.extremetech.com/computi...mds-ryzen-3000-series-are-too-good-to-be-true
> 
> 
> 
> I'd be very skeptical of the > 12 core parts. Anything matching the i9-9900k (~160W of power for 4.7GHz all core non AVX) without using 200W is already a decent step from 4-4.2GHz all core.


The lowest price we've recently seen for the 2600X is $170. Also note that the 2600X requires a beefier cooler, due to it being 95W, compared to the supposed 65W of the 3300X. That's if they come with a cooler at all...

Not saying the argument is wrong (although the botched percentage calculation from $240 to $130 doesn't really help its credibility)... I personally think the prices should be around $30-$50 higher, especially on the lower-end parts. They simply seem too close together. This would make a lot more sense to me, but I'm only speculating;

3300 $120
3300X $150
3600 $200 
3600X $250
3700 $300
3700X $375

Also, we can't really compare Zen(+) to Zen 2. With Zen(+), any fault in the I/O components would also make a whole core on the CCX unusable. That issue is simply no longer there to the same degree. The chiplet design drastically drives costs down and mitigates yield issues. Cores are the least of AMD's concern right now. We know they are creating 8c/16t chiplets of around ~75 mm², roughly a third the size of a Zeppelin die. Since the I/O is done on 14nm, yields there are very high as well. Despite 7nm being expensive, I wouldn't be surprised if Zen 2 is cheaper for AMD to produce than Zen(+), hence these prices.
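
The yield point can be illustrated with the textbook Poisson die-yield model; the die areas and the defect density below are illustrative assumptions on my part, not AMD figures:

```python
import math

def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of defect-free dies under the Poisson model: Y = exp(-A * D0)."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.2  # assumed defect density (defects/cm^2), purely illustrative

# A monolithic ~213 mm^2 Zeppelin-class die vs a ~75 mm^2 chiplet:
print(f"large die: {poisson_yield(213, D0):.1%}")  # ~65% good dies
print(f"chiplet:   {poisson_yield(75, D0):.1%}")   # ~86% good dies
```

Smaller dies lose far fewer candidates to any given defect density, which is the whole economic argument for chiplets.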


----------



## kyrie74

AMD is already producing a bunch of wafers for the recently announced EPYC Zen 2 design wins, so they are going to have a lot of chips that didn't make the cut for EPYC that they can sell as Ryzens. The key is that these come from wafers that were paid for by the chips that went into EPYC, so the chips sold as Ryzens are basically all profit outside of packaging and marketing costs.

Some people are looking at this completely wrong when they say that 16c/32t will hurt TR sales. Ryzen already has 8c/16t and TR starts at that same 8c/16t — does that hurt TR sales? No, because TR offers more PCIe lanes and quad-channel memory. If Zen 2 TR starts at 8c/16t it will still have the advantage of those extra lanes and memory channels.

AMD also knows what is going into the next consoles, and if both are using 8-core Zen 2 then it makes sense to start your own lineup at 6c/12t so that people can play the new console ports on low settings. The consoles would also explain the GPUs that were leaked, as well as the prices.


----------



## Shatun-Bear

Couple of things:

- Why the excitement over the Navi rumours of GTX 1080 performance when these Navi cards are not going to be released for 8-12 months? Do some of you think these cards are coming in Q1-Q2?

With a release at the end of the year, I wouldn't bet against Nvidia (in fact, I'm sure of it) beating them to the punch with their first 7nm cards, moving the performance target away from Radeon again.

- The Ryzen side of the rumour is amazing, but what's glaring to me are some of the base clocks, which have me worried the entire thing is made up. You've got 12- and 16-core Ryzens with 4.2 GHz and 4.3 GHz base clocks, which is crazy and not happening even if it were possible. A 12-core with a 4.2 GHz BASE CLOCK would exceed the TDPs listed here and then some.


----------



## dj_tokyu

Shatun-Bear said:


> Couple of things:
> 
> - Why the excitement over the Navi rumours of GTX 1080 performance when these Navi cards are not going to be released for 8-12 months? Do some of you think these cards are coming in Q1-Q2?
> 
> With a release at the end of the year, I wouldn't bet against Nvidia (in fact, I'm sure of it) beating them to the punch with their first 7nm cards, moving the performance target away from Radeon again.
> 
> - The Ryzen side of the rumour is amazing, but what's glaring to me are some of the base clocks, which have me worried the entire thing is made up. You've got 12- and 16-core Ryzens with 4.2 GHz and 4.3 GHz base clocks, which is crazy and not happening even if it were possible. A 12-core with a 4.2 GHz BASE CLOCK would exceed the TDPs listed here and then some.


Has Nvidia even shown off their 7nm silicon? I don't think I've seen anything other than slides so far.


----------



## amd955be5670

I would love the 6c/12t parts to be single-CCX modules. It would be the right direction for taking Intel's latency crown, but I doubt anything can beat the ring bus at this point.


----------



## LancerVI

Shatun-Bear said:


> Couple of things:
> 
> - Why the excitement over the Navi rumours of GTX 1080 performance when these Navi cards are not going to be released for 8-12 months? Do some of you think these cards are coming in Q1-Q2?
> 
> With a release at the end of the year, I wouldn't bet against Nvidia (in fact, I'm sure of it) beating them to the punch with their first 7nm cards, moving the performance target away from Radeon again.
> 
> - *The Ryzen side of the rumour is amazing, but what's glaring to me are some of the base clocks, which have me worried the entire thing is made up. You've got 12- and 16-core Ryzens with 4.2 GHz and 4.3 GHz base clocks, which is crazy and not happening even if it were possible. A 12-core with a 4.2 GHz BASE CLOCK would exceed the TDPs listed here and then some.*


I agree with that sentiment. I just don't think a 10% bump in frequency is out of the question. I wouldn't be surprised to see 4.6 across all cores at the top end, but the base clocks seem a bit high.


----------



## CynicalUnicorn

Shatun-Bear said:


> - The Ryzen side of the rumour is amazing, but what's glaring to me are some of the base clocks, which have me worried the entire thing is made up. You've got 12- and 16-core Ryzens with 4.2 GHz and 4.3 GHz base clocks, which is crazy and not happening even if it were possible. A 12-core with a 4.2 GHz BASE CLOCK would exceed the TDPs listed here and then some.


I can't remember if I said it in this thread or not but whatever here goes: 5GHz Zen 2 is a bad rumor. We can prove with 7nm Vega, the MI60, that the process shrink does not allow significantly higher frequencies compared to existing 14nm processors. I calculated less than 100MHz faster than existing Vega 64 configurations based on advertised FLOPS. I do not see Zen 2 changing the architecture in such a way as to allow higher-frequency operation either, or at least not to such a significant degree.

It's all way too optimistic and it stems from this weird desire for AMD to be successful and "win." Like people have found this corporation that wants to take your money and make shareholders happy and said "yup I'm part of this team now." And it all misses the really interesting bits of Zen 2 that we know about now: the chiplets and IO hub. That's a fascinating result of choices designed to mitigate Moore's law's end, and it's going to have huge implications for performance.


----------



## Shatun-Bear

dj_tokyu said:


> Has NVidia even displayed their 7nm silicon? I don't think i've seen anything other than slides thus far.


But you know AMD (or at least the GPU side) likes to showcase their GPUs way, way ahead of when they actually release. I remember Polaris being revealed in November/December the year before the summer it released, then the same with Vega. It gives Nvidia a massive window to be first out of the gate, because Nvidia, on the other hand, give little away until close to launch. So I wouldn't put any stock in not having seen anything from Nvidia yet; Turing is basically just Pascal with RT stuff bolted on, and I reckon their development team has been working on 7nm for a good while.



LancerVI said:


> I agree with that sentiment. I just don't think a 10% bump in frequency is out of the question. I wouldn't be surprised to see 4.6 across all cores at the top end, but the base clocks seem a bit high.


I would expect 4.7-4.8 GHz single- or dual-core boost and yeah, all-core around 4.6 GHz.



CynicalUnicorn said:


> I can't remember if I said it in this thread or not but whatever here goes: 5GHz Zen 2 is a bad rumor. We can prove with 7nm Vega, the MI60, that the process shrink does not allow significantly higher frequencies compared to existing 14nm processors. I calculated less than 100MHz faster than existing Vega 64 configurations based on advertised FLOPS. I do not see Zen 2 changing the architecture in such a way as to allow higher-frequency operation either, or at least not to such a significant degree.
> 
> It's all way too optimistic and it stems from this weird desire for AMD to be successful and "win." Like people have found this corporation that wants to take your money and make shareholders happy and said "yup I'm part of this team now." And it all misses the really interesting bits of Zen 2 that we know about now: the chiplets and IO hub. That's a fascinating result of choices designed to mitigate Moore's law's end, and it's going to have huge implications for performance.


It certainly should be the most interesting thing from the rumour, but Joel Hruska over at ExtremeTech doubts the IO hub part of it (actually, all of it).


----------



## Kpjoslee

Any rumor that seems too good to be true ends up being untrue most of the time. I'd rather be skeptical and pleasantly surprised later than disappointed from being overhyped beforehand.


----------



## kd5151

https://www.techpowerup.com/250412/...-korea-teases-amd-ryzen-7-3700x-ryzen-5-3600x

Any takers?


----------



## SuperZan

I prefer not to couch things so emotionally when it comes to hardware. Speculation is either interesting or uninteresting and the potential positives in even pieces of this 'leak' proving out are very interesting indeed. Either way, I'm not going to get hyped by or disappointed in any leak or speculation.


----------



## AlphaC

CynicalUnicorn said:


> I can't remember if I said it in this thread or not but whatever here goes: 5GHz Zen 2 is a bad rumor. We can prove with 7nm Vega, the MI60, that the process shrink does not allow significantly higher frequencies compared to existing 14nm processors. I calculated less than 100MHz faster than existing Vega 64 configurations based on advertised FLOPS. I do not see Zen 2 changing the architecture in such a way as to allow higher-frequency operation either, or at least not to such a significant degree.
> 
> It's all way too optimistic and it stems from this weird desire for AMD to be successful and "win." Like people have found this corporation that wants to take your money and make shareholders happy and said "yup I'm part of this team now." And it all misses the really interesting bits of Zen 2 that we know about now: the chiplets and IO hub. That's a fascinating result of choices designed to mitigate Moore's law's end, and it's going to have huge implications for performance.


Maybe it's a case of confusing IPC improvement with clock gain. 7nm promises 25% more performance vs 14nm in the same power envelope, not 25% more clockspeed.



A 10-15% IPC gain is actually better than +10% clockspeed: if you run a 102-104 MHz BCLK, for example, the gains are multiplicative, and IPC also benefits every power state.
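
A quick sketch of why that is, treating performance as simply IPC x clock (a deliberate simplification that ignores memory effects):

```python
# Performance scales roughly as IPC x clock; a BCLK bump multiplies every
# P-state, while a turbo-only clock bump does not.
def relative_perf(ipc_gain_pct: float, bclk_mhz: float = 100.0) -> float:
    """Performance relative to a 100 MHz BCLK baseline with stock IPC."""
    return (1 + ipc_gain_pct / 100) * (bclk_mhz / 100.0)

clock_only = relative_perf(0, 110)       # +10% clock, no IPC gain -> 1.10x
ipc_and_bclk = relative_perf(12.5, 104)  # ~12.5% IPC on a 104 MHz BCLK -> ~1.17x
print(clock_only, ipc_and_bclk)
```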


For this ad:
https://www.techpowerup.com/250412/...-korea-teases-amd-ryzen-7-3700x-ryzen-5-3600x


Gaming, office, home use (HTPC?)
"The computer is risen."
2019: 7nm public release of Rizen

For the closest prediction, we will give away a Rizen 7 3700X (home)! <-- something like a prediction contest


----------



## Aenra

Thoughts:

- If true, that 8-core APU is perfect: Linux, VMs, passthrough, without needing Intel and its backdoored IME firmware, and without performance compromises, because 8 cores. Same functions, less physical space, and zero side effects. Give me give me give me.

- I couldn't care less whether the final boost clock is 4.7, 4.8 or 5.0 GHz; I stopped equating my virtual e-peen with my real one decades ago. The leaked specs range from a bit to overly optimistic, but at the same time they're close to a flat 15% increase, so as always, it will come down to how it's implemented and of course how it's.. "measured".

- Infinity fabric functions equate to a respectable number of joules that until now had to be 'soaked up' internally (see the first-gen EPYCs to get the real picture). With the ever so wise (no irony, it _is_ wise) decision to keep the entire I/O physically separate, I can honestly see a larger-than-average performance boost coming to pass.
* minor asterisk here depending on whether we're getting an L4 cache somewhere in there or not. Both good and bad.

- I'm more 'worried' about the actual die size, and as such the actual IHS size; something tells me the Ryzen 9 offshoots will require their own dedicated cooler. If TR4 history is any indication, that may well mean reduced options and, more importantly, reduced cooling (no NH-D15 for TR4, for example.. and while the Silver Arrow is real close to it, it also doesn't come in the form of a D15S [angled]) and reduced options in terms of components. I'd remind you that unless you had an ASRock board, a good cooler on TR4 meant abandoning the 3.0 x16 slot that was wired to the 'fast' die.

- If the TDP ratings are accurate, we're talking about a 12-core gen-2 Ryzen that can reach its OC cap on air! And that, sir, really _is_ exciting ^^

- Don't know if AdoredTV (what a name, for eff's sake..) covers this too, but I'd be more cautious about the Navis than about the CPUs. The pricing is way too aggressive; obviously nice if true though, won't complain.

- *Important!* E-peen aside, those who actually, really need 12 or 16 cores running simultaneously also need a looot of RAM. To be taken into consideration before celebrating; we're still talking about dual-memory-channel CPUs here.

Very excited, personally speaking. We've reached a point where I can go crazy, literally FUBAR, with what I use, without resorting to the overpriced likes of NuGreedia and Toothpaste Inside(tm). Life couldn't be better.


----------



## Alex132

I think this might be worth posting.... https://www.reddit.com/r/Amd/comments/a44f4b/the_excel_spreadsheet_ryzen_leak_was_me_it_is_not/


----------



## mouacyk

Aww man, why ruin the fun?


----------



## randomizer

It would be nice if this was true. It's almost time for my decennial rig replacement.


----------



## Jarhead

epic1337 said:


> mhmm, from what I can see most manufacturers are prioritizing TVs, or otherwise HDR features over high refresh rates.
> the technical limitation of the interconnects may be one of the issues, but this really isn't a problem considering that two cables can be ganged together.
> 
> as for sizes, monitors rarely exceed 40"; take LG's 4K/5K monitor list for example, only one out of 22 is 43", so you'd be hard-pressed to find an ideal monitor.
> TVs on the other hand often exceed 40", even going as large as 80" behemoths, yet it is rare for a TV to actually have a high refresh rate beyond interpolated input.


I FOUND ONE! Problem: it kinda sucks:

https://www.youtube.com/watch?v=bsytUzKrP58

WASABI MANGO 43" 120Hz Real 4K UHD IPS Gaming Monitor $1,279.00
https://www.amazon.com/gp/product/B07KMLDWRX/ref=ox_sc_saved_title_1?smid=A3JJPBW9Z68A27&psc=1

Hey Samsung or LG, how about one of these with Q-dot and Freesync?


----------



## epic1337

Jarhead said:


> I FOUND ONE! Problem, it kinda sucks:
> 
> https://www.youtube.com/watch?v=bsytUzKrP58
> 
> WASABI MANGO 43" 120Hz Real 4K UHD IPS Gaming Monitor $1,279.00
> https://www.amazon.com/gp/product/B07KMLDWRX/ref=ox_sc_saved_title_1?smid=A3JJPBW9Z68A27&psc=1
> 
> Hey Samsung or LG, how about one of these with Q-dot and Freesync?



not bad, but for something that doesn't have Freesync/G-Sync, Q-dot or actual HDR support, it's asking too much money.
I suppose the large format can command a bit of a premium, but really, it's not like you can't get a 4K 43" for a lot less, albeit limited to 60Hz.


----------



## Majin SSJ Eric

Kpjoslee said:


> *Any rumor that seems too good to be true ends up being untrue most of the time.* I'd rather be skeptical and pleasantly surprised later than disappointed from being overhyped beforehand.


That may be true, EXCEPT when it comes to Ryzen's entire existence. I can't tell you how similar the comments in the various threads leading up to Ryzen's eventual release in 2017 were to the ones I have been seeing here (from most of the same usual suspects as well). "There's no way Zen is going to have twice the cores/threads of Skylake for less money!" "AMD will never be able to match or beat a chip like the 6900K!" "Zen will be BD all over again, wait and see!!" "Keller is not a magician, AMD still sucks and Zen will just be BD part deux!" "Zen will not get close to even 50% of the performance of Intel's chips!" "Even if AMD somehow miraculously gets close to Intel's performance, they will price these things just as high!"

If you want to see what I'm talking about, just take a look through ANY of the Zen rumor threads leading up to Ryzen's launch. I was quite reserved in my own expectations and remember saying that even if Zen had just Ivy Bridge IPC, it would be a huge win for AMD. Well, it turns out Zen's IPC is far beyond that, and it really only came down to the lack of clock speed that kept Ryzen from easily taking the performance crown from Intel. With a massive node shrink (half the nm of Ryzen 1 and 2) as well as a new Zen architecture (not just a refresh like Ryzen 2), I really don't understand where all of this skepticism is coming from. Hell, nobody expected any clock speed increase AT ALL out of Ryzen 2, and yet they managed a 3% to 5% speed increase anyway, on the same node with only very minor tweaks to the architecture. I would expect at bare minimum at LEAST the same speed bump out of Zen 2 as we saw from the Ryzen 2 chips, and that would just be a baseline, with a rather more significant boost more than likely.

I'm not saying I believe these rumors to be basically confirmed in their totality or anything, just that there isn't anything inherently unrealistic about a lot of the stuff in this "leak". Sure, the idea of 4.2 GHz BASE clocks is a bit out there, and perhaps the pricing of a few of the proposed SKUs is on the optimistic side, but *I would be pretty surprised if we don't see at least a 6C / 12T product that boosts to around 4.7-4.8 GHz for significantly less money than the 9900K.*


----------



## CynicalUnicorn

Majin SSJ Eric said:


> *I would be pretty surprised if we don't see at least a 6C / 12T product that boosts to around 4.7-4.8 GHz for significantly less money than the 9900K.*


Ignoring the clockspeed prediction, I don't think anybody is expecting an AMD CPU to cost anything _but_ significantly less than the Intel CPU with two more cores.


----------



## EniGma1987

AlphaC said:


> Maybe it's a case of confusing IPC improvement with clock gain. 7nm promises 25% more performance vs 14nm in the same power envelope, not 25% more clockspeed.
> 
> 
> 
> A 10-15% IPC gain is actually better than +10% clockspeed: if you run a 102-104 MHz BCLK, for example, the gains are multiplicative, and IPC also benefits every power state.


You are confusing node improvements with IPC; they are not even close to the same thing. IPC is instructions per clock cycle, and depends on the architecture design: how many operations it can retire every clock cycle. Running the design on 7nm or 14nm doesn't change what it can do.

The performance improvement advertised for a node is about transistor performance. That does somewhat relate to clock speed capability, but the architecture design has an even stronger influence on it. You will gain some MHz simply from going 14nm to 7nm, because the transistors all switch faster and sit closer together, but the gain is usually fairly small because the architecture itself still holds things back at a certain point. That is why Vega, going from 14nm to 7nm, gained only a small bump in clock speed and not the 25% advertised by the foundries: the arch itself is still the main thing holding back the clocks. The numbers advertised for a node are always significantly higher than any design ever achieves when ported to it.





amd955be5670 said:


> I would love the 6c/12t parts to be single-CCX modules. It would be the right direction for taking Intel's latency crown, but I doubt anything can beat the ring bus at this point.


It is already confirmed that they are still 4-core CCXs. Next year AMD will move to 6-core CCX designs so that they can keep the same eight 8-core dies but gain 50% more cores in server parts. They really have no room to put more dies on the package, so AMD will have to move to 6-core CCXs or change to a new socket with a larger package.


----------



## tyvar

CynicalUnicorn said:


> I can't remember if I said it in this thread or not but whatever here goes: 5GHz Zen 2 is a bad rumor. We can prove with 7nm Vega, the MI60, that the process shrink does not allow significantly higher frequencies compared to existing 14nm processors. I calculated less than 100MHz faster than existing Vega 64 configurations based on advertised FLOPS. I do not see Zen 2 changing the architecture in such a way as to allow higher-frequency operation either, or at least not to such a significant degree.


I don't think it's entirely appropriate to compare an Instinct card to all the possible permutations of a consumer card; Instinct cards don't push the margins anywhere near as much as OEM consumer cards will with their exotic cooling solutions.

While the MI8 has the same 1000 MHz clock as the Fury Nano did, the MI6 boosts a full 100 MHz slower than a generic RX 580.

The Instinct MI60 is a passively cooled board and runs an 1800 MHz boost speed vs stock Vega 64 air-cooled cards, which come in at about 1600 MHz.

To me this suggests an uplift more akin to 200 MHz, considering the likelihood that the MI60 is conservative in its boosting.


----------



## CynicalUnicorn

tyvar said:


> I don't think it's entirely appropriate to compare an Instinct card to all the possible permutations of a consumer card; Instinct cards don't push the margins anywhere near as much as OEM consumer cards will with their exotic cooling solutions.
> 
> While the MI8 has the same 1000 MHz clock as the Fury Nano did, the MI6 boosts a full 100 MHz slower than a generic RX 580.
> 
> The Instinct MI60 is a passively cooled board and runs about 200 MHz faster than stock Vega 64 air-cooled cards.


"Passively cooled" in a datacenter GPU means "there's no fan on the GPU because you have enough 40mm fans in the chassis." It's not passive in the same way a passive GPU for fanless consumer systems would be.

Going by AMD's claim of 14.7 TFLOPS of FP32 performance, we get a peak boost clock of 1800 MHz. Vega Frontier Edition meanwhile hits 1600 MHz peak on the core, and the RX Vega 64 is somewhat slower at 1536 MHz peak. But note that I said "peak." We're not taking thermal throttling into consideration and are only considering what the GPU core is rated to run at. These speeds can be achieved by all respective Vega GPUs. This tells us about the architecture and the process node.
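
The FLOPS arithmetic behind those clocks is simple; a sketch assuming Vega's 4096 shaders at 2 FLOPs per clock (fused multiply-add), with the TFLOPS values taken as advertised:

```python
# Peak FP32 throughput = shaders x 2 ops/clock (FMA) x clock, so the
# advertised TFLOPS figure backs out directly to a peak boost clock.
def peak_clock_mhz(tflops: float, shaders: int = 4096) -> float:
    """Peak boost clock (MHz) implied by an advertised FP32 TFLOPS figure."""
    return tflops * 1e12 / (shaders * 2) / 1e6

print(round(peak_clock_mhz(14.7)))   # MI60's 14.7 TFLOPS -> ~1794 MHz
print(round(peak_clock_mhz(13.1)))   # Vega FE's 13.1 TFLOPS -> ~1599 MHz
```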

Finally, and I think this is the best comparison since it's in the same product line with a similar cooler, we've got the Instinct MI25 hitting 1500 MHz peak with a 300W TDP. I suspect then, based on AMD's other claims including 25% higher performance at the same power, the MI60 will also be a 300W GPU but will only add a few hundred megahertz to the peak boost clock. Average clocks remain to be seen (likely 25% higher...), although that's beside the point.

7nm just is not enough of a jump to enable 5GHz reliably on Zen. Unlike Vega, it isn't hitting a thermal wall (the core can still be cooled), it's hitting a _voltage_ wall. I don't buy it.

Oh and the rumor is crap that a guy on reddit made up, which is funny. It was not me, unfortunately, but the motivations were the same lol. :thumb:


----------



## looniam

CynicalUnicorn said:


> Spoiler
> 
> 
> 
> "Passively cooled" in a datacenter GPU means "there's no fan on the GPU because you have enough 40mm fans in the chassis." It's not passive in the same way a passive GPU for fanless consumer systems would be.
> 
> Going by AMD's claim of 14.7 TFLOPS of FP32 performance, we get a peak boost clock of 1800 MHz. Vega Frontier Edition meanwhile hits 1600 MHz peak on the core, and the RX Vega 64 is somewhat slower at 1536 MHz peak. But note that I said "peak." We're not taking thermal throttling into consideration and are only considering what the GPU core is rated to run at. These speeds can be achieved by all respective Vega GPUs. This tells us about the architecture and the process node.
> 
> Finally, and I think this is the best comparison since it's in the same product line with a similar cooler, we've got the Instinct MI25 hitting 1500 MHz peak with a 300W TDP. I suspect then, based on AMD's other claims including 25% higher performance at the same power, the MI60 will also be a 300W GPU but will only add a few hundred megahertz to the peak boost clock. Average clocks remain to be seen (likely 25% higher...), although that's beside the point.
> 
> 7nm just is not enough of a jump to enable 5GHz reliably on Zen. Unlike Vega, it isn't hitting a thermal wall (the core can still be cooled), it's hitting a _voltage_ wall. I don't buy it.
> 
> 
> 
> Oh and the rumor is crap that a guy on reddit made up, which is funny. It was not me, unfortunately, but the motivations were the same lol. :thumb:


so you're saying stuffing a face full of pop tarts was involved?


----------



## Jspinks020

I think they've done pretty well with Ryzen; 4.2 GHz on the 2600X with the MSI board, and it booted right up. So I mean, it's another good runner.
They could probably increase clocks, but that is still an OK clock.


----------



## ryan92084

Alex132 said:


> I think this might be worth posting.... https://www.reddit.com/r/Amd/comments/a44f4b/the_excel_spreadsheet_ryzen_leak_was_me_it_is_not/


Yep was admitted as fake before the video even came out.


----------



## Shatun-Bear

Some things that stick out to me all add up to alarm bells ringing:

- There is no way anyone could know the prices for all the Navi graphics cards this far out from release. The HardwareUnboxed guy said the same thing: if the cards are not due until Q3-Q4 at the earliest, even AMD themselves would not have settled on a price. By that time Nvidia will have 7nm cards ready or be close to it, changing the landscape of the market.

- The base clocks and TDPs on some of the SKUs are fantasy; it can't be stated enough. What worries me is that the AdoredTV guy himself doesn't seem to understand CPUs that well: in one of his recent videos prior to this one, he predicted 4.4 GHz base clocks on the 3700X or 3800X. I shook my head.

- AdoredTV says the source is not the same one that leaked the RTX stuff, which, by the way, was leaked much, much closer to when that info was actually revealed, making it far more plausible. So you could all be putting faith in some prankster who typed this up in an email.

The only curve ball is HardOCP's Kyle Bennett chiming in with the comment that most of the info is correct. But what are his sources? I generally trust Kyle as someone in the know, so that makes it interesting. But it doesn't rule out the figures in the Ryzen list being wrong, and we also don't know whether Kyle was referring to the CPU or the GPU side being 'mostly correct'.


----------



## ozlay

Can we just get a Duron in an AM1-like design? 7W-15W would be cool. No heatsink required.


----------



## Scotty99

ozlay said:


> Can we just get a duron in an AM1 like design. 7w-15w would be cool. No heatsink required


That'd be pretty sweet actually, I'm still using an Athlon 5350 in my HTPC. It works, but just barely lol.

Also, I think people are pretty much onto the fact that this is bogus by now; even Hardware Unboxed did a video with thoughts similar to mine. The most people should be hoping for with Zen 2 is a 12-core Ryzen 7, but get 5GHz out of your head because it ain't happening.


----------



## thebski

I haven't taken the time to read all 240 replies, so my apologies if the video has already been proven false or something. However, if there's much truth to it, I may be going AMD for the first time. I have been waiting on either the Z390 Dark or the M11 Apex to become available so I can get a 9900K, but now I may just wait and see what AMD is going to do at CES. If that 3850X is true, or even the 3700X, I'm all in.


----------



## AlphaC

EniGma1987 said:


> You are confusing node improvements with IPC, they are not even close to the same thing. IPC is instructions per clock cycle, and is dependent on architecture design in how many operations it can retire every clock cycle. The design being run on 7nm or 14nm doesnt change what it can do.
> 
> The performance improvement of a node that is advertised is about the transistor performance. Which does somewhat relate to clock speed capabilities, but the architecture design also has an even stronger influence on clock speed capability. You will gain some MHz speed in a design simply from going 14->7 because the transistors can all switch faster and because transistors are closer together, but the gain is usually fairly small because the architecture itself still has something holding it back at a certain point. That is why for instance with Vega going from 14nm to 7nm it did gain a small bump in clock speed, but the arch itself is still the main thing holding back the clocks so it only gained a small bit and not the 25% that is advertised by the foundries. The numbers they advertise of a node are always significantly higher than any design ever gets when being ported to it.
> 
> 
> 
> 
> 
> It is already confirmed that they are 4 core CCXs still. Next year AMD will move to 6 core designs so that they can keep the same 8, 8-core dies but gain 50% more cores in server parts. They really have no room to put more dies on the package so AMD will have to move to 6 core CCXs or change to a new socket with a larger package.



Ryzen 3000 series is going to have the front-end improvements of Rome as well as the AVX2 improvements, so it's not a simple port to 7nm. I'm just saying that expecting more than +25% performance in the same power envelope, or -50% power at the same performance (as far as 16 cores), is overly optimistic, which is why I doubted the 12/16-core parts.


https://www.servethehome.com/amd-next-horizon-event-live-coverage/


----------



## CynicalUnicorn

looniam said:


> so you're saying stuffing a face full of pop tarts was involved?


I mean I'd hope it was a better breakfast pastry like Pillsbury Toaster Strudel.

There are just so many red flags. The only people talking about it as if it has any legitimacy are rumor mills or /r/AMD's favorites.


----------



## tyvar

CynicalUnicorn said:


> "Passively cooled" in a datacenter GPU means "there's no fan on the GPU because you have enough 40mm fans in the chassis." It's not passive in the same way a passive GPU for fanless consumer systems would be.
> 
> Going by AMD's claim of 14.7 TFLOPS of FP32 performance, we get a peak boost clock of 1800 MHz. Vega Frontier Edition meanwhile hits 1600 MHz peak on the core, and the RX Vega 64 is somewhat slower at 1536 MHz peak. But note that I said "peak." We're not taking thermal throttling into consideration and are only considering what the GPU core is rated to run at. These speeds can be achieved by all respective Vega GPUs. This tells us about the architecture and the process node.
> 
> Finally, and I think this is the best comparison since it's in the same product line with a similar cooler, we've got the Instinct MI25 hitting 1500 MHz peak with a 300W TDP. I suspect then, based on AMD's other claims including 25% higher performance at the same power, the MI60 will also be a 300W GPU but will only add a few hundred megahertz to the peak boost clock. Average clocks remain to be seen (likely 25% higher...), although that's beside the point.


Yes, that's what I was pointing out: you claimed that 7nm brought a gain of under 100 MHz, and I pointed out that it was more like 200 MHz, and backed that up with evidence.

As for passive cooling, I know exactly what it means in a server setting, and it's still a less effective cooling method than an AIO, which is why comparing it to those flavours of Vega 64 would be disingenuous.



> 7nm just is not enough of a jump to enable 5GHz reliably on Zen. Unlike Vega, it isn't hitting a thermal wall (the core can still be cooled), it's hitting a _voltage_ wall. I don't buy it.
> 
> Oh and the rumor is crap that a guy on reddit made up, which is funny. It was not me, unfortunately, but the motivations were the same lol. :thumb:


We have no idea what the performance of Zen on 7nm will be, other than that we will probably see higher clocks. I'd just point out that hitting a 5.0 GHz boost from the highest-boosting Threadripper is only about a 15% jump.
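For reference, the TFLOPS-to-clock arithmetic being debated here is easy to sanity-check. A minimal sketch, assuming 4096 shaders (64 CUs) and 2 FP32 FLOPs per shader per cycle (one FMA), which are my assumptions for Vega 10/20-class GPUs, not figures from the thread:

```python
# Peak FP32 TFLOPS = shaders * FLOPs-per-cycle * clock, so the implied
# clock falls out by division. Shader count (4096) and FLOPs/cycle
# (2, i.e. one fused multiply-add) are assumptions for Vega-class GPUs.
def peak_clock_mhz(tflops: float, shaders: int = 4096, flops_per_cycle: int = 2) -> float:
    return tflops * 1e12 / (shaders * flops_per_cycle) / 1e6

# AMD's 14.7 TFLOPS claim for the MI60 works out to roughly 1800 MHz:
print(round(peak_clock_mhz(14.7)))  # ~1794
```

The same division applied to the RX Vega 64's ~12.6 TFLOPS lands in the mid-1500s MHz, which is where the ~200 MHz-vs-~300 MHz disagreement above comes from.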


----------



## doritos93

So Kyle helped corroborate a fake leak posted by a now deleted redditor?

Yeah, sounds much more likely


----------



## ryan92084

doritos93 said:


> So Kyle helped corroborate a fake leak posted by a now deleted redditor?
> 
> Yeah, sounds much more likely


The video was comparing a leak given directly to AdoredTV with a confirmed fake leak on Reddit. So, no.


----------



## Wezzor

If this is true, I can finally upgrade my 9-year-old Dell monitor to a FreeSync monitor.


----------



## looniam

CynicalUnicorn said:


> I mean I'd hope it was a better breakfast pastry like Pillsbury Toaster Strudel.
> 
> There's just so many red flags. The only people talking about it as if it has any legitimacy are rumor mills or /r/AMD's favorites.


but then they would look even flakier. the specs are better with some crust and evenly on the icing.


----------



## mohit9206

cooljaguar said:


> Then you need to stop posting and go look up how big of a deal chiplets are. 6c/12t for the lowest end Ryzen part is easily doable now. Chiplets that are so defective that only 4c are functional will likely be used for the budget Athlon lineup, or possibly on the 8c/16t Ryzen SKUs in a 2 chiplet layout.


Sorry to break this to you, but Ryzen will not start at 6C/12T. You are being overly optimistic. Don't ride the hype train; we saw what happened with the Vega hype train. The entry-level Ryzen part will still be a 4-core part. Maybe AMD might make it 4C/8T, but there's no chance it will be 6C/12T. The rumored "leaked" pricing is also extremely unrealistic. AMD is a business, not a charity organization.


----------



## ozlay

mohit9206 said:


> Sorry to break this to you but Ryzen will not start at 6C/12T. You are being overly optimistic. Don't ride the hype train we saw what happened with Vega hype train. Ryzen entry level part will still be a 4 core part. Maybe AMD might make it 4C/8T but no chance it will be 6C/12T. Also the rumored "leaked" pricing is indeed extremely unrealistic. AMD is a business not a charity organization.


If they are using two 8c/16t chiplets on each AM4 chip, then it would make sense to start at 6c/12t, unless yields are really bad. They shouldn't have any chips that are more than 50% dead. Now, if they are adding a GPU to them, then yeah, a 4c/8t APU makes sense. But with the new chiplet design, a CPU with less than 6c/12t would in my opinion be a waste if they are using 2 chiplets anyway. But if they aren't using 2 chiplets and use a dummy, wouldn't that off-center the heat?


----------



## Neilthran

I think we will know more in January during CES. The leaks, true or not, are mostly about core count, clocks, and price. Each Zen 2 chiplet on 7nm has 8 cores, so I think the consumer CPUs will also have some kind of chiplet design like Rome, which means each CPU will have a 14nm I/O die and up to two 8-core 7nm chiplets. Will there be 4c CPUs? Almost guaranteed there will be. But I don't think the leaks about entry-level 6c CPUs are overoptimistic at all.

Regarding clocks... who knows? Zen 2 will be made on a high-performance 7nm node, so it won't be like the low-power 14nm node from GlobalFoundries. And even so, are there not Zen+ CPUs overclocked to 4.2-4.4 GHz? Something around a 4.5 GHz turbo doesn't sound like expecting too much.

Pricing is certainly where the leaks are too optimistic in my opinion.


----------



## mohit9206

ozlay said:


> If they are using two 8c/16t chiplets on each AM4 chip. Then it would make sense to start at 6c/12t. Unless yields are really bad. They shouldn't have any chips that are more then 50% dead. Now if they are adding a GPU to them. Then yeah a 4c/8t APU makes sense. But with the new chiplet design a cpu with less then 6c/12t in my opinion would be a waste. If they are using 2 chiplets anyway. But if they aren't using 2 chiplets and use a dummy. Wouldn't that off center the heat?


I mean, I would love it if I could buy a $100 or even $130 6C/12T CPU next year, but it just sounds too good. I'm not going to complain about that, for sure. I doubt games are going to use more than 12 threads anytime soon.


----------



## LancerVI

The only things that I think are a stretch in this leak are the 5.1 GHz clock and the prices for both Navi and Zen 2. Navi truly is the wildcard here; nobody knows, and it's the part of the leak I'm most skeptical of.

For Zen 2, 4.5 to 4.8 GHz clocks are NOT a stretch in my mind, and neither are 6-core procs. There is nothing unreasonable about either of those things. I'm not saying it's 100% right, but I don't think you can dismiss it as unreasonable or pie-in-the-sky as some have done.


----------



## ozlay

If AMD wants to market those 16- and 12-core parts as mainstream chips, they have to lower the price to mainstream levels or they won't sell. Even if they destroy Intel's 9900K, it will still have to be priced the same, because AMD doesn't have consumer confidence yet, especially if it runs everything the same as or close to the 9900K due to the lack of multi-core games/apps. They already have that issue with the R5 2600, which runs games just as well as the R7 2700. Why would I buy a 16- or 12-core for gaming if a 6- or 8-core runs games just as well? And if they price the 8-core the same as Intel's 8-core, why would I choose AMD over Intel?

I would expect the top-end part to be priced a little higher than Intel's offerings, but not much higher. They need to gain some consumer confidence first.


----------



## rluker5

LancerVI said:


> The only things that I think are a stretch in this leak are the 5.1 clock and the prices for both Navi and Zen2. Navi truly is the wildcard here. Nobody knows and it's the leak I'm most skeptical of.
> 
> For Zen2, 4.5 to 4.8 clocks are NOT a stretch in my mind. 6 core procs are not either. There is nothing unreasonable either of those things. I'm not saying it's 100% right, but I don't think you can dismiss that as unreasonable or pie in the sky as some have done.


I heard Navi is still GCN. If that is true it means no wild card: just count the TFLOPS to get your performance. They can still beat a 2080 with that if the clocks are high enough, though.


----------



## CynicalUnicorn

ozlay said:


> If they are using two 8c/16t chiplets on each AM4 chip. Then it would make sense to start at 6c/12t. Unless yields are really bad. They shouldn't have any chips that are more then 50% dead. Now if they are adding a GPU to them. Then yeah a 4c/8t APU makes sense. But with the new chiplet design a cpu with less then 6c/12t in my opinion would be a waste. If they are using 2 chiplets anyway. But if they aren't using 2 chiplets and use a dummy. Wouldn't that off center the heat?


Do keep in mind that AMD has yet to ship a Zen CPU in which the CCXs are not configured identically. Each CCX on existing Zen CPUs contains the same number of enabled cores and the same amount of L3 cache. I _think_ quad-core Ryzen 2000-series CPUs are the exception, using a 4+0 configuration rather than 2+2 like last-generation parts. It's hard to call that asymmetric, though, when the entire CCX is disabled.

Assuming the 16-core rumor is true and the CPUs use a scaled-down chiplet+IO hub design like Rome's, I would be shocked if AMD didn't have single-chiplet and dual-chiplet CPUs. A single chiplet would be the only way for them to offer six cores (Zen 2 appears to still be using a quad-core CCX) at all and, depending on yields, four cores while only disabling 50% rather than 75% of otherwise-good dies. The latter case assumes 7nm yields will be good, but first-gen Ryzen had upwards of 80% perfect dies within a couple months after release. 7nm is a different beast entirely though.




rluker5 said:


> I heard Navi was still GCN. If that is true it means no wild card, just count the tflops to get your performance. They can still beat a 2080 with that if the clocks are high enough though.


GCN is less like Skylake or Bulldozer and more like ARM or x86. It's an ISA, not a particular architecture. AMD has had four architectures which use it, I believe: GFX6 was used by first-gen GCN GPUs, GFX7 by second-gen, GFX8 by both third-gen and fourth-gen, and GFX9 by Vega. (Incidentally, this means the R9 285 released in 2014 is very similar to the RX 590 released weeks ago, just with smaller shader engines.)

I won't be surprised if Navi still uses the GCN ISA, but that does not necessarily mean it will resemble Vega and the GFX9 architecture as far as the finer details go. I really hope it doesn't.


EDIT:



tyvar said:


> Yes? that was what I was pointing out? you made the claim that the 7nm brought a gain of sub 100mhz, and I pointed out that it was more like 200mhz, and backed that up with evidence.
> 
> as for passive cooling, I know exactly what it means in a server setting, and its still less effective of a cooling method then a AIO, which is why comparing it to those flavors of Vega 64s would be disingenuous.



Whoops, missed this one earlier, sorry.

You're right, I misremembered the boost clocks on Vega. Still, reference 14nm Vega chips are only rated to hit 1600 MHz, while 7nm Vega tops out at 1800 MHz. Maybe a GPU in the style of the Frontier Edition or RX Vega boards would hit somewhat higher clocks, but not by much.

Yup, that's why I threw the MI25 in there as well. :thumb: The reference active-cooled Vega SKUs only add 100 MHz to the boost clock over the passive datacenter SKU. I'm confident this trend would hold with a hypothetical family of 7nm Vega SKUs. So let me amend what I said previously and say Vega is *300 MHz*, or *<=20%*, faster due to frequency alone as a result of the 7nm process.




> We have no idea what the performance of Zen on 7nm will be, other then we probably will see higher clocks. Id just point out that hitting 5.0 ghz boost from the highest boosting threadripper is only about 15% jump.


We don't know the performance of Zen 2 because it is a new microarchitecture. If it were a straight port, much like 14nm Zen to 12nm Zen+ (at least for Threadripper, thanks to some L2 latency changes), we would have a good idea based on the new process's performance characteristics with high-power processors such as Vega. The math says we'd get a 5.1 GHz or 5.2 GHz peak XFR boost on the 3700X or 3800X, but I just don't buy that based on Zen's behavior now.
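The "math" referenced here presumably scales current peak XFR by the frequency uplift 7nm Vega showed. A hedged back-of-the-envelope version (the 4.35 GHz 2700X XFR figure comes from earlier in the thread; the 17.5-20% uplift range is my assumption, borrowed from the Vega discussion above):

```python
# Scale the 2700X's peak XFR boost by the ~17.5-20% frequency uplift
# observed on 7nm Vega. Assumes Zen 2 on 7nm behaves like Vega 20 did,
# which is exactly the assumption being doubted in the post above.
xfr_2700x_ghz = 4.35
for uplift in (0.175, 0.20):
    print(f"{xfr_2700x_ghz * (1 + uplift):.2f} GHz")
```

That range lands on 5.1-5.2 GHz, matching the figure quoted in the post.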


----------



## finalturismo

I believe the 5 GHz+ is accurate because AMD has got its node size down to 7nm. In the CPU world, node size is one of the most important factors in efficiency. Also note that going from 14nm to 7nm is a MASSIVE shrink compared to the old days, when we were going from 90nm to 65nm.

When it comes to node size, AMD currently has a MASSIVE technological advantage over Intel. The problem is that AMD is fabless, so they are at the mercy of the manufacturers (TSMC). 14nm to 7nm means roughly 2x the transistor density to work with. These leaks should not be regarded as surprising at all; they are perfectly within the bounds of logic. AMD now has an advantage, and if Intel does not play its cards right, it could be in serious trouble.

We could be seeing a massive shift among the current top dogs in the CPU market. Keep in mind that if you look throughout computing history, it is not rare for the big dogs to come and go. If Intel has any more problems delivering a smaller node, they will be in trouble for sure, because then AMD will be working on architecture upgrades for its 7nm by the time Intel does another shrink. 7nm is supposed to be a long-term player before another node shrink...

Commodore 64, Cyrix CPUs, IBM mainstream CPUs, and Compaq are all parts of history now. Keep in mind Intel has been outsourcing some of its 14nm work to TSMC as well. If Intel is having problems at its fabs and is stuck on 10nm, we might be seeing both AMD and Intel purchasing manufacturing contracts from TSMC. If that ends up happening, we might see someone else in the CPU game, and it won't be AMD or Intel. Intel might have the money, but it might be losing its infrastructure foothold on the market. No one knows for sure, but we will find out soon.


----------



## ozlay

I wonder if the new APUs use Infinity Fabric so they don't take up PCIe lanes.


----------



## Aenra

I can understand the hesitation regarding the clock speeds (on which, as mentioned, I agree), but I fail to see how one can simply discard it all in advance.
Or, further still, take it to the extreme like CynicalUnicorn or whatever the name is and instantly equate us all to "fanboys".

All these years here, plus however many prior to joining this place; surely we're not conversing with juveniles, are we?
I don't appreciate strawmen, because this thread is about Adored's leaks; which, unsubstantiated as they are, have nothing to do with Reddit; which in fact no mature person should have bothered with in the first place, but that's a different book altogether. Using an obvious red herring to discount something (that thus far at least is) entirely different? Seriously?
And calling us fanboys on top; an excellent way to move a conversation forward, I'll say that much. Won't even project it further, as I see moderator status granted at some point; the connotations are frankly embarrassing, dear.

Regardless, I would once again point out that for the majority of folks out there, it's not the clocks that excite about any of this. Never distance yourselves so much from reality that you fail to grasp it.


----------



## NightAntilli

mohit9206 said:


> I mean i would love it if i could buy a $100 or even $130 6C/12T CPU next year but i feel it just sounds too good. I'm not going to complain about that for sure. I doubt games gonna use more than 12 threads anytime soon.


We'll have to wait and see... But look at it this way... AMD is creating one type of chiplet. It is an 8C/16T part that is going to be used from the bottom of the pack all the way to their Epyc CPUs. Now, depending on yield, everything changes... 

Let's say they have exceptionally bad yields. Even then, what are the chances, that 5 of the 8 cores are not usable? Very small I think, but... If it really is that bad, would they want to release a 3 core CPU? I don't think so. They'd rather put two of them on a die, and sell it as a 6C/12T part, right? It's also much better than selling them as a 2C/4T CPU and wasting a perfectly good core, and having to handle double the packaging and shipping costs, basically... 

Now let's say additional to the setup above, many of them are half dead, as in, 4 cores are damaged. What would be better? Selling two of them as a 8C/16T CPU where your standard I/O die can handle two of them anyway, or waste half your I/O and use one 4C/8T chiplet and use a dummy die? I honestly don't know the answer to this one, but considering the same argument as above regarding the double packaging and shipping, I think they'd opt for using two on one die and selling them as 8 core CPUs... For them to really sell 4 core CPUs, I think their chiplets need to have 6 dysfunctional cores on them... And even then, why would you sell two of them together, rather than combining a 2 core chiplet with a 4 core chiplet and sell it as a 6C/12T CPU?

Then the question remains: what do you do with the few chiplets that have 5 good cores, 6 good cores, or 7 good cores, or are fully functional? If they have good yields, which apparently they do, the same logic applies.
The chiplets with 5 good cores and 7 good cores are the most difficult to deal with. So what do you do to not waste them? Combine them to make 12C/24T CPUs.
The chiplets with 6 functional cores can be used to make both 6C/12T CPUs and 12C/24T CPUs.
The fully functional chiplets can be used to make 8C/16T CPUs and 16C/32T CPUs, aside from the obvious Threadrippers and Epyc CPUs. If some are somehow too slow, they can also be combined with the 4-core chiplets to form a 12C/24T CPU.
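The pairing logic in the post above can be sketched as a toy binning function. Everything here (the pairings, the SKU names, the function itself) follows the post's hypothetical scheme, not any confirmed AMD process:

```python
from collections import Counter

def bin_chiplets(good_cores: list[int]) -> Counter:
    """Map per-chiplet good-core counts to CPU SKUs per the scheme above."""
    tally = Counter(good_cores)
    skus = Counter()
    # 5-core chiplets pair with 7-core chiplets -> 12C/24T parts
    pairs = min(tally[5], tally[7])
    skus["12C/24T"] += pairs
    tally[5] -= pairs
    tally[7] -= pairs
    # two 4-core chiplets together -> 8C/16T
    skus["8C/16T"] += tally[4] // 2
    # 6-core chiplets can ship alone as 6C/12T (or pair up as 12C/24T)
    skus["6C/12T"] += tally[6]
    # fully working chiplets ship alone as 8C/16T (or pair up as 16C/32T)
    skus["8C/16T"] += tally[8]
    # leftover unmatched bins are simply ignored in this sketch
    return skus

print(bin_chiplets([5, 7, 6, 8, 8, 4, 4]))
```

For example, the 5-core and 7-core chiplets in that input combine into a single 12C/24T part, so no good core is wasted.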


----------



## guttheslayer

NightAntilli said:


> We'll have to wait and see... But look at it this way... AMD is creating one type of chiplet. It is an 8C/16T part that is going to be used from the bottom of the pack all the way to their Epyc CPUs. Now, depending on yield, everything changes...
> 
> Let's say they have exceptionally bad yields. Even then, what are the chances, that 5 of the 8 cores are not usable? Very small I think, but... If it really is that bad, would they want to release a 3 core CPU? I don't think so. They'd rather put two of them on a die, and sell it as a 6C/12T part, right? It's also much better than selling them as a 2C/4T CPU and wasting a perfectly good core, and having to handle double the packaging and shipping costs, basically...
> 
> Now let's say additional to the setup above, many of them are half dead, as in, 4 cores are damaged. What would be better? Selling two of them as a 8C/16T CPU where your standard I/O die can handle two of them anyway, or waste half your I/O and use one 4C/8T chiplet and use a dummy die? I honestly don't know the answer to this one, but considering the same argument as above regarding the double packaging and shipping, I think they'd opt for using two on one die and selling them as 8 core CPUs... For them to really sell 4 core CPUs, I think their chiplets need to have 6 dysfunctional cores on them... And even then, why would you sell two of them together, rather than combining a 2 core chiplet with a 4 core chiplet and sell it as a 6C/12T CPU?
> 
> Then the question remains, what do you do with the few chiplets that have 5 good cores, 6 good cores, 7 good cores, or are fully functional? If they have good yields, which apparently they have, the same also applies.
> The chiplets that have 5 good cores and 7 good cores are the most difficult to deal with. So what do you do to not waste them? Combine them to make 12C/24T CPUs.
> The chiplets with 6 functional cores can be used to make both 6C/12T CPUs and 12C/24T CPUs.
> The fully functional chiplets can be used to make 8C/16T CPUs and 16C/32T CPUs, aside from the obvious Threadrippers and Epyc CPUs. If they are somehow too slow or whatever, they can also be combined with the 4 core chiplets to form a 12C/24T CPU.


Oh please, that sounds way too pessimistic. If a die as small as 74mm² had that much yield loss, I don't think 7nm would have been considered for mass production in the first place. The fact that they are ramping up (and doing so well) shows they are already close to the market standard.


These full 8C/16T chiplets are probably close to 70-80% yield per wafer.


----------



## kyrie74

Reading through, I see the same people that doubted Ryzen would have 8c/16t now doubting that Ryzen 3000 will have 16c/32t, for the same reasons they said Ryzen wouldn't have 8c/16t. If AMD were stuck on monolithic chips at 7nm I would agree with you guys, but they are making a 74mm² chiplet with 8c/16t; yields should be through the roof on such a tiny chip.
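The "tiny die, huge yields" intuition can be made concrete with the classic Poisson yield model, yield = exp(-D*A). The defect density of 0.5 defects/cm² below is an illustrative assumption, not a TSMC figure:

```python
import math

# Poisson yield model: fraction of fully working dies = exp(-D * A),
# with D in defects/cm^2 and die area A converted from mm^2 to cm^2.
def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * area_mm2 / 100)

print(f"{poisson_yield(74, 0.5):.1%}")   # ~74mm^2 chiplet, as discussed above
print(f"{poisson_yield(213, 0.5):.1%}")  # a larger monolithic die, for contrast
```

Under this toy model the 74mm² chiplet yields roughly twice as many perfect dies as a ~213mm² monolithic die at the same defect density, and partially defective chiplets can still be harvested for lower-core SKUs.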


----------



## Chargeit

If this ends up being true, I think I'll ditch my 7820X for a 3850X. I just hope RAM compatibility is stronger with Ryzen 3000.


----------



## Particle

Chargeit said:


> If this ends up being true I think I'll ditch my 7820x for a 3850X. I just hope ram compatibility is stronger with Ryzen 3000.


I would expect it will be entirely adequate. 1st generation Zen parts gained a speed grade or two over time just with firmware. 2nd generation Zen+ parts gained another couple speed grades. It would stand to reason that the memory controller will have been improved yet again to some degree.


----------



## Chargeit

Particle said:


> I would expect it will be entirely adequate. 1st generation Zen parts gained a speed grade or two over time just with firmware. 2nd generation Zen+ parts gained another couple speed grades. It would stand to reason that the memory controller will have been improved yet again to some degree.


Hopefully. My Steam streaming/VR rig uses a Ryzen 7 2700 (got it cheap on sale). I've been very happy with Ryzen's performance, but memory support was an issue for me. I was able to iron it out using "DRAM Calculator for Ryzen", though I had to make my timings pretty loose.


----------



## Particle

Chargeit said:


> Hopefully. My steam streaming/VR rig uses a Ryzen 7 2700 (Got cheap on sale). Have been very happy with Ryzen's performance but the memory support was an issue for me. I was able to get it ironed out using, "DRAM Calculator for Ryzen" though I had to make my timings pretty loose.


What memory are you using? One can generally apply the XMP on the module if present and not have to configure anything so long as the module is using "friendly" memory chips.


----------



## NightAntilli

guttheslayer said:


> Oh please, that sound way too pessimistic, if a size as small as 74mm yet having so many yield loss, I think 7nm in the first place wouldn't be considered for mass production. The fact that they are ramping up (and doing so well) shows they are already hitting close to the market standard.
> 
> 
> These 8C/16T full chip are probably close to 70-80% yield per wafer.


I was discussing a worst-case scenario. And even then, the 6C/12T makes sense.


----------



## Chargeit

Particle said:


> What memory are you using? One can generally apply the XMP on the module if present and not have to configure anything so long as the module is using "friendly" memory chips.


It wasn't a friendly kit. Quad-channel, 3200, an older kit from when X99 came out. I picked it up when I first got my 7820X, then decided I wanted a faster kit. I used the XMP profile without issue on my X299 system, but my 2700 did not like the kit. Even with everything else stock, the XMP profile was totally unstable.


----------



## EniGma1987

finalturismo said:


> Commodore 64, Cyrix Cpus, IBM Main stream CPUs, Compaq are all parts of history now. Keep in mind Intel has been out sourcing some of its 14nm work to Tsmc as well. If intel is having a problem at their FABs and they are stuck on 10nm. We might be seeing both AMD and Intel purchasing manufacturing contracts from TSMC. If this ends up happening we might see someone else in the CPU game and it wont be AMD or Intel. Intel might have the money, but it might be losing its infrastructure foot hold on the market. No one knows for sure but we will find out soon.





Intel sure is having problems, having been stuck on a fab node for 5 years now. However, if the situation gets truly dire, they will probably end up licensing 7nm from Samsung, and if not them, then TSMC. I would expect that decision to be made some time next year, probably in May(ish). By then Intel will have been able to see the potential of Samsung's 7nm EUV process as well as TSMC's conventional 7nm. If performance is good enough and Intel hasn't yet made any more breakthroughs, I would expect them to license the next node tech, move on to focusing on 5nm, and scrap their whole 10nm and possibly their 7nm as well, which is currently under development too. It actually wouldn't surprise me at all if Intel has already secretly run some tests on both competitors' nodes for initial evaluations.


----------



## toncij

As much as I'd love AMD to take the lead and keep Intel down for at least a few years, I'm hearing otherwise from industry professionals (you know, those colleagues that work in all the connected industry fabs, node tech, etc.): Intel is not sitting and waiting. They're ramping up their 7nm EUV process, which, unlike 10nm, is going fine. They'll most probably be ready with 7nm for 2020 volume production. 10nm will probably be a low-yield special node that won't see large production. Intel's 7nm EUV is superior to TSMC's 7nm and will match TSMC's 5nm.
Allegedly.
Also, those clocks for AMD are a bit on the high side for the TDP, but not completely impossible.


----------



## miklkit

Dunno about the others, but if AMD makes an 8c/16t CPU that can run at 4.4-4.6 GHz for $250-$280, I'm all in. Especially if it runs on X370.


----------



## NightAntilli

toncij said:


> As much as I'd love AMD to take the lead and keep Intel down for at least a few years, I'm hearing otherwise from industry professionals (you know, those colleagues that work in all the connected industry fabs, like node tech, etc.) - Intel is not sitting and waiting, they're ramping up their 7nm EUV process, which is unlike 10nm, going fine. They'll most probably be ready with 7nm for the 2020 volume production. 10nm will probably be a low yield special node that won't end up in large production. The Intel 7nm EUV is superior to TSMC 7nm and will be matching TSMC 5nm.
> Allegedly.
> Also, those clocks for AMD are a bit on a high side for the TDP, but not completely impossible.


Might be... But unless Intel also switches to chiplets, I doubt they'll be able to compete in terms of value for money, even with a better node.


----------



## deags

From the looks of it it seems that the leaks may not be right:

"The Ryzen rumours are fake, pretty certain at this point. I've got info off the record so I can't released [sic] it, but yeah people are going to be upset when they really shouldn't be... don't assume Ryzen will be anything like Rome. "

Steve - HW unboxed

Source:
https://www.notebookcheck.net/South...MD-is-notified-of-its-existence.376371.0.html


----------



## betam4x

toncij said:


> As much as I'd love AMD to take the lead and keep Intel down for at least a few years, I'm hearing otherwise from industry professionals (you know, those colleagues that work in all the connected industry fabs, like node tech, etc.) - Intel is not sitting and waiting, they're ramping up their 7nm EUV process, which is unlike 10nm, going fine. They'll most probably be ready with 7nm for the 2020 volume production. 10nm will probably be a low yield special node that won't end up in large production. The Intel 7nm EUV is superior to TSMC 7nm and will be matching TSMC 5nm.
> Allegedly.
> Also, those clocks for AMD are a bit on a high side for the TDP, but not completely impossible.


Note that this is all just theory below, but I wouldn't be surprised to be right if the rumors I've heard/read are true:

While the clocks could very well be fake, they aren't on the high side at all. The I/O chip is 14nm and likely runs at a fixed speed; let's say, for example, 2 GHz. The chiplets themselves can run much faster, but that won't affect the I/O chip. This means that on Ryzen 3000 (the non-APU chips), clocks can be cranked up without the memory controller, PCIe controller, etc. running out of spec. Memory compatibility should be MUCH better as a result, and the 'old' clock-speed limitation should be removed.

Remember, many 2600K chips (which were 32nm) could hit 5 GHz without issue (mine does, and runs under Prime95 at 28C using an AIO), and to this day many Intel chips can get up to 5 GHz. There isn't some magical limiting factor that stops AMD from doing the same; it was the way they built the chip. They recognized Zen's weaknesses and have corrected them with Zen 2. I wouldn't be surprised to see a 6 GHz overclock thanks to the I/O chip and the better process.

Of course, the limiting factor will be the clock speed of the I/O chip. It must be kept in sync with the chiplets for things to work, even if at 1/2 the clock speed. For example, if you try to overclock the CPU to 5555 MHz and the I/O chip is running at 2000 MHz, you are going to run into issues with memory and I/O not being synced up to the CPU. However, a 2500 MHz I/O chip and a 5 GHz CPU would work perfectly (theorizing here; a wait state could be thrown in, which would hurt IPC, but a larger L3 cache would mitigate it). Likely, as mentioned, AMD will include a divider, or limit the multiplier/bclk to whole divisions of the I/O clock. Or the bclk may essentially control the speed of the I/O chip itself.
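The sync argument above reduces to a divisibility check. A minimal illustration (the whole-multiple requirement is the poster's theory, not a documented Zen 2 constraint):

```python
# Per the divider theory above, a core clock "syncs cleanly" with the
# I/O clock only if it is a whole multiple of it.
def synced(core_mhz: int, io_mhz: int) -> bool:
    return core_mhz % io_mhz == 0

print(synced(5555, 2000))  # 5555 MHz isn't a multiple of 2000 MHz
print(synced(5000, 2500))  # exactly a 2:1 ratio
```

Limiting the multiplier/bclk to whole divisions of the I/O clock, as the post suggests, would guarantee this check always passes.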


----------



## guttheslayer

mohit9206 said:


> Sorry to break this to you but Ryzen will not start at 6C/12T. You are being overly optimistic. Don't ride the hype train we saw what happened with Vega hype train. Ryzen entry level part will still be a 4 core part. Maybe AMD might make it 4C/8T but no chance it will be 6C/12T. Also the rumored "leaked" pricing is indeed extremely unrealistic. AMD is a business not a charity organization.


I will bet my life that 16C/32T is coming to AM4. Easily.


But the clock speeds will be more modest, closer to 4.5 GHz.



deags said:


> From the looks of it it seems that the leaks may not be right:
> 
> "The Ryzen rumours are fake, pretty certain at this point. I've got info off the record so I can't released [sic] it, but yeah people are going to be upset when they really shouldn't be... don't assume Ryzen will be anything like Rome. "
> 
> Steve - HW unboxed
> 
> Source:
> https://www.notebookcheck.net/South...MD-is-notified-of-its-existence.376371.0.html



Steve's video isn't quite right either. If you look through the entire comment section, HW Unboxed actually agrees that the two got their info from different sources. But where the video claims AMD has never undercut its competition by 50%, that is absolutely not true; it has happened before.


Also, regarding the 1.25x performance figure: please don't take it at face value when there is a big ">" sign before it. That performance metric refers to the 7nm transistor jump from 14nm and has nothing to do with the microarchitecture.


----------



## mohit9206

They can't blow all their load on Zen 2; they've got to save something for Zen 3 as well, so clock speeds and pricing will be far more conservative.


----------



## ToTheSun!

betam4x said:


> Remember, many 2600k chips (which were 32nm) could hit 5 GHz without issue (mine does, and runs under prime95 at 28C using an AiO)


Outside in alaskan winter?


----------



## momonz

deags said:


> From the looks of it it seems that the leaks may not be right:
> 
> "The Ryzen rumours are fake, pretty certain at this point. I've got info off the record so I can't released [sic] it, but yeah people are going to be upset when they really shouldn't be... don't assume Ryzen will be anything like Rome. "
> 
> Steve - HW unboxed
> 
> Source:
> https://www.notebookcheck.net/South...MD-is-notified-of-its-existence.376371.0.html


It works both ways. Steve has his sources, while AdoredTV has his. There's still no absolute truth until AMD officially releases Ryzen 3000.
And AdoredTV's Ryzen leaks are well within possibility, even for the GPU.


----------



## Streetdragon

Even if only 50% of this stuff is true, it will be a great CPU! I mean: 16/32 with a 4.1 GHz base, a 4.5 GHz all-core boost, and a single-core boost to 4.7 would be really nice. I would burn my old CPU in a second and buy that good stuff.


----------



## Pirx

ILoveHighDPI said:


> I haven't felt this excited for PC hardware since 2010.



This!

Seriously, I watched Adored on YouTube on my phone before going to bed, and suddenly I couldn't sleep.

I had planned on building a new rig and was upset about current prices (the 8700K up from summer, the 20xx series laughable IMO); I had even watched some eBay offers, but now I'm just glad I hadn't spent my money on Intel's incremental increases. 16c/32t is simply a jump that didn't seem possible (or affordable) for a long time.


----------



## EniGma1987

betam4x said:


> While the clocks could very well be fake, they aren't on the high side at all. The IO chip is 14nm and likely runs at a fixed speed. Let's say for example, 2 GHz. The chiplets themselves can run much faster, but that won't affect the IO chip. This means that on (non APU chips) Ryzen 2, clocks can be cranked up without the memory controller, PCIE controller, etc. running out of spec. Memory compatibility should be MUCH better as a result and the 'old' limitation of clock speed should be removed.



For well over a decade now we have been able to clock cores without affecting memory controller or PCI-E lane speed, with the exception of specifically locked CPUs and Llano. The days of a front side bus and single-core processors have long passed. And the IO die as a whole won't have a clock speed at all; it isn't a core that controls all the IO. lol






betam4x said:


> Remember, many 2600k chips (which were 32nm) could hit 5 GHz without issue (mine does, and runs under prime95 at 28C using an AiO) and to this day many Intel chips can get up to 5 GHz. There isn't some magical limiting factor that stops AMD to do the same, it was the way they built the chip. They recognized Zen weaknesses and have corrected them with Zen2. I wouldn't be surprised to see a 6 GHz overclock thanks to the IO chip and the better process. Of course, the limiting factor will be the clock speed of the I/O chip. It must be kept in sync with the chiplets for things to work. Even if it's 1/2 clock speed. For example, if you try to overclock the CPU to 5555 MHz and the IO chip is running at 2000 MHz, you are going to run into issues with memory and IO not being synced up to the CPU. However a 2500 MHz IO chip and 5 GHz CPU would work perfectly (theorizing here, a wait state could be thrown in, this would hurt IPC, but a larger L3 cache would mitigate this). Likely, as mentioned, AMD will include a divider, or limit the multiplier/bclk to whole divisions of the IO chip. Or the bclk may essentially even control the speed of the IO chip itself.



Wow. just... wow. :kookoo:


----------



## Ithanul

miklkit said:


> Dunno about the others but if AMD makes an 8c/16t cpu that can run at 4.4-4.6 ghz for $250-$280 I'm all in. Especially if it runs on X370.


Yep, I'm going to be keeping an eye out for official reviews. If the next ones can pull that off, I'll definitely be nabbing one. Just hope some better motherboards show up as well.
Though, this makes me hope the next Threadrippers get beefed up as well.


----------



## KyadCK

EniGma1987 said:


> For well over a decade now we have been able to clock cores without affecting memory controller or PCI-E lane speed, with the exception of specifically locked CPUs and llano. The days of a front side bus and single core processors has long passed. *And the IO die as a whole wont have a clock speed at all,* it isnt a core that controls all the IO. lol
> 
> 
> 
> 
> 
> 
> 
> Wow. just... wow. :kookoo:


Everything in a computer has a clock speed, even your LEDs, VRMs and fans.

IF was based on RAM speed, for example, which was part of the problem with the original Ryzen chips. You can also overclock Cache/Uncore on Intel parts.
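The IF/RAM relationship KyadCK describes can be sketched numerically: on first-gen Ryzen the fabric clock runs in lockstep with the memory clock, which is half the DDR4 transfer rate. A minimal illustration (the helper name is made up for this sketch, not a real API):

```python
# On first-gen Ryzen the Infinity Fabric clock runs in lockstep with the
# memory clock (MEMCLK), and MEMCLK is half the DDR4 transfer rate
# because DDR transfers data on both clock edges.

def fabric_clock_mhz(ddr4_rate: int) -> float:
    """Implied Infinity Fabric clock for a given DDR4 speed grade."""
    return ddr4_rate / 2

for rate in (2133, 2666, 3200, 3466):
    print(f"DDR4-{rate}: MEMCLK = FCLK = {fabric_clock_mhz(rate):.0f} MHz")
```

This is why faster RAM measurably helped first-gen Ryzen: it raised the interconnect clock directly, not just memory bandwidth.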


----------



## mohit9206

I hope everyone with Ryzen 1 chips doesn't upgrade to Zen 2


----------



## EniGma1987

KyadCK said:


> Everything in a computer has a clock speed, even your LEDs, VRMs and fans.
> 
> IF was based on RAM speed, for example, which was part of the problem with the original Ryzen chips. You can also overclock Cache/Uncore on Intel parts.





Only if the fans are run on PWM. Straight DC power isn't really considered 120 Hz clocked electricity.


You can always find someone on here who nitpicks things so far that it's like they didn't even understand the post in the first place...


----------



## tajoh111

One thing I could see AMD doing is not making low-end graphics for now and focusing on a middle-tier die in the same vein as the 5870: killing off something akin to Polaris and Pitcairn (replacing these with larger dies) and focusing on the tier below that in terms of die size, upping the price to match Polaris along with a slight bump in die size.

What I mean by this is not making a 220mm2 die but making something along the lines of 150-170mm2, which is the middle ground between the RX 480 and RX 460, while pricing it more along the lines of Polaris; subsequently, the larger die in the Polaris lineup becomes 270-300mm2.

The 150-170mm2 die will have a 128-bit or potentially 256-bit bus but perform something along the lines of Vega 56 minus 5-10% at a $250 price point, with the cut-down costing $180 and performing more along the lines of an RX 590.

I think something with a 384-bit bus like the 7970 could happen on a 270-300mm2 die, maxing out at around RX Vega 64 + 15% or RTX 2070 levels at a price of something like $429, with the cut-down performing like an RX Vega 64 for $350.

I think AMD's problem right now is scalability, which is no surprise since the GCN architecture is so mature.

As we can see from the performance gap between Polaris and Vega in terms of performance per TFLOP, Polaris is ahead of Vega, meaning performance scales badly upwards.

As a result, there is no reason for AMD to make big chips with it, as they will run into those same issues and scaling will only get worse.

RX Vega + 20% is therefore the upper limit of performance on GCN. This is a combination of the hardware scheduler being maxed out and the power wall AMD has run into.

So it's not worth building a proper 7nm flagship on Navi, and this is why we will get the flagship on a next-gen architecture in 2020.

Instead, it's more about cost reduction and making a card with RX Vega + 20% performance as cheap and as small as possible, so it's easier to manufacture and has more pricing flexibility.

AMD kind of has a blueprint with 7nm Vega 20.

What can AMD take away while still achieving Vega 64 + 20% performance? I think they could do a 384-bit bus like the 7970 and perhaps 48 ROPs. I used to think Vega was ROP-limited, but what I have noticed is that Vega 64 is inferior to the RX 590 in terms of performance per TFLOP: RX Vega 64 has 34-44% more performance but 65% more FLOPS.

We should see better scaling than this. If it were truly the ROPs, the scaling should be better, since RX Vega 64 has 2x the ROPs of the RX 590. And as we know, Vega 64 is not shader-starved, because if we clock the RX Vega 64 and Vega 56 the same, the difference in performance is negligible.






This means there is a fundamental flaw in GCN that they haven't cured, and it is actually getting worse (Vega 64 is horrible for this). An R9 390X has better performance per TFLOP than Vega 64 even though the latter is a newer build of GCN with far superior bandwidth.

Vega 64 is also more than twice the size of Polaris and doesn't remotely have that level of performance.

Compare this to the GTX 1080 and Titan Xp: the latter is a die 50% bigger with about 50% more resources, and when clocked the same there is a 40% performance difference between the cards. This is why it would be pointless to make Vega any larger beyond a certain point. I think AMD could have made a card as fast as RX Vega 64 for under 400mm2 on 16/14nm if they had done a bit more software simulation beforehand when designing the card.

As a result, it's becoming more and more pointless to design high-end cards using GCN.

AMD used to have a performance-per-mm2 advantage over Nvidia, but now it's getting worse and worse, which tells me GCN needs to be replaced. Navi won't help AMD much; it will be primarily the move to 7nm doing all the lifting. Navi is the last iteration of GCN, which means the IPC gains are tapped out. Nvidia has gotten a bit worse with Turing, but I suspect with age Turing will close the gap with Pascal.

I think something like a 56 CU, 48 ROP, 1800MHz Navi could perform about 65-70% faster than an RX 590 while being about 270-290mm2 on 7nm.
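The per-TFLOP comparison in this post can be sketched with a quick calculation. The shader counts and boost clocks below are published specs; the ~39% performance gap is the midpoint of the poster's own 34-44% estimate, and note that published clocks give a FLOPS gap somewhat larger than his quoted 65%:

```python
# FP32 throughput: 2 ops per clock per shader (fused multiply-add).
def tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000.0

rx590_tf = tflops(2304, 1.545)    # RX 590: 2304 shaders @ ~1545 MHz boost
vega64_tf = tflops(4096, 1.546)   # Vega 64: 4096 shaders @ ~1546 MHz boost

flops_gap = vega64_tf / rx590_tf - 1     # raw FLOPS advantage (~78%)
perf_gap = 0.39                          # midpoint of the quoted 34-44% gap
scaling = (1 + perf_gap) / (1 + flops_gap)
print(f"Vega 64: {flops_gap:.0%} more FLOPS, {perf_gap:.0%} more performance")
print(f"-> only {scaling:.2f}x the per-TFLOP performance of the RX 590")
```

However you pick the clocks, the conclusion is the same: performance grows much more slowly than raw FLOPS as GCN scales up.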


----------



## KyadCK

EniGma1987 said:


> Only if the fans are run on PWM. Straight DC power isn't really considered 120hz clock electricity
> 
> 
> Can always find someone on here who nitpicks things so far it is like they didnt even understand the post in the first place...


But the voltage controllers that supply the DC power to non-PWM fans _are_ digital and run on a polling rate, just like any other VRM. They are not static power and they are not analog dials.

Don't say factually incorrect things and mislead others. If you did not want to be nitpicked, then say "the IO has been separated from the clock speed multiplier and reference clock for most recent CPUs" and leave it at that, not make some wildly inaccurate comment about how an IO bridge doesn't have a clock speed.

... Except the reference clock has not been separated by sections of the CPU for all that long, and again, you CAN overclock uncore and others. As recently as Haswell-E your bootstrap DID impact things beyond CPU clocks, and you have been able to overclock things like NB/HT on chips like Piledriver, which did not use an FSB either. In Ryzen 1, the IF clock speed is tied to your RAM, which directly impacted overclocking ability, exactly as the person you quoted implied.


----------



## betam4x

EniGma1987 said:


> For well over a decade now we have been able to clock cores without affecting memory controller or PCI-E lane speed, with the exception of specifically locked CPUs and llano. The days of a front side bus and single core processors has long passed. And the IO die as a whole wont have a clock speed at all, it isnt a core that controls all the IO. lol



Zen is a completely different architecture. Just what do you think the IO die is controlling? Its mom? lol. The IO die (as has been explained in rumors) contains the memory controllers, PCIe controller, etc. The 'chiplets', as they are called, do not contain this stuff any longer. Instead they contain 8 compute cores, which are all connected to the IO die via Infinity Fabric. If you've been following along with the development of 7nm EPYC you would know this.

The Infinity Fabric has to run at a fixed clock speed to stay in sync with the CPU, much like the olden days when the memory controller was OUTSIDE the CPU (albeit at much faster speeds than the bus speeds of old). You don't have to take my word for it, wait for launch. This allows the chiplets to run at much higher clock speeds while not pushing the memory controller and PCIe controller to the breaking point. DDR4-2666, for instance, runs at 1333 MHz.

This is one of the reasons why Ryzen 1 had such a horrible memory controller setup: each 'die' had a number of memory controllers. On the Threadripper 2990WX, this meant that 2 of the dies had NO direct access to memory. By enlarging the L3 and creating a central IO controller, AMD no longer runs into that scenario. They can have as many cores as they want; they all share the same IO chip. This means that externally X370 boards will still support dual-channel memory, but internally they can support 16 cores. Now, there will be thermal constraints, but with the 7nm process in place, those constraints will be MUCH less of an issue. As a matter of fact, AMD could have done this with Zen 1 and 14LPP and squeezed a few hundred more MHz out. However, this is presumably a different design team with a different design.

EDIT #2: This also means that Threadripper can contain up to 64 cores and not have the issues the 2990WX did. Also, since the IO chip runs at a much lower speed, it would produce less heat and use less power for a given clock speed (as mentioned above). Having a nice, large L3 cache will help prevent memory latency issues, which will raise IPC. Along with higher clock speeds and other optimizations, these chips are going to place AMD clearly in the lead for the first time since the Athlon 64 X2. Funny how that works. Also, as mentioned earlier, EVERYTHING has a clock speed, even LED lights. Coincidentally, most standard monitor backlights run at a fixed 60 hertz, for example. That's the 'clock speed' of the monitor.

Think what you want, the beer is on you if I'm right, and I bet I am.

EDIT: Oh and Intel uses a similar setup. What do you think the uncore clock in the 2600k+ is? It is a similar setup to what I was referring to. AMD is just taking a different approach, one that will allow them to scale in both clock speed and core count. They have Intel by the balls, at least until Intel creates a new architecture.

source: https://en.wikipedia.org/wiki/Uncore. -- Save your trolling for someone else.
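The "whole divider" idea floated in this post is pure speculation, not a documented AMD mechanism, but the arithmetic behind it is easy to sketch: a core clock only stays "in sync" with a hypothetical fixed IO clock if the ratio between them is a whole number:

```python
# Hypothetical sketch of the speculated "whole divider" rule: a CPU clock
# counts as synced only if it is an integer multiple of the IO clock;
# anything else would supposedly need wait states.

def in_sync(cpu_mhz: int, io_mhz: int) -> bool:
    return cpu_mhz % io_mhz == 0

print(in_sync(5000, 2500))  # 2:1 ratio -> synced
print(in_sync(5555, 2000))  # 2.7775:1 -> not synced, wait states needed
```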


----------



## MishelLngelo

AMD was gathering steam with the 1st and 2nd gen (and making money to finance further development). 3rd gen is looking like what they were striving for all along. Prices don't even have to go down if they can hit those projected frequencies and get that damn IPC up to Intel's level; that was always Intel's forte.


----------



## ToTheSun!

betam4x said:


> Coincidentally, most standard monitor back lights run at a fixed 60 hertz. For example. That's the 'clock speed' of the monitor.


That's... not how monitors work.

But it's strange you went with such a meager value for LEDs after your 6 GHz Ryzen 2 and 28°C AiO 2600K Prime95 comments.


----------



## Telimektar

There's a follow-up video by AdoredTv :


----------



## ryan92084

No chiplet/IO-die setup on Ryzen makes everything else in the Ryzen leak even less likely, IMO. Also still referencing the reddit "leak" :thumbsdown:


----------



## pony-tail

Looks like the hype train has been derailed due to lack of salt.


----------



## LancerVI

ryan92084 said:


> No chiplets/io chip setup on ryzen makes everything else ryzen even less likely imo. *Also still referencing the reddit "leak" :thumbsdow*


He addresses that specifically.


----------



## ryan92084

LancerVI said:


> He addresses that specifically.


He didn't address the fact that the person who posted it on reddit admitted it was their friend pranking them before the first video went up...


Edit: AdoredTV did address them on reddit by saying they are a liar https://www.reddit.com/r/Amd/commen...readsheet_ryzen_leak_was_me_it_is_not/ebbm75t


----------



## NoGuru

Pump the brakes.


----------



## rbarrett96

AlphaC said:


> $130 6 core 12 thread with 4GHz turbo... that's crazy. I would expect $150ish.
> 
> ~$180 8c/16t 4.4GHz turbo isn't as amazing though. R7 2700X is already 4.35GHz. The $230 4.8GHz turbo variant looks to be the chip to buy.
> 
> ~$300 for 12cores / 16 threads are another highlight , especially with 5GHz.
> 
> ~$450-500 16cores/32 threads is roughly TR 1950X pricing with a large speed bump to ~5GHz
> --->You need a B550 or X570 motherboard for Ryzen 9
> 
> The main competitor is the 8c/16t with IGPU that has 20CU for $200. You can't beat that on value. Even an i5 costs you $200ish already.
> 
> ---------------
> 
> GPU wise:
> 
> RX 3060 4GB GDDR6 (Navi12) , 75W with RX 580 performance for $130 vs GTX 2050 / GTX 1060 <--- a major OEM win for all weak PSU desktops
> 
> RX 3070 8GB GDDR6 (Navi12) , 120W with VEGA 56 performance for $200 vs GTX 2060 / GTX 1070
> 
> RX 3080 8GB GDDR6 (Navi10), 150W with VEGA 64 +15% for $250 vs GTX 2070 / GTX 1080



When is AMD going to put out something high end around the $400 range to finally compete with Nvidia? Their value proposition has always been great, but it's time to shake things up, get some competition going, and finally force Nvidia to lower their ridiculous prices. This is the only way, since consumers don't have the patience to wait and vote with their wallets. I'm sick of seeing all these FreeSync monitors knowing there isn't a GPU that can run a game at 60 fps in 4K.


----------



## LancerVI

ryan92084 said:


> He didn't address the fact that the person who posted it on reddit admitted it was their friend pranking them before the first video went up...
> 
> 
> Edit: AdoredTV did address them on reddit by saying they are a liar https://www.reddit.com/r/Amd/commen...readsheet_ryzen_leak_was_me_it_is_not/ebbm75t


He clearly indicates that he has multiple sources and that his main source gave him the information he reported on PRIOR to the existence of the reddit leak.


----------



## EniGma1987

KyadCK said:


> Don't say factually incorrect things and mislead others. If you did not want to be nitpicked, then say "the IO has been separated from the clock speed multiplier and reference clock for most recent CPUs" and leave it at that, not make some wildly inaccurate comment about how an IO bridge doesn't have a clock speed.



Betam4x said the IO die would have a main clock speed and everything would run on that, and he imagines we would be able to overclock the IO clock and bring up all the stuff on the IO die with it. It would make no sense to have the entire IO die clocked together off a single clock reference. Nobody is going to make the mistake again of tying the PCI-E clock to the memory controller. What the guy said was "Ryzen 1 cannot clock high because the core clock is tied to the memory controller, PCIe, etc.", which is completely false. Then he went on to say that now the core would be able to clock on its own and all the "memory controller, PCIe, etc." would be on the IO die clock, which is also going to be false; we have been able to clock cores independently for a long time now. Further, he said that because core clocks won't affect anything else, we would now probably get up to 6GHz cores. lol.




KyadCK said:


> ... Except the reference clock has not been separated by sections of CPU for all that long, and again, you CAN overclock uncore and others. As recently as Haswell-E your bootstrap DID impact things beyond CPU clocks, and you have been able to overclock things like NB/HT on chips like Piledriver which did not use an FSB either. In ryzen 1, the IF clock speed is tied to your RAM, which directly impacted overclocking ability, exactly as the person you quoted implied.


I never said we couldn't overclock things like the memory controller. YOU said that's what I said, when I did not. I also never said Infinity Fabric wasn't tied to the memory controller; everyone knows that it is. You are simply making it up that I somehow said that, just to argue a point I was never making. Go back and read the stuff that was posted instead of saying the same thing I am, in a far more drawn out way, and then claiming that I am somehow wrong.


----------



## ryan92084

LancerVI said:


> He clearly indicates that he has multiple sources and that his main source gave him the information he reported on PRIOR to the existence of the reddit leak.


I'm aware, the timing of his leaker also has no bearing on my criticism of continuing to treat the reddit one as credible (but outdated) or his not acknowledging their controversial source in the video.


----------



## momonz

ryan92084 said:


> I'm aware, the timing of his leaker also has no bearing on my criticism of continuing to treat the reddit one as credible (but outdated) or his not acknowledging their controversial source in the video.


Still doesn't hide the fact that AdoredTV has his own multiple sources. He simply mentions that it's a coincidence someone on reddit had a very similar leak.

Basically, it comes down to whether you believe AdoredTV, who has a good (not perfect) track record with leaks, or some random redditor who suddenly did a 180.


----------



## ozlay

I'll take a low profile 3060 or 3070.


----------



## Larky_the_mauler

momonz said:


> Still doesn't hide the fact that AdoredTV has his own multiple sources. He simply mention that it's a coincidence someone at reddit has the same very similar leak.
> 
> Basically you won't believe AdoredTV who has a good history (not perfect) about leaks than some random redditor who suddenly did a 360.


Not a coincidence; he maintains that the reddit leaker (who is a well-known Newegg employee) simply leaked a legit old OEM spec sheet and is backpedaling to try to save his job.


----------



## NightAntilli

So no separate I/O die. Interesting... That, I did not expect. Then there's a difference still between the Threadrippers and the consumer parts. Hm...

Whatever the case may be, he does make a good point at the end of the second video... If the parts really are like this, we can expect prices to become higher now that everyone says it's too good to be true. Probably not much higher though, because they don't want to completely kill the hype. 

I'm still fully skeptical regarding Navi.


----------



## ryan92084

momonz said:


> Still doesn't hide the fact that AdoredTV has his own multiple sources. He simply mention that it's a coincidence someone at reddit has the same very similar leak.
> 
> *Basically you won't believe AdoredTV* who has a good history (not perfect) about leaks than some random redditor who suddenly did a 360.


I didn't say or even imply anything of the sort. The lack of transparency just lowers the quality of his videos in my eyes. His leaker could still be 100% on point for all I know /shrug.


----------



## LancerVI

ryan92084 said:


> I didn't say or even imply anything of the sort. The lack of transparency just lowers the quality of his videos in my eyes. His leaker could still be 100% on point for all I know /shrug.


Even he said to take these with a grain of salt from the outset. He's just using his knowledge of the industry, mixed with leaks with which he attempts to forecast an outcome. Even he isn't saying it's true. What he's saying is that what was leaked is not out of the realm of possibility and I completely agree with him.


----------



## ozlay

I wonder why they chose 14nm and not 12nm for the I/O die.


----------



## Hwgeek

Maybe since AMD is moving all its products to 12nm, the 14nm fabs are now open for more orders and the price is cheaper?


----------



## reqq

So no IO die and it will have NUMA that caused rubbish fps in games on threadripper??? oh god this hype train just stopped


----------



## Larky_the_mauler

reqq said:


> So no IO die and it will have NUMA that caused rubbish fps in games on threadripper??? oh god this hype train just stopped


If only Microsoft could write a thread scheduler.


----------



## SuperZan

reqq said:


> So no IO die and it will have NUMA that caused rubbish fps in games on threadripper??? oh god this hype train just stopped



Would you like to invest in my new product? It's a mat with a conclusion on it that you can jump to!


----------



## ibb27

reqq said:


> So no IO die and it will have NUMA that caused rubbish fps in games on threadripper??? oh god this hype train just stopped


No IO die for Ryzen; for Threadripper we don't know yet...


----------



## ozlay

I wonder if they forgot about the 220GE and 240GE like they did the 2200GE and 2400GE.


----------



## pony-tail

NoGuru said:


> Pump the brakes.


It's a train: open the tap and let all the hot air out, then it will stop.


----------



## white owl

SuperZan said:


> reqq said:
> 
> 
> 
> So no IO die and it will have NUMA that caused rubbish fps in games on threadripper??? oh god this hype train just stopped
> 
> 
> 
> 
> Would you like to invest in my new product? It's a mat with a conclusion on it that you can jump to!

Haha. That made me very happy.


----------



## coelacanth




----------



## deags

Already posted 2 pages back:

https://www.overclock.net/forum/379...3000-series-leaks-s-game-30.html#post27756418


----------



## Majin SSJ Eric

ryan92084 said:


> Yep was admitted as fake before the video even came out.


Except that Jim claims his leak came from a long time before the Reddit leak was ever posted. 

We can go round and round about what's too good to be true or what is impossible but if Jim's leaks are correct we will find out at CES. I personally don't see anything about his leak that is outright ridiculous just looking at the specs. The real question to me is what the actual pricing turns out to be because that is something that AMD can change their minds about at any time, right up to the day of the announcement.

Oh, and just for the "This is too good to be true" crowd; if I had told you in late 2016 that AMD would release an 8-core / 16-thread mainstream CPU with IPC on par or better than Intel's current chips for $300 you would have said "That's simply too good to be true." 

I think one of the problems here is that the skeptics


Spoiler



Intel fanboys


 are judging what AMD is likely to do based upon what they have become used to Intel doing. The thing is, Zen's entire existence has been all about AMD single-handedly and fundamentally changing the CPU paradigm from the foundation up, yet now with the hotly anticipated and important Zen 2 coming up we are expected to believe AMD will be happy to simply release small, incremental improvements??? I just don't believe that, but we'll see soon enough...


----------



## ILoveHighDPI

We're expecting higher IPC, we're expecting higher clocks, we're expecting higher core count.

All of that is consistent with the rumors, the only question is the degree to which they are true.


----------



## ryan92084

Majin SSJ Eric said:


> *Except that Jim claims his leak came from a long time before the Reddit leak was ever pos*ted.
> 
> We can go round and round about what's too good to be true or what is impossible but if Jim's leaks are correct we will find out at CES. I personally don't see anything about his leak that is outright ridiculous just looking at the specs. The real question to me is what the actual pricing turns out to be because that is something that AMD can change their minds about at any time, right up to the day of the announcement.
> 
> Oh, and just for the "This is too good to be true" crowd; if I had told you in late 2016 that AMD would release an 8-core / 16-thread mainstream CPU with IPC on par or better than Intel's current chips for $300 you would have said "That's simply too good to be true."
> 
> I think one of the problems here is that the skeptics
> 
> 
> Spoiler
> 
> 
> 
> Intel fanboys
> 
> 
> are judging what AMD is likely to do based upon what they have become used to Intel doing. The thing is, Zen's entire existence has been all about AMD single-handedly and fundamentally changing the CPU paradigm from the foundation up, yet now with the hotly anticipated and important Zen 2 coming up we are expected to believe AMD will be happy to simply release small, incremental improvements??? I just don't believe that, but we'll see soon enough...


Again I don't see how that counters my point. One leak has been disavowed by the original poster and by the person who allegedly faked it and the other was given to AdoredTV sometime before by sources who have been good in the past. The only reason the two are even tangled together is because AdoredTV has chosen to make them so. Having one leak be faked doesn't have bearing on the second and I've said as much. 

If Jim had acknowledged this in the video instead of doubling down by calling the redditors liars and continuing to use it like any other leak, it wouldn't even be under discussion. Even now it isn't worth the amount of discussion that is going on: one is most likely faked, or at the very least very outdated, and the other potentially credible.

As for the rest I treat the AdoredTV leak with a healthy amount of optimism salted with skepticism just as I would any other (non wccf) leak while I wait for zen3 to do a cpu swap.


----------



## Ithanul

Larky_the_mauler said:


> If only Microsoft could write a thread scheduler.


Yeah, Microsoft needs to work on it, but it's not so bad that one can't get decent fps.
I game on a 1950X with a 1080Ti at 1440p on occasion. I usually get around 80-90fps in Monster Hunter World (though I turn that stupid fog and blur off in the settings).


----------



## Larky_the_mauler

Ithanul said:


> Yeah Microsoft needs to work on it, but not super bad to the point one can't get some decent fps out.
> I game on a 1950X with a 1080Ti at 1440p on occasions. Usually get around 80-90fps on Monster Hunter World (though, I turn that stupid fog and blur off in the settings).


The guy I replied to was talking about the NUMA configuration on the 2990WX: because Microsoft's trash scheduler isn't NUMA-aware, there are cases where it's slower than a 2950X. He's blaming it on the CPU, even though on an OS with an actual scheduler (Linux) it doesn't have this issue.
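The scheduling problem being described can be illustrated with a toy placement function. The topology below loosely mimics a four-die 2990WX, where only two dies have local memory controllers; everything here is a simplified sketch, not how any real scheduler is implemented:

```python
# Toy NUMA-aware placement: prefer an idle core on the same node as the
# thread's memory; fall back to a remote core (slower memory access).
TOPOLOGY = {0: range(0, 16), 1: range(16, 32),
            2: range(32, 48), 3: range(48, 64)}

def pick_core(mem_node: int, idle_cores: set) -> int:
    """Return an idle core, preferring the thread's memory node."""
    local = [c for c in TOPOLOGY[mem_node] if c in idle_cores]
    return local[0] if local else min(idle_cores)

print(pick_core(2, {5, 40}))   # core 40 is local to node 2 -> 40
print(pick_core(2, {5}))       # no local core idle -> remote core 5
```

A scheduler without that first preference bounces game threads onto cores whose memory is a hop away, which is roughly the behavior being complained about here.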


----------



## Ithanul

Larky_the_mauler said:


> The guy I replied to was talking about NUMA configuration on the 2990WX, which because Microsoft's trash scheduler isn't NUMA aware causes cases where it's slower than a 2950x. He's blaming it on the CPU though even though when you use an OS with an actual scheduler (Linux) it doesn't have this issue.


NUMA applies to the others as well, but yeah, Linux does handle it better.


----------



## mouacyk

AdoredTV is not the "friend", right?


----------



## steelbom

This could be my first switch to AMD. The 2700X doesn't look too bad even against the i9 9900K... anyone know when we'll find out about these?


----------



## EniGma1987

steelbom said:


> This could be my first switch to AMD. The 2700X doesn't look too bad even against the i9 9900K... anyone know when we'll find out about these?



CES, in about a month. AMD has a keynote address specifically about 7nm and the advancements they are making. If they do things as normal, they will show off both their new Ryzen 2 CPUs and Navi GPUs. We may get some announcements about general core counts as well as general availability, but I doubt we will see hard numbers or specific pricing. More of a: "7nm is awesome, we have awesome CPUs, look, we actually have some real ones that are done and on stage with us, they are awesome, they have even more cores so they are awesome. They are available Q3 2019, so wait for them and don't buy Intel. Thanks for coming."


----------



## LancerVI

EniGma1987 said:


> CES, in about a month. AMD has a keynote address specifically about 7nm and the advancements they are doing. If they do things like normal they will show off both their new Ryzen2 CPUs and Navi GPUs. We may get some announcements about general core counts as well as general availability, but I doubt we will see hard numbers on things and specific pricing. More of a: "7nm is awesome, we have awesome CPUs, look we actually have some real ones that are done and on stage with us, they are awesome, they have even more cores so they are awesome. They are available Q3 2019 so wait for them and dont buy Intel. Thanks for coming"



LOL

Sounds about right. 

I have to admit; I'm still looking forward to it.


----------



## Majin SSJ Eric

You really think it'll be Q3? I honestly don't have any idea when we will actually get a Zen 2 release, but I thought it would be more like around the time Ryzen originally launched, or Ryzen 2 last year.


----------



## ibb27

Majin SSJ Eric said:


> You really think it'll be Q3? I don't honestly have any idea when we will actually get a Zen 2 release but I thought it would be more like around the time Ryzen originally launched or Ryzen 2 last year?


Lisa Su said that Zen 2 desktop CPUs are coming after EPYC 7nm CPUs. I'm not expecting Ryzen 3k before Computex (beginning of June) next year.


----------



## andrews2547

momonz said:


> Still doesn't hide the fact that AdoredTV has his own multiple sources. He simply mention that it's a coincidence someone at reddit has the same very similar leak.
> 
> Basically you won't believe AdoredTV who has a good history (not perfect) about leaks than some random redditor who suddenly did a 360.



AdoredTV's correct leaks have always been very vague unless close to the time of release.


----------



## CynicalUnicorn

andrews2547 said:


> AdoredTV's correct leaks have always been very vague unless close to the time of release.


That's not true. Sometimes, like with Rome's use of chiplets, he gets it right. I wonder where he gets his information from. Perhaps from others in the know with a proven track record. It's too bad we'll never find out who that is.


----------



## Ultracarpet

CynicalUnicorn said:


> That's not true. Sometimes, like with Rome's use of chiplets, he gets it right. I wonder where he gets his information from. Perhaps from others in the know with a proven track record. It's too bad we'll never find out who that is.


IIRC he was not even on the right planet with his Polaris predictions. Mind you, that was less about leaks and more about him making guesses and assumptions based on what AMD had provided. 

I'm really skeptical of this, ESPECIALLY the pricing. There is no way in hell a 6-core, 12-thread processor is going to cost $99. That is less than what their 4c/4t CPUs are going for right now.

My guess is that max OC is going to be around 4.5-4.8 GHz, the core counts are going to top out at 12 cores, and every SKU will be adjusted one tier down in pricing (8c/16t at the 6c/12t price, etc.).


----------



## Particle

Larky_the_mauler said:


> The guy I replied to was talking about NUMA configuration on the 2990WX, which, because Microsoft's trash scheduler isn't NUMA aware, causes cases where it's slower than a 2950X. He's blaming it on the CPU, though, even though when you use an OS with an actual scheduler (Linux) it doesn't have this issue.


The Windows thread scheduler is certainly NUMA aware and has been for a very long time. It has other deficiencies when compared to the thread scheduler used by Linux, but it would be more accurate to call it less advanced rather than trash. It's not actively or purposefully bad; it just hasn't been made as smart as the other.

I read this article from 2009 a while back. As I recall, it was written by Microsoft developers who work on the thread scheduler. They discuss many of the performance challenges and the evolution of the Windows thread scheduler over time. There is a pretty lengthy section in there on NUMA and how Windows deals with it. Things have certainly changed a lot in the nine years since the article was written, but it should give a good idea of how this isn't an area Microsoft has simply ignored; even that long ago they were giving it a lot of attention.

https://www.microsoftpressstore.com/articles/article.aspx?p=2233328&seqNum=7
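A toy sketch of the distinction being drawn here, NUMA-aware placement versus naive least-loaded placement; this is purely illustrative and not the actual Windows or Linux algorithm:

```python
# Toy model of what "NUMA awareness" means for a scheduler: prefer cores on
# the node that holds the thread's memory, instead of just the least-loaded
# core anywhere. Illustration of the concept only, not either OS's real code.
def numa_aware_pick(thread_home_node, load_per_core, core_to_node):
    # Cores local to the thread's home NUMA node, if any exist.
    local = [c for c, n in core_to_node.items() if n == thread_home_node]
    candidates = local or list(core_to_node)
    # Among the preferred cores, pick the least loaded one.
    return min(candidates, key=lambda c: load_per_core[c])

core_to_node = {0: 0, 1: 0, 2: 1, 3: 1}   # cores 0-1 on node 0, cores 2-3 on node 1
load = {0: 5, 1: 3, 2: 0, 3: 0}
# Cores on node 1 are idle, but a thread whose memory lives on node 0
# stays on node 0 to avoid remote-memory latency:
print(numa_aware_pick(0, load, core_to_node))   # → 1
```

A naive scheduler would pick idle core 2 and pay remote-memory latency on every access, which is exactly the 2990WX pathology being debated above.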


----------



## ryan92084

CynicalUnicorn said:


> andrews2547 said:
> 
> 
> 
> AdoredTV's correct leaks have always been very vague unless close to the time of release.
> 
> 
> 
> That's not true. Sometimes, like with Rome's use of chiplets, he gets it right. I wonder where he gets his information from. Perhaps from others in the know with a proven track record. It's too bad we'll never find out who that is.
Click to expand...

It's been a long time since I've watched it, but weren't his chiplet predictions based on his personal speculation from reading patents, tech papers, etc., rather than from a mystery leaker?


----------



## ozlay

I will be waiting for Threadripper. It seems like AM4 just doesn't have enough lanes for me.

The Ryzen 2600 is currently $160 on Amazon, so I can see them offering a six-core at $99. 

AMD just needs to stop making stuff that never comes out, like the 2700E, 2600E, 2500X, 2300X, 2400GE and 2200GE.


----------



## CynicalUnicorn

ryan92084 said:


> It's been a long time since I've watched it, but weren't his chiplet predictions based on his personal speculation from reading patents, tech papers, etc., rather than from a mystery leaker?


Perhaps, but Charlie D. beat him to it, and as far as I can tell Semi|Accurate is quite reliable. If I had a YouTube career doing this kind of stuff, I'd absolutely drop $1000/year as a business expense on an S|A subscription and talk about those leaks to build my credibility. Is Jim doing this? I dunno. I would.

But yeah, the chiplet idea has been floating around for a while :thumb:




ozlay said:


> I will be waiting for Threadripper. It seems like AM4 just doesn't have enough lanes for me.
> 
> The Ryzen 2600 is currently $160 on Amazon, so I can see them offering a six-core at $99.
> 
> AMD just needs to stop making stuff that never comes out, like the 2700E, 2600E, 2500X, 2300X, 2400GE and 2200GE.


It has four more lanes than your Kaby Lake system.  What do you need them for? With AM4, you can have two PCIe x8 slots and a single PCIe x4 slot hooked up to the CPU directly, plus a few extras routed from the chipset. That's more than enough for two GPUs, NVMe storage, and 10Gb networking.


----------



## ozlay

CynicalUnicorn said:


> It has four more lanes than your Kaby Lake system.  What do you need them for? With AM4, you can have two PCIe x8 slots and a single PCIe x4 slot hooked up to the CPU directly, plus a few extras routed from the chipset. That's more than enough for two GPUs, NVMe storage, and 10Gb networking.


I have 2 NVMe drives and 2 GPUs, so I'm using 24 lanes. My updated sig doesn't show up. Unless Ryzen 2 adds faster chipset lanes, I'm not sure it will work for me. Plus I would like to add another NVMe drive in the future.
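His lane math can be sketched out; the per-device lane widths below are typical assumptions for an AM4 build, not figures from the post:

```python
# Rough AM4 lane budget (assumed: 16 CPU lanes for graphics + 4 for NVMe,
# plus a separate x4 link to the chipset). Device widths are illustrative.
cpu_direct_lanes = 16 + 4            # lanes the CPU exposes for GPUs + one NVMe
devices = {
    "gpu_1": 8,                      # two GPUs split the x16 as x8/x8
    "gpu_2": 8,
    "nvme_1": 4,                     # on the CPU's dedicated x4
    "nvme_2": 4,                     # has to hang off the chipset instead
}
demand = sum(devices.values())
print(demand)                        # → 24 lanes of demand
print(demand - cpu_direct_lanes)     # → 4 lanes that must come from the chipset
```

So the 24-lane figure checks out, and it shows why a third NVMe drive ends up sharing chipset bandwidth rather than getting direct CPU lanes.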


----------



## Majin SSJ Eric

Ultracarpet said:


> IIRC he was not even on the right planet with his Polaris predictions. Mind you, that was less about leaks and more about him making guesses and assumptions based on what AMD had provided.
> 
> I'm really skeptical of this, ESPECIALLY the pricing. There is no way in hell a 6-core, 12-thread processor is going to cost $99. That is less than what their 4c/4t CPUs are going for right now.
> 
> My guess is that max OC is going to be around 4.5-4.8 GHz, the core counts are going to top out at 12 cores, and every SKU will be adjusted one tier down in pricing (8c/16t at the 6c/12t price, etc.).


For the most part I think your predictions are certainly reasonable (though I am personally more optimistic about clock speeds, as I really think Zen 2 on 7nm is going to OC much better than Zen 1, perhaps in the 4.8-5.1 GHz range). I will quibble with your assertion that there's no way they will release a 6C/12T CPU for $99. Not that I think they will actually deliver on that sort of pricing, but I don't believe they are just going to keep their SKU structure the same as it is now, so comparisons against current hardware will become irrelevant. 4C/4T parts will not exist on Ryzen 3, and I doubt that even 4C/8T will be enough of a shift in their SKU lineup for what they are planning. 

Tbh I don't really care much about entry-level Ryzen 3 though, and I absolutely believe that high-end Ryzen will be ditching 8C/16T and going to at least 12C/24T or more. Intel is in a bad spot right now and simply cannot match the core and thread counts that AMD can put out there, and I think AMD smells blood in the water. They will want to release an entire lineup of CPUs (from entry level to high end) that Intel simply CANNOT compete with (at least in core count), as that is the best way they can market themselves going forward. And Zen 2 (with what I suspect will be significant improvements to IPC and clock speed) gives them a great platform to release something truly revolutionary, rather than a simple evolution that would be seen as just more of the same.

Of course all of this is just my speculation, though, and we'll have to wait and see what they actually end up doing when all is said and done.


----------



## Cursedqt

Well, I'm rooting for AMD for 2019 and 2020.

On a side note why is the news section filled with useless gaming related news?


----------



## Jspinks020

For sure. I'll probably pick up the 3600X and get it to 4.5 GHz. That is a monster....


----------



## Catscratch

Aussiejuggalo said:


> If true AMD could take a good chunk of the market seeing Intel has crap all and might not have crap all till 2020.
> 
> 
> But... AMD needs to kick the motherboard manufacturers in the ass. We have a handful of semi-decent boards, most of them the stupidly expensive high-end ATX; mATX and ITX don't really have much. Sure, some have nice features, but the power delivery, and the cooling of that power delivery, on most boards is laughable.





Imouto said:


> What AMD needs is to make reference motherboards.


I still couldn't upgrade due to ridonculous RAM prices, especially in my country. If, this round, AMD CPUs can make love with RAM out of the box, it'll be their best launch since what.... 

PS: as long as I can't throw any RAM at any mobo, it's a bad product from both sides. No one should accept "compatibility" issues in this day and age.


----------



## Jspinks020

Catscratch said:


> I still couldn't upgrade due to ridonculous RAM prices, especially in my country. If, this round, AMD CPUs can make love with RAM out of the box, it'll be their best launch since what....
> 
> PS: as long as I can't throw any RAM at any mobo, it's a bad product from both sides. No one should accept "compatibility" issues in this day and age.


Like they said, they're gonna win on sheer core lineup with all that. There's no issue with RAM anymore; it should run most decent kits with ease. It looks good on their end, it does.


----------



## ToTheSun!

ozlay said:


> I have 2 NVMe drives and 2 GPUs, so I'm using 24 lanes. My updated sig doesn't show up. Unless Ryzen 2 adds faster chipset lanes, I'm not sure it will work for me. Plus I would like to add another NVMe drive in the future.


You have to disable and re-enable the rig-in-sig feature in the UserCP signature tab for it to show the updates you have made in Rigbuilder.


----------



## Jspinks020

ToTheSun! said:


> You have to disable and re-enable the rig-in-sig feature in the UserCP signature tab for it to show the updates you have made in Rigbuilder.


Well, 2nd gen... they talked about Infinity Fabric improvements and such... I don't know, stuff to further reduce latencies and input lag, and a smoother experience. I have to say it's pretty smooth at 4.3.


----------



## KyadCK

Catscratch said:


> I still couldn't upgrade due to ridonculous RAM prices, especially in my country. If, this round, AMD CPUs can make love with RAM out of the box, it'll be their best launch since what....
> 
> PS: as long as I can't throw any RAM at any mobo, it's a bad product from both sides. No one should accept "compatibility" issues in this day and age.


Any ram that fits in the socket will work at JEDEC just fine. There is no compatibility issue.

If you are expecting any RAM you can find in the world to work at its XMP profile without it being on the board's and CPU's QVL, you're in for a bad day no matter what CPU you buy.


----------



## miklkit

Catscratch said:


> I still couldn't upgrade due to ridonculous RAM prices, especially in my country. If, this round, AMD CPUs can make love with RAM out of the box, it'll be their best launch since what....
> 
> PS: as long as I can't throw any RAM at any mobo, it's a bad product from both sides. No one should accept "compatibility" issues in this day and age.





^^^^This.


I'm unhappy with the motherboard manufacturers and hope the memory manufacturers get sued for their price gouging. If I can just drop a Zen 2 chip onto my X370 mobo, it will save me $200-$700 depending on how much new stuff I would otherwise buy. If I do end up getting another mobo, it will probably be another Biostar, just to thumb my nose at the big boys.


----------



## Jspinks020

miklkit said:


> ^^^^This.
> 
> 
> I'm unhappy with the motherboard manufacturers and hope the memory manufacturers get sued for their price gouging. If I can just drop a Zen 2 chip onto my X370 mobo, it will save me $200-$700 depending on how much new stuff I would otherwise buy. If I do end up getting another mobo, it will probably be another Biostar, just to thumb my nose at the big boys.


Well, it's a good time to maybe buy an X470. Looking at third-gen chips next year, it should be drop-in and go, I guess, if it plays out that way.


----------



## ozlay

Jspinks020 said:


> Well, it's a good time to maybe buy an X470. Looking at third-gen chips next year, it should be drop-in and go, I guess, if it plays out that way.


Now why would you upgrade from an X370 to an X470 to run a next-gen CPU, if a newer X570 is coming out with that next-gen CPU? If nothing is wrong with the X370, it would be better to wait for the X570, or wait for the prices to drop on the X470.


----------



## Majin SSJ Eric

Agreed. If you have a good X370 board already there isn't any reason to upgrade to X470 unless it has some feature you really want that you don't currently have. That said, if this leak is correct then I suspect those flagship 16C/32T chips would require a new board anyway because of the power requirements...


----------



## steelbom

Wait, if I switch to an AMD motherboard do I need to replace my RAM??


----------



## Particle

steelbom said:


> Wait, if I switch to an AMD motherboard do I need to replace my RAM??


If you're already on a DDR4 based Intel platform, no. You might get better performance if you do, but it's not strictly necessary.


----------



## mohit9206

Why would you want to change your mobo? The whole point of a Zen CPU is that you don't have to change the mobo. Even a B350 should support a 2020 Zen 3 CPU.


----------



## white owl

mohit9206 said:


> Why would you want to change your mobo? The whole point of a Zen CPU is that you don't have to change the mobo. Even a B350 should support a 2020 Zen 3 CPU.


Idk, little things like a better VRM, extended RAM and CPU overclocking, native CPU support, etc.
The point of Zen is to be a good CPU; the ability to reuse a motherboard is a perk associated with AMD, but just because a CPU will work in a board doesn't mean you should do it or that it's ideal. Sure, a B350 board might have support for an 8+ core 3000-series CPU, but if the B350 board has a weaker VRM, you're not going to have much fun overclocking your new CPU. It works well enough now because more vcore doesn't give you more speed in most cases. Imagine how much vcore you'd throw at a 2700X if it scaled all the way up to 5 GHz. There are also loads of H/Z370 boards that will work with a 9900K in them, but that doesn't mean it's a great idea; hell, if you're overclocking you'd be chucking in more vcore for less speed than you'd get with a better board.

The point of socket longevity isn't to use a motherboard as long as possible, it's so you have the option to do so if you need to or want to. That doesn't mean that boards haven't gotten any better.


----------



## steelbom

Particle said:


> If you're already on a DDR4 based Intel platform, no. You might get better performance if you do, but it's not strictly necessary.


Ahh, I see. Well, that's good -- I don't want to be buying new RAM as well as a mobo and CPU.


----------



## Jspinks020

white owl said:


> Idk, little things like a better VRM, extended RAM and CPU overclocking, native CPU support, etc.
> The point of Zen is to be a good CPU; the ability to reuse a motherboard is a perk associated with AMD, but just because a CPU will work in a board doesn't mean you should do it or that it's ideal. Sure, a B350 board might have support for an 8+ core 3000-series CPU, but if the B350 board has a weaker VRM, you're not going to have much fun overclocking your new CPU. It works well enough now because more vcore doesn't give you more speed in most cases. Imagine how much vcore you'd throw at a 2700X if it scaled all the way up to 5 GHz. There are also loads of H/Z370 boards that will work with a 9900K in them, but that doesn't mean it's a great idea; hell, if you're overclocking you'd be chucking in more vcore for less speed than you'd get with a better board.
> 
> The point of socket longevity isn't to use a motherboard as long as possible, it's so you have the option to do so if you need to or want to. That doesn't mean that boards haven't gotten any better.


That's why I like them... they said, and did, plan it up to 2020. Yeah, maybe some boards won't be able to take full advantage of overclocking higher-core-count CPUs, but at least you'd have the option of a better replacement chip, and they will probably have an attractive price. That said, I feel 2nd gen is powerful, but yeah, if we could drop in 5 GHz-capable CPUs, there is nothing wrong with that.


----------



## ozlay

I wonder if the chipset lanes on the B550 and X570 will be upgraded to Gen3/Gen4. My guess would be Gen3 if they are moving up to Gen4 CPU lanes.

Then AM4 might have enough PCIe lanes for me to choose it over Threadripper.


----------



## miklkit

The motherboards are not getting better and are still only adequate at best. The only reason to get a new motherboard is if you want the improved AMD chipset.


----------



## ozlay

miklkit said:


> The motherboards are not getting better and are still only adequate at best. The only reason to get a new motherboard is if you want the improved AMD chipset.


Yeah, if only DFI would come back and show the others how to make an AMD motherboard.


----------



## pony-tail

miklkit said:


> The motherboards are not getting better and are still only adequate at best. The only reason to get a new motherboard is if you want the improved AMD chipset.


I have an Asus Crosshair VI Extreme in one of my machines. I have not replaced it because I have not, as yet, found a better board. That said, I believe they did not sell well.


----------



## Lisanderus

https://www.computeruniverse.net/en/amd-ryzen-7-2700-8-kern-octa-core-cpu-mit-320-ghz


----------



## EniGma1987

Lisanderus said:


> https://www.computeruniverse.net/en/amd-ryzen-7-2700-8-kern-octa-core-cpu-mit-320-ghz





Ok? 2700s have been out for quite a while. This thread is awaiting the release of the "3700".


----------



## Lisanderus

EniGma1987 said:


> Ok? 2700s have been out for quite a while. This thread is awaiting the release of the "3700".


You misunderstood. This store carries the regular 2700, of course. But this listing, which appeared very recently, seems like a kind of placeholder, and the date (an announcement in January at CES) lines up with multiple leaks.


----------



## Raghar

Lisanderus said:


> You misunderstood. This store carries the regular 2700, of course. But this listing, which appeared very recently, seems like a kind of placeholder, and the date (an announcement in January at CES) lines up with multiple leaks.


Likely just a tray CPU.


----------



## LancerVI

Lisanderus said:


> https://www.computeruniverse.net/en/amd-ryzen-7-2700-8-kern-octa-core-cpu-mit-320-ghz


What exactly are you posting? I'm confused.


EDIT: Sorry, Didn't see the reply.


----------



## tyvar

The price on that thing is crazy, so it's gotta be a placeholder too, unless AMD really is sitting on something ridiculous. 

Anyway, between the price and the January availability notice, it seems that retailer is going to be offering something new.


My big problem with all these rumors is we haven't seen a single leak on the motherboard side, or benchmark leaks from people testing machines at vendors, etc. Usually as we get close to release we see the occasional benchmark show up in various testing-suite databases, even if it is just from ES samples.

If there is a release happening as soon as next month, it's the tightest ship AMD has run, information-wise, in several years. 

On that note, what graphics card was it they released back in the day where we had no idea anything was coming until the product was practically hitting distributors?


----------



## white owl

Pretty sure it's just the announcement next month, not a hard launch.


----------



## Raghar

Also, there is something called Ryzen Professional coming. Not for 3x the price. But that might just be a fancy way to name a tray CPU.


----------



## CynicalUnicorn

Raghar said:


> Also, there is something called Ryzen Professional coming. Not for 3x the price. But that might just be a fancy way to name a tray CPU.


The Ryzen 1000 series was cloned as the Ryzen PRO 1000 series, so probably something like that. I don't recall the MSRP changing (if it did, it was maybe a few dollars extra), and it only introduced features akin to Intel's vPro. It's the kind of CPU that would be used in PCs on a corporate network with fun remote management features so your fascist IT department can stop you from opening random DOCX files or something. :thumb:


----------



## reqq

Where are all the leaks?? This must mean a summer or autumn 2019 release????


----------



## dubldwn

tyvar said:


> The price on that thing is crazy, so it's gotta be a placeholder too, unless AMD really is sitting on something ridiculous.
> 
> Anyway, between the price and the January availability notice, it seems that retailer is going to be offering something new.
> 
> 
> My big problem with all these rumors is we haven't seen a single leak on the motherboard side, or benchmark leaks from people testing machines at vendors, etc. Usually as we get close to release we see the occasional benchmark show up in various testing-suite databases, even if it is just from ES samples.
> 
> If there is a release happening as soon as next month, it's the tightest ship AMD has run, information-wise, in several years.
> 
> On that note, what graphics card was it they released back in the day where we had no idea anything was coming until the product was practically hitting distributors?


Can only speak for myself, but I was totally caught off guard by the 7970. First time I heard about it was the Anand review, although I don't think it was actually available until a couple weeks later.


----------



## Jspinks020

Until next year then, with the same stuff. Hope your mobo doesn't die in the meantime.


----------



## guttheslayer

AdoredTV came out with another video.. This is definitely worth a watch!


----------



## konspiracy

guttheslayer said:


> AdoredTV came out with another video.. This is definitely worth a watch!
> 
> 
> 
> https://www.youtube.com/watch?v=ReYUJXHqESk


I think you should start a new thread for this video; this is fantastic.


----------



## ozlay

Yeah, the I/O die is just too large to be worth making on 7nm for EPYC and Threadripper. That might be why they chose 14nm for them. The I/O die for Ryzen, however, would be much smaller and could be made on 7nm.


----------



## ryan92084

konspiracy said:


> guttheslayer said:
> 
> 
> 
> AdoredTV came out with another video.. This is definitely worth a watch!
> 
> 
> 
> https://www.youtube.com/watch?v=ReYUJXHqESk
> 
> 
> 
> I think you should start a new thread due to this video, this is fantastic.
Click to expand...

It's already been discussed here for the last 8 pages...


----------



## tyvar

Surprised this hasn't been posted here, but:

https://twitter.com/TUM_APISAK/status/1077025927203217408 and
https://twitter.com/i/web/status/1077041301210591232


Somebody spotted an interesting AMD identifier code in the 3DMark database. Various people have tried to decipher it, and the consensus floating around is that it looks like an ES sample fixed at 3.7 GHz, but the important part to note is that it is indeed an 8-core, 65-watt part, so that part of the rumors looks like it's happening.


----------



## EniGma1987

tyvar said:


> Surprised this hasn't been posted here, but:
> 
> https://twitter.com/TUM_APISAK/status/1077025927203217408 and
> https://twitter.com/i/web/status/1077041301210591232
> 
> 
> Somebody spotted an interesting AMD identifier code in the 3DMark database. Various people have tried to decipher it, and the consensus floating around is that it looks like an ES sample fixed at 3.7 GHz, but the important part to note is that it is indeed an 8-core, 65-watt part, so that part of the rumors looks like it's happening.


The info they are throwing out for "decoding" it seems entirely made up to me. The part number, if it is one, is nothing close to other Ryzen model numbers, so you can't match anything. The main things that match the layout of older Ryzen processors are the D as the 2nd digit and the M as 5th from the end. The "0108" that people are pulling the supposed 8 cores from is normally where the model number would be, for instance 170X or 2700 for the R7 1700X and R7 2700 CPUs from those generations. So suddenly changing the model number to a core count makes no sense. Also, the first digit is normally a letter for the "generation", not a number like this has. It does make some sense to use a number for ES chips and letters for production chips, but deciding that 5 = engineering sample generation 4? lol ok.


----------



## tyvar

EniGma1987 said:


> The info they are throwing out for "decoding" it seems entirely made up to me. The part number, if it is one, is nothing close to other Ryzen model numbers, so you can't match anything. The main things that match the layout of older Ryzen processors are the D as the 2nd digit and the M as 5th from the end. The "0108" that people are pulling the supposed 8 cores from is normally where the model number would be, for instance 170X or 2700 for the R7 1700X and R7 2700 CPUs from those generations. So suddenly changing the model number to a core count makes no sense. Also, the first digit is normally a letter for the "generation", not a number like this has. It does make some sense to use a number for ES chips and letters for production chips, but deciding that 5 = engineering sample generation 4? lol ok.


Uh, there is a lot more in common with current model numbers than that:

http://www.moepc.net/content/uploadfile/201811/8d7b1542898939.png

To compare:

5D0108BBM8SH2_37 part from the database
YD2600BBM6IAF 2600 model number
YD2700BBM88AF 2700 model number

There were Ryzen ES samples floating around back in the day with model numbers starting with 1 and 2, and evidently 5. 1 and 2 were known ES chips (ES0 and ES1), so it seems reasonable that 5 would be also, and that's how you end up with 5 equaling ES4. Throw out the next 4 numbers, because that would be the frequency or model number, which an ES might not be assigned; then you're left with the BB code, which is actually currently used on 65-watt parts, see the 2600 and 2700. After that is the M, which indicates the package, AM4. It's after that you get the core count number, 8; again, on the current 2600 you have a 6 there and an 8 on the 2700. I'm not sure of any parts that have an S code, because afaik no current parts have that cache arrangement of 8x512KB L2 + 32MB L3, as the chart claims, but 8x512KB L2 is what we would expect to see on an 8-core proc.

The notable things are the increase in L3, and possibly aiming at a higher base clock speed if it is indeed running at a fixed 3.7.
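To make the claimed layout concrete, here is a small decoder following the field boundaries described above. The field meanings are this thread's speculation, not an official AMD OPN specification:

```python
# Hypothetical decoder for the speculated AMD OPN layout discussed above.
# Field boundaries follow the forum analysis, not any official AMD document.
def decode_opn(opn: str) -> dict:
    code, _, freq = opn.partition("_")      # "_37" suffix = fixed clock, if present
    return {
        "prefix": code[0],          # '5' = speculated ES revision, 'Y' = production
        "segment": code[1],         # 'D' = desktop
        "model": code[2:6],         # model/frequency field (meaningless on an ES)
        "tdp_code": code[6:8],      # 'BB' appears on 65 W parts like the 2600/2700
        "package": code[8],         # 'M' = AM4
        "cores": int(code[9]),      # core count digit
        "cache_code": code[10:],    # cache arrangement code
        "base_ghz": int(freq) / 10 if freq else None,
    }

sample = decode_opn("5D0108BBM8SH2_37")
print(sample["cores"], sample["tdp_code"], sample["base_ghz"])   # → 8 BB 3.7
```

Run against the retail 2700's OPN (YD2700BBM88AF), the same slicing yields model "2700", core digit 8, and TDP code "BB", which is the consistency the post is pointing at.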


----------



## pony-tail

Jim (AdoredTV) seems to have gone very quiet lately.
Nothing new at all.


----------



## guttheslayer

pony-tail said:


> Jim (AdoredTV) seems to have gone very quiet lately.
> Nothing new at all.


What do you expect him to do, make a video about every comment that disagrees with him?


----------



## ozlay

If the rumor of the Anniversary Edition is true, hopefully it comes with a copper IHS.

Something special that sets it apart cosmetically from the others.


----------



## bigjdubb

As long as the IHS doesn't cost more. I'd hate to pay extra for something that I will see a total of 5 minutes the entire time I own it.


----------



## ZealotKi11er

My prediction is: Epyc stuff, APU 12nm, Vega II. Zen 2 Desktop probably ComputeX.


----------



## bucdan

bigjdubb said:


> As long as the IHS doesn't cost more. I'd hate to pay extra for something that I will see a total of 5 minutes the entire time I own it.


I always wondered: why are they using an IHS now? The bare CPU was always nice back in the day.


----------



## white owl

bucdan said:


> I always wondered: why are they using an IHS now? The bare CPU was always nice back in the day.


I'd imagine dies got smaller and thinner, and thus more fragile. All it takes is a company like Cooler Master to give you a bad name. Imagine buying a CPU and a cooler, installing them, and it turns out the die cracked. Cooler Master says user error, and Intel says the cooler broke it, so user error either way. Can you imagine being Intel and having to test every cooler out there just to see if it will crack a die? Or crush the pins?
An IHS is just easier for everyone, and better for most people.


----------



## bucdan

white owl said:


> I'd imagine dies got smaller and thinner, and thus more fragile. All it takes is a company like Cooler Master to give you a bad name. Imagine buying a CPU and a cooler, installing them, and it turns out the die cracked. Cooler Master says user error, and Intel says the cooler broke it, so user error either way. Can you imagine being Intel and having to test every cooler out there just to see if it will crack a die? Or crush the pins?
> An IHS is just easier for everyone, and better for most people.


Hmm, good point. So it's basically an idiot shield. In that case, AMD should start making it with copper.


----------



## white owl

It's no reflection on the end user, I don't think; you would expect, as a consumer, to buy a compatible CPU and cooler and be able to install them as instructed without issue. Since some cooler manufacturers will ignore Intel's maximum die pressure and just do what's easier or cheaper, it's probably best to have the IHS, unless people are fine breaking CPUs with cheap coolers and getting no compensation from Intel, since Intel isn't at fault.



The IHS is already copper. If people want something special done to the IHS, I'd request one that's similar to Rockit's IHS: pure copper machined into shape as opposed to cast/punched into shape.


----------



## Vidati

New video from AdoredTV. Can't argue with that logic; we should see something about the new desktop CPUs and probably GPUs. 

Last sentence kinda makes sense to me.


----------



## LancerVI

Vidati said:


> https://www.youtube.com/watch?v=MG-onUm__c8&t=0s
> 
> New video from AdoredTV. Can't argue with that logic; we should see something about the new desktop CPUs and probably GPUs.
> 
> Last sentence kinda makes sense to me.




EDIT: Disregard, Fixed


----------



## KyadCK

bucdan said:


> Hmm, good point. So it's basically an idiot shield. In that case, AMD should start making it with copper.


An IHS is usually nickel-plated copper, with a small gold-plated area on the inside for the solder. You can lap (sand down) the IHS to a mirror copper finish if you want to.


----------



## Diffident

Vidati said:


> https://www.youtube.com/watch?v=MG-onUm__c8&t=0s
> 
> New video from AdoredTV. Can't argue with that logic; we should see something about the new desktop CPUs and probably GPUs.



It's still going to be a while before Navi GPUs launch. There are no Navi IDs in the Linux kernel as of yet, whereas Vega 10 and Vega 20 IDs started being added 6 months ago.

Linux kernel development cycles last about 2 months, with a 2-week merge window, which just opened for kernel 4.21 last week. If the IDs aren't added within the current merge window, it will be 4 months at the earliest before we see Navi. This is the pattern AMD has followed since their drivers went open source.
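The timing argument above is simple date arithmetic; here is a sketch, where the window-open date and the exact cycle length are illustrative assumptions rather than actual kernel schedule data:

```python
from datetime import date, timedelta

# ~2-month kernel release cycle: a 2-week merge window plus ~7 weeks of RCs.
CYCLE = timedelta(weeks=9)

def earliest_id_kernel(window_open: date, ids_in_this_window: bool) -> date:
    """Earliest release of a kernel carrying newly merged device IDs."""
    if ids_in_this_window:
        return window_open + CYCLE       # ships with the kernel now in development
    return window_open + 2 * CYCLE       # must wait out this cycle plus the next

open_421 = date(2018, 12, 23)            # assumed opening of the 4.21 merge window
print(earliest_id_kernel(open_421, ids_in_this_window=False))   # → 2019-04-28
```

Missing one merge window therefore pushes kernel-side support out roughly four months, which is the "4 months at the earliest" figure above.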


----------



## pony-tail

If the "8c/16t with IGPU that has 20CU for $200" is real, I will have one for my 7-litre Silverstone Milo ML09 rig. It currently has a Ryzen 5 2400G; 8 cores just have to make Netflix run faster, and just think how it will improve Solitaire and Mahjong.
Jokes aside, I want one at that price!


----------



## dantoddd

Any idea what Navi will perform like?


----------



## guttheslayer

dantoddd said:


> Any idea what Navi will perform like?


Jim predicted its RTX2070 level or Vega 64+15%. What AMD strives to promote is a comparable system of i9-9900K with RTX 2070 at half the price, hence its a one-stone kill 2 birds competition against both NV and intel.


I am not going to argue whether Jim's leak is reliable this time, but I will not sway from my stand that 16C/32T is coming to the AM4+ platform at the ~$499 price range.


----------



## dantoddd

guttheslayer said:


> Jim predicted it's RTX 2070 level, or Vega 64 +15%. What AMD aims to promote is a system comparable to an i9-9900K with an RTX 2070 at half the price, hence it kills two birds with one stone, competing against both NV and Intel.
> 
> 
> I am not going to argue whether Jim's leak is reliable this time, but I will not sway from my stand that 16C/32T is coming to the AM4+ platform at the ~$499 price range.


Well that's a bit disappointing. I was expecting them to come close to 2080 performance.


----------



## miklkit

Indeed. AMD users have been lapping their cpus for years. I have several.


----------



## Aenra

guttheslayer said:


> I will not sway from my stand that 16C/32T is coming to AM4+ platform at $499~ price range.


Very possible yeah. And you know, i've been thinking on that..

More and more cores are nice for those that need them, but they're introduced at the cost of frequency and extra latency; not sure if you have use (read: real use, not just want) for so many cores, but for those of us that don't?
I doubt anything's changed: same binning process, same hierarchy, same wafer from server chips down to APUs; meaning that if this keeps up, i'll sooner or later be looking at either a waste of my money (partially, for cores i've no use for), or a chip that i know could have clocked a lot higher, but never shall. Because binning 
[do i really want 4 CCXs on a dual memory channel chip? With all this entails? Again, real use.. and real use, for so many cores, entails significantly higher a RAM bandwidth; among other things]

Obviously, O.K., they want to motivate you to spend more, i get that.
I just honestly wish they grasped the dilemma they're introducing and moved on to offering Black editions as well. Or something along those lines anyway.

Cost per se is one factor, sure, but not the only one here.


----------



## Ultracarpet

Aenra said:


> Very possible yeah. And you know, i've been thinking on that..
> 
> More and more cores are nice for those that need them, but they're introduced at the cost of frequency and extra latency; not sure if you have use (read: real use, not just want) for so many cores, but for those of us that don't?
> I doubt anything's changed: same binning process, same hierarchy, same wafer from server chips down to APUs; meaning that if this keeps up, i'll sooner or later be looking at either a waste of my money (partially, for cores i've no use for), or a chip that i know could have clocked a lot higher, but never shall. Because binning
> [do i really want 4 CCXs on a dual memory channel chip? With all this entails? Again, real use.. and real use, for so many cores, entails significantly higher a RAM bandwidth; among other things]
> 
> Obviously, O.K., they want to motivate you to spend more, i get that.
> I just honestly wish they grasped the dilemma they're introducing and moved on to offering Black editions as well. Or something along those lines anyway.
> 
> Cost per se is one factor, sure, but not the only one here.


you could... just buy one with less cores...?


----------



## Aenra

Ultracarpet said:


> you could... just buy one with less cores...?


Why, really! Never occurred to me, thanks!

Do you read before you post? 

*sigh* O.K.. Cheap is one thing, nice when you can get it. Performance however is another; heavily parallelised, all good today; mostly single- or low-core-count focused though? Not really. So put differently: at a time when a "mainstream" (i.e. non-HEDT) rig made from competitor parts can run 4.2 GHz of RAM frequency and 5.2 GHz of CPU frequency? No, i'm not asking for _yet_ more cores. Even more so when i know, know, that had they chosen to reserve some of the binned chips, things would be a lot better, more.. smoothed out in between the various consumer segments, more representative of the market out there. I hope that helps; not always patient with people, but i did try this time, holidays and all.


----------



## bmgjet

Aenra said:


> Why, really! Never occurred to me, thanks!
> 
> Do you read before you post?
> 
> *sigh* O.K.. Cheap is one thing, nice when you can get it. Performance however is another; heavily parallelised, all good today; mostly single- or low-core-count focused though? Not really. So put differently: at a time when a "mainstream" (i.e. non-HEDT) rig made from competitor parts can run 4.2 GHz of RAM frequency and 5.2 GHz of CPU frequency? No, i'm not asking for _yet_ more cores. Even more so when i know, know, that had they chosen to reserve some of the binned chips, things would be a lot better, more.. smoothed out in between the various consumer segments, more representative of the market out there. I hope that helps; not always patient with people, but i did try this time, holidays and all.


You'd get the single-CCX 8-core/16-thread Ryzen if you're worried about latency and clock speed. (Some of the leaks have been pointing towards 4.2 base and 5 GHz XFR, but we will see next week in the CES announcement.)

If you want cores but still game, then you get the 2-CCX 16-core/32-thread Ryzen and run this chip with Game Mode in Ryzen Master.

If you only want cores and don't game, you get the Threadripper.


----------



## Majin SSJ Eric

I'd be perfectly fine with the flagship Zen 2 8C/16T CPU whenever it comes out so long as the new architecture and node addresses Ryzen's only real downside: OCing. If the new 3700X (or whatever they call it) can get to 4.7+ GHz then it will take Zen from an impressive CPU to a truly great CPU. I don't really need more than 8C/16T personally so come on AMD, just get these things clocking 400-500 MHz higher than Ryzen 2 and we'll be all set!


----------



## Raghar

Actually this got me thinking. 14 nm is made with multipatterning. 250 W EUV allows a single exposure instead of four, so even though the substrate might be more expensive than 14 nm, the net result might be a cheaper CPU per wafer even if the die size on the wafer stayed the same. Of course, getting heat out of the die might be a PITA.

Thus this might be one of the last CPUs that's both cheaper and faster than the previous generation.
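The cost argument can be sketched with the standard dies-per-wafer approximation; the die areas and the wafer-cost ratio below are illustrative assumptions, not TSMC or AMD figures:

```python
import math

# Back-of-envelope dies-per-wafer estimate (standard approximation),
# illustrating why a die shrink can offset a pricier wafer.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

big_14nm = dies_per_wafer(213)   # ~213 mm^2 Zeppelin-class die (assumed)
small_7nm = dies_per_wafer(75)   # ~75 mm^2 7nm chiplet (assumed)

# Even if a 7nm wafer cost, say, 1.8x a 14nm wafer, cost per die can drop:
rel_cost_per_die = 1.8 / (small_7nm / big_14nm)
print(big_14nm, small_7nm, round(rel_cost_per_die, 2))
```

With these assumed numbers the smaller die yields roughly three times as many candidates per wafer, so even a substantially pricier wafer can still mean a cheaper chip.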


----------



## Cyrious

https://www.reddit.com/r/Amd/comments/abpzt7/16_core_amd_zen_2_cpu_listed/

16c 32t, 3.9 base 4.7 turbo, leak (maybe) is russian.

As always, take with a grain of salt.


----------



## JackCY

Majin SSJ Eric said:


> I'd be perfectly fine with the flagship Zen 2 8C/16T CPU whenever it comes out so long as the new architecture and node addresses Ryzen's only real downside: OCing. If the new 3700X (or whatever they call it) can get to 4.7+ GHz then it will take Zen from an impressive CPU to a truly great CPU. I don't really need more than 8C/16T personally so come on AMD, just get these things clocking 400-500 MHz higher than Ryzen 2 and we'll be all set!


What if it has higher performance at lower clock? This clock hunting to infinity is nonsense when designing products when you can increase performance otherwise. For a user, yeah sure what can you do but try not fry it and clock it as high as it will go but then why buy products made on a process meant for efficiency not for extreme clocks.

I would be quite happy with higher performance than competition at lower clocks while keeping good value.


----------



## guttheslayer

Majin SSJ Eric said:


> I'd be perfectly fine with the flagship Zen 2 8C/16T CPU whenever it comes out so long as the new architecture and node addresses Ryzen's only real downside: OCing. If the new 3700X (or whatever they call it) can get to 4.7+ GHz then it will take Zen from an impressive CPU to a truly great CPU. I don't really need more than 8C/16T personally so come on AMD, just get these things clocking 400-500 MHz higher than Ryzen 2 and we'll be all set!


I mentioned it because a lot of people here are claiming 16C coming to the AM4 socket (the mainstream platform) is complete BS, and that 16C/32T will be limited to TR3.


I hope AMD will make them eat their words, literally.


----------



## battlenut

I think people not liking the amount of cores is funny. I remember a time on OCN when the thing to say was "we need more cores". Now they have a plethora of cores and still complain. BWAAHAAHAAHAAHAA!


----------



## Raghar

Cyrious said:


> https://www.reddit.com/r/Amd/comments/abpzt7/16_core_amd_zen_2_cpu_listed/
> 
> 16c 32t, 3.9 base 4.7 turbo, leak (maybe) is russian.
> 
> As always, take with a grain of salt.


AMD-3700X.htm is more interesting.

Kinda wonder if these will be affordable, or if AMD will take higher margins until Intel gets its process right, and then drop prices.


----------



## kd5151

Cyrious said:


> https://www.reddit.com/r/Amd/comments/abpzt7/16_core_amd_zen_2_cpu_listed/
> 
> 16c 32t, 3.9 base 4.7 turbo, leak (maybe) is russian.
> 
> As always, take with a grain of salt.


Matches what AdoredTV said in the first place. Hmmmmm.


----------



## keikei

We'll see if this is true or not very soon. If so, me want:


----------



## Rabit

I'm waiting patiently for CES and the AMD conference before making a verdict about this.

I'll personally be happy even if the Ryzen 3xxx series is only 8-core/16-thread with good IPC and high clocks at an affordable price, but if the rumours are true I'll be even happier.


----------



## Aenra

bmgjet said:


> If you want cores .. If you only want ..


I'm not sure what gave you the impression i'm unaware of all this.
I would -in vain- point out once more to either my original post or the follow up explanation.


----------



## LancerVI

Majin SSJ Eric said:


> I'd be perfectly fine with the flagship Zen 2 8C/16T CPU whenever it comes out so long as the new architecture and node addresses Ryzen's only real downside: OCing. If the new 3700X (or whatever they call it) can get to 4.7+ GHz then it will take Zen from an impressive CPU to a truly great CPU. I don't really need more than 8C/16T personally so come on AMD, just get these things clocking 400-500 MHz higher than Ryzen 2 and we'll be all set!



Absolutely agree with this. That's all I'm looking for. Couldn't have said it better.

Here's to hoping these 'leaks' are anywhere near accurate.


----------



## doom26464

A 16-core will be useless on mainstream outside of special use cases and synthetic benchmarks; software is not ready for that many cores.


However, if AMD boosts IPC and clocks, it will mask the useless added cores and make the product look much more appealing.

Adding cores is very easy for AMD, even if it is borderline useless in the mainstream ecosystem. It allows them to stay ahead of the competition.


----------



## Ithanul

LancerVI said:


> Absolutely agree with this. That's all I'm looking for. Couldn't have said it better.
> 
> Here's to hoping these 'leaks' are anywhere near accurate.


Agree as well. It'd be very nice if AMD releases such a chip (I'd be tempted to get a 3600X or 3700X if the price is right). Even though I am happily sitting on a 1950X, it just makes me hope the next-gen TRs will benefit as well.


----------



## Pro3ootector

Didn't AMD mention somewhere that there will be no big frequency boost?


----------



## guttheslayer

keikei said:


> We'll see if this is true or not very soon. If so, me want: https://www.overclock.net/forum/attachment.php?attachmentid=243018&d=1546433289


Doesn't this match completely with what AdoredTV had speculated? In that case, it means Jim is actually the real NDA leaker, lol.


----------



## EniGma1987

dantoddd said:


> Any idea what Navi will perform like?


Probably somewhere close to GTX 1080 level. It's not a high-end GPU, and jumping from 1060- to 1080-level in the midrange segment is a pretty good jump for one generation from them.






Raghar said:


> Actually this got me thinking. 14 nm is made with multipatterning. 250 W EUV allows a single exposure instead of four, so even though the substrate might be more expensive than 14 nm, the net result might be a cheaper CPU per wafer even if the die size on the wafer stayed the same. Of course, getting heat out of the die might be a PITA.
> 
> Thus this might be one of the last CPUs that's both cheaper and faster than the previous generation.



Unfortunately TSMC 7nm uses quad patterning still on conventional exposure, not EUV. The improved EUV process is slated to come later down the road as a "7nm+".


----------



## ToTheSun!

That single 8 core CCX 3600X, though.


----------



## lightsout

3700x looking pretty great. Double the core count and a nice clock boost from my current chip. The question is can my board handle it.


----------



## Syldon

doom26464 said:


> A 16-core will be useless on mainstream outside of special use cases and synthetic benchmarks; software is not ready for that many cores.
> 
> 
> However, if AMD boosts IPC and clocks, it will mask the useless added cores and make the product look much more appealing.
> 
> Adding cores is very easy for AMD, even if it is borderline useless in the mainstream ecosystem. It allows them to stay ahead of the competition.


I think the famous quote is "If you build it, they will come". Software isn't ready now, but AMD is set to make 8 core CPU the average CPU to aim for. Software devs will be well aware of this.


----------



## tpi2007

Syldon said:


> I think the famous quote is "If you build it, they will come". Software isn't ready now, but AMD is set to make 8 core CPU the average CPU to aim for. Software devs will be well aware of this.


Indeed. The same thing happened when Intel launched the Core 2 Quad in late 2006, shortly after the Core 2 Duos. At first there was little use for them on the mainstream desktop, but you have to start somewhere and Intel made sure quads would go mainstream with the Q6600 with quick price drops throughout 2007 in what is actually a good showcase for what AMD has been doing to gain marketshare and will have to keep doing with 7nm Zen 2. For those who don't remember, the Q6600 debuted in January of 2007 at $851 and in an unprecedented move Intel announced at launch that they would drop the price in 3-4 months to $530, which they did in April. And then they dropped it again to $266 in July (!). And the same time the following year it was at ~$175, essentially becoming a mainstream option.

So when people are doubtful that AMD would sell processor A, B or C for an excellent price, remember the above. Intel did it back then when they were still regaining the market from AMD after the Pentium 4 / D, so I wouldn't rule out AMD coming out with both more IPC and more cores.


----------



## coelacanth

12 cores 24 threads 5.0 GHz boost? If it's true that would be awesome. Can't wait to find out.


----------



## Scotty99

lol you guys are STILL buying these rumors, really?

Sure, AMD in less than a year fit a 12c/24t, 4.2 base / 5.0 turbo, into the same TDP envelope as the 8c/16t, 3.7 base / 4.3 turbo 2700X.

Cmon people, stahppppp


----------



## Majin SSJ Eric

JackCY said:


> What if it has higher performance at lower clock? This clock hunting to infinity is nonsense when designing products when you can increase performance otherwise. For a user, yeah sure what can you do but try not fry it and clock it as high as it will go but then why buy products made on a process meant for efficiency not for extreme clocks.
> 
> I would be quite happy with higher performance than competition at lower clocks while keeping good value.


Ryzen already has rough IPC parity with Intel though, but where Intel continues to kick AMD's butt is in clockspeed. At 4 GHz Ryzen is already just as good as Intel's offerings at 4 GHz but stuff like the 9900K (and 8700K / 7700K before it) can go all the way up to 5.1 GHz and that is where Intel's last real area of dominance remains. I just want Zen 2 to be the CPU that makes everybody finally admit that Intel is no longer the king in any category, not because I'm an AMD fanboy but because I'm sick of Intel (and Nvidia even more so). Zen is still largely written off by many of the elites in this hobby despite the fact that Ryzen is basically better overall than the Haswell and Broadwell chips from just a few years ago that everybody still considers to be great CPU's but it is just so hard for AMD to get the respect that they deserve.


----------



## Majin SSJ Eric

Scotty99 said:


> lol you guys are STILL buying these rumors, really?
> 
> Sure, AMD in less than a year fit a 12c/24t, 4.2 base / 5.0 turbo, into the same TDP envelope as the 8c/16t, 3.7 base / 4.3 turbo 2700X.
> 
> Cmon people, stahppppp


Um, you do realize they are going to a new node that is HALF the nm of the 2700X right? Sure, the 5GHz claim may be a bit optimistic, but your assertion that it is laughably unrealistic to think that AMD is going to enjoy a massive efficiency gain with 7nm is silly. Why on earth would halving the process node NOT net a very significant boost in efficiency and/or clock speeds???


----------



## Scotty99

Majin SSJ Eric said:


> Um, you do realize they are going to a new node that is HALF the nm of the 2700X right? Sure, the 5GHz claim may be a bit optimistic, but your assertion that it is laughably unrealistic to think that AMD is going to enjoy a massive efficiency gain with 7nm is silly. Why on earth would halving the process node NOT net a very significant boost in efficiency and/or clock speeds???


Eh bro, I'm addressing the specific leaks of 5 GHz-boost 12c Ryzen CPUs with a 105 W TDP. Of course they are going to see some clock-speed increases and better efficiency, but it's going to be nothing like these absurd leaks indicate.

Realistically? If they do have a 12c mainstream chip it will be 125 W TDP and the boost will be ~4.5 at best.


----------



## n4p0l3onic

Scotty99 said:


> lol you guys are STILL buying these rumors, really?
> 
> Sure, AMD in less than a year fit a 12c/24t, 4.2 base / 5.0 turbo, into the same TDP envelope as the 8c/16t, 3.7 base / 4.3 turbo 2700X.
> 
> Cmon people, stahppppp


https://www.techpowerup.com/forums/attachments/1546461554308-png.113901/

Supposedly gigabyte leaked it, with the same specs

It's happening.


----------



## Scotty99

n4p0l3onic said:


> https://www.techpowerup.com/forums/attachments/1546461554308-png.113901/
> 
> Supposedly gigabyte leaked it, with the same specs
> 
> It's happening.


lol


----------



## rdr09

n4p0l3onic said:


> https://www.techpowerup.com/forums/attachments/1546461554308-png.113901/
> 
> Supposedly gigabyte leaked it, with the same specs
> 
> It's happening.



If the 3700 can be OC'd to 4.5 GHz on all cores, that would be like a Threadripper on steroids.


----------



## guttheslayer

Scotty99 said:


> lol


Someone refuses to admit these are getting real and possible, and still believes that Intel's 5% improvement is the best the world can do.

Time to wake up.


----------



## Defoler

guttheslayer said:


> Someone refuses to admit these are getting real and possible, and still believes that Intel's 5% improvement is the best the world can do.
> 
> Time to wake up.


Someone also refuses to be doubtful about rumours that could well be just rumours meant to reduce Intel sales, so people will wait and see what AMD does, and which might end up completely unrealistic.

There is a difference between accepting those rumours as the truth of the universe and being doubtful that such a huge jump in performance (really, 12/24 at 5 GHz from AMD?) suddenly comes out of the blue.

How about they come out, and then we will see, before you call it real and 100% possible?

So far all those "OMG AMD are bringing the fastest CPU in the universe!!11!!" rumours have always ended up a fair bit off from the end results regarding Ryzen. So realistically, he can be more right than you.

Just like intel, nvidia rumours, always be doubtful until it comes out.


----------



## Scotty99

guttheslayer said:


> Someone refuses to admit these are getting real and possible, and still believes that Intel's 5% improvement is the best the world can do.
> 
> Time to wake up.


Man, I'll buy it the second it's up for pre-order if it's true; I have no affiliation to any company, lol.

Like many people have said in this thread before (including myself), it's just too good to be true. I do think it's very possible we see 12c mainstream chips from AMD announced, but the clock speeds will be nowhere near what these leaks indicate, and I don't think we will see 16c. To compete against something like a 9900K they can't rely on clock speed; they have to have the core advantage and market Cinebench scores like they did with first-gen Ryzen. They could actually sell a 12c part that competes in the 9900K price bracket and come out looking OK.


----------



## white owl

The only thing I doubt is easily possible is the clocks; I'm not sure they'll have the arch ready for 5 GHz yet.
The rest is pretty logical. A 12-core with a 105 W TDP isn't too far-fetched when you consider several things:
They already have a 16c CPU that can operate within a 180 W TDP; cut off 4 cores and you'd be much closer.
This is still an almost brand-new arch, so there was probably a lot that could have been done to make current CPUs better, and I'm sure they will be able to significantly increase clock speeds OR IPC to make a better CPU at a lower speed.
As for the 5 GHz boost on a 12-core 105 W TDP part: well, if it's assumed that the arch CAN do 5 GHz this early on, I really don't see it as that hard to believe. Turbo is usually only 1 or 2 cores, so I can believe it as long as the arch is capable.


I don't take all the content as gospel, but I don't really see it as terribly unrealistic. To assume that they can't make a 12-core with a 105 W TDP and a much higher boost is almost to assume that there will be no improvement at all. And let's be honest: for a gaming rig, do we really care what the TDP is as long as it isn't insane and it performs well?
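The "cut four cores off a 180 W part" reasoning is just proportional scaling; this toy calculation ignores uncore/I/O power, so treat it as a rough upper bound on the saving:

```python
# Per-core budget sketch behind the "cut four cores off a 180 W 16-core"
# argument. Pure proportional scaling; real chips spend a fixed chunk on
# uncore/I/O, so the true 12-core figure would land somewhere between
# this estimate and the original TDP.
def scaled_tdp(base_tdp_w: float, base_cores: int, new_cores: int) -> float:
    return base_tdp_w * new_cores / base_cores

# Naive 12-core estimate starting from a 1950X-class 180 W 16-core part:
print(scaled_tdp(180, 16, 12))
```

That lands at 135 W before any process improvement, so a 7nm shrink closing the remaining gap to a rumored 105 W is at least arithmetically plausible.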


----------



## Hydroplane

If AMD can give us 4.5-4.8 I'd be happy.


----------



## Derion

Will get the 3700 if its gaming performance is on par with the 9700K.


----------



## JackCY

coelacanth said:


> 12 cores 24 threads 5.0 GHz boost? If it's true that would be awesome. Can't wait to find out.


People are unrealistic and believe any made-up leak. What's next, Intel CPUs at 10 GHz on 1 nm?

Clocks depend a lot on the process and other optimizations, Intel has taken a decade to get to 5GHz while they control both manufacturing and design.
Any improvement over Samsung/GF process with TSMC's is a plus.


----------



## Streetdragon

Hydroplane said:


> If AMD can give us 4.5-4.8 I'd be happy.


AND that with 8-16Cores? ahh yeaaasssss would be nice

and i know why i have scotty on igno


----------



## doom26464

Since this train is already on a runaway course, with no sign of stopping till it crashes...


Wasn't there also a big rumour that these new chips would bring big benefits in AVX workloads too? Something where Intel has always had a small leg up on AMD.


----------



## os2wiz

Scotty99 said:


> Eh bro, I'm addressing the specific leaks of 5 GHz-boost 12c Ryzen CPUs with a 105 W TDP. Of course they are going to see some clock-speed increases and better efficiency, but it's going to be nothing like these absurd leaks indicate.
> 
> Realistically? If they do have a 12c mainstream chip it will be 125 W TDP and the boost will be ~4.5 at best.


You are not being "realistic". You are being a cynical negativist. 7nm plus binning allows for achievements that neither AMD nor Intel have previously come close to. The chips will all easily exceed 4.5 GHz boost clocks.


----------



## n4p0l3onic

lol.


----------



## rancor

doom26464 said:


> Since this train is already on a runway.....with no sight of stopping till it crashes.
> 
> 
> Was there also not a big rumour that these new chips would also have big benefits to AVX workloads too?? Something where intel has always had a small leg up on amd.


Not a rumour; AMD directly stated they widened the FPU to 256-bit and claimed double the floating-point performance, so AVX2 throughput should be double what it is now and in line with Intel's.
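The "double the floating-point performance" claim follows directly from the wider datapath; here's a rough peak-throughput model (the core count, clock, and two-FMA-unit assumption are illustrative, not confirmed Zen 2 specs):

```python
# Rough peak-FP32-throughput model showing why widening the FPU from
# 128-bit to 256-bit doubles per-clock AVX2 throughput. Core counts,
# clocks, and FMA unit counts here are illustrative assumptions.
def peak_gflops(cores: int, ghz: float, simd_bits: int, fma_units: int = 2) -> float:
    lanes = simd_bits // 32                    # FP32 lanes per vector
    flops_per_cycle = lanes * 2 * fma_units    # one FMA = 2 flops
    return cores * ghz * flops_per_cycle

narrow = peak_gflops(8, 4.0, 128)   # Zen/Zen+-style 128-bit datapaths
wide = peak_gflops(8, 4.0, 256)     # Zen 2-style 256-bit datapaths
print(narrow, wide)
```

Everything else held equal, doubling the vector width exactly doubles the peak, which is where the "inline with Intel" expectation comes from.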


----------



## Scotty99

os2wiz said:


> You are not being "realistic". You are being a cynical negativist. 7nm plus binning allows for achievements that neither AMD nor Intel have previously come close to. The chips will all easily exceed 4.5 GHz boost clocks.



I'm talking SPECIFICALLY about the leaks that started with the AdoredTV guy, and the WCCFTech and Reddit leaks of Gigabyte motherboards from the past couple of days that are apparently "backing this up".

We are not getting a 12c/24t, 5.0-boost, 105 W TDP mainstream Ryzen; I don't know how to put it any clearer than this, lol. What are we getting? I have no freaking idea, but the leaks are BOGUS.


----------



## kd5151

n4p0l3onic said:


> https://www.techpowerup.com/forums/attachments/1546461554308-png.113901/
> 
> Supposedly gigabyte leaked it, with the same specs
> 
> It's happening.


Is this real?


----------



## JackCY

The AVX and other changes are indeed in the information available from AMD; just read it. Clocks, prices, configurations, etc. are speculated.

I guess after CES or another show we will know more from AMD as they get ready to sell.
Also, I wouldn't hold your breath for chiplets on the mainstream platform; it's doable, sure, but why? They would have to really go the route of doing 16C/32T on the mainstream AM4 platform, i.e. doubling cores on all platforms. Nice? Yes. Probable? Not so much. It's simpler to design chiplets for server parts and then copy the same stuff onto a single 7nm die for the mainstream platform.

https://www.amd.com/system/files/documents/next_horizon_mark_papermaster_presentation.pdf
https://www.amd.com/en/events/next-horizon

Also the highest I see in the Gigabyte leak is 4.2GHz not 5.0GHz, learn the difference between all core clock and single core clock. Even latest Intel won't always do max single core clock on all core when OCing.

Next thing is pricing: how much are they gonna want for the 16C on 7nm, if it happens? You never know when these corporations will start milking again, or how much 7nm will cost.

It would certainly finally offer something to upgrade to from 22nm if the price is right.


----------



## ejb222

Scotty99 said:


> I'm talking SPECIFICALLY about the leaks that started with the AdoredTV guy, and the WCCFTech and Reddit leaks of Gigabyte motherboards from the past couple of days that are apparently "backing this up".
> 
> We are not getting a 12c/24t, 5.0-boost, 105 W TDP mainstream Ryzen; I don't know how to put it any clearer than this, lol. What are we getting? I have no freaking idea, but the leaks are BOGUS.


I get that you don't buy it, but to be so confident, your sources must be very knowledgeable; or plainly you don't understand the advance in technology. Your cynicism can be read as a measure of how great these products would be if true.


----------



## Scotty99

ejb222 said:


> I get you dont buy it, but to be so confident, your sources must be very knowledgeable. Or plainly you dont understand the advance in technology. You're cynicism can be fully interpreted into how great of a product these would be if true.


I don't need to understand the technology in this situation; common sense, logic, and current market standings are enough for me to 100% call these leaks bogus.

We can go back and forth all day, but it really does not matter until we get the actual specs. I guess I could take bets, lol?

Edit: let's also not forget the original leaked pricing structure. What was it, $329 for the 3700X? lol, good luck with that!


----------



## CynicalUnicorn

os2wiz said:


> You are not being "realistic". You are being a cynical negativist. 7nm plus binning allows for achievements that neither AMD nor Intel have previously come close to. The chips will all easily exceed 4.5 GHz boost clocks.


Remember when Intel went from 22nm to 14nm and their clocks sucked? Or AMD went from 32nm to 28nm and their clocks sucked? Process shrink doesn't necessarily mean higher frequency, and it often means the opposite. We have quite literally no reliable evidence to suggest that frequencies will increase significantly. Given the specs on 7nm Vega compared to 14nm Vega, they _probably_ will, but we don't know for sure, nor do we know to what degree. 7nm's increased density will also make cooling the CPUs much harder: the same amount of heat will need to be removed from a much smaller section of the die.
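The cooling point is easy to put numbers on; the die areas below are rough assumptions (a Zeppelin-class 14nm die vs. a small 7nm chiplet), not official figures:

```python
# Power-density sketch behind the cooling concern: shrinking the die
# while holding power constant raises W/mm^2, so the same heat must
# leave through a much smaller contact area. Die areas are assumptions
# for illustration, not official AMD figures.
def power_density(watts: float, area_mm2: float) -> float:
    return watts / area_mm2

w = 95.0
d_14nm = power_density(w, 213.0)  # ~213 mm^2 14nm die (assumed)
d_7nm = power_density(w, 75.0)    # ~75 mm^2 7nm chiplet (assumed)
print(round(d_14nm, 2), round(d_7nm, 2))
```

With these assumed areas the flux nearly triples at the same package power, which is exactly the "much harder to cool" problem described above.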


----------



## LancerVI

...we also don't know whether these leaked clocks, if correct, are all-core or single-core boost. My instincts tell me that if the leaks are true, it would almost have to be single-core boost to get 5+ on a 12-core, let alone a 16-core proc, right?


But I've said it before and I'll say it again: Zen 2 (Ryzen 3XXX) with 8C/16T at 4.6+ is not unreasonable. In fact, I believe it's highly probable.

Prices and product segmentation are a whole other discussion and are changeable at a moment's notice. Read: no one knows.


----------



## ComansoRowlett

I think the 5 GHz boost refers to single/dual-core boost, which is very believable considering 2700Xs can do around 4.5 GHz with PBO scalar/BCLK; if the behavior is the same then I can see it happening. The same goes for 8-core dies, since they are so small (looking at the 64-core chip, you could easily fit 2 into that normal small package; the I/O die is questionable though, we may even see a 7nm I/O die for mainstream). Honestly, what I really want is improved memory support; my poor 4600 MHz single-rank b-die is being held back majorly. I'd be happy if I can get 3800 CL15/16.
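For reference, first-word latency in nanoseconds is CL cycles at the memory clock, which is half the DDR data rate; the 4600 CL20 comparison point below is a made-up example, not a claimed b-die timing:

```python
# First-word CAS latency: CL cycles at the memory clock, which is half
# the DDR data rate, so t(ns) = CL * 2000 / (data rate in MT/s).
# Shows why a tight DDR4-3800 can beat a looser, faster kit.
def cas_ns(data_rate_mts: float, cl: int) -> float:
    return cl * 2000.0 / data_rate_mts

print(round(cas_ns(3800, 16), 2), round(cas_ns(4600, 20), 2))
```

At CL16, DDR4-3800 lands around 8.4 ns to first word, tighter than a hypothetical 4600 CL20 kit, which is why the wished-for 3800 CL15/16 is attractive.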


----------



## Redwoodz

Scotty99 said:


> I'm talking SPECIFICALLY about the leaks that started with the AdoredTV guy, and the WCCFTech and Reddit leaks of Gigabyte motherboards from the past couple of days that are apparently "backing this up".
> 
> We are not getting a 12c/24t, 5.0-boost, 105 W TDP mainstream Ryzen; I don't know how to put it any clearer than this, lol. What are we getting? I have no freaking idea, but the leaks are BOGUS.





Scotty99 said:


> I don't need to understand the technology in this situation; common sense, logic, and current market standings are enough for me to 100% call these leaks bogus.
> 
> We can go back and forth all day, but it really does not matter until we get the actual specs. I guess I could take bets, lol?
> 
> Edit: let's also not forget the original leaked pricing structure. What was it, $329 for the 3700X? lol, good luck with that!


 Let's see how many times you revise these posts. 

https://www.tomshardware.com/news/amd-ryzen-3000-series-matisse-specs,38310.html
My next cpu purchase will be Matisse 3700X
http://www.e-katalog.ru/AMD-3700X.htm


----------



## delerious

Scotty99 said:


> lol you guys are STILL buying these rumors, really?
> 
> Sure amd in less than a year fit a 12c 24t 4.2 base 5.0 turbo into the same tdp envelope as the 8c 16t 3.7 base 4.3 turbo 2700x.
> 
> Cmon people, stahppppp


Almost like going from a 4.5GHz turbo 7700K at 91W on 14nm to a 5.0GHz turbo 9900K at 95W on 14nm. There's no way Intel could've doubled cores/threads on the same process node while only increasing the TDP 4W.


----------



## Eusbwoa18

So what, 5 or 6 days until we find out for sure? When is AMD scheduled to take the stage? Someone should at least have something other than a rumor about that.


----------



## agatong55

pgdeaner said:


> So what, 5 or 6 days until we find out for sure? When is AMD scheduled to take the stage? Someone should at least have something other than a rumor about that.


SANTA CLARA, Calif., Jan. 02, 2019 (GLOBE NEWSWIRE) -- AMD (NASDAQ: AMD) today announced that Dr. Lisa Su, president and chief executive officer, will present a keynote address at CES 2019. Dr. Su’s presentation is scheduled for Wednesday, Jan. 9 at 9:00 a.m. PST in the Venetian Palazzo Ballroom.

A live stream of the event will be available at: https://www.youtube.com/c/AMD/live. An archived version of the webcast will be available approximately one hour after the event and can be found on the AMD YouTube channel.

https://globenewswire.com/news-rele...nd-CEO-Dr-Lisa-Su-to-Keynote-at-CES-2019.html

There you go, they announced it yesterday. So I guess another 6 days of leaks on Reddit, Russian sites, etc., but boy, I cannot wait to see if these rumors are true or not.


----------



## EniGma1987

delerious said:


> Almost like going from a 4.5GHz turbo 7700K at 91W on 14nm to a 5.0GHz turbo 9900K at 95W on 14nm. There's no way Intel could've doubled cores/threads on the same process node while only increasing the TDP 4W.





Intel's TDP is not even close to what most get, though:

> Here we have four horizontal lines from bottom to top: cooling limit (PL1), sustained power delivery (PL2), battery limit (PL3), and power delivery limit.
> The bottom line, the cooling limit, is effectively the TDP value. Here the power (and frequency) is limited by the cooling at hand. It is the lowest sustainable frequency for the cooling, so for the most part TDP = PL1. This is our ‘95W’ value.
> The PL2 value, or sustained power delivery, is what amounts to the turbo. This is the maximum sustainable power that the processor can take until we start to hit thermal issues. When a chip goes into a turbo mode, sometimes briefly, this is the part that is relied upon. The value of PL2 can be set by the system manufacturer, however Intel has its own recommended PL2 values.
> In this case, for the new 9th Generation Core processors, Intel has set the PL2 value to 210W. This is essentially the power required to hit the peak turbo on all cores, such as 4.7 GHz on the eight-core Core i9-9900K. So users can completely forget the 95W TDP when it comes to cooling. If a user wants those peak frequencies, it’s time to invest in something capable and serious.
> Luckily, we can confirm all this in our power testing.
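The PL1/PL2 behavior quoted above can be sketched as a toy model. The 95 W / 210 W figures come from the quote; the moving-average form and the TAU value are simplifying assumptions, not Intel's actual algorithm:

```python
# Toy model of the PL1/PL2 budgeting described above. 95 W and 210 W come
# from the quote; TAU and the averaging form are assumptions for the sketch.
PL1, PL2, TAU = 95.0, 210.0, 28.0   # watts, watts, seconds (TAU is a guess)

def power_budget(avg_power: float) -> float:
    """The chip may draw up to PL2 while its average power is under PL1."""
    return PL2 if avg_power < PL1 else PL1

def step(avg_power: float, requested: float, dt: float = 1.0) -> float:
    """Advance an exponentially weighted average of package power by dt."""
    draw = min(requested, power_budget(avg_power))
    alpha = dt / TAU
    return (1 - alpha) * avg_power + alpha * draw

avg = 0.0
for _ in range(120):                # two minutes of an all-core load wanting 210 W
    avg = step(avg, 210.0)

# The chip turbos at 210 W early on; once the average reaches PL1 the budget
# drops to 95 W, so the sustained average settles near the rated TDP.
print(round(avg, 1))
```

This is why "95W TDP" and "needs a 210W-class cooler for all-core turbo" can both be true at once: the TDP only bounds the long-run average, not the turbo draw.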


----------



## bigjdubb

I'm not sure why everyone is arguing about this. There is nothing in the rumors/leaks that seem impossible or even implausible. At the same time, I don't see anything in these leaks that seem too good to be true. It looks like a natural progression of what AMD started with the Ryzen processors.

Maybe we got so used to CPU progress moving at a snail's pace that we forgot how fast it used to advance.


----------



## ibb27

pgdeaner said:


> So what, 5 or 6 days until we find out for sure? When is AMD scheduled to take the stage? Someone should at least have something other than a rumor about that.


https://www.timeanddate.com/countdo...0=127&msg=CES+AMD+Keynote&font=sanserif&csz=1


----------



## Eusbwoa18

*Keynote*



pgdeaner said:


> So what, 5 or 6 days until we find out for sure? When is AMD scheduled to take the stage? Someone should at least have something other than a rumor about that.


I'd guess this is it. 
--------------------------------------------------------------------------------
https://www.ces.tech/conference/Keynotes/AMD-Keynote.aspx

Wednesday, January 9, 9:00 – 10:00 AM
AMD Keynote
AMD guests and its president and CEO Dr. Lisa Su will provide a view into the diverse applications for new computing technologies ranging from solving some of the world’s toughest challenges to the future of gaming, entertainment and virtual reality with the potential to redefine modern life. AMD is catapulting computing, gaming, and visualization technologies forward with the world’s first 7nm high-performance CPUs and GPUs, providing the power required to reach technology’s next horizon.
-------------------------------------------------------------------------------

There is a link on the page for a livestream. Then the speculation can really begin with fake leaked benchmarks and such! Can't wait!


----------



## 99belle99

TDP is for running the CPU stock. This goes for both Intel and AMD. Take Intel's i7-8700K for example: you get a 3.7GHz base and a 4.7GHz boost. That 4.7GHz boost is only one or two cores, and as soon as you overclock the chip, TDP goes completely out the window and you need more than a 95W-rated cooler.


----------



## Scotty99

TDP is whatever, that's only a small part of why I know these leaks are absurd lol. We all know TDP is rated loosely and actual power draw will vary massively with workload. So much of this just screams fake; it's actually incredible how many sites are reporting on it. But I guess we're in a day and age where it doesn't matter what sort of content you put out, as long as people are clicking the link you're making money.


----------



## ejb222

Scotty99 said:


> TDP is whatever, that's only a small part of why I know these leaks are absurd lol. We all know TDP is rated loosely and actual power draw will vary massively with workload. So much of this just screams fake; it's actually incredible how many sites are reporting on it. But I guess we're in a day and age where it doesn't matter what sort of content you put out, as long as people are clicking the link you're making money.


You spam these threads with cynicism like you know something we don't. Care to add facts, numbers, or some sort of logical reasoning that adds to this DISCUSSION?


----------



## keikei

So, comparing the current Threadripper offerings against the new rumors:


----------



## Pro3ootector

pgdeaner said:


> I'd guess this is it.
> --------------------------------------------------------------------------------
> https://www.ces.tech/conference/Keynotes/AMD-Keynote.aspx
> 
> Wednesday, January 9, 9:00 – 10:00 AM
> AMD Keynote
> AMD guests and its president and CEO Dr. Lisa Su will provide a view into the diverse applications for new computing technologies ranging from solving some of the world’s toughest challenges to the future of gaming, entertainment and virtual reality with the potential to redefine modern life. AMD is catapulting computing, gaming, and visualization technologies forward with the world’s first 7nm high-performance CPUs and GPUs, providing the power required to reach technology’s next horizon.
> -------------------------------------------------------------------------------
> 
> There is a link on the page for a livestream. Then the speculation can really begin with fake leaked benchmarks and such! Can't wait!



It sounds to me like chiplets for consumer PCs. And it will be interesting.


----------



## 99belle99

Scotty99 said:


> TDP is whatever, that's only a small part of why I know these leaks are absurd lol. We all know TDP is rated loosely and actual power draw will vary massively with workload. So much of this just screams fake; it's actually incredible how many sites are reporting on it. But I guess we're in a day and age where it doesn't matter what sort of content you put out, as long as people are clicking the link you're making money.


I don't know, I believe these leaks tbh. Look at the R7 2700X's 4.35GHz boost: not all of them overclock to that on all cores; that is XFR. So 4.7-5GHz XFR on the new chips is believable. Let's say the majority overclock to 4.5-4.8GHz on all cores, and those who win the silicon lottery, like with first- and second-gen Ryzen, will clock to the magical 5GHz.


----------



## Scotty99

99belle99 said:


> I don't know, I believe these leaks tbh. Look at the R7 2700X's 4.35GHz boost: not all of them overclock to that on all cores; that is XFR. So 4.7-5GHz XFR on the new chips is believable. Let's say the majority overclock to 4.5-4.8GHz on all cores, and those who win the silicon lottery, like with first- and second-gen Ryzen, will clock to the magical 5GHz.


Again, the reason you see these leaks all over the internet now is people are clicking the links, AMD ryzen is a popular product now and people eat this stuff up.

The funny thing is, when this all turns out to be rubbish, no one will be to blame; they will keep making their videos and posting their new leaks like nothing happened. I'm just not a fan of this clickbait era we live in now; that's what this thread is ultimately about.


----------



## KyadCK

Scotty99 said:


> *I don't need to understand the technology in this situation*, common sense/logic/current market standings are enough for me to 100% call these leaks bogus.
> 
> We can go back and forth all day but it really does not matter until we get the actual specs, i guess i could take bets lol?
> 
> Edit, lets also not forget the original leaked pricing structure. What was it, 329.00 for the 3700x. lol good luck with that!


You do actually, but that certainly explains your point of view.

AMD can fit an 8-core in 105W (3.7/4.3), 65W (3.2/4.1), and 45W (2.8/4.0). 7nm cuts power consumption roughly in half, making an 8-core 3.7/4.3 a ~55W chip. Adding 50% more cores (12c/24t) at the same clocks will increase power draw by a bit under 50% (core power +50%, uncore power much less), dragging it up to ~85W TDP.

With the remaining ~20W (+25%) of power, they can easily raise the boost clocks by ~10.5%, from 3.8 to 4.2.

The 5GHz value is going to be a one- or two-core XFR boost. While 5GHz all-core is unlikely, the CPU can dump a LOT more power into just a few cores without going anywhere near the TDP cap, as seen on current Ryzen chips.

Of course, the easier way to do less math: the 2950X is capable of 16c/32t @ 4.1GHz inside its 180W TDP, which would make it doable in 90W on 7nm.
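The back-of-envelope scaling above can be written out explicitly. The 0.5x node factor and the square-law clock/power relation are assumptions for the sketch, not measured values, and this naive version lands a bit below the post's ~85 W figure because it ignores the core/uncore split:

```python
import math

# Back-of-envelope TDP scaling, following the post above. The 0.5x node
# factor and the f^2 power law are assumptions, not measured values.
tdp_8c_12nm = 105.0                 # 2700X-class 8c/16t TDP in watts
tdp_8c_7nm = tdp_8c_12nm * 0.5      # assume 7nm halves power at iso-clock
tdp_12c_7nm = tdp_8c_7nm * 1.5      # naive +50% for 50% more cores
                                    # (uncore doesn't scale, so real growth is less)

headroom = 105.0 / tdp_12c_7nm      # power left inside a 105 W budget
clock_gain = math.sqrt(headroom)    # if power ~ f^2 over this range (assumption)
print(f"{tdp_12c_7nm:.1f} W estimate, ~{(clock_gain - 1) * 100:.0f}% clock headroom")
```

The exact numbers move around with the assumed uncore share and power law, but the point stands: a 12-core inside ~105 W with a clock bump is arithmetic, not magic.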


LancerVI said:


> ...we also don't know if these leaks, if correct, are all core or single core boost. My instincts tell me that if the leaks are true, then it would almost have to be single core boost to get 5+ on a 12, let alone 16 core proc, right?
> 
> 
> But I've said it before and I'll say it again. Zen 2 (Ryzen3XXX) with 8c16T at 4.6+ is not unreasonable. In fact, I believe it's highly probable.
> 
> Prices and product segmentation is a whole other discussion and is changeable at a moments notice. Read.....no one knows.


It's 1-2 core turbo/boost, just like every other CPU on the planet.


----------



## LancerVI

Scotty99 said:


> Again, the reason you see these leaks all over the internet now is people are clicking the links, AMD ryzen is a popular product now and people eat this stuff up.
> 
> The funny thing is, when this all turns out to be rubbish, no one will be to blame; they will keep making their videos and posting their new leaks like nothing happened. I'm just not a fan of this clickbait era we live in now; that's what this thread is ultimately about.


So you've told us all that it's "impossible."

So, now that we've established what is NOT going to happen, what do you think they'll be able to do? Give us a basic spec line, e.g. 3700X, 8c/16T, 3.8 base / 4.5 boost @ 105W TDP.


----------



## white owl

Scotty99 said:


> Again, the reason you see these leaks all over the internet now is people are clicking the links, AMD ryzen is a popular product now and people eat this stuff up.
> 
> The funny thing is, when this all turns out to be rubbish, no one will be to blame; they will keep making their videos and posting their new leaks like nothing happened. I'm just not a fan of this clickbait era we live in now; that's what this thread is ultimately about.


I wouldn't have taken them too seriously if they'd come from anywhere else.
The dude's been about 95% accurate on all his leaks so far. I can't imagine he'd just trash his reputation making stuff up. What would be the point? It would hurt the channel in the long run.
It would be like GN skewing benchmarks to favor a sponsored product.
BTW you've made the same post several times now.


----------



## looniam

EniGma1987 said:


> Intel's TDP is not even close to what most get though:
> 
> 
> 
> 
> 


here, maybe add this:



not trying to critique your post by adding that; your post is somewhat along the lines of what i was thinking. like intel, amd can add/change their criteria for measuring when/where/what for their TDP specs. unlike intel, they are usually more forthcoming with that info and i hope it continues; because imo, intel caused a lot of people to get angry at a lot of the wrong people.

with that said, it's not far-fetched to hit a 16c 125w TDP w/some efficiency gain switching nodes and a little embellishment of the gain via criteria _as long as it's communicated_.

i guess there could be issues if voltage scaling is horrible and people don't over-engineer their cooling solution, but i think we'll see.


----------



## Scotty99

white owl said:


> I wouldn't have taken them too seriously if they'd come from anywhere else.
> The dude's been about 95% accurate on all his leaks so far. I can't imagine he'd just trash his reputation making stuff up. What would be the point? It would hurt the channel in the long run.
> It would be like GN skewing benchmarks to favor a sponsored product.
> BTW you've made the same post several times now.


What would be the point? Are you being obtuse?
https://www.youtube.com/user/adoredtv/videos?flow=grid&view=0&sort=p

That's his second most popular video since he made the channel.

It's also the reason these absurd leaks have made the rounds on the internet; whether they're true or not makes no difference, they are making money.


----------



## ejb222

Scotty99 said:


> What would be the point, are you being ironic?
> https://www.youtube.com/user/adoredtv/videos?flow=grid&view=0&sort=p
> 
> That's his second most popular video since he made the channel.
> 
> It's also the reason these absurd leaks have made the rounds on the internet; whether they're true or not makes no difference, they are making money.


Just ignore this dude. He has made it obvious now that he only likes the attention he gets from being contrarian


----------



## Scotty99

ejb222 said:


> Just ignore this dude. He has made it obvious now that he only likes the attention he gets from being contrarian


Right... that's it, lol.

I should just ignore my brain and say stuff like "Cool, imma get the 3700x, rip intel!", right lol?

I guess this forum probably just isn't for me, /shrug.


----------



## LancerVI

Scotty99 said:


> Right....thats it lol.
> 
> I should just ignore my brain and say stuff like "Cool, imma get the 3700x, rip intel!", right lol?
> 
> I guess this forum probably just isnt for me, /shrug.


Can't help but notice you ignored my question entirely. Do you have anything else to offer from your "brain"?

This forum is for enthusiasts. Talking about builds, sharing tech news, talking about OC'ing.....and, oh yeah...speculating about what's coming out. What's wrong with that?

BTW, nice passive aggressive way of saying people on this forum don't have brains. If you're going to cast shade, be upfront about it. Now you're just coming off like a whiny little %$^(&. ..and if you find it's not the place for you, don't let the door hit ya in the ass.


----------



## Scotty99

LancerVI said:


> Can't help but notice you ignored my question entirely. Do you have anything else to offer from your "brain"?
> 
> This forum is for enthusiasts. Talking about builds, sharing tech news, talking about OC'ing.....and, oh yeah...speculating about what's coming out. What's wrong with that?
> 
> BTW, nice passive aggressive way of saying people on this forum don't have brains. If you're going to cast shade, be upfront about it. Now you're just coming off like a whiny little %$^(&. ..and if you find it's not the place for you, don't let the door hit ya in the ass.


No, I'm talking about fanboys. It's OK to be excited for a product from a company you prefer, but ignoring your brain on the likelihood of said products mucks up threads like these. How am I to have a conversation with individuals who refuse to put the effort in when they would rather cheerlead?

It's just odd and isn't something I can relate to, especially with products like CPUs where there are no intangible benefits.

As for my predictions, I posted those somewhere earlier in the thread, but in my opinion the absolute best scenario you can hope for is a 12c/24t chip that competes against the 9900K and costs 430-500, maybe two models, X and non-X. Again, that's the best scenario imo; it might just be the same as Ryzen 1 and 2 with clockspeed bumps of ~200MHz across the board. AMD is not in the position they were in 2016; the market NEEDED more cores then, but that isn't the case now.

Edit: And realistically, 10c/20t makes more sense in my best-case scenario above; I only listed 12c because some people in this thread were claiming you couldn't do 10c with the way they are doing chiplets.


----------



## LancerVI

Scotty99 said:


> No, im talking about fanboys. Its ok to be excited for a product from a company you prefer, but to ignore your brain on the likelihood of said products mucks up threads like these. How am i to have a conversation with individuals who refuse to put the effort in when they would rather cheerlead?
> 
> Its just odd and isn't something i can relate to, especially products like CPU's where there are no intangible benefits.
> 
> As for my predictions i posted that somewhere earlier in the thread, but in my opinion the absolute best scenario you can hope for is a 12c 24t chip that competes against the 9900k and costs from 430-500, maybe two models x and non x. Again thats best scenario imo, it might just be the same as ryzen 1 and 2 with clockspeed bumps of ~200mhz across the board. AMD is not in a position like they were in 2016, the market NEEDED more cores but that isnt the case now.



Fair enough. 

Hopefully we'll find out next week.


----------



## ku4eto

LancerVI said:


> Can't help but notice you ignored my question entirely. Do you have anything else to offer from your "brain"?
> 
> This forum is for enthusiasts. Talking about builds, sharing tech news, talking about OC'ing.....and, oh yeah...speculating about what's coming out. What's wrong with that?
> 
> BTW, nice passive aggressive way of saying people on this forum don't have brains. If you're going to cast shade, be upfront about it. Now you're just coming off like a whiny little %$^(&. ..and if you find it's not the place for you, don't let the door hit ya in the ass.


There is a reason this guy is on my ignore list. He goes beyond reasoning.


----------



## Ultracarpet

Scotty99 said:


> No, im talking about fanboys. Its ok to be excited for a product from a company you prefer, but to ignore your brain on the likelihood of said products mucks up threads like these. How am i to have a conversation with individuals who refuse to put the effort in when they would rather cheerlead?
> 
> Its just odd and isn't something i can relate to, especially products like CPU's where there are no intangible benefits.
> 
> As for my predictions i posted that somewhere earlier in the thread, but in my opinion the absolute best scenario you can hope for is a 12c 24t chip that competes against the 9900k and costs from 430-500, maybe two models x and non x. Again thats best scenario imo, it might just be the same as ryzen 1 and 2 with clockspeed bumps of ~200mhz across the board. AMD is not in a position like they were in 2016, the market NEEDED more cores but that isnt the case now.
> 
> *Edit: And realistically, 10c20t makes more sense in my best case scenario above, i only listed 12c because some people in this thread were claiming you couldnt do a 10c with the way they are doing chiplets.*


Well, as far as the chiplet configuration goes, if they can do 12 core, then they can do 16 core. As far as the tdp, clock speeds, and most importantly, price is concerned, I am inclined to agree with you that people are way too optimistic.


----------



## Scotty99

Ultracarpet said:


> Well, as far as the chiplet configuration goes, if they can do 12 core, then they can do 16 core. As far as the tdp, clock speeds, and most importantly, price is concerned, I am inclined to agree with you that people are way too optimistic.


Ya, I think we are in the same scenario we've been in the past two years with AMD and Intel: they can't match Intel on clockspeed, so they have to win on cores and value. If they do a 12c/24t 4.5 boost Ryzen for 9900K pricing, people should be really excited about that; these leaks are so unrealistic I honestly cannot believe this thread is 48 pages long.


----------



## lightsout

Scotty99 said:


> Ya, I think we are in the same scenario we've been in the past two years with AMD and Intel: they can't match Intel on clockspeed, so they have to win on cores and value. If they do a 12c/24t 4.5 boost Ryzen for 9900K pricing, people should be really excited about that; these leaks are so unrealistic I honestly cannot believe this thread is 48 pages long.


http://www.e-katalog.ru/AMD-3700X.htm

So are companies like this just filling out specs based upon a youtube video?

I haven't followed a launch this closely since Sandy Bridge. Those were good times; the same back-and-forth was going on, and those chips turned out to be monsters. Hopefully these do as well. Like most folks here I am interested in the 3700X; if things turn out good, I'll probably let the initial hype (price) drop a bit before I jump in.


----------



## Scotty99

lightsout said:


> http://www.e-katalog.ru/AMD-3700X.htm
> 
> So are companies like this just filling out specs based upon a youtube video?
> 
> I haven't followed a launch this closely since Sandy Bridge, those were good times, the same back and forth was going on, those chips turned out to be monsters, hopefully these do as well. Like most folks here I am interested in the 3700x if things turn out good, I'll probably let the initial hype (price) drop a bit before I jump in.


No idea, but I'm not going to abandon my logic because some Russian site listed specs that match the leak... and neither should others in this thread.

Just don't get your hopes up on scoring a 12c processor that boosts to 5GHz for 330 bucks, that's all I'm sayin'.


----------



## Abaidor

It would be an interesting experiment for AMD to launch the top SKUs (IF they are top performers) at high prices just to give Intel another shake... if their 8-core part matches the 9900K, that is. But I want to see evidence before I believe anything.


----------



## mouacyk

Still plenty of fruit punch going around...


----------



## JedixJarf

mouacyk said:


> Still plenty of fruit punch going around...


And salt


----------



## Pro3ootector

Scotty99 said:


> No idea, but im not going to abandon my logic because some russian site listed specs that match the leak....neither should others in this thread
> 
> Just don't get your hopes up on scoring a 12c processor that boosts to 5ghz for 330 bucks, thats all im sayin.


High-end APUs, that's what I think they are after. Why else would Intel be interested in making their own graphics in the first place?


They will just call Iris graphics Xe, like AMD's Radeon Instinct MI is Vega. For marketing purposes in APUs, of course.


----------



## SuperZan

ejb222 said:


> You spam these threads cynicism like you know something we dont. Care to add fact or numbers or some sort of logical reasoning that adds to this DISCUSSION?



He couldn't figure out how to get his Ryzen setup to perform at baseline (and wouldn't listen to advice) so he's carried around the fruit of his own incompetence as gospel truth of AMD's inability to create workable products. Comb through posts in the Ryzen Owner's thread if you're bored or a masochist.


----------



## ejb222

SuperZan said:


> He couldn't figure out how to get his Ryzen setup to perform at baseline (and wouldn't listen to advice) so he's carried around the fruit of his own incompetence as gospel truth of AMD's inability to create workable products. Comb through posts in the Ryzen Owner's thread if you're bored or a masochist.


I think I remember that, now that you mention it. Now he trolls and basically calls people fanboys for believing or hoping the rumours are true. I've never even owned an AMD CPU and I hope these are true. He is just a miserable person who can't stand it when things aren't going his way. If these rumours turn out to be true, I bet he spends his time trolling on the slightest inaccuracy he can find in them instead of admitting he was wrong.


----------



## Hydroplane

I'm ready for a 24 core threadripper 3 at 4.5 GHz paired with Titan RTX SLI, that would be a nice upgrade


----------



## looniam

SuperZan said:


> Comb through posts in the Ryzen Owner's thread if you're bored or a masochist.


i'll get to that after i finish:

Martin The Bored Masochist - Book 1 Kindle Edition


----------



## 113802

Hydroplane said:


> I'm ready for a 24 core threadripper 3 at 4.5 GHz paired with Titan RTX SLI, that would be a nice upgrade


Hopefully you put that compute power to actual use. If it's just for gaming, that's a complete waste of both money and hardware, especially given the compute performance of the Titan RTX.


----------



## guttheslayer

Scotty99 said:


> os2wiz said:
> 
> 
> 
> You are not being "realistic". You are being a cynical negativist. 7nm plus binning allows for achievements that AMD nor Intel have previously come close to. The chips will all easily exceed 4.5 GHZ boost clocks.
> 
> 
> 
> 
> Im talking SPECIFICALLY about the leaks that started with the adored tv guy, and the wccf tech and reddit leak of gigabyte motherboards that happened the past couple of days apparently "backing this up".
> 
> We are not getting a 12c 24t 5.0 boost 105w tdp mainstream ryzen, i dont know how to put it any clearer than this lol. What are we getting? I have no freaking idea, but the leaks are BOGUS.

Lol, I hope you do a Twitch stream eating your shoes if it turns out to be correct.

Man, I'm going to quote you all once it turns out to be legit. Scotty will be first in my mind for sure, followed by that Jacky guy.

AdoredTV predicted the TU104 chip for the RTX 2080 when no one had a single clue what Turing's naming was based on. Wake up!! That Jim guy has more credibility than most people here, who post nothing but all the BS reasons we are so sick of hearing.


----------



## white owl

Hydroplane said:


> I'm ready for a 24 core threadripper 3 at 4.5 GHz paired with Titan RTX SLI, that would be a nice upgrade


 Confirmed. It's not possible to own a Titan without mentioning it in every thread you can squeeze it into.


----------



## Majin SSJ Eric

Redwoodz said:


> Let's see how many times you revise these posts.
> 
> https://www.tomshardware.com/news/amd-ryzen-3000-series-matisse-specs,38310.html
> My next cpu purchase will be Matisse 3700X
> http://www.e-katalog.ru/AMD-3700X.htm


He was saying all the same crap in the year preceding Ryzen's launch (along with all the other usual suspects), that Zen would just be another BD, that it was impossible for AMD to catch up to Intel, etc. As I said, I don't know if these rumors are true or not, but they are not the unrealistic impossibility that Scotty would have you believe. There is nothing about these leaks that seems like "magic" to me, just optimism about the new Zen 2 architecture and the shrink from 12nm to 7nm.


----------



## Scotty99

Majin SSJ Eric said:


> He was saying all the same crap in the year preceding Ryzen's launch (along with all the other usual suspects), that Zen would just be another BD, that it was impossible for AMD to catch up to Intel, etc. As I said, I don't know if these rumors are true or not, but they are not the unrealistic impossibility that Scotty would have you believe. There is nothing about these leaks that seems like "magic" to me, just optimism about the new Zen 2 architecture and the shrink from 12nm to 7nm.


Oh, I hope I'm wrong. What would Intel need to do to their lineup if AMD sold a 12c 5.0GHz chip for 330.00? The only reason I switched from my Ryzen 1700 was games like WoW, but that recently got a DX12 patch that bumped fps by a good ~15-20%.


----------



## guttheslayer

Scotty99 said:


> Oh i hope im wrong, what would intel need to do to their lineup if AMD sold a 12c 5.0ghz chip for 330.00? Only reason i switched from my ryzen 1700 was games like WoW, but that recently got a dx12 patch that bumped fps by a good ~15-20%.


Like I said, AdoredTV's Jim has had more credibility in the past than most people here have had for the last few years.

I am not sure how he is able to escape NDA issues, but yeah, he seems to be able to leak everything that was considered impossible or too risky.


----------



## Raghar

guttheslayer said:


> I am not sure how he is able to escape NDA issue, but yeah he since to be able to leak everything that was considered impossible or too risky.


He's doing it like me. Don't sign anything, don't break anything.


----------



## ku4eto

Raghar said:


> He's doing it like me. Don't sign anything, don't break anything.


The issue with that is that it gets hard to find information, or to tell how correct that information is.


----------



## Syldon

ku4eto said:


> The issue with that is that it gets hard to find information, or to tell how correct that information is.



He keeps an open policy where he says what he feels, so he never gets invites to the staged events. I can only remember him doing one sponsored review, and he was very open about where his bottom line was in accepting the sponsored item for the review.


----------



## guttheslayer

So now we all agree that what AdoredTV predicted has a 90% confidence level of coming true.


It is impressive but not outside the realm of possibility; 4.7GHz is not impossible on 7nm, and neither is the 16C / 2-chiplet CPU design. There is nothing to be super hyped about IMHO, just that the i9-9900K has been downgraded to a low-tier CPU that probably couldn't even compete with a $229 AMD Ryzen 5. Intel deserves a hard slap for selling us a low-range product at $500; they have been anti-innovative for way, way too long.


So let's just wait for Lisa Su's reveal at CES.


----------



## Lee Patekar

Although the core count and clock speeds look great on paper, I'd like to see real-world results before getting hyped. The chiplet design is great for data centers, but I'm worried about latency, which is basically Intel's gaming advantage over current Ryzen designs.


----------



## teh-yeti

Lee Patekar said:


> Although the core count and clock speeds look great on paper I'd like to see real world results before getting hyped. The chiplet design is great for data centers but I'm worried about latency.. which is basically intel's gaming advantage of current Rysen designs.


Agreed. Let's see what IF 2 has to offer. Core configuration on the supposed 2-chiplet design will also be more important. Possibly 4 CCXs per CPU? With double the victim cache, hopefully the cores will be fed the info they need. If a core has to go through 2-4 IF hops, that could require more developer optimization, and we know how that goes...


----------



## 113802

guttheslayer said:


> Soo now we all agree what AdoredTV had predicted has a 90% confident level of becoming true.
> 
> 
> It is impressive but not outside the realm of impossibility, 4.7GHz is not something that is impossible for 7nm. Again so is the 16C / 2 chipset on a CPU design. There is nothing to be super-hype IMHO, just that I9-9900K has been downgrade to a low tier CPU it probably couldnt even compete with a $229 AMD Ryzen 5. Intel deserve 200% a hard slap on this for selling us a low range product at $500. They have been anti-innovative for way way way too long.
> 
> 
> 
> So lets us just wait for CES for Lisa Su reveal.


Anti-innovative in which way? If you are just speaking purely about processors both companies have been stagnant. When I think of innovation it's usually the introduction of something new. AMD hasn't released impressive performance. They released an affordable one which they've done in the past with the Phenom II x6 but than Sandy Bridge was released.

If you're running applications that are well threaded and you're looking to improve performance in them, AMD generally offers you better performance for the same money as Intel. It all boils down to AMD selling you more cores than Intel at the same price point.

AMD matched Broadwell's (released 2014) single-threaded performance three years later, with more cores. The cause of the stagnant improvement in processor performance is silicon itself. Just get used to small IPC improvements from both AMD and Intel until hybrid silicon/graphene photonic chips are released. Intel's been on a tear with their storage and mobile solutions. Let's not forget how Thunderbolt 3 changed portable storage, along with turning any mobile device into a gaming machine.


----------



## Lee Patekar

As for innovation... AMD was first to 64 bits, first to bring true multi-core designs, and first to bring parts of the northbridge into the CPU (memory controller, multi-socket link, PCIe, etc.). Essentially, the Core i7 is an Intel variant of the Opteron.

And since AMD hit the wall with Bulldozer, what has Intel brought? 2-3% speed improvements per year and a stale platform. Without AMD, Intel sits on its laurels.


----------



## figuretti

Moar news...

https://www.reddit.com/r/Amd/comments/aci6rz/12c_am4_ryzen_3000_appear_in_the_wild/

References this tweet... https://twitter.com/KOMACHI_ENSAKA/status/1081174660136353792



> AMD 1D1212BGMCWH2 Decode.
> 1 = ES0
> D = Desktop
> 121 = Base : 1.21Ghz (?)
> 1 = Model Revision
> BG = 105W
> M = AM4
> C = 12C
> W = ??? (?× 512KB L2 + ?MB L3)
> H2 = ??? (H2 Stepping?)


Most upvoted comment


> Here, I will write a recap of what has happened so far from memory. Dates and details may be incorrect.
> 
> December 4th: AdoredTV gets very detailed product sheets from a source he clearly trusts completely. He very publicly stands by the figures that others say is "too good to be true," but he's adamant that they will see at CES.
> 
> Late December: AMD officially announces big CES presentation for "high performance computing."
> 
> Early January: retailers leak listings for Ryzen 3000 that completely match all SKUs AdoredTV initially discussed. People speculate that it's coincidence or that those figures are placeholders that came from AdoredTV rather than AMD.
> 
> Today: AMD engineering sample for a 12 core Ryzen 3000 chip leaks. The serial is decoded and it matches AdoredTV leaks perfectly except the clock speed seems to be 1.21 GHz, which is far too low for an actual release CPU. People realize that if you multiply the "factor" value listed in Russian retailer product listings by the 1.21 listed in the ES serial, you actually get precisely the boost clock described by AdoredTV and other listings. This weird base clock is possible because of the separate I/O die shown by AMD themselves, isolating BCLK changes from breaking bus speeds/RAM speeds.
> 
> January 9th: AMD conference begins
> 
> Edited dates.


The "factor" in question is shown in this pic:
https://i1.wp.com/www.tech-critter....-Matisse-Ryzen-3000-series-specifications.jpg
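The "multiply the factor by 1.21" arithmetic from the recap is simple enough to sketch. Note the 4.13 factor below is a hypothetical value chosen for illustration, not a figure from the listings:

```python
def boost_from_factor(es_clock_ghz, factor):
    """Boost clock implied by factor x ES-encoded clock, in GHz."""
    return round(es_clock_ghz * factor, 2)

# A hypothetical factor of ~4.13 against the 1.21 GHz in the ES serial
# would land on the rumored 5.0 GHz boost.
print(boost_from_factor(1.21, 4.13))
```

Which, if true, would explain why the ES base clock looks absurdly low: it's a reference figure the real clocks are derived from, not a shipping clock.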


----------



## 113802

Lee Patekar said:


> As for innovation.. AMD was first to 64 bits, first at bringing a true multiprocessor designs, first in bringing parts of the north-bridge into the CPU (memory controller, multi-socket link, PCIe, etc).. essentially the core i7 is an intel variant of opterons.
> 
> And since AMD hit the wall with bulldozer what has intel brought? 2-3% speed improvements per year and a stale platform. Without AMD, intel sits on its laurels.


I think the first 64-bit processor was the MIPS R4000, and IBM made the first multi-core processor.

Ryzen to Ryzen+ was also only a ~3% IPC improvement.

Intel's been transforming the mobile tech industry since they released the first iGPU in 2010.

It's obvious silicon is reaching its limits. The question is how economical it is to keep trying to shrink silicon instead of ditching it. 

Sandy Bridge to Ivy Bridge: Average ~5.8% Up
Ivy Bridge to Haswell: Average ~11.2% Up
Haswell to Broadwell: Average ~3.3% Up
Broadwell to Skylake (DDR3): Average ~2.4% Up
Broadwell to Skylake (DDR4): Average ~2.7% Up
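For what it's worth, compounding those per-generation averages gives the cumulative Sandy Bridge to Skylake uplift. This is a rough sketch that treats the averages as independently multiplicative:

```python
# Per-generation average IPC uplifts listed above (taking the DDR4
# figure for Broadwell -> Skylake).
gains = [0.058, 0.112, 0.033, 0.027]

cumulative = 1.0
for g in gains:
    cumulative *= 1 + g  # compound each generational uplift

print(f"Sandy Bridge -> Skylake: +{(cumulative - 1) * 100:.1f}%")  # → +24.8%
```

Roughly +25% over four generations, i.e. the "small IPC improvements" add up, but slowly.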


----------



## Lee Patekar

We both remember history quite differently, you and I.



WannaBeOCer said:


> I think the first 64 bit processor was the R4000. IBM made the first multi core processor.


And AMD was the first for consumer CPUs... in fact, it's AMD's 64-bit extensions that are still running in current Intel hardware.



WannaBeOCer said:


> Ryzen to Ryzen+ IPC was also a 3% improvement.


For the refresh, yes. With Zen 2 it appears to be quite a bit more. AMD also brought more cores to their workstation line.



WannaBeOCer said:


> Intel's been transforming the moblile tech industry since their release of the first iGPU in 2010.


After getting the idea from AMD's first APU... ^_^ It's not like AMD could compete very well with Bulldozer (and being late on every process node).



WannaBeOCer said:


> It's obvious silicon is reaching its limits. The question is how economical it is to keep trying to shrink silicon instead of ditching it.


Easy to say, hard to achieve. Right now the limits are being circumvented with chiplets and 3D designs.

Essentially, AMD couldn't compete against Intel's manufacturing advantage, so they innovated in their designs. Which is the point of my original argument: AMD has been the source of innovation in the CPU market for quite some time. When AMD ceased to push due to Bulldozer etc., Intel simply sat on its laurels, miniaturizing its existing Core (TM) architecture for profit. They never increased core count; they kept desktop and laptop stagnant and worked on power consumption for data centers.

When I say AMD innovated more, I'm not saying they had more performance. They were always at a process-node disadvantage compared to Intel. Except now, for a time, they have a leg up with TSMC's 7nm node. How it will compare to Intel's 10nm (once they fix it) remains to be seen. (Node numbers aren't indicators of real size/performance anymore; Intel's 14nm is superior to TSMC's 12nm process, so we'll see soon enough how 10nm fares.)


----------



## tymbarq

Maybe something like this

AMD 1D1212BGMCWH2 Decode.
1 = ES0
D = Desktop
1 = one IO chiplet
2 = two CPU chiplets
12 = Boost Frequency : 1.2GHz 
BG = 105W
M = AM4
C = 12C
W = ??? (?× 512KB L2 + ?MB L3)
H2 = Matisse



AMD 5D0108BBM8SH2_37
5 = ES5 ?
D = Desktop
0 = no IO chiplet
1 = one CPU chiplet
08 = Boost Frequency : 0.8GHz 
BB = 65W
M = AM4
8 = 8C
S = (8× 512KB L2 + 32 MB L3)
H2 = Matisse
BASE CLOCK 3.7 GHz
BOOST CLOCK = 3.7 + 0.8 = 4.5 GHz
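Treating tymbarq's guessed field layout as a parsing spec, the decode can be sketched like this. Everything here, the field positions and the TDP codes included, is speculation taken from the two samples above:

```python
TDP_CODES = {"BB": 65, "BG": 105}  # guessed from the two serials above

def decode_es_serial(serial, base_ghz):
    """Decode an AMD ES serial per the guessed layout; boost = base + offset."""
    offset_ghz = int(serial[4:6]) / 10  # e.g. "08" -> 0.8 GHz
    return {
        "es_revision": serial[0],
        "segment": serial[1],           # D = Desktop
        "io_chiplets": int(serial[2]),
        "cpu_chiplets": int(serial[3]),
        "tdp_w": TDP_CODES.get(serial[6:8]),
        "socket": serial[8],            # M = AM4
        "cores": serial[9],             # 8 = 8C, C = 12C
        "boost_ghz": base_ghz + offset_ghz,
    }

info = decode_es_serial("5D0108BBM8SH2", base_ghz=3.7)
print(info["tdp_w"], info["boost_ghz"])  # 65W part boosting to 3.7 + 0.8 = 4.5 GHz
```

The same layout applied to the 12-core sample ("1D1212BGMCWH2") gives a 105W, 12-core part with a 1.2 GHz boost offset, consistent with the decode above.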


----------



## 113802

Lee Patekar said:


> We both remember history quite differently, you and I.
> 
> And AMD was the first for consumer CPUs... in fact, it's AMD's 64-bit extensions that are still running in current Intel hardware.
> 
> For the refresh, yes. With Zen 2 it appears to be quite a bit more. AMD also brought more cores to their workstation line.
> 
> After getting the idea from AMD's first APU... ^_^ It's not like AMD could compete very well with Bulldozer (and being late on every process node).
> 
> Easy to say, hard to achieve. Right now the limits are being circumvented with chiplets and 3D designs.
> 
> Essentially, AMD couldn't compete against Intel's manufacturing advantage, so they innovated in their designs. Which is the point of my original argument: AMD has been the source of innovation in the CPU market for quite some time. When AMD ceased to push due to Bulldozer etc., Intel simply sat on its laurels, miniaturizing its existing Core (TM) architecture for profit. They never increased core count; they kept desktop and laptop stagnant and worked on power consumption for data centers.
> 
> When I say AMD innovated more, I'm not saying they had more performance. They were always at a process-node disadvantage compared to Intel. Except now, for a time, they have a leg up with TSMC's 7nm node. How it will compare to Intel's 10nm (once they fix it) remains to be seen. (Node numbers aren't indicators of real size/performance anymore; Intel's 14nm is superior to TSMC's 12nm process, so we'll see soon enough how 10nm fares.)


We definitely do. I remember Motorola releasing one of the first 64-bit consumer PowerPC processors in 1997. AMD's first APU was released in Q1 2011 (the Llano demo was in June 2010), while Intel's first shipped in Q1 2010, e.g. the Core i5-661. AMD tried to market their iGPU as something else, so they named it an APU, and there was a ton of discussion about whether it was any different, but it wasn't, aside from AMD's amazing GPU performance. I'm aware of nodes and nodelets, and I'm sure you're aware that Intel's been working on their yields. That's why they already released their 10nm Cannon Lake parts, just like they did when they first released Broadwell.

AMD didn't have the resources to release a new architecture, and they drove their K8 into the ground. After that they were banking on multi-threaded programs with Bulldozer, which failed. It was never a node disadvantage but an R&D disadvantage, caused by Intel's unfair business practices.


----------



## Eusbwoa18

*New chipset?*

Has anyone heard whether AMD will be announcing a new chipset to go with the new processors? NVMe performance from drives connected to the X470 chipset seems to be slower than it should be.

https://www.techspot.com/review/1646-storage-performance-intel-z370-vs-amd-x470/


----------



## ozlay

pgdeaner said:


> Has anyone heard if AMD will be announcing a new chipset to go with the new processors? The NVME performance from drives connected to the 470 chipset seems to be slower than it should be.
> 
> https://www.techspot.com/review/1646-storage-performance-intel-z370-vs-amd-x470/


There are rumors of an X570 chipset, which might add PCIe 4.0 support, but so far they are just rumors. It would make sense for them to release a new chipset if they are bringing out new CPUs, though. Hopefully it will have improved chipset PCIe lanes as well.


----------



## Redwoodz

pgdeaner said:


> Has anyone heard if AMD will be announcing a new chipset to go with the new processors? The NVME performance from drives connected to the 470 chipset seems to be slower than it should be.
> 
> https://www.techspot.com/review/1646-storage-performance-intel-z370-vs-amd-x470/



Just to be clear they were referring to installing an m.2 in the secondary slot.


----------



## Eusbwoa18

*Yup*



Redwoodz said:


> Just to be clear they were referring to installing an m.2 in the secondary slot.


Correct. The primary slot is CPU-connected (x4). It's the X470 chipset that only supports PCIe 2.0.


----------



## ryan92084

As a counterpoint to the 12-core engineering sample:
"So far no confirmation on 16-core Zen2 (mainstream series).
One source is quite confident new series are still only 8-core."
https://mobile.twitter.com/VideoCardz/status/1081246710976917505


----------



## ozlay

I am hoping that X499 drops some SATA ports for some OCuLink-2 ports, like some of the EPYC boards have.


----------



## guttheslayer

WannaBeOCer said:


> Anti-innovative in which way? If you are just speaking purely about processors both companies have been stagnant. When I think of innovation it's usually the introduction of something new. AMD hasn't released impressive performance. They released an affordable one which they've done in the past with the Phenom II x6 but than Sandy Bridge was released.
> 
> If you're running applications that are well threaded and you're looking to improve performance in them, AMD generally offers you better performance for the same money as Intel. It all boils down to AMD selling you more cores than Intel at the same price point.
> 
> AMD released Broadwell(released 2014) single threaded performance 3 years later with more cores. The cause for the stagnant improvement in processor performance is due to silicon. Just get used to the small IPC improvements from both AMD and Intel until hybrid silicon/graphene photonic chips are released. Intel's been on a tear with their storage solutions and mobile solutions. Let's not forget how Thunderbolt 3 changed portable storage along with converting any mobile device into a gaming machine.


AMD was doing badly due to bad purchasing decisions, and they spent years trying to win back what they had lost but couldn't manage it.

Once Lisa Su took over, things started to change for the better. That doesn't give Intel a reason to anti-innovate and milk the market, making excuses like claiming TIM reduces CPU die cracking compared to solder (when in fact it's just cheaper, and they wanted cheap materials in a premium product), then eating their words with the i9-9900K. They fed us quad cores for over a decade and even went to the extent of trying to force low-budget quad-core users onto their high-end platform (X299) with the i7-7740X.


I can see from the NV/Intel threads that you are someone who supports companies milking their customers.


----------



## guttheslayer

ozlay said:


> There are rumors of a x570 chipset. Which might add PCIE 4.0 support. But so far they are just rumors. But it would make sense for them to release a new chipset if they are bringing out new cpu's. Hopefully it will have improved pcie chipset lanes as well.


It would not make sense to have no new chipset when Ryzen 3000 is a new microarchitecture on a new 7nm process.


Ryzen 2 was just an improved Ryzen 1 and AMD still released a new chipset for it, so I don't see why Ryzen 3 wouldn't get one.


----------



## ozlay

guttheslayer said:


> It would not make sense to have no new chipset when Ryzen 3000 is a new micro-Arch and new 7nm fab.
> 
> 
> Ryzen 2 was just a improved Ryzen 1 and AMD release a new chipset for them. So I dont see why Ryzen 3 wouldnt have.


Agreed, but wasn't X470 a fix to add features/optimizations that were left out of X370? Those features/optimizations were added to X399 but not X370; that is why an X499 wasn't needed but an X470 was. I believe it was mostly to fix some of the RAM issues? I might be wrong, but there really isn't much difference between the two.


----------



## guttheslayer

ryan92084 said:


> As counterpoint to the 12 core engineering sample
> "So far no confirmation on 16-core Zen2 (mainstream series).
> One source is quite confident new series are still only 8-core."
> https://mobile.twitter.com/VideoCardz/status/1081246710976917505


Jim from AdoredTV has already replied to that comment.

Both of his sources would have to be wrong at the same time for only 8 cores to happen.


----------



## Shatun-Bear

guttheslayer said:


> Jim from AdoredTV has already replied to that comment.
> 
> Both of his sources would have to be wrong at the same time for only 8 cores to happen.


Even if he turns out to be right about there being a 16-core on AM4/mainstream, he still deserves to lose half his Patreon backers if his frequencies are significantly wrong (by significant I mean 200MHz or more), as he's created a hype train here that's near impossible to live up to.

Near impossible because these two SKUs are pie in the sky stuff:

3700X: 12-core, 4.2GHz base clock, 5GHz boost
3850X: 16-core, 4.3GHz base clock, 5.1GHz boost

Dig this post up after Jan 9th (or whenever the SKUs are revealed); I'm pretty confident these figures are not happening.


----------



## VeritronX

The 3850X, I'd imagine, won't be 100% locked down until right before they announce it, but I reckon the 3700X is about in line with what AMD and TSMC have said officially about improvements for their products on the new 7nm node.

It will be interesting to see how games respond to the new core design with higher clocks and a more-than-10% IPC gain. It's a pity Intel hired away most of the people I liked from PCPer; the core-to-core latency and NVMe performance testing is what I was looking forward to the most.


----------



## elina08

https://twitter.com/AdoredTV/status/1081306671916482560

"You don't need to worry about boards lol, only yesterday I was asked if I wanted to review an "upcoming" X570 mobo. "


----------



## ejb222

elina08 said:


> https://twitter.com/AdoredTV/status/1081306671916482560
> 
> "You don't need to worry about boards lol, only yesterday I was asked if I wanted to review an "upcoming" X570 mobo. "


Would love to see some reviews! First day of CES is Wed 1/9, right?


----------



## CynicalUnicorn

Lee Patekar said:


> We both remember history quite differently you and I.


But only one version is correct. 




> And AMD was the first for consumer CPUs... in fact, it's AMD's 64-bit extensions that are still running in current Intel hardware.


Just a reminder that Itanium still exists and is being actively produced in 2019. On purpose. And Intel and HP will be sued if they stop.

Not really relevant to the topic at hand, but I think it's funny. :thumb:




> After getting the idea from AMD's first APU... ^_^ It's not like AMD could compete very well with Bulldozer (and being late on every process node).


Technically, AMD was first to release a monolithic APU... by 5 days. The first Sandy Bridge CPUs launched January 9, 2011, and AMD had beaten Intel with their first Bobcat APUs on January 4. Llano - the full-power desktop APU - would not launch until August.

Of course if we want to consider the _idea_ of an APU, a processor with integrated graphics on the package, then Intel beat AMD by close to a year with Clarkdale and Arrandale in January 2010. With all this talk of chiplets from every company, with AMD's and Nvidia's HBM interposers, Intel's EMIB, and now Rome, I'm not so sure I'd consider a monolithic die to be a requirement for an APU. Though to be fair I am looking at this from the perspective of somebody in 2018+1. It's all semantics anyway.




> Essentially, AMD couldn't compete against Intel's manufacturing advantage, so they innovated in their designs. Which is the point of my original argument: AMD has been the source of innovation in the CPU market for quite some time. When AMD ceased to push due to Bulldozer etc., Intel simply sat on its laurels, miniaturizing its existing Core (TM) architecture for profit. They never increased core count; they kept desktop and laptop stagnant and worked on power consumption for data centers.


That's a flat out lie. _How_ has AMD innovated? CMT was their biggest innovation, which is now very much dead, and a convincing argument can be made that CMT was mainly marketing and that a contemporary Intel core implements a better version of CMT. APUs have been developed independently by several companies, and the idea of a SoC with onboard graphics isn't exactly new if you look at mobile processors.

Intel's core count has increased substantially, going from 8 cores with Nehalem to 10 with Westmere to 15 with Ivy Bridge to 18 with Haswell to 24 with Broadwell to 28 with Skylake. As a consequence, Intel had to develop a highly-efficient ringbus, followed by CPUs with _multiple_ ringbuses, followed by a retirement of the ringbus design because they had added so many cores that it just could not scale well enough. Downplaying power consumption for datacenters misses a lot. High core count CPUs _cannot exist without highly efficient designs!_ If you compare Nehalem-EX and Skylake-SP, you'll see that Intel is now offering 250% more cores with similar power budgets and, in general, higher core clocks. A hypothetical Nehalem ported to 14nm simply could not be that big because it could not be cooled, and Skylake's aggressive (really, "existent") power management would allow large CPUs by contemporary standards if it were built on 45nm.

How else has Intel innovated? How about multiple designs of L4 cache? Haswell and Broadwell's victim buffer gave way to Skylake's side cache, available on "APUs" which are able to not only compete with but actually outperform many low-end graphics cards, and until Raven Ridge were flat out better than any of AMD's offerings.

And let's keep talking efficiency: Sandy Bridge CPUs were available as 17W dual-cores in laptops, and with the exception of a few single-core SKUs (like this one that existed on purpose lol) never dropped below that. One of these CPUs could boost to 3.5GHz on one core only. Now let's fast forward to Kaby Lake. Low-power 15W CPUs can boost up to 4.2GHz on two cores while containing twice as many improved CPU _and_ GPU cores. Or what about the dual-core Core Y CPUs which can be configured to run at 3.5W while Sandy Bridge's single-core CPUs ran at 10W minimum? Kaby Lake is more efficient than Sandy Bridge to a staggering degree and allows more cores in a whole lotta systems, but because the 2600K and 7700K are both 4C/8T CPUs, you and so many other enthusiasts have concluded that Intel has been sitting around.

Intel hasn't been sitting on their laurels. You just haven't paid attention to their improvements. Intel's _only_ stagnant market was high-end CPUs on LGA-115X, and I'd argue that the 8700K had been in the works long before Ryzen.




> When I say AMD innovated more, I'm not saying they had more performance. They were always at a process-node disadvantage compared to Intel. Except now, for a time, they have a leg up with TSMC's 7nm node. How it will compare to Intel's 10nm (once they fix it) remains to be seen. (Node numbers aren't indicators of real size/performance anymore; Intel's 14nm is superior to TSMC's 12nm process, so we'll see soon enough how 10nm fares.)


The number is pretty much meaningless yeah. As far as I know there is no physical meaning to it at all.




ozlay said:


> Agreed, But wasn't x470 a fix to add features/optimizations that were left out of X370. Those features/optimizations were added to x399 but not x370. That is why an X499 wasn't needed but an x470 was. I believe it was mostly to fix some of the ram issues? I might be wrong but there really isn't much difference between the two.


Threadripper differed a little from first-gen Ryzen: L2 cache latency was cut from, I think, 19 cycles to 12, which increased performance by ~1-2%. I don't want to call AM4 Ryzen rushed, but if it had been delayed a couple of months, I don't think these (let's be honest, minuscule) issues would have existed. :thumb:

Yes I'm nitpicking lol.


----------



## tpi2007

We do know a few things more or less for certain:


1. AdoredTV has good sources but you should take these leaks with a grain of salt as they usually don't match up 100%. This is reinforced this time with Kyle from HardOCP saying that the leaks are good, but some things are off. At this point it could be anything, in variable doses of clocks / core count / TDP.

2. VideoCardz, as much as it's identifiable as a video card site, has had some notable Intel and AMD CPU leaks in the past which were actually 100% correct. Namely: Haswell-E line-up, Broadwell-E line-up, Skylake-X (first gen) and AMD Threadripper 2 pricing.

3. AMD is on the record saying that they made Zen 2 to compete favourably with the upcoming Ice Lake, so if it pans out it means that they will have a window of time of leadership against Coffee Lake refresh. I don't believe in super high clocks from first 7nm yields, but having higher IPC than Intel is almost a given at this point as they are not far behind. Also, there is a slide where they say that they will have higher than the ~7-8% industry trend performance uplift with Zen 2. That already puts them ahead of Intel and higher IPC gives them some headroom if the clocks don't match up to Intel's. Now, Intel's 9900K's 95w TDP shenanigans and complicity with motherboard makers not implementing the Turbo Boost specs correctly probably has something to do with that. It will probably be a closer race because of that on the 8C/16T front, hence why AMD probably won't stop there.

4. There is a person on HardOCP with purportedly good sources that said a few weeks ago that the Radeon group has an 8C/16T CPU at 4.5 Ghz running in the labs to test with their GPUs. Is that base clock? All core Turbo? Single core Turbo? Knowing what exactly does make a difference, but if the 2950X already boosts to 4.4 Ghz on two cores, it would be a mediocre improvement if the new ones only single core / dual core boosted to 4.5 Ghz, so I'm leaning towards it at least being an all core Turbo. So far it makes sense.

5. Now, what we need to know is what is the most effective strategy to reclaim marketshare from Intel and how to best play the long game in the coming years. There are several factors at play here. Is 16C/32T a wow factor with or without consequence at this point in time? That's basically the question that needs answering. If their 8C/16T CPU can beat the 9900K at its own game _very convincingly_, they don't have much reason to lower their margins. Remember, they have some debt service to pay and they need to pour some much needed cash on the Radeon group. On the other hand, there is lots of marketshare to reclaim from Intel, so the exact strength of the push they should make next is up for debate, but they also need to consider their next moves, what their offers will be on 7nm EUV in 2020, etc.

6. Now, stopping at an 8C/16T mainstream offer as the VideoCardz source seems to indicate, only makes sense if it's really something else, you know, if it performs admirably better than the 9900K. So far, from everything we know, I don't see that happening. It will no doubt be faster, but if it's just 5%-10% faster, then there is the problem of AMD starting to be perceived as slowing down to Intel levels and getting comfortable. They are the ones catching up, and with Ice Lake later this year, it would make for a relatively small window of opportunity to get comfortable. I don't think that they can afford to do that at this point, they have to keep impressing and moving forward. 

7. Looking at the price that the 1920X is going for at just over 400 € and some places selling them under that mark and the 2920X at 650 €, I can perfectly see a 12C/24T CPU making it into mainstream at the 500€ price point. It's 50% more cores, so AMD can't be accused of slowing down, and they keep their margins, while allowing them to get back to the 1800X's price point, with the 16C/32T still enjoying HEDT status for a while more. Now, I can perfectly see them releasing a 16C/32T CPU if they really want to make a slam dunk on Intel, but there's the question of margins, TDP, and if it's not too soon, if it makes sense overall. However, given that a 12C/24T CPU will be made of two chiplets, a 16C/32T CPU is possible at any time, and that's a good card to have to respond to Intel at any time if needed. Personally, I'd say that it makes more sense for them to go 16C/32T with 7nm EUV.


That's my 7 cents.


----------



## Ha-Nocri

There is no chance 8c/16t Zen2 will be faster than 9900K. But we will have more than 8 core parts. IMO


----------



## tpi2007

Ha-Nocri said:


> There is no chance 8c/16t Zen2 will be faster than 9900K. But we will have more than 8 core parts. IMO



I have zero doubt that Zen 2 will have higher IPC than Intel's Skylake / Kaby Lake / Coffee Lake arch, so if the Zen 2 8C/16T part ends up tied with or slightly slower than the 9900K, it will be due to clock-speed differences alone, plus Intel's 95W TDP shenanigans and their complicity with motherboard makers in not implementing the Turbo guidelines properly. They probably know what is coming, hence the move.

And I agree: whatever it turns out to be, I very much doubt that AMD will stop at 8 cores on the mainstream platform. Stopping there isn't enough momentum, and AMD needs to keep the momentum consistent for several years.


----------



## Shatun-Bear

Ha-Nocri said:


> There is no chance 8c/16t Zen2 will be faster than 9900K. But we will have more than 8 core parts. IMO


WAT?

You do realise the 9900K is only 7% faster than the 2700X in 1080p gaming? That's a 5GHz max-boost CPU vs 4.3GHz. Give the Ryzen chip a 12-15% IPC bump plus a minimum 300-400MHz clock boost and it will beat the 9900K across the board while consuming around half the power.
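A back-of-envelope version of that claim, treating IPC and clocks as independently multiplicative (real games won't behave this cleanly, and every input here is a rumored or claimed figure, not a measurement):

```python
# Claimed/rumored inputs from the post above
ipc_gain = 0.12                    # low end of the claimed 12-15% IPC bump
clock_old, clock_new = 4.3, 4.65   # GHz; +350 MHz, middle of the claimed range
gap_9900k = 1.07                   # 9900K claimed ~7% ahead of the 2700X at 1080p

zen2_vs_2700x = (1 + ipc_gain) * (clock_new / clock_old)
zen2_vs_9900k = zen2_vs_2700x / gap_9900k

print(f"Zen 2 vs 9900K: {(zen2_vs_9900k - 1) * 100:+.0f}%")  # → +13%
```

So even the low end of the claimed numbers would put Zen 2 ahead, if (and it's a big if) the gains really do stack multiplicatively.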


----------



## Streetdragon

I think i leave the hype train and enter the quantum slipstream ship! Hopefully it runs better for AMD than for the Voyager


----------



## PureBlackFire

Ha-Nocri said:


> There is no chance 8c/16t Zen2 will be faster than 9900K. But we will have more than 8 core parts. IMO


Do you really think Intel's current CPUs are clock-for-clock significantly faster than Zen+? If the current 2700X could reach the clocks rumored for 7nm Zen 2, it would beat the 9900K outright in several workloads, especially thread-reliant ones, since AMD's SMT has been proven enough times by now to scale better than Intel's Hyper-Threading. Some of these takes are impressively uninformed.


----------



## tyvar

CynicalUnicorn said:


> But only one version is correct.


And evidently part of it isn't yours. See below.



> Just a reminder that Itanium still exists and is being actively produced in 2019. On purpose. And Intel and HP will be sued if they stop.



I'm going to nitpick you in turn: Itanium development ended in 2017, when the contract between Intel and HP ended. We know this from documents released in the HP vs. Oracle lawsuit, and here https://www.pcworld.idg.com.au/arti...-once-destined-replace-x86-pcs-hits-end-line/ is a news article covering its final release. I'm not sure whether it's still being fabbed at this juncture, or whether HP is sitting on a stockpile and it isn't being fabbed anymore. Considering how low-volume it is, I'm betting they just ran enough wafers, packaged them, and shipped them over to HP to let them build a healthy stockpile that will last through their contractual support obligations.


----------



## Ha-Nocri

When there are no other bottlenecks, the 9900K is up to 35% faster than the 2700X in games. We are talking about 720p in most cases. Check out the reviews.

But Zen 2 will be close enough, and cheap enough, for most people not to care if they pick Ryzen.

https://www.overclock.net/forum/attachment.php?attachmentid=243882&thumb=1


----------



## ToTheSun!

Shatun-Bear said:


> WAT?
> 
> You do realise the 9900K is only 7% faster than the 2700X in 1080p gaming. That's 5Ghz max boost CPU vs 4.3Ghz. 12-15% IPC bump for the Ryzen chip plus minimum 300-400Mhz clock boost and it will beat the 9900K across the board and consume around half the power at the same time.


That's not correct. It's appealing to Ryzen owners to look at some aggregate percentage and draw favorable conclusions, but it doesn't say much about processor capability. The 2700X will compete with the 9900K in mostly GPU bound scenarios and lose very significantly in others, especially when the CPU is the biggest bottleneck. This is true even if you match clock between Intel and AMD.

IPC is not what's holding back the 2700X in gaming against the 9900K. Clocks alone won't change that. I have no doubt the 3600X (or whatever model is 8/16) can match and even beat any Intel counterpart, but it's due to other factors, namely a different design for inter-core communication.


----------



## JMattes

Rumors and speculation on cores, clock speeds, etc. aside:

If these chips are announced this week at CES (which is pretty much guaranteed),

when will they be in stores? Regardless of how good they are, I need an upgrade.

Thoughts? March?


----------



## 99belle99

Ha-Nocri said:


> When there is no other bottlenecks, 9900K is up to 35% faster than 2700X in games. We are talking about 720p in most cases. Check out reviews.
> 
> But Zen2 will be close enough and cheap enough for most people to not care if they pick RyZen.
> 
> https://www.overclock.net/forum/attachment.php?attachmentid=243882&thumb=1


720p? Who plays at 720p? Most if not all should be moving away from 1080p; 1440p should be the standard now. I was at 1440p a few years ago. 4K is where it's at.


*Edit:* I see you were just trying to remove bottlenecks, but I still don't know why reviewers are still benching at 1080p. IMO it should be dead by this stage. Even consoles are leaving it behind.


----------



## deepor

CynicalUnicorn said:


> [...]
> 
> That's a flat out lie. _How_ has AMD innovated? CMT was their biggest innovation, which is now very much dead, and a convincing argument can be made that CMT was mainly marketing and that a contemporary Intel core implements a better version of CMT. APUs have been developed independently by several companies, and the idea of a SoC with onboard graphics isn't exactly new if you look at mobile processors.
> 
> [...]


I think the paragraph you answered was really about everything that came before Bulldozer. A long time ago AMD was just doing boring copies of Intel's CPUs. The innovation came when they first did their own designs that were different: the K6, K7 and then K8 stuff. K8 introduced AMD64, the 64-bit extensions to the x86 architecture. Besides introducing 64-bit, the performance of those CPUs was really good. Intel was caught flat-footed with their Pentium 4 designs, had to throw those away and lost years going back to a different design as a new base, and resorted to that infamous cheating where they bribed OEMs not to build PCs with AMD CPUs.

This whole thing was interesting because Intel had planned for x86 to stay a 32-bit architecture; they thought it would die eventually. But with the competition pushing up performance and AMD64 being introduced, it led to all the RISC designs giving up. Here where I am in Germany, for a time there were PCs being sold with DEC Alpha CPUs by an OEM that had a chain of stores all over the country. Microsoft had a version of Windows NT 3.1 for those machines, and I think they also had a version of Office for it. I don't quite remember, but I think that experiment went on for around two years; they too were surprised by the Athlon 64 stuff and had to give up on it. I had learned the assembly language of those DEC Alpha machines, and I remember thinking the design was neat and super clean compared to x86.

Then with Bulldozer, mistakes were made. That shouldn't count as "innovation".


----------



## Ultracarpet

99belle99 said:


> 720p who plays at 720p. Most if not all should be moving away from 1080p. 1440p should be the standard now. I was at 1440p a few years ago. 4k is where it's at.
> 
> 
> *Edit:* I see you were just trying to remove bottlenecks but I still don't know why reviewers are still benching 1080p. IMO it should be dead by this stage. Even consoles are leaving it behind.


1080p is far from dead. At 24 inches, which is the most common display size and basically a standard for esports, the ppi is plenty.

Also, I've used this argument many times before, but resolution is only part of the equation if the goal is photorealistic graphics. Does any video game at 4K look more lifelike than a Planet Earth documentary on a 1080p TV?


----------



## 99belle99

deepor said:


> Then with Bulldozer, mistakes were made. That shouldn't count as "innovation".


And the mad thing is the FX line are actually decent gaming CPUs in titles that use a lot of cores these days.

AMD were just ahead of their time.


----------



## sdch

tpi2007 said:


> We do know a few things more or less for certain:
> 
> 
> Spoiler
> 
> 
> 
> 1. AdoredTV has good sources but you should take these leaks with a grain of salt as they usually don't match up 100%. This is reinforced this time with Kyle from HardOCP saying that the leaks are good, but some things are off. At this point it could be anything, in variable doses of clocks / core count / TDP.
> 
> 2. VideoCardz, as much as it's identifiable as a video card site, has had some notable Intel and AMD CPU leaks in the past which were actually 100% correct. Namely: Haswell-E line-up, Broadwell-E line-up, Skylake-X (first gen) and AMD Threadripper 2 pricing.
> 
> 3. AMD is on the record saying that they made Zen 2 to compete favourably with the upcoming Ice Lake, so if it pans out it means that they will have a window of time of leadership against Coffee Lake refresh. I don't believe in super high clocks from first 7nm yields, but having higher IPC than Intel is almost a given at this point as they are not far behind. Also, there is a slide where they say that they will have higher than the ~7-8% industry trend performance uplift with Zen 2. That already puts them ahead of Intel and higher IPC gives them some headroom if the clocks don't match up to Intel's. Now, Intel's 9900K's 95w TDP shenanigans and complicity with motherboard makers not implementing the Turbo Boost specs correctly probably has something to do with that. It will probably be a closer race because of that on the 8C/16T front, hence why AMD probably won't stop there.
> 
> 4. There is a person on HardOCP with purportedly good sources that said a few weeks ago that the Radeon group has an 8C/16T CPU at 4.5 Ghz running in the labs to test with their GPUs. Is that base clock? All core Turbo? Single core Turbo? Knowing what exactly does make a difference, but if the 2950X already boosts to 4.4 Ghz on two cores, it would be a mediocre improvement if the new ones only single core / dual core boosted to 4.5 Ghz, so I'm leaning towards it at least being an all core Turbo. So far it makes sense.
> 
> 5. Now, what we need to know is what is the most effective strategy to reclaim marketshare from Intel and how to best play the long game in the coming years. There are several factors at play here. Is 16C/32T a wow factor with or without consequence at this point in time? That's basically the question that needs answering. If their 8C/16T CPU can beat the 9900K at its own game _very convincingly_, they don't have much reason to lower their margins. Remember, they have some debt service to pay and they need to pour some much needed cash on the Radeon group. On the other hand, there is lots of marketshare to reclaim from Intel, so the exact strength of the push they should make next is up for debate, but they also need to consider their next moves, what their offers will be on 7nm EUV in 2020, etc.
> 
> 6. Now, stopping at an 8C/16T mainstream offer as the VideoCardz source seems to indicate, only makes sense if it's really something else, you know, if it performs admirably better than the 9900K. So far, from everything we know, I don't see that happening. It will no doubt be faster, but if it's just 5%-10% faster, then there is the problem of AMD starting to be perceived as slowing down to Intel levels and getting comfortable. They are the ones catching up, and with Ice Lake later this year, it would make for a relatively small window of opportunity to get comfortable. I don't think that they can afford to do that at this point, they have to keep impressing and moving forward.
> 
> 7. Looking at the price that the 1920X is going for at just over 400 € and some places selling them under that mark and the 2920X at 650 €, I can perfectly see a 12C/24T CPU making it into mainstream at the 500€ price point. It's 50% more cores, so AMD can't be accused of slowing down, and they keep their margins, while allowing them to get back to the 1800X's price point, with the 16C/32T still enjoying HEDT status for a while more. Now, I can perfectly see them releasing a 16C/32T CPU if they really want to make a slam dunk on Intel, but there's the question of margins, TDP, and if it's not too soon, if it makes sense overall. However, given that a 12C/24T CPU will be made of two chiplets, a 16C/32T CPU is possible at any time, and that's a good card to have to respond to Intel at any time if needed. Personally, I'd say that it makes more sense for them to go 16C/32T with 7nm EUV.
> 
> 
> That's my 7 cents.


Some good, reasonable thoughts here. I'm also expecting faster 8C/16T performance than the 9900K (at honest, enforced stock settings) and a general AMD lead over Intel for a few months. On the flip side, I'm also expecting poor memory related design choices similar to previous Ryzen processors, especially affecting >2x8GB kits. That will be the only real handicap. I think in gaming, AMD/Intel will trade blows, with Intel still coming out on top... barely.


----------



## white owl

99belle99 said:


> 720p who plays at 720p. Most if not all should be moving away from 1080p. 1440p should be the standard now. I was at 1440p a few years ago. 4k is where it's at.
> 
> 
> *Edit:* I see you were just trying to remove bottlenecks but I still don't know why reviewers are still benching 1080p. IMO it should be dead by this stage. Even consoles are leaving it behind.


I'm not sure the CPU really cares about any of that. To test a CPU in games you need to make the game CPU bound. It's insane how many PC builders, gamers and overclockers (enthusiasts, even) don't understand the absolute basics, and moreover feel like whatever they use is somehow relevant to every other gamer, as if they were the standard.


----------



## Scotty99

Shatun-Bear said:


> WAT?
> 
> You do realise the 9900K is only 7% faster than the 2700X in 1080p gaming. That's 5Ghz max boost CPU vs 4.3Ghz. 12-15% IPC bump for the Ryzen chip plus minimum 300-400Mhz clock boost and it will beat the 9900K across the board and consume around half the power at the same time.


That really isn't true though. In the games people actually play (GTA 5, Fortnite, WoW, etc.) the lead is closer to 30%. This is down to not only clocks but latency; there is a reason the Xbox One X version of Destiny 2 is locked to 30 fps even though the GPU is capable of way more: it has a potato CPU.
If you play a ton of single-player AAA games, yeah, AMD is a better choice, but most people are playing online games that need that CPU horsepower as well.

Basically, until AMD comes out with a 5GHz-capable CPU with better latency, it boils down to this:

Do you have a 144Hz+ monitor? Buy Intel.
Are you running a 60 or 75Hz panel? AMD will suffice.


----------



## rdr09

Scotty99 said:


> That really isnt true tho, the games people actually play (gta 5, fortnite, wow etc etc) the lead is closer to 30%. This is down to not only clocks but latency, there is a reason the xbox one x version of destiny 2 is locked to 30 fps even tho the GPU is capable of way more.....it has a potato cpu.
> If you play a ton of single player AAA games ya AMD is a better choice but most people are playing online games that need that cpu horsepower as well.
> 
> Basically until AMD comes out with a 5ghz capable cpu with better latency it boils down to this:
> 
> Do you have a 144hz+ monitor=buy intel
> Are you running a 60 or 75hz panel=AMD will suffice



At 1080p, sure. At 1440p it does not matter whether it's 60Hz or 240Hz; the GPU will be the limiting factor.


----------



## Scotty99

rdr09 said:


> AT 1080P. At 1440, does not matter if 60Hz or 240Hz. The gpu will be the limiting factor.


Actually, that's not true in online games; there the bottleneck will always be the CPU. Go test it right now if you don't believe me: load up WoW, go to a busy city, and you will get the exact same FPS at 4K as you will at 1080p. I don't play Fortnite, but I imagine it's the same story in situations where draw calls are your bottleneck (lots of players on screen).


----------



## white owl

Scotty99 said:


> That really isnt true tho, the games people actually play (gta 5, fortnite, wow etc etc) the lead is closer to 30%. This is down to not only clocks but latency, there is a reason the xbox one x version of destiny 2 is locked to 30 fps even tho the GPU is capable of way more.....it has a potato cpu.
> If you play a ton of single player AAA games ya AMD is a better choice but most people are playing online games that need that cpu horsepower as well.
> 
> Basically until AMD comes out with a 5ghz capable cpu with better latency it boils down to this:
> 
> Do you have a 144hz+ monitor=buy intel
> Are you running a 60 or 75hz panel=AMD will suffice


I've seen anywhere from almost no difference up to 30% differences. If the difference between using AMD or Intel in a random game is, say, 10%, that's the difference between steady 120fps gameplay and dipping in and out of the 100s. Sure, it doesn't seem like a huge difference, but it really is very noticeable.
Not everyone plays the same, and to some of us the frame rate is just as important as the game looking great, which is why I'll sacrifice settings with no real benefit to get a higher framerate.
That being said, if I had to upgrade right now I'd have to choose between the 9700K and 8700K, which IMO aren't very cost effective. I shouldn't need to spend over $400 on just an 8c/8t part to play games, before factoring in every other part of the build.
What's funny is that Haswell at 4.7GHz or even 4.5GHz is plenty fast enough to achieve this, as long as the game doesn't need more threads. That's where I get into trouble.
If AMD can make Ryzen as fast as OC'd Devil's Canyon but with more cores and threads, there's no reason they can't dominate the gaming market again. I don't care if it runs 4GHz as long as it performs better in single thread.
Even if we assume they made little progress with the arch and there's little benefit from the node shrink, I can still see them making this happen. If the current arch allowed for 4.7GHz I think they'd already be there, so I really can't wait to see what these new parts are all about.




rdr09 said:


> AT 1080P. At 1440, does not matter if 60Hz or 240Hz. The gpu will be the limiting factor.


GPUs are plenty fast enough to be CPU bound at 1440p in a great many games. If your GPU can drive 1440p at 200fps but your CPU can only spit out 100fps, then the resolution is totally irrelevant and you're limited to 100fps.
The resolution doesn't really matter; I'm going to go with a CPU that's able to run the engine at over 120fps, because if the CPU can't do it, the GPU can't help you.
The resolution doesn't matter to the CPU in any way, other than that too high a resolution will choke the GPU down. If the GPU isn't bogged down at 1440p, then the bottleneck goes back to the CPU.
You don't get a sense of this from benchmarks because they are usually done at the highest preset. Once you carve away the 8x MSAA and other stuff bogging the GPU down with little visual impact, the story changes quite a bit.
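The bottleneck argument above boils down to the frame rate you see being the minimum of what the CPU and the GPU can each deliver; a toy sketch (all fps figures are made up for illustration):

```python
# Toy model of the bottleneck argument: the observed frame rate is capped
# by whichever side of the pipeline is slower.
def effective_fps(cpu_fps, gpu_fps):
    """The pipeline can only run as fast as its slowest stage."""
    return min(cpu_fps, gpu_fps)

cpu_fps = 100                                   # roughly resolution-independent
gpu_fps = {"1080p": 200, "1440p": 120, "4K": 60}  # falls as resolution rises

for res, g in gpu_fps.items():
    print(res, effective_fps(cpu_fps, g), "fps")
# In this made-up example, 1080p and 1440p are both CPU-limited at 100 fps;
# only at 4K does the GPU become the limit.
```

That is why lowering GPU-heavy settings at 1440p hands the bottleneck back to the CPU, exactly as described above.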


----------



## rdr09

Scotty99 said:


> Actually not true in online games, the bottleneck will always be the CPU. Go test it right now if you dont believe me, load up WoW go to a busy city and you will get the exact same FPS at 4k as you will at 1080p. I dont play fortnite but i imagine its the same story in situations where draw calls are your bottleneck (lots of players on screen).



I don't play WoW. If you do, show us your fps as proof.

I did play C3 multiplayer at 60Hz, at 4K, three years ago.


----------



## Scotty99

white owl said:


> I've seen almost no difference and as high as 30% differences. If the difference between using AMD or Intel on a random game is say 10%, that's enough to get steady 120fps game play or have it dipping in and out into the 100s. Sure it doesn't seem like a huge difference but it really is very noticeable.
> Not everyone plays the same and to some of us the frame rate is just as important as the game looking great which is why I'll sacrifice settings with no real benefit to get a higher framerate.
> That being said if I had to upgrade right now I'd have to choose between the 9700k and 8700k which IMO aren't very cost effective. I shouldn't need to spend over $400 on just an 8c/8t part just to play games before factoring in every other part of the build.
> What's funny is that Haswell at 4.7Ghz or even 4.5Ghz is plenty fast enough to achieve this as long as the game doesn't need more threads. That's where I get into trouble.
> If AMD can make Ryzen as fast as OC'd Devil's Canyon but with more cores and threads there's no reason they can't dominate the gaming market again. I don't care if it runs 4Ghz as long as it performs better in single thread.
> If we assume they made little progress with the arch and there's little benefit with the node shrink I can still see them making this happen. If the current arch would allow for 4.7Ghz I think they'd already be there so I really can't wait to see what these new parts are all about.


It really does boil down to which games you play as to which CPU you should pick. I am a huge MMO fan; I play WoW/SWTOR/ESO/FFXIV, and in all of those games the 8700K was close to 30% ahead of my Ryzen 1700. I tested them against each other at the same clocks (both locked to 3.9GHz) with CAS 14 2933 memory.

In other games like Overwatch I saw zero difference and couldn't tell which system was running which. AMD is in all actuality a better recommendation for the general public on modest budgets, but once you spend $300+ on a monitor, the only choice is Intel.


----------



## Scotty99

rdr09 said:


> I don't play WoW. If you do, show us your fps as proof.
> 
> I did play C3 Multi-player at 60Hz. At 4K 3 yrs ago.


I don't need to "prove" anything, my dude; just look up draw-call bottlenecking. This isn't a new thing.


----------



## rdr09

Scotty99 said:


> I don't need to "prove" anything my dude, just look up draw call bottlenecking.....this isnt a new thing.



The 1700 really needs OC'ing due to a very low boost. Why would you use the 1700 for games like WoW when the 7700K was still available at the time?

Check this out. Not sure if CPU-Z is accurate, but look at the difference between a stock 1700 and a stock 2700.

The 1700 only gets a 360 in ST.


----------



## Scotty99

rdr09 said:


> The 1700 really needs oc'ing due to a very low boost. Why would you use the 1700 for single player games like Wow when the 7700K was still available at the time?
> 
> Check this out. Not sure if cpuz is accurate but look at the difference between a stock 1700 and a stock 2700.
> 
> The 1700 only gets a 360 in ST.


Well, you see, since I'm not a time traveler I had to test what was on the market... at that time.

3.95GHz was as high as my 1700 clocked; obviously Ryzen 2 goes a couple hundred MHz higher, but again, please note the scenario I tested in: I also locked the 8700K to 3.95GHz and used the same RAM. There was a massive difference between the two CPUs in MMOs, the games I play the most, and that is what this discussion is about: buying the best product for your usage scenario.


----------



## rdr09

Scotty99 said:


> Well you see since im not a time traveler i had to test what was on the market.....at that time?
> 
> 3.95ghz was as high as my 1700 clocked, obviously ryzen 2 goes a couple hundred mhz higher but again please note the scenario i tested in, i also locked the 8700k to 3.95ghz and used the same ram. There was a massive difference between the two CPU's in MMO's, the games i play the most and that is what this dicussion is about, buying the best product for your usage scenario.


Exactly. That's why I was flabbergasted by your CPU choice. But, if I'm following you correctly, your assumption that Zen+ suffers as much as Zen 1 in IPC, and that Zen 2 will be no better, is unfounded. A 4GHz 1700 is still slower than a 4GHz 2700.


----------



## Ultracarpet

rdr09 said:


> AT 1080P. At 1440, does not matter if 60Hz or 240Hz. The gpu will be the limiting factor.


This is just... not a good post lol. 

Are you trying to say that no GPU is capable of outpacing a CPU at 1440p? As a general statement that is just flat-out wrong; it completely depends on the game. Even then, most GPU-heavy games have enough settings you can turn off or down that the CPU would likely start becoming the bottleneck, which is exactly what people with high-refresh-rate screens do.

As things stand right now (Zen 2 not yet released), if someone has a monitor with a high refresh rate, Intel is the way to go. It is more expensive and has worse bang for buck, but what it provides is a much more consistent level of performance across a broader spectrum of games. Hopefully Zen 2 changes that. I don't think AMD will be able to dethrone Intel, but if they come within 5-10% in the WORST scenarios at max OC vs max OC, they will have a winner on their hands, as the pricing is likely to be much lower than what Intel is offering on a core-for-core basis.


----------



## Scotty99

rdr09 said:


> Exactly. That's why i was flabbergasted of your cpu choice. But, if im following you correctly, you are assuming that Zen+ suffers as much as Zen1 in IPC and that Zen2 will be no better is unfounded. A 4GHz 1700 is still slower than a 4GHz 2700.


Even though I am an MMO gamer first and foremost, there was no way in hell I was giving Intel 370 dollars (the price of a 7700K at the time) for a 4-core CPU they had been slowly improving for over 5 years. I gave AMD my money with the 1700 because they deserved it; what I didn't know was that it would perform worse in MMOs than my overclocked 2500K, which was obviously disappointing and the reason I had to switch back to Intel for the long-overdue 8700K 6-core mainstream part.

As for Zen 2, I'm not predicting anything here in regards to IPC. I re-entered the thread to push back on the person stating Intel only holds a 7% lead in gaming; as stated, in popular online titles that lead can grow as high as 30%. Surely Zen 2 will close that gap some, but there is zero chance they are going to wipe that lead away. Intel will still be the only choice for people who play popular online titles, or those using high-refresh-rate monitors.


----------



## white owl

Ultracarpet said:


> This is just... not a good post lol.
> 
> Are you trying to say that no GPU is capable of outpacing a CPU at 1440p? As a general statement that is just flat out wrong; it completely depends on the game. Even then, most GPU heavy games have enough settings you can turn off/down that the CPU would likely start becoming the bottleneck. Which is what people do with high refresh rate screens- turn settings down/off.
> 
> As things stand right now (zen 2 not currently released), if someone has a monitor with a high refresh rate, Intel is the way to go. It is more expensive, and has worse bang for buck, but what it provides is a much more consistent level of performance across a broader spectrum of games. Hopefully Zen 2 changes that. I don't think AMD will be able to dethrone Intel, but if they come within 5-10% in the WORST scenarios at max oc vs max oc, they will have a winner on their hands as the pricing is likely to be much lower than what Intel is offering on a core for core basis.



I edited my post, and by the time I was done we were on a new page and no one would have seen it, but we pretty much said the same thing lol.
I remember having a 980 and thinking 1440p/60 would take a bit of fiddling; one generation later and its replacement can run 1440p/120 in almost any game, so long as you are the type to actually evaluate the in-game settings. It's pretty crazy how many FPS killers offer no real benefit. Many of them are shadow and reflection settings that I wouldn't notice without a side-by-side comparison.






----------



## rdr09

Scotty99 said:


> Even tho i am an mmo gamer first and foremost, there was no way in hell i was giving intel 370 dollars (price of a 7700k at the time) for a 4 core cpu they have been slowly improving for over 5 years. I gave amd my money in the 1700 because they deserved it, what i didnt know was that it would perform worse in mmo's than my overclocked 2500k, which was obviously disappointing and the reason i had to switch back to intel for the long overdue 8700k 6 core mainstream part.
> 
> As for zen 2 im not predicting anything here in regards to ipc, i re-entered the thread to combat the person stating intel only holds a 7% lead in gaming......as stated in popular online titles that lead can grow as high as 30%. Surely zen 2 will close that gap some but there is zero chance they are going to wipe that lead away, intel will still be the only choice for people who play popular online titles, or those using high refresh rate monitors.


If your sig is up to date, I think you could have been better served with a GPU upgrade. I've seen videos of 2700X owners with a 1080 Ti playing WoW at 1440p 144Hz without issues.

On your next upgrade I suggest carefully studying reviews after the components have been out a while. For example, earlier reviews of the Ryzen X series did not have PBO on, or it did not exist at all, and that made a big difference.

As for those arguing whether the CPU or GPU matters more at 1440p high-Hz: afaik (maybe I know little because I don't own a 1440p 144Hz setup), the difference between the fastest and the next-fastest CPU is 10 fps in most games. Most won't even do 100 fps in the 1% lows with a 1080 Ti; you're gonna need SLI and hope the games scale. True, though, in-game settings can be adjusted, but that goes for both CPUs.


----------



## Ha-Nocri

Intel is definitely faster when you measure only the CPU (720p gaming), but in more realistic scenarios, where most people are using a 580/1060 type of GPU, you would be hard pressed to notice any difference. And RyZen has more cores, so why not choose it?

If Zen2 comes within 10-15% of the 9900K at 720p, and the price is good, I see almost no reason to buy Intel.


----------



## white owl

120fps isn't hard to get in most games. The only reason I can't maintain 120 in Black Ops and most other newer games is my CPU.
The difference can be much more than 10fps between the 2700 and 8700.
The CPU doesn't care about resolution: 120fps at 1080p is the same CPU load as 120fps at 1440p. Therefore, if your GPU is capable of driving higher framerates at any resolution, the CPU has to be able to provide them.
1440p is not hard to run anymore, even with just a 1080.

No one is arguing about which matters more at 1440p, GPU or CPU, because they are both equally important for high framerates at any resolution.
The only settings that are relevant to the CPU are ones that involve geometry and model complexity. The rest are generally for graphics; once you've optimized your game so it will run faster while looking great, you're going to need the CPU to provide those extra frames, otherwise you just turned off settings for nothing.
As I've said, most games are tested with the Ultra preset, which makes it seem like the CPU makes no difference. Take the load off the GPU and you need the CPU to provide the framerate. If it can't, you're just stuck there and you can't do anything to get a higher framerate.

Load up Superposition and compare 1080p High to Ultra and you'll see what I mean. There's little visible difference, but the performance difference can be between 60fps and 120. Games are about the same way; Ultra at 1080p is comparable to 1440p if you pick and choose what you like or can actually see. Usually I'll have ultra textures, geometry, complexity, etc. (the things that you can really see). I'll turn down AA to a point where I don't get jaggies or sparkles, I'll change shadows to a lower setting that still looks good, and anything to do with lighting and god rays I'll turn from ultra to high or medium. You'd be surprised how hard "Ultra" anything is to run and how little it adds to the game.

How I set things up depends on the game, you might want great shadows in a game where you're on adventures and really taking things in (TW3 for instance) but on COD...seriously how often are you just looking at the shadow render?


BTW, I'm not picking on you, but we've moved past GPU-bound 1440p. If I change my clock speed from 4GHz to 4.7GHz there is an enormous difference in games. 4.7GHz IMO is plenty fast with Haswell IPC; my lead weight is my CPU. The 2700X is almost the opposite: plenty of cores and threads, similar IPC, but much lower clock speed and higher latency. I promise, it all makes a difference. It especially makes a difference when your CPU is the only thing making you dip under your FPS cap. A faster CPU might not always give you extra framerate in games, but it's the minimums/lows and frame timings that make the game seem smooth. IMO, if a faster CPU is the difference between dips to 100fps and dips to 120fps, I'm going to wait for the CPU that can do the latter. Right now a 6-core 8600K would be a great upgrade, but I'm not upgrading for just today again. 8c/16t at 4.7GHz+ is all I want, and I really think AMD can provide that for around $300. $440 for just an 8-core CPU is insane IMO.

5GHz is cool, but it's crazy just how little difference there is between 4.7 and 5GHz in benchmarks, so I really don't care if they can't achieve 5GHz, but I'll be thrilled if they can.
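Since the post above leans on minimums/lows rather than averages, here is a rough sketch of how a "1% low" figure is typically derived from per-frame times (the frame-time numbers are invented for illustration):

```python
# Sketch of the common "1% low" benchmark metric: the average fps over
# the slowest 1% of frames, computed from per-frame render times.
def one_percent_low_fps(frame_times_ms):
    """Average fps over the slowest 1% of frames (at least one frame)."""
    worst = sorted(frame_times_ms, reverse=True)  # longest frame times first
    n = max(1, len(worst) // 100)                 # size of the slowest 1%
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

# Mostly smooth 8 ms frames (125 fps) with a few 20 ms stutters mixed in:
frames = [8.0] * 297 + [20.0] * 3
avg_fps = 1000.0 / (sum(frames) / len(frames))
print(f"average: {avg_fps:.0f} fps, 1% low: {one_percent_low_fps(frames):.0f} fps")
```

In this made-up run the average barely moves, while the 1% low drops to the stutter frames' rate, which is why lows track perceived smoothness better than averages do.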


----------



## ibb27

Mobile Ryzen 3000 series (probably 12nm) will launch at CES:
https://twitter.com/VideoCardz/status/1081483861706588160


----------



## Scotty99

rdr09 said:


> If your sig is up to date, i think you could have been better served with a gpu upgrade. I've seen videos of 2700X owners with a 1080Ti playing WoW at 1440 144Hz without issues.
> 
> On your next upgrade i suggest to carefully study reviews after the components have been out a awhile. For example, earlier reviews of Ryzen X series did not have PBO on or did not exist at all. And that made a big difference.
> 
> As for those arguing whether cpu or gpu matters more at 1440 high hz, afaik (maybe i know little bcos i don't own a 1440 144Hz), the difference between the fastest and the next faster cpu is 10 fps in most games. Most won't even do 100 fps in the 1% low with a 1080 Ti. You gonna need SLI hoping they will scale. True, though, in game settings can be adjusted but that goes for both cpus.


Why do you continue posting? You are only showing how inept you are at this stuff.

A GPU upgrade would give me exactly 0 FPS improvement in the vast majority of games I play, whereas a 2700X would lower my fps by 15-30%.

You have 17 thousand posts on this forum and you still don't understand how bottlenecks work in games; that is quite a feat lol.


----------



## Raghar

rdr09 said:


> On your next upgrade i suggest to carefully study reviews after the components have been out a awhile. For example, earlier reviews of Ryzen X series did not have PBO on or did not exist at all. And that made a big difference.


A reviewer shouldn't review a CPU with an auto-overclocking feature enabled. In fact, considering PBO voids the warranty, any review with PBO active should give the CPU a 0/10 score.
Technically, the best and most honest way to review a CPU is to disable turbo and say: at base frequency...

An additional way to honestly review a CPU is to place the motherboard in a small case with (at most) one fan at intake (if 140mm, then a slow one), make the room temperature 32C, and then monitor the temperatures of both the CPU and the VRM. If the temperatures don't cross limits after half an hour of tests, run a standard deep benchmark suite and compare data.

A reviewer isn't obliged to support the cheating of CPU manufacturers, or of NVidia, with its special detection that changes a component's behavior when it recognizes a program used by review tests. All these cheats should be found out, and reviews should be honest.


----------



## speed_demon

white owl said:


> ...
> What's funny is that Haswell at 4.7Ghz or even 4.5Ghz is plenty fast enough to achieve this as long as the game doesn't need more threads. That's where I get into trouble.
> If AMD can make Ryzen as fast as OC'd Devil's Canyon but with more cores and threads there's no reason they can't dominate the gaming market again. I don't care if it runs 4Ghz as long as it performs better in single thread.


I gamed with a Haswell G3258 dual core unlocked processor @ 4.9ghz for a little over 3 years. The G3258 along with a 1070 Ti played any game I wanted buttery smooth on ultra @ 1080p. 

I upgraded to an i7 4790K @ 4.8 as I was also running CAD/CAM software and wanted the extra horsepower... and as far as gaming goes doubling the cores and quadrupling the threads had very little noticeable effect. How's that for an anecdote. 

Honestly even the dual core at the high clock speed made for a super snappy & responsive experience. I don't have enough time in with my new Ryzen 1700x to compare, but my gut tells me I traded responsiveness for sheer horsepower.


----------



## Clukos

Quite interested to see if the quadrupling of L3 cache for the 12C part is true. I'm curious how the 12C part ends up; I wanna get one on day one.


----------



## oced

Someone leaked a dress rehearsal of AMD's CES keynote:


----------



## Raghar

speed_demon said:


> I gamed with a Haswell G3258 dual core unlocked processor @ 4.9ghz for a little over 3 years. The G3258 along with a 1070 Ti played any game I wanted buttery smooth on ultra @ 1080p.
> 
> I upgraded to an i7 4790K @ 4.8 as I was also running CAD/CAM software and wanted the extra horsepower... and as far as gaming goes doubling the cores and quadrupling the threads had very little noticeable effect. How's that for an anecdote.


Try Witcher 3 on a dual core. It was designed for quad cores, and it shows.

HT is irrelevant, however. If someone wants wide execution, they should use 4-way HT or something wider.


----------



## NeoConker

Too bad...


----------



## Frosted racquet

Too bad about what?


----------



## PureBlackFire

Frosted racquet said:


> Too bad about what?


no 8 to 16 core 35W notebook chips.


----------



## Frosted racquet

It's on the same process node, so no surprises there.
Although I get the feeling NeoConker implied that's all we're going to see from the AMD CPU department...


----------



## ibb27

Frosted racquet said:


> It's on the same process node, so no surprises there.
> Although I get the feeling NeoConker implied that's all what we're going to see from the AMD CPU department...


No, this is Part One 

https://twitter.com/IanCutress/status/1081962207884206080


----------



## Ha-Nocri

Part 2, Vega 7nm. No Zen 2 and/or Navi.

That would be a disappointment.


----------



## NeoConker

Frosted racquet said:


> Too bad about what?





PureBlackFire said:


> no 8 to 16 core 35W notebook chips.


No proper response to the i7-8565U (AMD used the old 8550U in the keynote) and nothing against the i7-8850H and i9-8950HK (6 cores / 12 threads, up to 4.8GHz, DDR4-2666).

The iGPU only got a 100MHz increase, not even one more CU? Even the current 2800H has 11 CUs.

I was waiting to choose, but there's no reason to wait any more; the solution is any Intel 8565U or 8850H plus an eGPU (TB3).


----------



## PureBlackFire

NeoConker said:


> No proper response to the i7-8565U (AMD used the old 8550U in the keynote) and nothing against the i7-8850H and i9-8950HK (6 cores / 12 threads, up to 4.8GHz, DDR4-2666).
> 
> The iGPU only got a 100MHz increase, not even one more CU? Even the current 2800H has 11 CUs.
> 
> I was waiting to choose, but there's no reason to wait any more; the solution is any Intel 8565U or 8850H plus an eGPU (TB3).


that is pretty disappointing. I at least expected 2 6c/12t cpus. maybe TDP headroom isn't as much as expected in the first round of cpus.

edit: these are not on a new node?


----------



## ibb27

PureBlackFire said:


> These are not on a new node?


No, new mobile CPUs are 12nm tech, it's written on AMD slides.


----------



## PureBlackFire

meh


----------



## ozlay

Maybe there will be some lower-end Ryzen chips, a 3400G and 3200G, that fit in between desktop and mobile. A 45W chip to compete with Intel's 45W 6c/12t mobile chips. AMD has yet to release any desktop 12nm APUs, and the leaks are saying Q3 for the Zen 2 APUs. So AMD has time to drop 12nm desktop APUs.


----------



## Shatun-Bear

white owl said:


> I'm not sure the CPU really cares about any of that. To test a CPU in games you need to make the game CPU bound. It's insane how many PC builders, gamers and overclockers...enthusiasts even, don't understand the absolute basics, and furthermore feel like what they use is somehow relevant to every other gamer as if they are the standard.


Lol everyone understands this but it's a question of relevancy. Technically 720p tests are closer to the absolute performance of each CPU, but these tests are somewhat redundant when you're talking about the fastest consumer CPUs paired with 1080 Tis/2080 Tis.




Scotty99 said:


> That really isnt true tho, the games people actually play (gta 5, fortnite, wow etc etc) the lead is closer to 30%. This is down to not only clocks but latency, there is a reason the xbox one x version of destiny 2 is locked to 30 fps even tho the GPU is capable of way more.....it has a potato cpu.
> If you play a ton of single player AAA games ya AMD is a better choice but most people are playing online games that need that cpu horsepower as well.
> 
> Basically until AMD comes out with a 5ghz capable cpu with better latency it boils down to this:
> 
> Do you have a 144hz+ monitor=buy intel
> Are you running a 60 or 75hz panel=AMD will suffice


It is true: on average the gap is 7%. Ok, 7.3% to be exact. You can't cherry-pick titles, and besides, the latter two games you mentioned would run at over 200 fps under such setups, so small performance gaps are even less relevant past that point.

https://www.techpowerup.com/reviews/Intel/Core_i9_9900K/19.html
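For what it's worth, an "average gap" figure like that is just the per-game performance ratios averaged into one number. A sketch with made-up fps figures (not TPU's actual data):

```python
# Hypothetical per-game averages for two CPUs (illustrative numbers only).
fps_intel = {"Game A": 142.0, "Game B": 118.0, "Game C": 201.0}
fps_amd   = {"Game A": 131.0, "Game B": 112.0, "Game C": 188.0}

# Ratio per title, then averaged into a single headline percentage.
ratios = [fps_intel[g] / fps_amd[g] for g in fps_intel]
avg_gap_pct = (sum(ratios) / len(ratios) - 1.0) * 100.0
print(f"average gap: {avg_gap_pct:.1f}%")  # ~6.9% with these numbers
```

A geometric mean of the ratios is arguably fairer, but with per-game gaps this small the two means barely differ; the bigger caveat is the game selection itself, as discussed below.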


----------



## tpi2007

https://www.anandtech.com/show/13771/amd-ces-2019-ryzen-mobile-3000-series-launched

Why are they naming 12nm Zen+ mobile parts as the 3000 series? That will be a naming mess when the 7nm Zen 2 desktop CPUs launch. Then again, why did they name the first mobile gen the 2000 series instead of 1000 to begin with? And why are they launching them only now, when we are expecting 7nm Zen 2 news? I sure hope this before-the-keynote stuff is just to get the small stuff out of the way, otherwise it's going to be a disappointing keynote.

Good to see that they got their act together with OEMs regarding driver updates for the Ryzen mobile parts, but shouldn't that mean that the desktop APUs should also start getting monthly updates instead of one every three months? The underlying tech is the same and doesn't even have the same type of thermal constraints.

Also, quite weird to see them competing against Intel's Atom with 28nm Excavator parts in 2019. It supposedly gets the job done with a 6w TDP and wins, but you have to wonder how a Ryzen based underclocked Athlon 200GE would perform and slam dunk Intel.


----------



## Seronx

tpi2007 said:


> https://www.anandtech.com/show/13771/amd-ces-2019-ryzen-mobile-3000-series-launched
> 
> Also, quite weird to see them competing against Intel's Atom with 28nm Excavator parts in 2019. It supposedly gets the job done with a 6w TDP and wins, but you have to wonder how a Ryzen based underclocked Athlon 200GE would perform and slam dunk Intel.


It's a cost thing; that's why there isn't a Raven2 (Banded Kestrel) / Renoir2 (River Hawk) SKU instead.


----------



## Kpjoslee

tpi2007 said:


> https://www.anandtech.com/show/13771/amd-ces-2019-ryzen-mobile-3000-series-launched
> 
> Why are they naming 12nm Zen+ mobile parts as 3000 series? That will be a naming mess when 7nm Zen 2 desktop CPUs launch. Then again, why did they name the first mobile gen 2000 series instead of 1000 to begin with? And why are they launching them only now, that we are expecting 7nm Zen 2 news? I sure hope that this before-the-keynote-stuff is just to get the small stuff out of the way, otherwise it's going to be a disappointing keynote.
> 
> Good to see that they got their act together with OEMs regarding driver updates for the Ryzen mobile parts, but shouldn't that mean that the desktop APUs should also start getting monthly updates instead of one every three months? The underlying tech is the same and doesn't even have the same type of thermal constraints.
> 
> Also, quite weird to see them competing against Intel's Atom with 28nm Excavator parts in 2019. It supposedly gets the job done with a 6w TDP and wins, but you have to wonder how a Ryzen based underclocked Athlon 200GE would perform and slam dunk Intel.


They still gotta improve power draw at idle, and I assume it takes another year of optimization on the existing process to achieve that. I wonder if they fixed the issue.


----------



## Majin SSJ Eric

ryan92084 said:


> As counterpoint to the 12 core engineering sample
> "So far no confirmation on 16-core Zen2 (mainstream series).
> One source is quite confident new series are still only 8-core."
> https://mobile.twitter.com/VideoCardz/status/1081246710976917505


While techies would undoubtedly be disappointed if Zen 2 tops out at 8 cores on the consumer platform, I've said before that more cores out of this architecture revision don't interest me nearly as much as improvements to clock speeds and IPC. If the flagship Zen 2 processors are 8C/16T but can get to 5GHz and top Intel's IPC, that would be a massive win for AMD.

Personally, I still don't believe AMD will settle for parity with the 9900k, especially since Zen is so scalable in terms of cores. But if Zen 2 node and architecture improvements can bring it up to parity with (or even better than) the 9900k then "only" having 8 cores will not be any disappointment to me at all. That's obviously a HUGE "If" though....


----------



## EniGma1987

Scotty99 said:


> Actually not true in online games, the bottleneck will always be the CPU. Go test it right now if you dont believe me, load up WoW go to a busy city and you will get the exact same FPS at 4k as you will at 1080p. I dont play fortnite but i imagine its the same story in situations where draw calls are your bottleneck (lots of players on screen).





You said "online games" and then used WoW as an example, when what you really should have said is MMORPGs. Not all online games are MMORPGs, and only MMORPGs are held back by the game code syncing so many players in social spaces. Online shooters do take minor hits compared to single-player, but their engine design and player counts are already built for low latency and don't have the problems of some other genres.


----------



## guttheslayer

Majin SSJ Eric said:


> While techies would undoubtedly be disappointed if Zen 2 tops out at 8-cores on consumer platform, I've said before that more cores out of this architecture revision don't interest me nearly as much as improvements to clock speeds and IPC. If the flagship Zen 2 processors are 8C / 16T but can get to 5GHz and top Intel IPC that would be a massive win for AMD.
> 
> Personally, I still don't believe AMD will settle for parity with the 9900k, especially since Zen is so scalable in terms of cores. But if Zen 2 node and architecture improvements can bring it up to parity with (or even better than) the 9900k then "only" having 8 cores will not be any disappointment to me at all. That's obviously a HUGE "If" though....


Actually, with rumors pointing to a 10-core Coffee Lake from Intel, I really doubt AMD will stop at just 8C/16T.

If Intel went with 10 cores before Sunny Cove, it's probably because they knew something about AMD's plans that we didn't.


----------



## hokk

EniGma1987 said:


> You said "online games" and then used WoW as an example. When what you really should have said is MMORPGs, not online games. All online games are not MMORPGs, and only MMORPGs are held back by the game code of syncing so many players in social spaces. Online shooter games do take minor hits compared to single player, but the engine design and player counts are already designed for low latency and do not have the problem of some of the other genres.


Wow uses like 1 core 

really poor example.


----------



## Scotty99

EniGma1987 said:


> You said "online games" and then used WoW as an example. When what you really should have said is MMORPGs, not online games. All online games are not MMORPGs, and only MMORPGs are held back by the game code of syncing so many players in social spaces. Online shooter games do take minor hits compared to single player, but the engine design and player counts are already designed for low latency and do not have the problem of some of the other genres.


Is Fortnite an MMO? Is Destiny 2 an MMO? WoW and MMOs are merely the examples that stand out the most, and are where you will see the 20-30% gains switching from AMD to Intel. The bottom line is, if you are an online gamer and bought something like a 2700X, you would have been much better off buying even a Core i5 and overclocking it.


----------



## Scotty99

kylzer said:


> Wow uses like 1 core
> 
> really poor example.


Actually, WoW just got a DX12 patch and can use a ton of cores now (not sure where the upper limit is), but IPC and clock speeds still remain king here.


----------



## miklkit

Shatun-Bear said:


> Lol everyone understands this but it's a question of relevancy. Technically 720p tests are closer to the absolute performance of each CPU, but these tests are somewhat redundant when you're talking about the fastest consumer CPUs paired with 1080 Tis/2080 Tis.
> 
> 
> 
> 
> It is true, on average the gap is 7%. Ok 7.3% to be exact. You can't cherry-pick titles and then again, the latter two you've mentioned would run over 200 fps+ under such set-ups so small gaps in performance are even less relevant once you're past this point.
> 
> https://www.techpowerup.com/reviews/Intel/Core_i9_9900K/19.html



A 9900k costs $220 more than a 2700x and only delivers 7-8% more performance? Typical intel price gouging............
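Putting rough numbers on that complaint (approximate launch-era street prices, and the 7.3% averaged gaming gap cited earlier in the thread; all figures illustrative):

```python
price_9900k, price_2700x = 530.0, 310.0   # rough street prices at the time
gaming_gap_pct = 7.3                      # TPU's averaged 1080p gaming gap

extra_cost = price_9900k - price_2700x            # 220.0 dollars
cost_per_pct = extra_cost / gaming_gap_pct        # ~30 dollars per extra percent of fps
value_ratio = (1.0 / price_2700x) / ((1.0 + gaming_gap_pct / 100.0) / price_9900k)
print(f"${cost_per_pct:.0f} per extra %; 2700X gives {value_ratio:.2f}x the fps per dollar")
```

Whether ~$30 per percentage point is gouging or a fair halo-product tax is exactly the argument in the posts that follow.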


----------



## EniGma1987

Scotty99 said:


> Is fortnite an mmo, is destiny 2 an mmo? WoW/mmo's are merely the examples that stand out the most, and are the places you will see the 20-30% gains switching from amd to intel. The bottom line is if you are an online gamer and bought something like a 2700x, you would have been much better off buying even a core i5 and overclocking it.



Irrelevant. Fortnite and Destiny are both shooters and designed for lower latency already, on top of which their social spaces (if you can even call them that for Fortnite) have nowhere near the player count of actual MMORPG social spaces. Destiny is limited to 20-25, and Fortnite is what, 64?


----------



## Ultracarpet

miklkit said:


> A 9900k costs $220 more than a 2700x and only delivers 7-8% more performance? Typical intel price gouging............


No, it is more like 20-30% in most tasks: up to 30% in the worst gaming scenarios, all the way down to parity depending on how GPU-bound the scenario is.

Funny that the person who posted the benchmark talked about cherry-picking: there are only 5 games in that "relative performance" chart. I'm not saying TPU is trying to make either brand look better or worse, but that is not exactly the most comprehensive test suite.

With the 9900k you are paying not only for better performance, but for much more consistent performance across all applications.


----------



## Eusbwoa18

As has been stated before, what anyone needs to do is look at the total use case for the computer including gaming and other tasks and make a buying decision based on that. 

I believe that both vendors at this point have a sweet spot from a price/performance/features perspective and different people have different use profiles and will make different choices based on budget, the applications they use, and their upgrade cycle.

Both companies have a strong financial performance right now and I don't see the competition as being anything other than a benefit to us all.

https://www.anandtech.com/show/13519/intel-announces-q3-fy-2018-earnings-record-quarter
https://www.cnbc.com/2018/10/24/amd...enue-giving-weak-fourth-quarter-guidance.html


----------



## Scotty99

EniGma1987 said:


> Irrelevant. Fortnite and Destiny are both shooters and designed for lower latency already, on top of which their social spaces (if you can even call them that for Fortnite) have nowhere near the player count of actual MMORPG social spaces. Destiny is limited to 20-25, and Fortnite is what, 64?


But it isn't irrelevant, because both of these games also benefit from the high clock speeds and tighter latency that Intel chips deliver?

I'm not even sure what you are getting at here tbh; what point are you trying to make?


----------



## Lee Patekar

CynicalUnicorn said:


> Intel's core count has increased substantially, going from 8 cores with Nehalem to 10 with Westmere to 15 with Ivy Bridge to 18 with Haswell to 24 with Broadwell to 28 with Skylake.


Yes, because Intel released their 28-core chips on the consumer desktop market... and Itanium is also a desktop chip now too. Next you'll tell me Intel's desktop offerings weren't stagnant at 4 cores / 4 threads for the past decade or so. I guess it's just magic that Intel is rushing out higher-core desktop chips to compete at the <$500 price point, eh.

Whatever. We all know they kept the desktop market stagnant and pushed the higher core counts into workstations and datacenters. We've also spent more than a decade on the Core architecture. Now suddenly there's a lot of noise about their next chip. Coincidence again? Or maybe AMD pushed Intel off its laurels.


----------



## PwrSuprUsr

tpi2007 said:


> https://www.anandtech.com/show/13771/amd-ces-2019-ryzen-mobile-3000-series-launched
> 
> Why are they naming 12nm Zen+ mobile parts as 3000 series? That will be a naming mess when 7nm Zen 2 desktop CPUs launch. Then again, why did they name the first mobile gen 2000 series instead of 1000 to begin with? And why are they launching them only now, that we are expecting 7nm Zen 2 news? I sure hope that this before-the-keynote-stuff is just to get the small stuff out of the way, otherwise it's going to be a disappointing keynote.
> 
> Good to see that they got their act together with OEMs regarding driver updates for the Ryzen mobile parts, but shouldn't that mean that the desktop APUs should also start getting monthly updates instead of one every three months? The underlying tech is the same and doesn't even have the same type of thermal constraints.
> 
> Also, quite weird to see them competing against Intel's Atom with 28nm Excavator parts in 2019. It supposedly gets the job done with a 6w TDP and wins, but you have to wonder how a Ryzen based underclocked Athlon 200GE would perform and slam dunk Intel.


Perhaps this is indeed just before the keynote stuff. The AMD keynote is scheduled for Wednesday the 9th and it's supposed to focus on: 



> AMD guests and its president and CEO Dr. Lisa Su will provide a view into the diverse applications for new computing technologies ranging from solving some of the world’s toughest challenges to the future of gaming, entertainment and virtual reality with the potential to redefine modern life. AMD is catapulting computing, gaming, and visualization technologies forward with the *world’s first 7nm high-performance CPUs and GPUs,* providing the power required to reach technology’s next horizon.



https://www.ces.tech/conference/Keynotes/AMD-Keynote.aspx


----------



## Lee Patekar

WannaBeOCer said:


> AMD didn't have the resources to release a new architecture and they drove their K8 to the ground. After that they were banking on multi-threaded programs with bulldozer which failed, it was never a node disadvantage but a R&D disadvantage caused by Intel's unfair business practices.


This I remember; they overestimated the growth of multi-threaded applications, I think... and Intel's shenanigans didn't help their R&D at all. But they always had a node disadvantage as well; the value of Intel's fabs back then cannot be ignored.

But I distinctly remember my first Nehalem i7 and the feeling of déjà vu compared to Opteron. And AMD's 64-bit extensions were a genuine first in the x86 consumer desktop market.


----------



## hokk

PwrSuprUsr said:


> tpi2007 said:
> 
> 
> 
> https://www.anandtech.com/show/13771/amd-ces-2019-ryzen-mobile-3000-series-launched
> 
> Why are they naming 12nm Zen+ mobile parts as 3000 series? That will be a naming mess when 7nm Zen 2 desktop CPUs launch. Then again, why did they name the first mobile gen 2000 series instead of 1000 to begin with? And why are they launching them only now, that we are expecting 7nm Zen 2 news? I sure hope that this before-the-keynote-stuff is just to get the small stuff out of the way, otherwise it's going to be a disappointing keynote.
> 
> Good to see that they got their act together with OEMs regarding driver updates for the Ryzen mobile parts, but shouldn't that mean that the desktop APUs should also start getting monthly updates instead of one every three months? The underlying tech is the same and doesn't even have the same type of thermal constraints.
> 
> Also, quite weird to see them competing against Intel's Atom with 28nm Excavator parts in 2019. It supposedly gets the job done with a 6w TDP and wins, but you have to wonder how a Ryzen based underclocked Athlon 200GE would perform and slam dunk Intel.
> 
> 
> 
> Perhaps this is indeed just before the keynote stuff. The AMD keynote is scheduled for Wednesday the 9th and it's supposed to focus on:
> 
> 
> 
> 
> AMD guests and its president and CEO Dr. Lisa Su will provide a view into the diverse applications for new computing technologies ranging from solving some of the world’s toughest challenges to the future of gaming, entertainment and virtual reality with the potential to redefine modern life. AMD is catapulting computing, gaming, and visualization technologies forward with the *world’s first 7nm high-performance CPUs and GPUs,* providing the power required to reach technology’s next horizon.
> 
> Click to expand...
> 
> 
> https://www.ces.tech/conference/Keynotes/AMD-Keynote.aspx
Click to expand...

Tim from HBO is gonna be rekt lol


----------



## Brutuz

Shatun-Bear said:


> Lol everyone understands this but it's a question of relevancy. Technically 720p tests are closer to the absolute performance of each CPU, but these tests are somewhat redundant when you're talking about the fastest consumer CPUs paired with 1080 Tis/2080 Tis.


The tests are completely irrelevant unless you're planning on using that setup in that game at that resolution, because no two games load a PC the same way, only similarly at best. You can draw very limited results from it, and it's likely those results won't hold on the exact same hardware setup even in a game 2-3 years newer.

It's like comparing two CPUs in, say, WinRAR's built-in benchmark and saying "well, clearly these results would carry over to 7-Zip", which is... honestly false. Bulldozer managing to actually _gain_ performance in the years since its launch is a testament to that, because it was trash for low-res gaming in its heyday.



Scotty99 said:


> Is fortnite an mmo, is destiny 2 an mmo? WoW/mmo's are merely the examples that stand out the most, and are the places you will see the 20-30% gains switching from amd to intel. The bottom line is if you are an online gamer and bought something like a 2700x, you would have been much better off buying even a core i5 and overclocking it.


That's honestly false. You'd be better off for sure, but not much better off, because you're already likely pulling 60 fps minimums on either platform (or getting lag unrelated to your system performance due to the game's design).



Lee Patekar said:


> This I remember, they overestimated the growth of multi-threaded application I think.. and intel's shenanigans didn't help their R&D at all. But they always had a node disadvantage as well, the value of intel's fabs back then cannot be ignored.
> 
> But I distinctly remember my first nehalem i7 and the feeling of deja vue compared to opteron. And AMD's 64 bit extensions are real and first in the x86 consumer desktop market.


Bulldozer was hit by many things; it suffered a "death of 1000 cuts", basically. It was delayed to the point where Intel had caught up to and then bypassed its performance targets, the software AMD used to design it had a bug that lowered performance by roughly 15% or made it much leakier (iirc), and AMD simply didn't have the R&D money to fix it for all markets while keeping Radeon going, hence why Steamroller and Excavator were made solely for the theoretically more lucrative markets (laptops and OEMs).


----------



## miklkit

Ultracarpet said:


> No, it is more like 20-30% in most tasks, 30% in the worst scenarios while gaming and all the way up to parity depending on how GPU bound the scenario is.
> 
> Funny the person who posted the benchmark talked about cherry picking- there is only 5 games in that "relative performance" chart. I'm not saying TPU is trying to make either brand look better/worse, but that is not exactly the most comprehensive test suite.
> 
> With the 9900k you are paying for not only better performance, but much more consistent performance across all applications.



That is heavily overclocked, as their IPC is very similar. After the cost of all the extra water cooling is added on, I could buy a new monitor with the price difference. Or a new GPU. Also don't forget that the 3xxx WILL beat the 2700X too. No one knows by how much, though.


----------



## ChiTownButcher

oced said:


> Someone leaked a dress rehearsal of AMD's CES keynote:


LOL This made my day


----------



## Majin SSJ Eric

miklkit said:


> That is heavily overclocked as their IPC is very similar. After the cost of all the extra water cooling is added on I could buy a new monitor with the price difference. Or a new GPU. Also don't forget that the 3xxx WILL beat the 2700X too. No one knows by how much though.


Also don't forget that Ryzen has typically enjoyed better minimums and .1% lows than some of Intel's chips. In fact, while Intel has maintained some pretty big leads in averaged FPS rates, Ryzen has often been credited by reviewers and users as "feeling" smoother, for whatever that's worth.

I also have to massively facepalm every time people bring up Ryzen's "horrible" gaming performance while gushing all over Intel's chips. Fact is, none of those people were talking about how "horrible" the 5960X and 6900K were for gaming when they came out, yet once AMD released Ryzen last year with equivalent gaming performance to those chips, all of a sudden it was "horrible" gaming performance. Just because one chip is faster than another does NOT mean the slower one is automatically trash.


----------



## diggiddi

ChiTownButcher said:


> LOL This made my day


That was funny


----------



## Ultracarpet

miklkit said:


> That is heavily overclocked as their IPC is very similar. After the cost of all the extra water cooling is added on I could buy a new monitor with the price difference. Or a new GPU. Also don't forget that the 3xxx WILL beat the 2700X too. No one knows by how much though.


Heavily overclocked or not, that headroom is something Ryzen can't offer.

The 9900k's value proposition was never for the budget-oriented... someone shopping in that price bracket likely already has that new monitor and GPU you are referring to. In this scenario, this person just wants a CPU that holds back the rest of their hardware as little as possible. Right now, that is the 9900k. If you want something slightly cheaper, easier to cool, and slightly less future-proof, it's the 9700k. For purely gaming, I wouldn't start looking at a Ryzen 2700/2600 until I got down to the i5s. Which just so happens to be how Intel priced their stack to compete against AMD.

They are priced higher because there is a tax to having the absolute best. Nvidia does it, Intel does it, AMD has done it in the past, and they will do it in the future if they hold the crown.

I'm hoping AMD delivers with zen 2, but I am skeptical. If they can get within 5-10% performance of Intel at max OC vs max OC in the worst of scenarios, they will have achieved a massive success IMO. 

Can't wait for Wednesday my dudes.


----------



## nolive721

I have mentioned this in the NVIDIA CES keynote thread, and I guess it's the perception of quite a few here: NVIDIA must have made some serious assumptions, or possibly (who knows) gotten access to some of the GPU-related content Lisa Su will present this Wednesday, and decided to pull the trigger on VRR driver support, as well as maybe lowering the RTX 2060's price, to try to keep gamers away from AMD.

I hope I'm not being over-hyped by the December leaks and that AMD really is going to deliver something tangible on the GPU side in mid-to-upper-end performance cards (RTX 2070/2080 territory) and undercut NVIDIA's pricing there by a significant margin.


----------



## delboy67

Ha-Nocri said:


> Part 2, Vega 7nm. No Zen 2 and/or Navi.
> 
> That would be a disappointment.


or....part 2 Trueaudio 2 special 3 hour preview


----------



## umeng2002

If Zen 2 was launching soon, there would have been leaks. I think we're months away from a release, maybe 6 months. Navi... even longer: 6 to 10 months.


----------



## Majin SSJ Eric

nolive721 said:


> I have mentioned that in the NVIDIA CES keynote thread and I guess thats the perception of quite a few here but NVIDIA must have made some serious assumptions or possibly, who knows, gotten access to some of the GPU related content that Lisa Su would present this Wednesday and decided to pull the trigger on this VRR driver access as well as maybe reducing the price offering on the RTX 2060 to try keep gamers away from AMD.
> 
> I hope I am not being over hyped by the leaks from December and that really AMD is going to deliver something very tangible on the GPU side in mid to upper End performance cards (RTX 2070/2080) and cut the NVIDIA price offering there by a significant margin


IMO (and admittedly based on nothing but my gut feeling), the GPU stuff in the AMD leaks is the weakest aspect of them overall. I still don't think AMD is anywhere near able to compete properly with Nvidia in the GPU market right now; they have been focusing tightly on Intel and bringing the CPU market to rough parity since Ryzen's launch. Ever since the 1800X came out I have believed that Zen 2 is where we would really see AMD's full potential with this architecture, and these leaks (if remotely true) would be total vindication of that.

But yeah, I don't think Navi is going to be any more of a game-changer than Vega was to be honest. I hope I'm wrong though because Nvidia needs to be slapped down even more than Intel at this point, but AMD just isn't big enough to go toe-to-toe with both Intel AND Nvidia right now. If Zen 2 delivers and somehow becomes the king of CPU's this year THEN they will have the momentum they need to switch focus over to Nvidia. But that's still a big "If" at this point!


----------



## Majin SSJ Eric

umeng2002 said:


> If Zen 2 was launching soon, there would have been leaks. I think we're months away from a release, maybe 6 months. Navi... even longer: 6 to 10 months.


Nobody has said that Zen 2 is launching soon. Jim's original leak very clearly showed that most of the parts were "TBA" at CES, not launched. I'm guessing we will see a similar launch window to both previous Ryzen CPUs (1800X and 2700X).


----------



## umeng2002

Majin SSJ Eric said:


> Nobody has said that Zen 2 was launching soon. Jim's original leak very clearly showed that most of them were "TBA" at CES, not launched. I'm guessing we will see a similar launch window as both previous Ryzen CPU's (1800X and 2700X).


I was just going off the Russian storefront leaks (lesser tech journalist seem to have gobbled that up). I think Ryzen 2 for consumers will hit in the SUMMER simply because I don't think AMD would launch it without the X570 motherboards and the X570 was leaked to come out in the summer and who the hell would fake a chipset release date? I also don't think AMD would make their Ryzen 2xxx products obsolete less than a year after they launched.

Frankly, I think the RTX 2060 is $350 instead of $275 or $300 because nVidia knows AMD doesn't have anything for a while. They also might have used 8 GB of VRAM if Navi was about to drop. As some reviewers pointed out, it is the cut-down back-end (less VRAM and ROPs) that makes the RTX 2060 underperform for its compute power.


----------



## VeritronX

Just a reminder for those thinking AMD's current cpu's suck at gaming:



(click picture for video link)

The R5 2600 currently retails for ~half the price of the i5 9600K here in Australia.

wow the forum really chewed that up..


----------



## umeng2002

There is a certain class of PC enthusiasts that want the fastest just because it's the fastest, price be damned. It's funny, I got into PC gaming around 1999 because it was CHEAPER than a console. You already had a PC for school and dial-up porn, so a $200 to $250 GPU turned your "work" appliance into a machine that would blow away any console.

Now you have people paying $400 for a CPU just to get 160 fps in Fortnite instead of 150 fps.

What a time to be alive.


----------



## Streetdragon

umeng2002 said:


> There is a certain class of PC enthusiasts that want the fastest just because it's the fastest, price be damned. It's funny, I got into PC gaming around 1999 because it was CHEAPER than a console. You already had a PC for school and dial-up porn, so a $200 to $250 GPU turned your "work" appliance into a machine that would blow away any console.
> 
> Now you have people paying $400 for a CPU just to get 160 fps in Fortnite instead of 150 fps.
> 
> What a time to be alive.


2080TI sli and i9 9980x for minecraft xD


----------



## SuperZan

Majin SSJ Eric said:


> IMO (and admittedly based on nothing but my gut feeling), I think the GPU stuff in the AMD leaks is the very weakest aspect of them overall. I still don't think AMD are anywhere near able to compete properly with Nvidia in the GPU market right now and have been focusing very tightly on Intel and getting the CPU market to rough parity since Ryzen's launch. Ever since the 1800X came out I have believed that Zen 2 was where we were really going to see AMD's full potential with this architecture and these leaks (if remotely true) would be total vindication for that.
> 
> But yeah, I don't think Navi is going to be any more of a game-changer than Vega was to be honest. I hope I'm wrong though because Nvidia needs to be slapped down even more than Intel at this point, but AMD just isn't big enough to go toe-to-toe with both Intel AND Nvidia right now. If Zen 2 delivers and somehow becomes the king of CPU's this year THEN they will have the momentum they need to switch focus over to Nvidia. But that's still a big "If" at this point!



Super agree, with the caveat that I think with the GPU stuff AMD doesn't have to compete top to bottom to get on the Calfskin King's radar. If AMD can get 1070, 1080, and (in a dreamworld) 1080 Ti performance at substantially lower price brackets than the Turing replacements, they'd have the advantage of in-production supply and would've also been able to pitch Freesync displays and the value alternative. They'll surely have nothing to challenge overall mindshare, so Huang may have taken a preemptive shot to mitigate parts of AMD's potential strategy.


----------



## umeng2002

AMD can compete if those chose to. Polaris is a fine design, except it was too small. That was a choice AMD made. IDK what AMD was thinking with Vega, though. No one has been plugging in hard drives into Vega to get huge open worlds with no loading times. That stream raster binning thingy was broken from the start. The overall design was unbalanced, and it was clearly focused on compute for the data center with graphics as an afterthought.


----------



## miklkit

VeritronX said:


> Just a reminder for those thinking AMD's current cpu's suck at gaming:
> 
> 
> 
> (click picture for video link)
> 
> The R5 2600 currently retails for ~half the price of the i5 9600K here in Australia.
> 
> wow the forum really chewed that up..



Indeed. They were comparing a cpu for the 1%ers to a cpu for the 50%ers. They shot themselves in the foot by comparing apples to oranges. The real test is comparing similarly priced hardware. 



Everyone knows a Ferrari is faster than a Ford, but only the 1%ers can afford a Ferrari.


----------



## pony-tail

miklkit said:


> Indeed. They were comparing a cpu for the 1%ers to a cpu for the 50%ers. They shot themselves in the foot by comparing apples to oranges. The real test is comparing similarly priced hardware.
> 
> 
> 
> Everyone knows a Ferrari is faster than a Ford, but only the 1%ers can afford a Ferrari.



I am more interested in IGPUs - just want 1050 Ti to 1060 performance out of an IGPU on an (around 65 watt) APU. Skip the graphics card altogether!
Add an ITX mobo, a good-sized M.2 drive, 16GB of quality RAM - a 6 or 7 litre shoebox = Done!


----------



## ozlay

Streetdragon said:


> 2080TI sli and i9 9980x for minecraft xD


Well have you ever tried using shaders in minecraft? It is needed for 4k especially with ray-tracing.


----------



## VeritronX

miklkit said:


> Indeed. They were comparing a cpu for the 1%ers to a cpu for the 50%ers. They shot themselves in the foot by comparing apples to oranges. The real test is comparing similarly priced hardware.
> 
> 
> 
> Everyone knows a Ferrari is faster than a Ford, but only the 1%ers can afford a Ferrari.


My point was more that most people wouldn't describe the intel skylake-x cpus as sucking at gaming, they'd just say the mainstream ones are better.. but for games the ryzen+ chips are on par with skylake-x stock and overclocked so they don't exactly suck. 

Intel has a faster option but they were forced to play all of their remaining cards to do it.. They are tapped out until they can get 10nm to produce a chip big enough and high clocking enough to compete, sometime in 2020 from the looks of it (they just announced 15W ice lake laptops coming at the end of this year, nothing faster than that on their roadmap for 2019 for desktops so far).


----------



## Majin SSJ Eric

VeritronX said:


> My point was more that most people wouldn't describe the intel skylake-x cpus as sucking at gaming, they'd just say the mainstream ones are better.. but for games the ryzen+ chips are on par with skylake-x stock and overclocked so they don't exactly suck.
> 
> Intel has a faster option but they were forced to play all of their remaining cards to do it.. They are tapped out until they can get 10nm to produce a chip big enough and high clocking enough to compete, sometime in 2020 from the looks of it (they just announced 15W ice lake laptops coming at the end of this year, nothing faster than that on their roadmap for 2019 for desktops so far).


This is the same point I've made to the "Ryzen sucks for gaming" crowd ever since the 1800X launched. It's just funny to me that once AMD had CPU's that were on par with or faster than chips like the 5960X and the 6900K in games, all of a sudden that level of performance "sucked", even though I can assure you that nobody thought a 5960X sucked back when it was around.

I really think you have to put yourself back into the pre-Ryzen mindset to really appreciate what AMD has been able to do with this architecture. Too many people have already forgotten the absolute barren wasteland that was the AMD CPU catalog pre-Ryzen so the shock factor of AMD releasing a CPU that could go toe-to-toe with a 6900K just a year later (and at just a third of the cost) is lost on them. 

But remember that before Ryzen all AMD had to counter such absolutely dominant processors from Intel were derivatives of BD that were less than 50% as capable as even just the 4 core i7's at the time! In just one release AMD pretty much erased a seemingly insurmountable deficit between their CPU's and Intel's best (at least in IPC, with clock potential still being a weakness for them) and I still remember very well posting in the Zen rumors threads that I'd be happy with something that could just match the IPC of SB at the time! 

We've come a long way, and hopefully the release of Zen 2 will be the moment that AMD takes the crown, full stop!


----------



## andre02

Well, here we are, today is the day of the announcement. Any info as to at what time it will happen ?


----------



## ibb27

andre02 said:


> Well, here we are, today is the day of the announcement. Any info as to at what time it will happen ?


9 January 2019, 09:00:00 (Las Vegas time)


----------



## guttheslayer

ibb27 said:


> 9 January 2019, 09:00:00 (Las Vegas time)


So any leak a few hours before that AMD is confirm talking about desktop Ryzen 3000 series, like 3800X?


----------



## andre02

ibb27 said:


> 9 January 2019, 09:00:00 (Las Vegas time)


I see, thank you. Here (Europe) it is already 9 January, about 9 o'clock, which is why I said today. I know the US is about 8-10 hours behind us depending on the time zone; I'll have to check that out.

Personally, if I were to guesstimate, I think the 16 core rumours are real; we will have to see about the clocks. Anyway this would be a real achievement from AMD, to quadruple the number of cores for the mainstream in such a short time. Without this, I think we would be stuck in 6 core mainstream land from Intel for a good number of years, and that is something to be appreciated from AMD. I still like Intel's philosophy of doing things and, as a whole, as a company, I would say I am a fan (not fanboy  ) of Intel. But this will be a kick in the teeth from AMD, and maybe a wake-up call for Intel. Knowing Intel they will counter with a 10-12 core part with similar performance to the AMD 16 core, and although I would love to see one, I don't think we'll see a 16 core part from Intel, not for a long time.

You happen to know if there is a livestream i can watch with AMD's presentation ?


----------



## ibb27

andre02 said:


> You happen to know if there is a livestream i can watch with AMD's presentation ?






 After 9 hours


----------



## umeng2002

I agree with the WCCFtech dude, that Ryzen 3000 is coming in Q2. For AMD, that's April thru June. Just an extended preview today.


----------



## Streetdragon

So in Germoney its 18.00 o'clock. Finally a nice time! Must order some pizza for that


----------



## ibb27

Interview with Lisa Su:
http://fortune.com/2019/01/08/amd-ceo-lisa-su-meltdown-spectre/

and video:







> “Hardware may be sexy again,” she said, smiling. “That’s exciting for us.”


Red hot woman, we want Red Hot Hardware announcements today.


----------



## VeritronX

umeng2002 said:


> I agree with the WCCFtech dude, that Ryzen 3000 is coming in Q2. For AMD, that's April thru June. Just an extended preview today.


So far the Ryzen desktop launches have been in the March/April timeframe, so it would make sense for this one to be around there. I was expecting around the start of April.

Here's a countdown for the AMD Keynote posted by someone earlier in this thread:

https://www.timeanddate.com/countdo...0=127&msg=CES+AMD+Keynote&font=sanserif&csz=1


----------



## guttheslayer

Streetdragon said:


> So in Germoney its 18.00 o'clock. Finally a nice time! Must order some pizza for that


What time is that for Singapore, HK?
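For anyone doing the conversion themselves, here's a quick sketch with Python's `zoneinfo`, taking the 9 January 09:00 Las Vegas start quoted earlier in the thread (the zone names are standard IANA identifiers):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Keynote start quoted earlier in the thread: 9 January 2019, 09:00 Las Vegas time.
keynote = datetime(2019, 1, 9, 9, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

for zone in ["Europe/Berlin", "Asia/Singapore", "Asia/Hong_Kong"]:
    local = keynote.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%d %b %H:%M"))
# Berlin lands at 18:00 on the 9th; Singapore and Hong Kong at 01:00 on the 10th.
```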


----------



## battlenut

Japan it will be midnight. I stand corrected it will be about 1 Am. ooppss


----------



## GHADthc

Going to be about 5am in the morning for me...but my body is ready...


----------



## nolive721

battlenut said:


> Japan it will be midnight. I stand corrected it will be about 1 Am. ooppss


gonna follow the stream?I will....we are neighbor by the way.


----------



## battlenut

nolive721 said:


> gonna follow the stream?I will....we are neighbor by the way.


Nah, gotta wake up early (0530) get to work, then to class. probably listen to it while I am doing my work before class.


----------



## doritos93

The amount of hype around this keynote is unreal

Google "amd ces 2019" and it's all "omg AMD gonna kill everything rip Intel Nvidia etc etc" 

It'll never live up to the hype


----------



## Particle

Introducing the AMD K6-IV! *crowd cheers, albeit confusedly*


----------



## VeritronX

One thing we need to remember is that because this new 7nm node is so much smaller, the individual cores on the chips will be smaller.. and harder to cool. If it's half the size it has half the surface area and if the heat transfer rate is the same and the power consumption isn't also half then they will run hotter, no way around that.
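The point above is just a heat-flux argument; with made-up numbers (none of these are real die sizes or TDPs), halving the area at constant power doubles the watts each square millimetre has to push out:

```python
# Hypothetical numbers for illustration only - not real chip specs.
power_w = 65.0        # power of the logic block, unchanged by the shrink
area_old_mm2 = 100.0  # area on the older node
area_new_mm2 = 50.0   # same logic at half the area on the new node

flux_old = power_w / area_old_mm2  # W per mm^2 on the old node
flux_new = power_w / area_new_mm2  # W per mm^2 after the shrink

print(f"{flux_old:.2f} -> {flux_new:.2f} W/mm^2")  # heat flux doubles
```

So unless power also drops in proportion to area, the cooler has to pull the same watts through a smaller contact patch, which is exactly why the smaller cores run hotter at the same cooling capacity.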


----------



## bigjdubb

doritos93 said:


> The amount of hype around this keynote is unreal
> 
> Google "amd ces 2019" and it's all "omg AMD gonna kill everything rip Intel Nvidia etc etc"
> 
> _It'll never live up to the hype_


Nothing does. 




VeritronX said:


> One thing we need to remember is that because this new 7nm node is so much smaller, the individual cores on the chips will be smaller.. and harder to cool. If it's half the size it has half the surface area and if the heat transfer rate is the same and the power consumption isn't also half then they will run hotter, no way around that.


We can't stop physics from physicsing but we can get around it by adding more cooling. 





I will wait for the highlight reel, I can't stand watching these marketing speeches.


----------



## EniGma1987

VeritronX said:


> One thing we need to remember is that because this new 7nm node is so much smaller, the individual cores on the chips will be smaller.. and harder to cool. If it's half the size it has half the surface area and if the heat transfer rate is the same and the power consumption isn't also half then they will run hotter, no way around that.



The 7nm node they are using is nominally 2.5x denser than the 12nm node. However, since AMD uses custom (and proprietary) libraries for both the 12nm and 7nm processors, we don't actually know the real scaling between the nodes. That data is known only to AMD and TSMC. We can just guesstimate and say it is 2.5x denser, since that's the only data we have.
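Taking that 2.5x figure at face value (it's the post's guesstimate, not a confirmed number), a density ratio translates into a much smaller linear shrink, since area scales with the square of feature pitch:

```python
import math

density_ratio = 2.5                       # claimed 7nm vs 12nm density gain (guesstimate)
linear_shrink = math.sqrt(density_ratio)  # shrink per side, ~1.58x
area_fraction = 1.0 / density_ratio       # same block occupies ~40% of the old area

print(f"~{linear_shrink:.2f}x per side, {area_fraction:.0%} of the old area")
```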


----------



## Streetdragon

Don't know if something like that would be possible:

A good part of the heat goes into the package substrate under the die. Shouldn't some "heatpipes" inside the CPU help with the cooling? Like, would filling the empty space between the heatspreader and the package around the die help cooling?

Just an offtopic brainfart


----------



## coelacanth

The end of the video has some more Ryzen 3000 information.


----------



## Particle

IBM has experimented with microfluidic cooling channels on dies in the past.

I don't have any reason to believe that 7 nm is going to be the power density increase that breaks us.


----------



## CelticGamer

Ryzen 3000 announcement is a bust. Not one single piece of technical specification shared other than a Cinebench run against a 9900K with their 8 core 16 thread chip, which barely eked out a win over the 9900K.


----------



## Scotty99

Accepting all apologies now, got all day.


----------



## agatong55

Scotty99 said:


> Accepting all apologies now, got all day.


Apologize for what? They did not show anything to confirm or deny the rumors.... Heck, they did not show anything at all.


----------



## Scotty99

agatong55 said:


> Apologize for what? They did not show anything to confirm or deny the rumors.... Heck, they did not show anything at all.


If they had 12 or 16c Zen chips releasing this year, why in the world would they not have used them in the Cinebench demo....

Use your head mate.


----------



## Cuthalu

CelticGamer said:


> Ryzen 3000 announcement is a bust. Not one single piece of technical specification shared other than a Cinebench run against a 9900K with their 8 core 16 thread chip, which barely eked out a win over the 9900K.


With 30 % less power consumption. That's a HUGE improvement.


----------



## Pimaddafakkr

CelticGamer said:


> Ryzen 3000 announcement is a bust. Not one single piece of technical specification shared other than a Cinebench run against a 9900K with their 8 core 16 thread chip, which barely eked out a win over the 9900K.


With an early production sample without finalized clockspeeds. I'd say it's impressive, and there seems to be space for another chiplet there for 16c/32t Ryzens.


----------



## LancerVI

Scotty99 said:


> Accepting all apologies now, got all day.


Fair enough.

Ahem...........I'm very sorry, my fellow OCN members, that you've had to endure Scotty99 and his insufferable posts. Even though nothing was shown to prove or disprove Adored's rumors and leaks, he's doing some weird victory lap on these forums. Please bear with him as he clearly has some self-image issues to deal with.


Happy?


----------



## Lee Patekar

Scotty99 said:


> If they had 12 or 16c Zen chips releasing this year, why in the world would they not have used them in the Cinebench demo....
> 
> Use your head mate.


They showed IPC and power consumption of their 8 core engineering sample. And looking at the sample Lisa held up in her hand 16 cores is pretty much confirmed :^)

As for the rest of the leaks, that remains to be seen once they give actual numbers.. But it does confirm the chiplet aspect of it.. so cores and possibly gpu cores on the same package is a possibility.

I'd like to see 3rd party benchmarks checking the latency of the chiplet design before doing cartwheels, however, since I mostly game on my home system.


----------



## Scotty99

LancerVI said:


> Fair enough.
> 
> Ahem...........I'm very sorry, my fellow OCN members, that you've had to endure Scotty99 and his insufferable posts. Even though nothing was shown to prove or disprove Adored's rumors and leaks, he's doing some weird victory lap on these forums. Please bear with him as he clearly has some self-image issues to deal with.
> 
> 
> Happy?


That works lol.

Seriously tho, if they did have a 12c or 16c chip, why wasn't it used in the Cinebench demo? I just don't see it happening, and like I said earlier in this thread, the industry simply does not need more than 8/16 in the mainstream; they are just gonna shoot for small bumps over last gen.


----------



## Scotty99

Also, a 2080 for......2080 pricing? What is AMD thinking...

You know its going to fall short of the 2080 in popular games like AMD always does and consume more power, WHO would buy a radeon 7 over a 2080?


----------



## deepor

agatong55 said:


> Apologize for what? They did not show anything to confirm or deny the rumors.... Heck, they did not show anything at all.





Scotty99 said:


> If they had 12 or 16c Zen chips releasing this year, why in the world would they not have used them in the Cinebench demo....
> 
> Use your head mate.


Dr. Su held the chip up to the camera, without the IHS. You could see it has chiplets, and that the I/O part and the cores are separate. There was an empty gap for another chiplet; you could see the contacts on the PCB reflecting light at some point while she was waving the thing around a bit.

This means the worries about them being boring and just staying with the same general layout as Ryzen 1000 and 2000 were unfounded. They will do all kinds of models by using two slightly broken chiplets that would otherwise have been thrown away, things like 3+3 cores, 4+4, 5+5, 6+6, and they will surely sell a 16-core model as well. It makes no sense not to do a 16-core model. If they offer it, there will be people who will buy it, and AMD wants to earn money.


----------



## VeritronX

It's January; they probably won't finalise and announce the SKUs and prices until within a month of launch, so probably early March. We get the Vega 7 in less than a month and confirmation that an ES chip can beat the 9900K, along with a confirmed release window. That's a pretty realistic amount of stuff to have expected from a CES presentation, really.

Can anyone remember if the ES samples AMD has shown us in the past even had turbo enabled?


----------



## CelticGamer

Scotty99 said:


> Also, a 2080 for......2080 pricing? What is AMD thinking...
> 
> You know its going to fall short of the 2080 in popular games like AMD always does and consume more power, WHO would buy a radeon 7 over a 2080?


Dumb decision to equip the card with 16GB of HBM2. Should have made it 8GB or 12GB and cut the cost down. 16GB of HBM2 is worthless for gaming, even at 4K. All it did was raise the price.


----------



## ibb27

VeritronX said:


> It's January, they probably won't finalise and announce the sku's and prices until within a month from launch so probably early march.


Don't expect Ryzen 3k earlier than Q3; Lisa Su said that EPYC CPUs will be launched first, and now she has confirmed that EPYC will be ready for launch in the middle of 2019.


----------



## LancerVI

Scotty99 said:


> Also, a 2080 for......2080 pricing? What is AMD thinking...
> 
> You know its going to fall short of the 2080 in popular games like AMD always does and consume more power, WHO would buy a radeon 7 over a 2080?


I think this is the only point I partially agree with. They need to be very aggressive with pricing. Not $400, but $549 to $650 would have been nice. With 16GB of HBM, though, that's virtually impossible; they may as well give money away.


----------



## EniGma1987

Pimaddafakkr said:


> With an early production sample with without finalized clockspeeds. I'd say it's impressive, and there seems to be more space for another chiplet there for 16c/32t ryzens.





Saying "not final clockspeeds" is a double-edged sword from Lisa Su. On the one hand, they could have overclocked the CPUs to tie the Intel chips. That could have been what Lisa was so worried about: crashing from the OC. Final clock speeds could be lower and thus less performance.
On the other hand, if they were OCed, that means OC ability should be good and we really could match 5GHz 9900K performance.... Was the clock speed running at a "lowly 4GHz" and AMD is about to obliterate Intel? Or was the chip running above the design spec at something like 4.8GHz? We just don't know enough yet.


----------



## Scotty99

How about the dozens of people who had their wallets out ready to buy a 12c 5gz chip for 330 bucks today, where those people at lol.


----------



## Cuthalu

EniGma1987 said:


> Saying "not final clockspeeds" is a double-edged sword from Lisa Su. On the one hand, they could have overclocked the CPUs to tie the Intel chips. That could have been what Lisa was so worried about: crashing from the OC. Final clock speeds could be lower and thus less performance.
> On the other hand, if they were OCed, that means OC ability should be good and we really could match 5GHz 9900K performance.... Was the clock speed running at a "lowly 4GHz" and AMD is about to obliterate Intel? Or was the chip running above the design spec at something like 4.8GHz? We just don't know enough yet.


Considering the total system power consumption was around 135 W, with the CPU itself at around 75 W, it probably wasn't any kind of super high overclock.



Scotty99 said:


> How about the dozens of people who had their wallets out ready to buy a 12c 5gz chip for 330 bucks today, where those people at lol.


Did anyone expect them to start selling today? Probably not.


----------



## Scotty99

Let's imagine for a second there is room for another chiplet to fit in there: WHY OH WHY did they not use a 12c chip for that Cinebench demo? They could have demolished the 9900K but chose to use an 8c and just tie? Something does not add up.


----------



## smnzer

One reason I could think of for using only 8 cores is to highlight the power efficiency of Zen 2 - it still beat the 9900k and you could see that with the total system power consumption being lower during the Cinebench run.

If you used the 16 core sample the difference in total power draw would not be as pronounced. Or the 16 core chip isn't ready or functional to be shown. Or a host of other reasons that can include AMD holding back a 16 core sample (either because it isn't coming this year or because they don't want to give everything away).

I agree that it's weird they didn't show their theoretically best chip with that design but Adored was absolutely right about the chiplet and IO design. So he is either very good with speculation and/or he has a somewhat accurate source.

Edit - a 12 core CPU codename leaked a few days ago as well. I'd say it was a pretty poor showing from AMD today but Adored got more right than wrong today.


----------



## CelticGamer

Recently blowing a load of cash on an RTX 2080, I was nervous going into this, with the rumors surrounding the new AMD GPU.

After seeing what was shown, I can honestly say I'm not feeling bad about my purchase at all. 

If the 2080 is equal to the new Vega in AMD's hand selected benchmarks, then that tells me that in the overall bulk of games, the 2080 is likely to be superior, plus having the ability to do DLSS and Ray Tracing.


----------



## Lee Patekar

Scotty99 said:


> Let's imagine for a second there is room for another chiplet to fit in there: WHY OH WHY did they not use a 12c chip for that Cinebench demo? They could have demolished the 9900K but chose to use an 8c and just tie? Something does not add up.



They're comparing performance and power to Intel. The engineering sample matches Intel's performance with "non-finalized clocks", whatever that means, with lower power consumption. It's a good early demo for a future product. I suspect we'll get the SKU details at Computex.



CelticGamer said:


> Recently blowing a load of cash on an RTX 2080, I was nervous going into this, with the rumors surrounding the new AMD GPU.
> 
> After seeing what was shown, I can honestly say I'm not feeling bad about my purchase at all.


It's nothing more than a die-shrunk Vega card... something to throw at investors more than gamers, I'd wager.


----------



## Ramad

Scotty99 said:


> Let's imagine for a second there is room for another chiplet to fit in there: WHY OH WHY did they not use a 12c chip for that Cinebench demo? They could have demolished the 9900K but chose to use an 8c and just tie? Something does not add up.



- The CPU is an _"early sample"_ means frequency is not final. My guess is it was running at 4.0GHz looking at the power usage.
- The used CPU is likely to be an R5 chip.
- Both systems have 8 cores and 16 threads.


It's quite a statement if AMD can deliver the same performance as Intel's high end CPU (9900K) using the mid range R5 CPU. And yes, there will be AMD Ryzen CPU's with higher core count than 8 cores. The empty space is for a GPU core in case of an APU package or another 8 cores die.


----------



## Scotty99

If they had a 12c chip ready they would have used it, believe me.


----------



## ibb27

smnzer said:


> I agree that it's weird they didn't show their theoretically best chip with that design but Adored was absolutely right about the chiplet and IO design. So he is either very good with speculation and/or he has a somewhat accurate source.


In fact, his "source" claimed that Ryzen did not have a separate IO die. LOL 
And all of these "leaks" about CES reveals of Ryzen 3k and Navi were just nonsense. But he has a good feel for future products and is very good with speculation. AMD can definitely do a 16 core Ryzen 3k.


----------



## smnzer

ibb27 said:


> In fact, his "source" claimed that Ryzen did not have a separate IO die. LOL
> And all of these "leaks" about CES reveals of Ryzen 3k and Navi were just nonsense. But he has a good feel for future products and is very good with speculation. AMD can definitely do a 16 core Ryzen 3k.


Actually, now that you mention it you are right - his recent video with the source went back on the IO die for Ryzen. Adored's own speculation a few weeks earlier had included an IO die and he was more accurate than his source. 

That being said matching IPC with Intel at lower power consumption is a pretty big deal so I'm pretty happy. However, if there are no 12 or 16 core variants this year I'm not upgrading my Ryzen 1600.


----------



## EniGma1987

Ramad said:


> - The CPU is an _"early sample"_ means frequency is not final. My guess is it was running at 4.0GHz looking at the power usage.
> - The used CPU is likely to be an R5 chip.
> - Both systems have 8 cores and 16 threads.
> 
> 
> It's quite a statement if AMD can deliver the same performance as Intel's high end CPU (9900K) using the mid range R5 CPU. And yes, there will be AMD Ryzen CPU's with higher core count than 8 cores. The empty space is for a GPU core in case of an APU package or another 8 cores die.


I'm still guessing it was running 4.5-4.7GHz core speed. The 9900K they benched against has an all-core turbo of 4.7GHz; no way a 4GHz AMD CPU is matching a 4.7GHz Intel with the same core count and threads this gen. I could see them beating it within a couple hundred MHz under, but not 700MHz lower.


----------



## smnzer

Technical question for anyone who knows - does the latency increase (significantly) with another chiplet with 8 cores - or is it mostly dependent on the presence of the IO die?


----------



## rdr09

EniGma1987 said:


> I'm still guessing it was running 4.5-4.7GHz core speed. The 9900K they benched against has an all-core turbo of 4.7GHz; no way a 4GHz AMD CPU is matching a 4.7GHz Intel with the same core count and threads this gen. I could see them beating it within a couple hundred MHz under, but not 700MHz lower.


They just used an oc'ed R7 2700X


----------



## Scotty99

My guess is 4.2-4.4 all-core boost in the Cinebench test; 5.0 isn't happening this gen..... but they did get a nice IPC boost.


----------



## KyadCK

smnzer said:


> Technical question for anyone who knows - does the latency increase (significantly) with another chiplet with 8 cores - or is it mostly dependent on the presence of the IO die?


All core chiplets connect to the IO die via IF, not to each other.

Die to Die latency will be like Threadripper or Epyc (or inter-CCX Ryzen), though optimized more. Die to RAM latency will be more uniform and will no longer have NUMA issues.
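The star topology described above can be pictured with a toy model (hop counts are purely illustrative, not measured latencies; the node names are made up):

```python
# Star topology: every chiplet links to the IO die, never to another chiplet.
links = {
    ("chiplet0", "io_die"),
    ("chiplet1", "io_die"),
    ("io_die", "dram"),
}

def hops(a, b):
    """Illustrative hop count, routing everything through the IO die hub."""
    if a == b:
        return 0
    direct = (a, b) in links or (b, a) in links
    return 1 if direct else hops(a, "io_die") + hops("io_die", b)

print(hops("chiplet0", "chiplet1"))  # cross-chiplet traffic: 2 hops via the IO die
print(hops("chiplet0", "dram"))      # same distance to DRAM from every chiplet
```

The point of the sketch: chiplet-to-chiplet traffic always transits the hub, and every chiplet sits the same distance from memory, which is why the RAM latency becomes uniform and the NUMA asymmetry goes away.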


----------



## PwrSuprUsr

Scotty99 said:


> My guess is 4.2-4.4 all-core boost in the Cinebench test; 5.0 isn't happening this gen..... but they did get a nice IPC boost.


It isn't "this gen", it's a "next gen" 7nm part. You don't get an IPC increase from shrinking down to 7nm. What you DO get is extra clock speed.


----------



## Scotty99

PwrSuprUsr said:


> It isn't "this gen", it's a "next gen" 7nm part. You don't get an IPC increase from shrinking down to 7nm. What you DO get is extra clock speed.


There are no architectural updates with zen 2? 

The reason I said 4.2 to 4.4 is the power consumption numbers they were showing; how much does a 2700X consume clocked to 4.3 or 4.4?


----------



## smnzer

Zen 2 is a (somewhat) new architecture that also happens to be on a smaller node at 7nm. IDK about 5GHz, but the 2700X is already at 4.3GHz boost; expect the 3700X to be closer to 4.7 or 4.8 (max boost, a 10% increase). Maybe the 3850 (if it exists) will reach 5 with super binning as a limited edition part.


----------



## umeng2002

Somehow, WCCFtech claims the demo was at 4.6 GHz.


----------



## white owl

I love how AMD now has an arch that's faster core for core, which was impossible yesterday, but now the issue is the lack of 12c/16c parts... even though you could see an open spot on the PCB for another die.


----------



## Ultracarpet

I'm sorry, but for all the people defending AdoredTV here... what exactly did he get right? His charts all had the 8c, 12c, and 16c mainstream parts being announced at CES. He also had Navi as a talking point, and it was barely even mentioned due to Vega 2 actually being the announcement. All we actually got for Ryzen was a "PREVIEW", which Lisa stressed more than once, of the 8c part. It won't launch until summer, likely a few months after Epyc. 

Outside of the obvious, which was that Ryzen 2 would have better IPC, clocks, and power consumption, oh, and the chiplet design (that was a pretty safe bet), what did he bring to the table here other than a few half-truths and just straight up being wrong? I think he just sucked a bunch of people in for ad revenue with some fortune teller-esque techniques to make it seem like he was on the right track regardless of the outcome.

As for the demo, I'm happy; I think AMD has a good chip on their hands. I'm guessing the clocks were somewhere in the 4.5-4.6GHz range (likely near the edge of the power efficiency curve), with slightly better IPC than the 9900K (I believe it has a 4.7GHz all-core stock turbo). Hopefully it will get close to 5GHz, but Ryzen doesn't have a history of overclocking very far past stock clocks (for the flagship models). I'm guessing max OC is going to be somewhere around 4.7-4.8GHz.
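For what it's worth, the back-of-envelope math behind a clock guess like that can be sketched out. All the numbers here are assumptions for illustration: a 9900K scoring roughly 2050 Cinebench points at a 4.7GHz all-core turbo, and the demo chip roughly tying that score.

```python
# Estimate the clock a chip would need to hit a given Cinebench score,
# given a reference chip's score/clock and an assumed IPC uplift.
# Reference numbers (2050 pts @ 4.7 GHz for a 9900K) and the demo score
# are assumptions for illustration, not measured values.

def implied_clock_ghz(score: float,
                      reference_score: float = 2050.0,
                      reference_clock: float = 4.7,
                      ipc_uplift: float = 0.0) -> float:
    """Solve score = rate * (1 + ipc_uplift) * clock for clock."""
    points_per_ghz = reference_score / reference_clock  # reference throughput
    return score / (points_per_ghz * (1.0 + ipc_uplift))

demo_score = 2050.0  # assume the demo chip roughly tied the 9900K

# With zero IPC advantage, matching the score needs the same 4.7 GHz...
assert round(implied_clock_ghz(demo_score), 2) == 4.7
# ...but an assumed ~5% IPC uplift brings the implied clock down to ~4.5 GHz,
# consistent with the 4.5-4.6 GHz guesses in this thread.
assert round(implied_clock_ghz(demo_score, ipc_uplift=0.05), 1) == 4.5
```

The same one-liner explains why the IPC and clock guesses trade off against each other: a bigger assumed IPC gain implies a lower demo clock, and vice versa.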


----------



## dubldwn

14nm (Intel) Core gets matched by 7nm Ryzen

12nm RTX gets matched by 7nm Radeon

Intel and nVidia have new designs for the new processes.

I think I see the writing on the wall here.


----------



## smnzer

Ultracarpet said:


> I'm sorry but, for all the people defending adoredtv here... what exactly did he get right? His charts all had the 8c, 12c and 16c mainstream parts being announced at CES. He also had Navi as a talking point, and it was barely even mentioned due to Vega 2 actually being the announcement. All we actually got for Ryzen was a "PREVIEW" which Lisa stressed more than once, of the 8c part. It won't launch until summer, likely a few months after Epyc.
> 
> Outside of the obvious, which was ryzen 2 would have better ipc, clocks, and power consumption- oh and the chiplet design (that was a pretty safe bet), what did he bring to the table here other than a few half truths, and just straight up being wrong? I think he just sucked a bunch of people in for ad revenue with some fortune teller-esque techniques to make it seem like he was on the right track regardless of the outcome.
> 
> As for the demo, I'm happy, I think AMD has a good chip on their hands. I'm guessing the clocks were somewhere in the 4.5-4.6ghz range (likely near the edge of the power efficiency curve), with slightly better IPC than the 9900k (I believe it has a 4.7ghz all core stock turbo). Hopefully it will get close to 5ghz, but Ryzen doesn't have a history of overclocking very far past stock clocks (for the flagship models). I'm guessing max OC is going to be somewhere around 4.7-4.8ghz.


I think the problem is that he hasn't been outright disproven just yet. There is an 8 core Ryzen chip that can beat the 9900k, probably at lower clocks. That's all we know. We don't know the pricing, we don't know if the 16 core exists, we don't know if 6 core is the base, we don't know the clocks. With the chiplet/IO design being revealed, he actually got a pretty big thing right. 

People are going to spend the next few months arguing the chip seen is a Ryzen 7 or Ryzen 5 unless AMD releases more information.

As for Navi, we still know nothing about it other than it's clearly not ready for show and tell at CES. Not even Adored was expecting much on it, saying at the absolute earliest it would come mid-2019 and that we wouldn't get much information on it.


----------



## Scotty99

smnzer said:


> I think the problem is that he hasn't been outright disproven just yet. There is an 8 core Ryzen chip that can beat the 9900k, probably at lower clocks. That's all we know. We don't know the pricing, we don't know if the 16 core exists, we don't know if 6 core is the base, we don't know the clocks. With the chiplet/IO design being revealed, he actually got a pretty big thing right.
> 
> People are going to spend the next few months arguing the chip seen is a Ryzen 7 or Ryzen 5 unless AMD releases more information.
> 
> As for Navi, we still know nothing about it other than it's clearly not ready for show and tell in CES. Not even Adored was expecting much on it, saying at the absolute earliest it would come mid-2019 and that we wouldn't get much information on it.


What lol?

Are people already forgetting the enormous leaks of individual product pricing, cores, and clock speeds, and that Russian site apparently backing them up?

Like holy hell people, you gotta be a true fanboy to continue to believe this stuff.


----------



## ozlay

I haven't seen anything to suggest that the leaks are wrong.


----------



## Scotty99

ozlay said:


> I haven't seen anything to suggest that the leaks are wrong.


Like I've been saying all thread, we're not getting 12c 5GHz CPUs from AMD for 330 dollars. I told you guys the absolute best-case scenario we will see out of AMD is a 9900K killer (being a 12c part), but they might not even have that, considering their Cinebench demo.


----------



## Ultracarpet

smnzer said:


> I think the problem is that he hasn't been outright disproven just yet. There is an 8 core Ryzen chip that can beat the 9900k, probably at lower clocks. That's all we know. We don't know the pricing, we don't know if the 16 core exists, we don't know if 6 core is the base, we don't know the clocks. With the chiplet/IO design being revealed, he actually got a pretty big thing right.
> 
> People are going to spend the next few months arguing the chip seen is a Ryzen 7 or Ryzen 5 unless AMD releases more information.
> 
> As for Navi, we still know nothing about it other than it's clearly not ready for show and tell in CES. Not even Adored was expecting much on it, saying at the absolute earliest it would come mid-2019 and that we wouldn't get much information on it.


That first line is exactly how fortune telling works lol. Regardless, the 9900K will probably still edge it out by 5-10% at max OC, because I don't think the Ryzen chip will have much headroom past what they showed here. Like I mentioned, the chiplet design was not an incredibly big leap; we already knew they were using it with their server chips, it just needed to be adopted into the consumer parts.

As for everything else you said in that sentence, that is precisely everything he got wrong lol. The fact that all we know is that there is an 8 core chip that can compete with a 9900K at 4.7GHz is much less than his "leaks" were proposing. This chip they just "previewed", NOT announced, is supposed to cost $229? They were supposed to announce it today? They were supposed to announce 12 core and 16 core chips today? We know nothing. The entire point of these leaks was centered around what was going to happen at CES. If they weren't, it would have just been general speculation about Ryzen 2. 

You can highlight one row out of his entire table of leaks and only select the 8c/16t cell, and maybe the TDP cell. That's all we got: 2 of the 60 cells in his table confirmed.


----------



## TonyLee

Scotty99 said:


> If they had a 12c chip ready they would have used it, believe me.


It is not hard to notice in your many posts that you are NOT a fan of AMD. If it was a 12 core chip, then you would probably be saying that it takes a 12 core chip from AMD to match an 8 core Intel chip. Most of the time I buy Intel since those chips are more powerful, and I agree that they are still more powerful than the currently released AMD chips. I do not blame you for using the 8700k for gaming since it is clearly a lot faster than AMD in your preferred games. I used my last Intel chip for 7 years (2600k), and will probably go back with Intel after I get tired of this Ryzen 5 (unless AMD really steps it up).


----------



## Scotty99

TonyLee said:


> It is not hard to notice in your many posts that you are NOT a fan of AMD. If it was a 12 core chip, then you would probably be saying that it takes a 12 core chip from AMD to match an 8 core Intel chip. Most of the time I buy Intel since those chips are more powerful, and I agree that they are still more powerful than the currently released AMD chips. I do not blame you for using the 8700k for gaming since it is clearly a lot faster than AMD in your preferred games. I used my last Intel chip for 7 years (2600k), and will probably go back with Intel after I get tired of this Ryzen 5 (unless AMD really steps it up).


I'm not a fan of any company, I'm a fan of logical thinking. 

This leak was enticing because of its particular nature: it listed an entire product stack top to bottom, including clock speeds/cores and pricing. None of this turned out to be true, of course, but people brushed that away like it wasn't there in the first place, even when that's the ONLY reason it got traction!!!!!!


----------



## smnzer

Scotty99 said:


> Like ive been saying all thread, were not getting 12c 5ghz cpu's from amd for 330 dollars. I told you guys the absolute best case scenario we will see out of AMD is a 9900k killer (being a 12c part) but they might not even have that considering their cinebench demo.


Except that the design clearly allows for another 8 core chiplet to be added. Whether or not that is actually feasible at reasonable pricing on AM4 is yet to be seen, and, along with the 5 GHz, these are the two most important claims about the leak. 

There are literally indents in the chip she showed us during the event where the other chiplet would be placed. 

Again, I don't know whether 8+ cores will be a reality in 2019, but there's nothing disproving the leak so far since we learned nothing about the prices or clocks. Does that mean it's real or likely? No. Again, we don't have enough information even after the CES presentation.

If you're right you can claim victory when we actually have the information to make that judgment lol


----------



## Ultracarpet

TonyLee said:


> It is not hard to notice in your many posts that you are NOT a fan of AMD. If it was a 12 core chip, then you would probably be saying that it takes a 12 core chip from AMD to match an 8 core Intel chip. Most of the time I buy Intel since those chips are more powerful, and I agree that they are still more powerful than the currently released AMD chips. I do not blame you for using the 8700k for gaming since it is clearly a lot faster than AMD in your preferred games. I used my last Intel chip for 7 years (2600k), and will probably go back with Intel after I get tired of this Ryzen 5 (unless AMD really steps it up).


All he has done here in this thread is provide a bit of skepticism about these leaks, and he was absolutely right to do so.

Adored thought they were going to showcase a 16 core 32 thread chip at 5.1GHz, according to his latest video on Ryzen 2 at CES. Adored wasn't even on the right planet.


----------



## Scotty99

It doesn't matter if AMD does a 12c or 16c; we didn't get an announcement from the original product stack leak... which is the WHOLE REASON THIS BLEW UP IN THE FIRST PLACE.

What you guys are arguing about now is semantics; it's actually irrelevant.


----------



## Ultracarpet

https://youtu.be/MG-onUm__c8?t=986

Here you guys go. According to Adored, the consumer Vega GPU was cancelled, and 3 Navi GPUs were to be announced at CES. 

And using his own logic/words, if you don't believe the whole leak, you might as well believe none of it. He had this guess because it came from the same leaker as the Ryzen info. So apparently all the Ryzen info is bunk too.

Cheers.


----------



## tyvar

Scotty99 said:


> Im not a fan of any company, im a fan of logical thinking.


Then why are you posting drivel? 

You clearly don't understand the significance of this demo. I can tell you exactly why AMD showed only an 8 core 16 thread part.

When AMD released the 1700 and 1800s, the message was "we can beat Intel in multithreaded apps" with a processor that had 2-4 more cores and 4-8 more threads than Intel's mainstream desktop equivalents. Yeah, it was nice that AMD improved the core and thread counts, but overall not that impressive; throw more threads at the problem and of course you get better multithreaded scores.

Here AMD is showing something much more powerful: 16 AMD threads are better than 16 Intel threads, at lower power. That's a powerful statement. It tells everybody who has multithreaded workloads (as long as they don't require AVX-512) to not even bother looking at Intel now, because odds are AMD will not only give you more threads at the same price point, but each thread will be more powerful and more efficient. That's a KO in most every segment other than gaming and watching videos of cats.


----------



## smnzer

Scotty99 said:


> It doesnt matter if amd does a 12c or 16c, we didnt get an announcement from the original product stack leak......which is the WHOLE REASON THIS BLEW UP IN THE FIRST PLACE.
> 
> What you guys are arguing about now is semantics, actually irrelevant.


Except that you claimed that a 12c 9900k killer is the best AMD could do in the best possible scenario two posts ago, so it does matter, clearly.

We didn't get the announcement at CES. Let's say hypothetically, AMD has a tech day and announces it in March. Or more likely Computex in May/June. And let's say hypothetically that the leak is on the money or very close. 

Would it matter that the information came 3-5 months later than expected? No, it really wouldn't in terms of judging the leak's accuracy. To clarify, I do not believe in the leak, other than the possibility of 12 or 16 core CPUs. 


-----------
Moving on, just for reference in the thread, I'd argue that these four conditions must be satisfied for the leak to be judged as mostly accurate:

1. 12 core, 16 core CPUs with Ryzen 7 or 9 designation on AM4 in 2019
2. A Ryzen 3 with 6 cores as a base (for $99!)
3. 5 GHz, ideally on the 3700X rather than just the 3850X
4. No wide deviation in clocks or prices across the board


----------



## Ultracarpet

smnzer said:


> Except that you claimed that a 12c 9900k killer is the best AMD could do in the best possible scenario two posts ago, so it does matter, clearly.
> 
> We didn't get the announcement at CES. Let's say hypothetically, AMD has a tech day and announces it in March. Or more likely Computex in May/June. And let's say hypothetically that the leak is on the money or very close.
> 
> Would it matter that the information came 3-5 months later than expected? No, it really wouldn't in terms of judging the leak's accuracy. To clarify, I do not believe in the leak, other than the possibility of 12 or 16 core CPUs.
> 
> 
> -----------
> Moving, just for reference in the thread, I'd argue that these four conditions must be satisfied for the leak to be judged as mostly accurate
> 
> 1. 12 core, 16 core CPUs with Ryzen 7 or 9 designation on AM4 in 2019
> 2. A Ryzen 3 with 6 cores as a base (for $99!)
> 3. 5 GHz, ideally on the 3700X rather than just the 3850X
> 4. No wide deviation in clocks or prices across the board


He was wrong about the GPUs entirely, he was wrong about everything being "announced", and he thought there was going to be a 16 core 32 thread chip running at 5.1GHz facing off against the 9900K. Adored was Not. Even. Close.


----------



## Scotty99

smnzer said:


> Except that you claimed that a 12c 9900k killer is the best AMD could do in the best possible scenario two posts ago, so it does matter, clearly.
> 
> We didn't get the announcement at CES. Let's say hypothetically, AMD has a tech day and announces it in March. Or more likely Computex in May/June. And let's say hypothetically that the leak is on the money or very close.
> 
> Would it matter that the information came 3-5 months later than expected? No, it really wouldn't in terms of judging the leak's accuracy. To clarify, I do not believe in the leak, other than the possibility of 12 or 16 core CPUs.
> 
> 
> -----------
> Moving, just for reference in the thread, I'd argue that these four conditions must be satisfied for the leak to be judged as mostly accurate
> 
> 1. 12 core, 16 core CPUs with Ryzen 7 or 9 designation on AM4 in 2019
> 2. A Ryzen 3 with 6 cores as a base (for $99!)
> 3. 5 GHz, ideally on the 3700X rather than just the 3850X
> 4. No wide deviation in clocks or prices across the board


But you are missing the point, yet again. This got hyped and became his 2nd most viewed video ever specifically because of the leaked specs. People were drooling over the 3700X, and rightfully so, but here we are with no announcement of any sort. Now people are just going back and forth about chiplets and whatnot when it doesn't even matter; AMD has always had a core lead on Intel in the mainstream, and them doing a 12c part priced similarly to the 9900K isn't going to excite anyone.


----------



## smnzer

Ultracarpet said:


> He was wrong about the GPU's entirely, he was wrong about everything being "announced" and he thought there was going to be a 16 core 32 thread chip running at 5.1ghz facing off against the 9900k. Adored was Not. Even. Close.


Except that he nailed the chiplet + IO die design prediction months ago, which is a big deal. There's still some truth to his speculation, despite his predictions on what we would see at CES being wrong. 

Again, that doesn't mean the leak is a complete fake, although it is looking much more unlikely, yes.



Scotty99 said:


> But you are missing the point, yet again. This got hyped and became his 2nd most viewed video ever specifically because of the leaked specs. People were drooling over the 3700x and rightfully so, but here we are with no announcement of any sort. Now people are just going back and forth about chiplets and whatnot when it doesnt even matter, AMD has always had a core lead on intel in the mainstream and them doing a 12c part that is priced similar to the 9900k isnt going to excite anyone.


Actually, I'd argue you are missing the point - being announced at CES was not the big deal in all of this. At all. The big deal was the possibility of 16 core chips. With 5 GHz. At $499. Which we still know nothing about, because again, AMD told us nothing about clocks or prices. Other than them having an ES that is a 9900k equivalent at 75W.


----------



## Telimektar

Is the 9900K Cinebench score seen in the AMD keynote closer to a stock, enforced 95W TDP 9900K, or a 9900K running a 4.7GHz all-core frequency with Multicore Enhancement? (This is actually the default behavior on most motherboards, if I remember correctly.)


----------



## ibb27

Scotty99 said:


> Lets imagine for a second there is room for another chiplet to fit in there, WHY OH WHY did they not use a 12c chip for that cinebench demo? They could have demolished the 9900k but chose to use an 8c and just tie? Something does not add up.


https://twitter.com/IanCutress/status/1083099086880952320

Helloo, hellooouu, ding ding ding! LOL


----------



## Scotty99

ibb27 said:


> https://twitter.com/IanCutress/status/1083099086880952320
> 
> Helloo, hellooouu, ding ding ding! LOL


Big deal; it literally does not matter unless the original leak is accurate. 

This thread isn't about AMD bringing 12 or 16 core CPUs to the mainstream, it's about the leaked product stack. 

Threadripper already exists for people who want massive amounts of cores, and if AMD does put 12c on the mainstream it's not going to be much cheaper than the 12c TR chip; it's going to be priced to compete with whatever Intel has on the market at that point in time.


----------



## Ultracarpet

Telimektar said:


> Is the 9900K Cinebench score seen in the AMD keynote closer to a stock forced 95W TDP 9900K or a 9900K running 4.7GHz all core frequency with Multicore Enhancement ? (this is actually the default behavior on most motherboards if I remember correctly)


If you look at reviews, the 9900K scores around 2050 at 4.7GHz all core.


----------



## firefox2501

Looks like I'll have plenty of options when I upgrade the 7700K this summer. This preliminary benchmark shows a lot of promise for the Zen 2 platform.


----------



## ChiTownButcher

Scotty99 said:


> Also, a 2080 for......2080 pricing? What is AMD thinking...
> 
> You know its going to fall short of the 2080 in popular games like AMD always does and consume more power, WHO would buy a radeon 7 over a 2080?


Because if you don't, and AMD graphics goes bust, you will not have another choice, and your next GPU will be triple the price for 5% more performance. If you care about the future of your hobby, every once in a while you have to do the right thing for the big picture, not the one-time transaction.


----------



## Scotty99

ChiTownButcher said:


> Because if you dont and AMD Graphics goes bust you will not have another choice and you next GPU will be triple price for 5% more performance. If you care about the future of your hobby every once in a while you have to do the right thing for the big picture, not the 1 time transaction.


To be honest, I'd be OK with that. AMD should stick to APUs and Intel can take their spot.

Think about this for a second: once the reviews come out it's GOING to lose to the 1080 Ti in some games, a card that released nearly two years ago for the same price lol.

Just stop, AMD, throw in the towel lol.


----------



## Ultracarpet

smnzer said:


> Except that he nailed the chiplet + IO die design prediction months ago, which is a big deal. There's still some truth to his speculation, despite his predictions on what we would see at CES being wrong.


Those were completely separate leaks. The chiplet design was from Epyc; all he was really guessing with the Ryzen leak was that they would adopt the same design that we were all already aware of from Epyc. That's not that big of a bet to make. 

Regardless, the whole point of his videos was what we were going to see at CES, and he was wrong about 99% of it. All he's doing is scouring the internet for leaks and then speculating on them, which involves him jumping to a bunch of false conclusions, and people defend him as if he were some prophet. He might land a blind punch once in a while and then run with it, but it's mostly complete bs.

He's been wrong way more than he's been right in the past. Go back and watch how far off he was with Polaris... He went from doing "let's plays" to being some coveted leaker of top-secret industry hardware news in just a few years. I used to watch his videos back during the original Ryzen and Polaris launches, and I slowly started to realize he just summarizes the same theories and guesses that you see kicking around PC forums and reddit... except he does it with a Scottish accent.


----------



## Cuthalu

Scotty99 said:


> AMD has always had a core lead on intel in the mainstream and them doing a 12c part that is priced similar to the 9900k isnt going to excite anyone.


Considering the performance and power consumption of a Zen 2 thread, it's going to excite a lot of people. The 8 core already has many excited.



Scotty99 said:


> Think about this for a second, once the reviews comes out its GOING to lose to the 1080ti in some games, a card that released nearly two years ago for the same price lol.
> 
> Just stop AMD, throw in the towel lol.


And you seriously try to make the argument that you are first and foremost logical and not a fanboy? You're making it very, very hard to believe.


----------



## Zam15

Scotty99 said:


> Think about this for a second, once the reviews comes out its GOING to lose to the 1080ti in some games, a card that released nearly two years ago for the same price lol.
> 
> Just stop AMD, throw in the towel lol.


Considering most were predicting a Navi announcement with 1080-class performance, and instead they announced a GPU with 1080 Ti-class performance, people will be upset about anything. AMD is finally getting back on track and offering Nvidia some competition; Nvidia only has one card that can outperform it, at double the price. Not including ray tracing. Maybe Navi is getting retooled for ray tracing, but it was hinted that something else was in the works for this year.


----------



## speed_demon

Cuthalu said:


> And you seriously try to make the argument that you are first and foremost logical and not a fanboy? You're making it very, very hard to believe.


Yes he is. 

I would like to add that the keynote speech was a look at the current state of manufacturing capabilities with an eye towards the future and not solely an AMD sales pitch. If there is more in store from AMD they certainly were not under pressure to reveal it all today.


----------



## white owl

Is Vega still a compute card? I kinda missed that part in the stream. Were they running games with it?
The specs look a lot like a compute-oriented GPU to me, so I'm pretty confused.


----------



## ozlay

My guess is that the Vega 2 chip wasn't ready yet. And that is why it was missing from the chip Lisa was holding. But still impressive performance compared to the 9900k. I'm sure the Vega 2 chip will be equally impressive.


----------



## Ultracarpet

Zam15 said:


> Considering most were predicting a Navi announcement with 1080 class performance and the instead they announce a GPU with 1080TI class performance people will be upset about anything. AMD is finally getting back on track and offering Nvidia some competition, Nvidia only has one card that can outperform it at double the price. Not including Ray Tracing. Maybe Navi is getting retooled for Ray Tracing, but it was hinted that something else was in the works for this year.


The hype train was calling for that Navi GPU to cost $299 as well... Considering this Vega 2 GPU costs $699 USD, that's not exactly fantastic. I bought my 1080 Ti a year and a half ago off the shelf for $850 CAD, which is about $650 USD right now. They are almost 2 years late and charging more for that same performance. Not that exciting. At least when Nvidia did it with the 2080, it came with an RTX gimmick.


----------



## SuperZan

With regards to whether or not Jim @ AdoredTV is a pre-cog - no, of course he's not. Anybody who expected a literal confirmation of every speculative musing on AMD's immediate future is a fool. However, and *much* more importantly, he does seem to have correctly spoken to some of the broad strokes. He ran hot and cold on the chiplet idea, but he was discussing it. Some of the HCC chips do seem plausible given the evident space for another core chiplet. Pricing is always wildly speculative and nobody should ever take price leaks seriously if they plan on avoiding bizarre emotional reactions to tech news. I don't see any reason to laud Jim as lord of leaks or whatever, but he's certainly not any more deserving of castigation than most of the better techtubers.

The Ryzen stuff was very impressive, full stop. No mitigation, no 'in the right context'. For the first time in a long time, AMD was willing to put a qualitative benchmark up at a core for core and thread for thread level. This wasn't the 1800x beating up on the 7700k in multithread. AMD has clearly made progress on what the more engineering-savvy have indicated as low-hanging fruit (Anandtech covered this in pretty good detail). AMD has unequivocally come to play in the desktop CPU market and we should all be glad for it.

For those who were disappointed by the GPU stuff - what *exactly* were you expecting? A month ago, many of the same avatars were lamenting the seeming impossibility of even reaching 1080 Ti performance. I remember because I was one of them. To paraphrase myself, I said that 'in a dream world' AMD would have a 1080 Ti-performant card at a significantly better price than the Turing equivalent. The latter didn't come through but it does at least seem that the former did. No, it's not a particularly exciting product but then anybody with half a brain *knew not to expect anything super-exciting.* This was never going to be the Navi show. Potentially saving $100 (or more depending on when you order) on a 2080 of similar component quality isn't the worst thing in the world. It's not Ryzen exciting, but Radeon has a deep, deep hole to dig out of. I'm frankly pleasantly surprised that we're not seeing another 580 rebrand as the only Radeon-related release.


----------



## Scotty99

SuperZan said:


> With regards to whether or not Jim @ AdoredTV is a pre-cog - no, of course he's not. Anybody who expected a literal confirmation of every speculative musing on AMD's immediate future is a fool. However, and *much* more importantly, he does seem to have correctly spoken to some of the broad strokes. He ran hot and cold on the chiplet idea, but he was discussing it. Some of the HCC chips do seem plausible given the evident space for another core chiplet. Pricing is always wildly speculative and nobody should ever take price leaks seriously if they plan on avoiding bizarre emotional reactions to tech news. I don't see any reason to laud Jim as lord of leaks or whatever, but he's certainly not any more deserving of castigation than most of the better techtubers.
> 
> The Ryzen stuff was very impressive, full stop. No mitigation, no 'in the right context'. For the first time in a long time, AMD was willing to put a qualitative benchmark up at a core for core and thread for thread level. This wasn't the 1800x beating up on the 7700k in multithread. AMD has clearly made progress on what the more engineering-savvy have indicated as low-hanging fruit (Anandtech covered this in pretty good detail). AMD has unequivocally come to play in the desktop CPU market and we should all be glad for it.
> 
> For those who were disappointed by the GPU stuff - what *exactly* were you expecting? A month ago, many of the same avatars were lamenting the seeming impossibility of even reaching 1080 Ti performance. I remember because I was one of them. To paraphrase myself, I said that 'in a dream world' AMD would have a 1080 Ti-performant card at a significantly better price than the Turing equivalent. The latter didn't come through but it does at least seem that the former did. No, it's not a particularly exciting product but then anybody with half a brain *knew not to expect anything super-exciting.* This was never going to be the Navi show. Potentially saving $100 (or more depending on when you order) on a 2080 of similar component quality isn't the worst thing in the world. It's not Ryzen exciting, but Radeon has a deep, deep hole to dig out of. I'm frankly pleasantly surprised that we're not seeing another 580 rebrand as the only Radeon-related release.




The point is the guy was stupid enough to report on the leaks. He is supposed to have at least enough sense to come to realistic conclusions in his own mind before unleashing them on his followers and creating the hysteria it has. The reason this got hyped so much was because of the specifics, and he should pay for having reported on them in the first place. What does that mean? You guys should all unsub from his YouTube channel, and hopefully his name gets dragged through the mud enough from this debacle that he will never have the chance to do this again.


----------



## SuperZan

Scotty99 said:


> The point is the guy was stupid enough to report on the leaks. He is supposed to have at least enough sense to come to realistic conclusions in his mind before unleashing it on his followers and create the hysteria it has. The reason this got hyped so much was because of the specifics, and he should pay for having reported on them in the first place. What does that mean? You guys should all unsub from his youtube channel and hopefully his name gets dragged through the mud enough from this debacle that he will never have the chance to do this again.



I don't see why, though. When it comes to tech speculation, nothing has _really_ changed since I first started paying attention in 1996. Some people dryly report on the stuff, some are approached by 'leakers' or other sources deemed potentially credible, and talking heads prognosticate, just as they do in _so many other fields_. I don't *expect* anybody to stop speculating in fora which are *very clearly* oriented towards that purpose. Jim hasn't made any pretence of his videos being dogged, confirmed journalism. He speculates based on the information he has. It's editorial work, not reporting. Anybody who uses him as an exclusive source of information and/or takes speculation as gospel truth is a fool with only themselves to blame. 

I'm not going to blame someone who speculates for the hair-trigger hype mode that so many tech enthusiasts seem to have, especially when some of the key contentions are still up in the air.


----------



## Scotty99

SuperZan said:


> I don't see why, though. When it comes to tech speculation, nothing has _really_ changed since I first started paying attention in 1996. Some people dryly report on the stuff, some are approached by 'leakers' or other sources deemed potentially credible, and talking heads prognosticate, just as they do in _so many other fields_. I don't *expect* anybody to stop speculating in fora which are *very clearly* oriented towards that purpose. Jim hasn't made any pretence of his videos being dogged, confirmed journalism. He speculates based on the information he has. It's editorial work, not reporting. Anybody who uses him as an exclusive source of information and/or takes speculation as gospel truth is a fool with only themselves to blame.
> 
> I'm not going to blame someone who speculates for the hair-trigger hype mode that so many tech enthusiasts seem to have, especially when some of the key contentions are still up in the air.


Well, it's pretty simple: he created a hype train that was wholly avoidable, and the only driving factor for him was personal gain. The internet sucks because accountability doesn't exist here. I realize in the grand scheme of things this ranks pretty low on the list, but it's still pretty gross and annoys the hell outta me.


----------



## deepor

Scotty99 said:


> The point is the guy was stupid enough to report on the leaks. He is supposed to have at least enough sense to come to realistic conclusions in his own mind before unleashing them on his followers and creating the hysteria they did. The reason this got hyped so much was the specifics, and he should pay for having reported on them in the first place. What does that mean? You guys should all unsub from his YouTube channel, and hopefully his name gets dragged through the mud enough from this debacle that he will never have the chance to do this again.


Thinking back, I had kind of ignored the GHz numbers in the leaks. I couldn't imagine a new 7nm process doing much better than the latest offerings on an old, mature process. Numbers like 5GHz seemed ridiculous to me, so in my mind it was like that part of the leaks just didn't exist.

My thinking is that the chiplet stuff is the biggest news today. The chiplets were the thing that was most unbelievable to me about the leaks. It just seems like a crazy idea. I couldn't see how it would make sense economically compared to just doing a single 7nm die for the whole processor. But doing a single die would have limited things to the current Ryzen 1000 and 2000 core counts. Now, with the chiplets being actually real and the cores being on a separate die, the way is open for an 8-core APU and a 16-core CPU. This means Intel gets hit by a crazy amount of pressure. The next few years will be great. Intel and AMD will offer excellent improvements in their products, earn good money, and that will boost research into the next generation of stuff. The future looks bright.

I feel you shouldn't attack AdoredTV: you were bumping this thread a lot, making sure it stayed on the front page of the site. That kind of behavior may well have helped hype things up. You helped point people to those videos and helped increase the view counts and subscriber counts. 

Seriously, you might actually be the top post-count person in this thread. In that sense, you are the person who hyped things up the most compared to everyone else.


----------



## CelticGamer

Someone fill me in here.

We know that the AdoredTV predictions for what was going to be announced were wrong. 

But have the actual specs that were leaked been proved to be wrong? Because I'm not seeing that here. Perhaps someone can enlighten me.


----------



## ozlay

Yeah, the chiplet stuff is interesting. I wonder if they will do any custom Threadripper/EPYC chips with multiple Vega 2 dies on them. I mean, yeah, they are cut down, but if you had like six of them on a package, it might get interesting. 



CelticGamer said:


> Someone fill me in here.
> 
> We know that the AdoreTV predictions for what was going to be announced were wrong.
> 
> But have the actual specs that were leaked been proved to be wrong? Because I'm not seeing that here. Perhaps someone can enlighten me.


Lisa didn't show any detailed specs of anything. What she said was that specs haven't been finalized. She also announced Vega 2, which, if I remember correctly, wasn't the GPU he predicted. So everything he predicted could still come true later in the year, but the announcement date was wrong.


----------



## Ramad

EniGma1987 said:


> I'm still guessing it was running at 4.5-4.7GHz core speed. The 9900K they benched against has an all-core turbo of 4.7GHz; no way a 4GHz AMD CPU is matching a 4.7GHz Intel with the same core count and threads this gen. I could see them beating it from a couple hundred MHz under, but not 700MHz lower


I understand what you are saying, but there was a hint if we look closely at the power draw. It was 66W-67W at the beginning of the test and jumped to 133W-134W under load, a difference of 67W, which means this is most likely a 65W CPU. Anyone who has a Ryzen/Ryzen 2 knows that it's impossible to run at such low power draw even at 3.8GHz. Take the 2700 as an example: 3.2GHz base/4.1GHz boost at 65W TDP, and it consumes around 80W more under load compared to idle: https://www.guru3d.com/articles_pages/amd_ryzen_7_2700_review,7.html
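For anyone following along, the inference above is just load-minus-idle arithmetic. A minimal sketch, using the rough wall-power figures quoted in this post (not measured values):

```python
# Back-of-the-envelope TDP inference from the demo's power readings.
# The wattages are the approximate numbers quoted above, averaged.
idle_w = 66.5    # system draw before the Cinebench run (66-67W)
load_w = 133.5   # system draw during the run (133-134W)

# Extra draw attributable to loading the CPU:
cpu_package_delta = load_w - idle_w
print(cpu_package_delta)  # 67.0 -> consistent with a 65W-TDP part
```

Of course this assumes the delta is all CPU package power, which a wall meter can't actually isolate.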

I think we need to understand that Ryzen 2 is not Ryzen 1 on a smaller process, nor is Ryzen 3 Ryzen 2 on a smaller process; there are corrections and modifications at every step forward, with every die shrink. We also don't know how many extra transistors have been added in Ryzen 3 compared to Ryzen 2, or how much extra cache or Infinity Fabric speed, all of which have become easier after moving the I/O out of the CCX.

Edit: By the way, that Intel 9900K was running at 3.6GHz in the presentation. (This may make the Intel fanboys here feel better )


----------



## ChiTownButcher

CelticGamer said:


> Someone fill me in here.
> 
> We know that the AdoreTV predictions for what was going to be announced were wrong.
> 
> But have the actual specs that were leaked been proved to be wrong? Because I'm not seeing that here. Perhaps someone can enlighten me.


No, not proven wrong or proven correct. The only thing we know is that AMD has an engineering-sample 8-core CPU that is faster in Cinebench than the 9900K while using less power, on a package that appears to have room for a second 8-core chiplet alongside the I/O die that connects the two. We do not know the frequency used, or whether this was done at a lower frequency with higher IPC. 

Either way, this in my mind is good news for the future of AMD CPUs, because IF (let me repeat, IF) the leak of a 16c/32t Ryzen 7 was true, then this is the Ryzen 5 keeping up with/beating a 9900K in Cinebench while using less electricity. 

That is all, but the internet must be outraged/hyped about something so why not this?


----------



## deepor

CelticGamer said:


> Someone fill me in here.
> 
> We know that the AdoreTV predictions for what was going to be announced were wrong.
> 
> But have the actual specs that were leaked been proved to be wrong? Because I'm not seeing that here. Perhaps someone can enlighten me.


Technically, nothing about the specs in the leaks was proven wrong but I'm thinking those numbers will never happen. They are surely wrong and different from what will actually be released.

There wasn't anything about specs of final parts mentioned today. It seems this will be a wait of several months until those kinds of details are shown. I assume it's to protect their current business; details might hurt sales of current parts while the Ryzen 3000 parts are not yet ready.

We don't know anything about what they did in their Cinebench demonstration against that i9-9900k. Were they overclocking aggressively and using a speed that they will never use on real CPUs? Or were they using a speed that they think is conservative and the final CPUs will be better?

There's then also the empty area for a second chiplet on that CPU that was held up to the camera. Why didn't they put a second chiplet there? Shouldn't that have murdered the i9-9900K in the demonstration? Maybe they decided never to sell a 16-core model and only want to use that empty space for the graphics on an APU. Or maybe they decided not to show a crazy Cinebench score so that they don't hurt sales of the current Ryzen 2000 parts.


----------



## BigMack70

Wow... Radeon VII is really disappointing. We STILL don't have anything better than the 1080 Ti at the $700 price point. Wow. 1080 Ti is going to go down as a legend card, simply because AMD and Nvidia took 3+ years to beat it.

Anyway, Ryzen 3 looks interesting. Depending on how it performs it might be an upgrade from my 5930k. Haven't liked what I've seen from Intel CPUs since Haswell-E.


----------



## ozlay

No, a 1080 Ti would be 25% faster than Vega 64. That isn't what she said. She said that Radeon VII is 25% faster at the same power. That doesn't mean we won't see cards that use more power than Vega 64. If Nvidia can make a power-hungry 2080 Ti, why can't AMD do the same? :devil:


----------



## BigMack70

ozlay said:


> No, a 1080 Ti would be 25% faster than Vega 64. That isn't what she said. She said that Radeon VII is 25% faster at the same power. That doesn't mean we won't see cards that use more power than Vega 64. If Nvidia can make a power-hungry 2080 Ti, why can't AMD do the same? :devil:


Not likely...



Anandtech said:


> As a result, AMD says that the Radeon VII should beat their former flagship by anywhere between 20% and 42% depending on the game (with an overall average of 29%), which on paper would be just enough to put the card in spitting distance of NVIDIA’s RTX 2080





Anandtech said:


> The other wildcard for the moment is TDP. The MI50 is rated for 300W, and while AMD’s event did not announce a TDP for the card, I fully expect AMD is running the Radeon VII just as hard here, if not a bit harder. Make no mistake: AMD is still having to go well outside the sweet spot on their voltage/frequency curve to hit these high clockspeeds, so AMD isn’t even trying to win the efficiency race. Radeon VII will be more efficient than Radeon Vega 64 – AMD is saying 25% more perf at the same power – but even if AMD hits RTX 2080’s performance numbers, there’s nothing here to indicate that they’ll be able to meet its efficiency. This is another classic AMD play: go all-in on trying to win on the price/performance front.
> 
> ...
> 
> Accordingly, the Radeon VII is not a small card. The photos released show that it’s a sizable open-air triple fan cooled design, with a shroud that sticks up past the top of the I/O bracket. Coupled with the dual 8-pin PCIe power plugs on the rear of the card, and it’s clear AMD intends to remove a lot of heat


This is going to be a 250W card with 1080 Ti / 2080 performance. There's not going to be any headroom to jack the power up and compete with the 2080 Ti.

And even if I'm wrong, their 1080 Ti equivalent card will still cost exactly what the 1080 Ti did... and anything faster will be more expensive... which means this is dog poo from the beginning for gaming. 

It will be another year before we have a chance to get a $700 card that beats the 1080 Ti. Pathetic.


----------



## ChiTownButcher

BigMack70 said:


> Wow... Radeon VII is really disappointing. We STILL don't have anything better than the 1080 Ti at the $700 price point. Wow. 1080 Ti is going to go down as a legend card, simply because AMD and Nvidia took 3+ years to beat it.
> 
> Anyway, Ryzen 3 looks interesting. Depending on how it performs it might be an upgrade from my 5930k. Haven't liked what I've seen from Intel CPUs since Haswell-E.


WCCFTech mentioned Radeon 7 very briefly in a piece a few months ago about one of the executives who was leaving. 

My hypothesis: since it's built on the Vega architecture, it would have been cost-prohibitive to redesign the memory controller for GDDR6. Also, by releasing a 16GB version, it's a better option than the 2080 for some compute tasks. This also leaves the option of an 8GB model for $150 less should Nvidia respond with an 1180 (no RT cores) for less money, until Navi is ready and designed around GDDR6 from the ground up later this year.


----------



## BigMack70

ChiTownButcher said:


> WCCFTech mentioned Radeon 7 very briefly in a piece a few months ago with one of the executives that was leaving.
> 
> My Hypothesis...Since its built on Vega architecture it would have been cost prohibitive to redesign the memory controller for GDDR6. Also by releasing a 16gb version it's a better option than 2080 for some compute tasks. This also leaves the option for a 8gb model for $150 less should NVidia respond with a 1180 (no RT cores) for less money until Navi is ready and using GDDR6 from ground up later this year.


It just seems incredibly tone deaf from AMD to release at 16GB with 1080 Ti price point and performance, after reviewers and customers almost unanimously panned the 2080 for launching without superior price/performance to the 1080 Ti. 

16GB of VRAM will matter for nothing in games. 8GB would have been fine, and if they could have launched at $550 with 8GB of VRAM and 2080 performance, they would have had a fantastic high-end value proposition that would have forced Nvidia to drop pricing (assuming AMD could meet demand).


----------



## ChiTownButcher

BigMack70 said:


> ChiTownButcher said:
> 
> 
> 
> WCCFTech mentioned Radeon 7 very briefly in a piece a few months ago with one of the executives that was leaving.
> 
> My Hypothesis...Since its built on Vega architecture it would have been cost prohibitive to redesign the memory controller for GDDR6. Also by releasing a 16gb version it's a better option than 2080 for some compute tasks. This also leaves the option for a 8gb model for $150 less should NVidia respond with a 1180 (no RT cores) for less money until Navi is ready and using GDDR6 from ground up later this year.
> 
> 
> 
> It just seems incredibly tone deaf from AMD to release at 16GB with 1080 Ti price point and performance, after reviewers and customers almost unanimously panned the 2080 for launching without superior price/performance to the 1080 Ti.
> 
> 16GB vram will matter for nothing in games. 8GB would have been fine, and if they could have launched at $550 with 8GB VRAM and 2080 performance, they would have had a fantastic high end value proposition that would have forced Nvidia to drop pricing (assuming AMD could meet demand).

I didn't say 16GB was for gaming purposes. There are other things GPUs are used for that are RAM-intensive. "IF" Nvidia does respond with 1180 cards (no/defective RT cores) for $600, then they can match performance for $550. If not, they can still sell cards to the compute market. Also, in the process they can use up HBM2 supplies while they work on Navi.

It makes a lot of sense why they made this card if you take gaming as only 50% of the equation. There is no current game that would need 16GB of HBM2 even at 4K.


----------



## LancerVI

Well, I may just be stupid, but I'm seriously considering the Vega II and using my 1080 Ti in another build or selling it altogether.

I really want an Adaptive-Sync monitor and I'll NEVER buy a G-Sync monitor. Plus, having an all-AMD system for the first time... well, I was going to say in 15 years, but they hadn't bought ATI yet back then. Anyway, I digress.


----------



## rv8000

Ramad said:


> I understand what you are saying, but there was a hint if we look closely at power draw. Power draw was 66W - 67W at the beginning of the test which jumped to 133W - 134W under load which is a difference 67W, means that this is most likely is a 65W CPU. Anyone which have a Ryzen/Ryzen 2 knows that it's impossible to run such low power draw even at 3.8GHz. Take the 2700 as an example, 3.2GHz base/4.1GHz boost at 65W TDP, which consumes around 80W more when under load compared to idle: https://www.guru3d.com/articles_pages/amd_ryzen_7_2700_review,7.html
> 
> I think that we need to understand that Ryzen 2 is not a Ryzen 1 on a smaller process nor that Ryzen 3 is Ryzen 2 on a smaller process, but there are corrections and modifications on every step forward with every die size reduction, let alone that we don't know how many extra transistors has been added in Ryzen 3 compared to Ryzen 2 along with extra cash or Infinty Fabric speed, which all have been easier after moving the I/O out of the CCX.
> 
> Edit: By the way, that Intel 9900K was running at 3.6GHz on the presentation. (This may make Intel fanboys here feel better )


I hope you realize the mistake you made.


----------



## SuperZan

Ramad said:


> Edit: By the way, that Intel 9900K was running at 3.6GHz on the presentation. (This may make Intel fanboys here feel better )



CB doesn't change the clockspeed readout from the base clock if the CPU is at stock using boost, which means it would've boosted all-core at 4.7GHz.


----------



## nolive721

LancerVI said:


> Well, I may just be stupid, but I'm seriously considering the Vega II and using my 1080 ti in another build or selling it all together.
> 
> I really want an A-sync/monitor and I'll NEVER buy a g-sync monitor. Plus, having an all AMD system for the first time......well, I was going to say in 15 years, but they hadn't bought ATI yet. Anyway, I digress.


I was in the same boat before CES started.

But with the AMD VII at $700 USD, what I expect will be a 20-30% markup for me to buy it in Japan, and the announcement from NVIDIA that they will support Adaptive-Sync through drivers, I am seriously reconsidering that logic.

I am now leaning more towards keeping my 1080 Ti, which is a great overclocker, and getting 3 monitors that are, say, 48-75Hz FreeSync certified, in the hope they would work in that range with my NVIDIA GPU, again thanks to this new driver implementation.


----------



## jologskyblues

AMD is a business.


----------



## Ultracarpet

Ramad said:


> I understand what you are saying, but there was a hint if we look closely at power draw. Power draw was 66W - 67W at the beginning of the test which jumped to 133W - 134W under load which is a difference 67W, means that this is most likely is a 65W CPU. Anyone which have a Ryzen/Ryzen 2 knows that it's impossible to run such low power draw even at 3.8GHz. Take the 2700 as an example, 3.2GHz base/4.1GHz boost at 65W TDP, which consumes around 80W more when under load compared to idle: https://www.guru3d.com/articles_pages/amd_ryzen_7_2700_review,7.html
> 
> I think that we need to understand that Ryzen 2 is not a Ryzen 1 on a smaller process nor that Ryzen 3 is Ryzen 2 on a smaller process, but there are corrections and modifications on every step forward with every die size reduction, let alone that we don't know how many extra transistors has been added in Ryzen 3 compared to Ryzen 2 along with extra cash or Infinty Fabric speed, which all have been easier after moving the I/O out of the CCX.
> 
> Edit: By the way, that Intel 9900K was running at 3.6GHz on the presentation. (This may make Intel fanboys here feel better )


If you think AMD just showcased a 65W 8c/16t CPU tying the 9900K running at 4.7GHz across all cores, I have some bad news for you. And yes, the Intel CPU was running at 4.7GHz across all cores. Compare with this review: 

https://www.techspot.com/review/1744-core-i9-9900k-round-two/



ChiTownButcher said:


> No, not proven wrong or proven correct. The only thing we know is that AMD has an engineering-sample 8-core CPU that is faster in Cinebench than the 9900K while using less power, on a package that appears to have room for a second 8-core chiplet alongside the I/O die that connects the two. We do not know the frequency used, or whether this was done at a lower frequency with higher IPC.
> 
> Either way, this in my mind is good news for the future of AMD CPUs, because IF (let me repeat, IF) the leak of a 16c/32t Ryzen 7 was true, then this is the Ryzen 5 keeping up with/beating a 9900K in Cinebench while using less electricity.
> 
> That is all, but the internet must be outraged/hyped about something so why not this?


It's most likely that the IPC increase is sub-5%, and the all-core clocks here were about ~4.5GHz. This is all fine and dandy, but just know that adding that second chiplet is really only going to benefit massively parallel workloads, much the way Threadripper already does. The second chiplet will likely just add more latency and heat. This single-chiplet 8c/16t is going to be AMD's best CPU for things like gaming.

As far as your comment about being outraged/hyped... I just want to say that I don't have a problem with what AMD showcased/released (except Vega 2, it's poop); this Ryzen chip is looking to be a win depending on how they price it. All I'm trying to say is that these leaks were BS and Adored was not even close on 99% of what he said would happen at CES. When all the tech news websites were dismissing his "leak" he had an opportunity to just shrug his shoulders and say whatever, but instead he doubled down with two more click-bait videos about what we would see at CES specifically, and here we are with a bunch of idealists defending him with "well, he wasn't wrong about EEEEVERYTHING...". That's what rubs me the wrong way.



ozlay said:


> No 1080ti would be 25% faster then vega 64. That isn't what she said. She said that Radeon VII is 25% faster at the same power. That doesn't mean we wont see cards that use more power then vega 64. If Nvidia can make a power hungry 2080ti. Why can't AMD do the same. :devil:


The Vega 7 being 25-40% faster than the Vega 64, and using 25% less power at the same performance, should spell it out for you that the Vega 7 will likely use about the same power as the Vega 64. Considering that the Vega 64 already uses as much power as the 2080 Ti, that is why AMD can't do the same.
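Worth noting that the two phrasings floating around this thread ("25% more perf at the same power" vs. "25% less power at the same perf") aren't quite equivalent. A quick sketch, using the keynote's +25% perf/W claim and an assumed ~295W Vega 64 board power (my ballpark figure, not from this thread):

```python
# Relating the two efficiency claims. Model: performance = efficiency * power.
vega64_perf = 1.00     # normalized Vega 64 performance
vega64_power = 295.0   # assumed Vega 64 board power in watts (ballpark)

# Claim as stated in the keynote: "+25% performance at the same power",
# i.e. a 1.25x perf-per-watt improvement.
eff_gain = 1.25

# Under that claim, matching Vega 64 performance needs 1/1.25 = 0.8x the
# power -- 20% less, not 25%, so the two phrasings differ slightly.
power_at_same_perf = vega64_power / eff_gain
print(power_at_same_perf)  # 236.0
```

Either way, the conclusion above holds: a card clocked for +25-40% performance on that efficiency curve lands near Vega 64's total power, not well below it.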



BigMack70 said:


> Not likely...
> This is going to be a 250W card with 1080 Ti / 2080 performance. There's not going to be any headroom to jack the power up and compete with the 2080 Ti.
> 
> And even if I'm wrong, their 1080 Ti equivalent card will still cost the exact same as the 1080 Ti did... and anything faster will be more expensive.... which means this is dog poo from the beginning for gaming.
> 
> It will be another year before we have a chance to get a $700 card that beats the 1080 Ti. Pathetic.


Worse: it's going to be a 300W card. 



LancerVI said:


> Well, I may just be stupid, but I'm seriously considering the Vega II and using my 1080 ti in another build or selling it all together.
> 
> I really want an A-sync/monitor and I'll NEVER buy a g-sync monitor. Plus, having an all AMD system for the first time......well, I was going to say in 15 years, but they hadn't bought ATI yet. Anyway, I digress.


Did you miss that Nvidia is opening up its drivers to all Adaptive-Sync monitors?


----------



## LancerVI

Ultracarpet said:


> Did you miss that NVidia is opening up it's drivers to all A-sync monitors?


No. I didn't.


----------



## bmgjet

If the Vega II can match my 1080 Ti with its overclock at 4K and has a modifiable BIOS, I'll be getting one.


----------



## Ramad

rv8000 said:


> I hope you realize the mistake you made.





Ultracarpet said:


> If you think AMD just showcased a 65w 8c16t CPU tying the 9900k running at 4.7ghz across all cores, I have some bad news for you. And yes, the Intel CPU was running at 4.7ghz across all cores. Compare with this review:
> 
> https://www.techspot.com/review/1744-core-i9-9900k-round-two/



So you two would rather see an Intel CPU running at 4.7GHz losing to a future Ryzen 3 R5 (at an unknown frequency) than losing at 3.6GHz. 

Are you two fanboys? If you are, then I have nothing more to say and will not reply. I don't like fanboys; AMD, Intel and Nvidia fanboys are all the same and I would rather leave them alone.


----------



## Ultracarpet

Ramad said:


> So you two would rather see an Intel CPU running at 4.7GHz losing to a future Ryzen 3 R5 (at an unknown frequency) than losing at 3.6GHz.
> 
> Are you two fanboys? If you are, then I have nothing more to say and will not reply. I don't like fanboys; AMD and Intel fanboys are alike.


I don't know what to tell you other than it is absolutely obvious that the 9900K was running at 4.7GHz. Literally every single review ever done on the 9900K confirms this. The stock all-core boost for the 9900K is 4.7GHz, and it scores around 2050 at those clocks. If you lock the chip down to its 95W limit, its score drops by about 300 in Cinebench, as it can't stay at 4.7GHz and stay below the TDP.

You are acting as if the 9900k wasn't already reviewed lol.

The clock of the Ryzen 3 is likely around 4.5GHz; what remains to be seen is how much higher it can go, as Ryzen doesn't have a great overclocking track record outside of the low-end models with low stock clocks. The 9900K will likely remain top dog, as it can go all the way up to around 5.2GHz given enough cooling.
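For anyone wondering where the ~4.5GHz guess comes from, you can back into it from the scores. A rough sketch: the ~2050 9900K score is quoted above, but the 2700X baseline score/clock and the sub-5% IPC uplift are my assumed ballpark numbers, not facts from the demo:

```python
# Rough sanity check of the clock estimate. Cinebench scales roughly
# linearly with all-core clock, so solve for the clock a Zen 2 8-core
# would need to tie the 9900K.
target_score = 2050.0    # ~9900K stock CB score quoted above
zen_plus_score = 1780.0  # assumed 2700X score (8c/16t, ballpark)
zen_plus_clock = 4.0     # assumed 2700X all-core clock in GHz (ballpark)
ipc_gain = 1.05          # the "sub-5%" IPC uplift assumed above

score_per_ghz = zen_plus_score / zen_plus_clock * ipc_gain
required_clock = target_score / score_per_ghz
print(round(required_clock, 2))  # 4.39 -> in line with the ~4.5GHz guess
```

Shift any of those assumptions a few percent and the estimate moves accordingly, but it stays well short of 5GHz.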


----------



## rv8000

Ramad said:


> So you two would rather see an Intel CPU running at 4.7GHz losing to a future Ryzen 3 R5 (at an unknown frequency) than losing at 3.6GHz.
> 
> Are you two fanboys? If you are, then I have nothing more to say and will not reply. I don't like fanboys; AMD, Intel and Nvidia fanboys are all the same and I would rather leave them alone.


Yes, the screenshot from my personal rig trying to show you that you made a mistake makes me a fanboy. My Intel 8700K makes me a fanboy, my Ryzen system makes me a fanboy, my Vega 64 makes me a fanboy, and the multiple 1070s and 1080s I've had over the past two years make me a fanboy.

Glad you got my point.


----------



## nolive721

Maybe I am wrong, because it's becoming a long thread and I might have missed something, but did you people comment on the fact that this VII card is shown with a triple-fan cooler instead of the blower type the Vega 64 was launched with?

The card shown today is the AMD reference card, right? Or is it already an AIB model?

Because I am a bit surprised they mention they could achieve Vega 64 performance with 25% less power but still designed quite a beefy cooler to support it.

I know the Vega 64 runs hot, speaking from my own short experience with the product, but does that make sense to you?


----------



## PureBlackFire

the assumptions are strong in this thread. lol.


----------



## tpi2007

Scotty99 said:


> Let's imagine for a second there is room for another chiplet to fit in there: WHY OH WHY did they not use a 12c chip for that Cinebench demo? They could have demolished the 9900K but chose to use an 8c and just tie? Something does not add up.





Scotty99 said:


> If they had a 12c chip ready they would have used it, believe me.



Yeah, because it makes total business sense to showcase a CPU with two chiplets this far ahead of the actual launch, with 1920Xs, 2920Xs, 1950Xs and 2950Xs to sell in the meantime, and watch those sales tank over the next few months.

Ask them why they didn't release a 32-core Threadripper 1990WX in 2017; the space for the dies was there, but they didn't comment on it when asked back then. Those were just dummy dies to leverage the same EPYC package and help stabilize the IHS, just like the space for an extra chiplet is there now. It's all about choosing the right time to release that news.

And yes, there is space in there for another chiplet; you don't have to imagine, you just have to look. Why on earth would the chiplet be in that weird position, not side by side with the I/O die, if not to leave room to fit another one below?


----------



## Majin SSJ Eric

EniGma1987 said:


> Saying "not final clockspeeds" is a double edged sword from Lisa Su. On the one hand, they could have overclocked the CPUs to tie the Intel chips. That could have been what Lisa was so worried over, crashing from OC. Final clock speeds could be lower and thus less performance.
> On the other hand, if they were OCed, that means OC ability should be good and we really could match 5GHz 9900K performance.... Was the clock speed running at a "lowly 4GHz" and AMD is about to obliterate intel? Or was the chip running above the design spec at something like 4.8ghz? We just dont know enough yet.


Eh, at least they didn't stoop to such measures as trying to hide a chiller out of shot for their run. I mean, what kind of disingenuous and shady company would ever do something like that??? Oh...

Regarding the "disappointment" of what was shown today, I will give Scotty at least a small nod just in the fact that none of the sku's were actually announced today, which is technically what Jim's leaker claimed would happen. That small omission in no way invalidates the entire leak (regarding sku's, core counts, clock speeds, TDP, etc) but it is definitely one aspect of the leak that did not turn out to be true. 

AMD's presentation would also seem to indicate a later launch of Zen 2 than I was expecting, though not necessarily by much. Its pretty much confirmed now that Zen 2 will not be launching within the next couple of months and my precious predictions of an April-ish launch also would seem to be a bit optimistic, given the fact that they didn't actually announce any specific sku's or details. Its not impossible we won't see Zen 2 by April but highly unlikely given today's presentation. 

Jim's video from earlier today did note that one of his leakers was emphatic that the 12 and 16-core chips will NOT launch until Computex, but to me the timing of the releases was always a far less significant detail of the leak than the other info like core counts, clock speeds, and pricing. Given what we saw at CES today, I think it is pretty much confirmed that there will be Zen 2 sku's with more than 8-cores so (as I said above) the lack of actual announcements today in no way invalidates the entire leak (sorry Scotty) and we now know that at least one aspect of the leak is pretty much confirmed.



Scotty99 said:


> How about the dozens of people who had their wallets out ready to buy a 12c 5GHz chip for 330 bucks today, where those people at lol.


Um, what people are you talking about??? I'll refer you to this post I made earlier in this thread:



Majin SSJ Eric said:


> Nobody has said that Zen 2 was launching soon. Jim's original leak very clearly showed that most of them were "TBA" at CES, not launched. I'm guessing we will see a similar launch window as both previous Ryzen CPU's (1800X and 2700X).


So yeah, nobody was expecting any CPU's to actually be launching today as that's not what the leak even said to begin with.



Scotty99 said:


> Let's imagine for a second there is room for another chiplet to fit in there: WHY OH WHY did they not use a 12c chip for that Cinebench demo? They could have demolished the 9900K but chose to use an 8c and just tie? Something does not add up.


Maybe, just maybe, they wanted to use the 8-core variant specifically so that nobody (like you) could immediately claim, "See, AMD still can't match Intel, so they had to use a 12-core chip to beat the 9900K!!!" In fact, I'd bet money that's exactly what you would have said. That they beat the 9900K with an 8-core Zen 2 is actually much more impressive, as it's comparing apples to apples, and it bodes well for how Zen 2 is performing, at least to me. Granted, there are still plenty of caveats to the demo, but if Zen 2's performance really were the humongous bust you're obviously hoping for, I doubt this demo would even have been possible. 

I really do believe it's possible this is the year Intel finally goes down! And no, not "out of business" or anything hyperbolic and silly like that, but in terms of holding the CPU performance crown. If AMD has gotten clock speeds up by around 400-500 MHz, then Zen 2 is going to be absolutely beastly given how well Zen 1 already performs at just 4300 MHz...


----------



## bmgjet

tpi2007 said:


> Yeah, because it makes total business sense to showcase a CPU with two chiplets so ahead of actual launch, with 1920X, 2920X, 1950X and 2950X's to sell in the meantime and seeing those sales tank in the next few months.
> 
> Ask them why they didn't release a 32 core Threadripper 1990WX in 2017, the space for the dies was there, nor did they comment on it when asked back then, it was just dummy dies to leverage the same EPYC package and help with stabilizing the IHS, just like the space for an extra chiplet is there now. It's all about choosing the right timing to release those news.
> 
> And yes, there is space in there for another chiplet, you don't have to imagine, you just have to look. Why on earth would the chiplet be in that weird position, not side by side with the I/O chip, if it were not to leave room to fit another one below?
> 
> View attachment 245268


Hopefully that spot is used for a second die with another 8 cores, and not only for the iGPU, which they could just leave off on their desktop chips.


----------



## umeng2002

I don't think the 12 to 16 core parts will even launch in the summer. They'll probably wait until the fall.

That extra die space is probably just big enough for a decent, small Navi GPU... or another Zen 2 chiplet.


----------



## Majin SSJ Eric

PwrSuprUsr said:


> It isn't "This Gen" It's a "Next Gen" 7 nm part. You don't get IPC increase from shrinking down to 7nm. What you DO get is extra clock speed.


AFAIK, Zen 2 has architectural changes from Zen 1 and is not simply a die-shrink of Zen 1 on 7nm.



speed_demon said:


> Yes he is.
> 
> I would like to add that the keynote speech was a look at the current state of manufacturing capabilities with an eye towards the future and not solely an AMD sales pitch. If there is more in store from AMD they certainly were not under pressure to reveal it all today.


"No, nope, NO!!! The ONLY thing that matters is that AMD didn't actually release a 12-core, 5GHz Zen 2 chip TODAY for $330 so that completely invalidates absolutely EVERYTHING in the leak, Zen 2 is a bust and AMD can only make 8-core chips, Intel and Nvidia are #1 forever, and Jim is just a lying fanboy scamming everybody for clicks!!1!"

-Scotty99 and Ultracarpet


----------



## andre02

Bleah, this announcement leaves room for speculation until the summer.

Am I the only one who heard a 2500 Cinebench score for the AMD chip and 2050 for Intel?

What was the score of the AMD chip ?


----------



## white owl

Majin SSJ Eric said:


> "No, nope, NO!!! The ONLY thing that matters is that AMD didn't actually release a 12-core, 5GHz Zen 2 chip TODAY for $330 so that completely invalidates absolutely EVERYTHING in the leak, Zen 2 is a bust and AMD can only make 8-core chips, Intel and Nvidia are #1 forever, and Jim is just a lying fanboy scamming everybody for clicks!!1!"
> 
> -Scotty99 and Ultracarpet


Sadly these kinds of comments have prevented most real discussion here.

Do you remember 48 hours ago, when the prediction was that AMD would show us an 8-core about 10-15% slower than Intel's? Even though they already have the 2700, which is only around 10% slower. Lol

I'm kinda disappointed that we didn't hear anything about Navi, but the 8c chip was the only thing I was really likely to buy this year, so that's good.
I still haven't figured out whether Vega is now supposed to be a gaming card or a compute card that can play games. People keep hating on it, but I'm pretty sure it's still a compute-oriented product, judging by the amount of VRAM on a GPU that probably couldn't utilize most of it while gaming.
FWIW, Vega GPUs were very strong in compute, more so than a 1080 Ti, so in that respect it was a success, but now they're showing its gaming performance at CES, so I'm a little confused. Perhaps Navi is still top secret and they didn't want to turn up with no graphics announcement at all?


----------



## Majin SSJ Eric

white owl said:


> Sadly these kinds of comments have prevented most real discussion here.
> 
> Do you remember 48 hours ago when AMD would show us an 8 core that was about 10-15% slower than Intel's? Even though they already have the 2700 which is around 10% slower. Lol
> 
> I'm kinda disappointed that we didn't hear anything about Navi *but the 8c chip was the only thing I was really likely to buy this year so that's good.*
> I still haven't figured out if Vega is now supposed to be a gaming card or if it's a compute card that can play games. People keep hating on it but I'm pretty sure it's still a compute oriented product judging by the amount of VRAM on a GPU that probably couldn't utilize most of it while gaming.
> FWIW Vega GPU's were very strong in compute, more so than a 1080TI so in that aspect it was a success but now they're showing it's gaming performance at CES so I'm a little confused. Perhaps Navi is still top secret and they didn't want to show no graphics at all?


I'm right with you, as I've said several times in this thread that all I really care about from CES is the fastest Zen 2 8-core CPU (and how it stacks up vs Intel's best, currently the 9900K), as that's what I'm aiming for when I finally do a full rebuild of Nightfury. Make no mistake, I am not a fanboy when it comes to buying hardware by brand. I root for AMD and want them to finally topple Intel in the mainstream performance charts mostly because competition is what pushes the technology forward (the original Ryzen launch absolutely proves that when you look at the shift in the CPU landscape today vs 2 years ago), but I am going to buy whichever 8C/16T CPU is the best at the time, whether it's Intel or AMD.

Regarding the GPU stuff, I already said early in this thread that that info was the most likely to be inaccurate (and Jim himself poured a lot of salt over his GPU leaks in the videos). I'm not fussed over Vega or Navi; I think the takeaway from this limited info is that we will be getting a card from AMD that basically matches the 1080Ti for $699 which is a significant savings over any of the 1080Ti's that I'd actually be interested in buying. 1080Ti performance is what I am targeting for my rebuild so Vega VII at least gives me a possible alternative to consider and that's all that really matters to me. Whichever fits my build (and budget) at the time is what I'll get.



nolive721 said:


> I was in the same boat before CES started.
> 
> But the with the AMD VII at 700USD, and from what I feel like 20-30% mark up for me to buy it in Japan, and the anncoument from NVIDIA thye will support Async through drivers are making me seriously reconsidering that logic
> 
> *I am now leaning more towards keeping my great OCer 1080Ti* and get 3 monitors to be say 48-75Hz Freezsync certified in the hope they would work together in that range with my NVIDIA GPU, again thanks to this new driver implementation


I think that's definitely the way to go if you already have a good 1080Ti as that level of performance is still flagship territory (despite all the nonsense about "But, but, its a 2 year old card!!!"). I see this Vega VII as a decent (if not super exciting) upgrade for someone with an older or lower-tier card (like me) looking for ~1080Ti performance at a significant value (considering most of the 1080Ti's that you'd actually want to buy seem to still be over $1000 on NE).


----------



## nolive721

Thanks Majin.
Just noticed I should have updated my sig about my current GPU.
I bought my first AIO liquid-cooled 1080 Ti for 600 USD 4-5 months ago and it's been great, but I wanted to try an EVGA model (FTW3 Hybrid), which I got for zero net money since I sold the MSI for as much as I paid for the EVGA. The EVGA didn't disappoint either: a good OCer, and whisper quiet and cool under load.

Happy (very) so far, but the two FreeSync monitors already in my setup are tempting me to go back to Vega, adding a third one to complement my Ryzen CPU towards a full AMD rig.

The presence of 3 fans on the VII card shown at CES worries me a bit on the cool-and-quiet front, and the performance show-off Lisa did could well be a best case.

Anyway, it's best for me to wait and see what the NVIDIA drivers really deliver (and when?), and the same applies to the VII.


----------



## Ultracarpet

Majin SSJ Eric said:


> "No, nope, NO!!! The ONLY thing that matters is that AMD didn't actually release a 12-core, 5GHz Zen 2 chip TODAY for $330 so that completely invalidates absolutely EVERYTHING in the leak, Zen 2 is a bust and AMD can only make 8-core chips, Intel and Nvidia are #1 forever, and Jim is just a lying fanboy scamming everybody for clicks!!1!"
> 
> -Scotty99 and Ultracarpet


He was COMPLETELY wrong about Vega 2 and Navi. Like the absolute opposite of what he was betting on happened.

He was COMPLETELY wrong about any ryzen 3000 chips being announced.

Only 2 cells out of 60 in his table have been confirmed (actually only 1, but I'm giving TDP a pass) as of right now. Six months from now, when the first Ryzen 3000 chips release, no one will even remember his videos (much like his Polaris videos), so actual comparisons won't be made. 

He was COMPLETELY wrong about which Zen 2 chip they would demo. He thought AMD was going to showcase a 16c/32t chip running at 5.1 GHz. 

Literally the ONLY thing he got right was the chiplet design, which was really just deduced from its use in Epyc. 

Why is everyone defending a clickbait YouTuber? Wasn't it not too long ago that everyone laughed at wccftech and Videocardz articles for reporting on stuff like this? Also, I found it funny that, in Jim's own words, if he doesn't commit to believing the whole leak, he might as well believe none of it. That was actually his reasoning for doubling down on Navi being announced at CES and Vega 2 being scrapped. All of you defending him and these leaks might be sad if he follows that logic in his next video and scraps every theory and leak he was leaning on...

To be clear, I'm not even disappointed by what AMD showed, I'm just pointing out that this guy got almost nothing right about CES....

Also, please don't lump me in the same pool as Scotty. I'm not telling AMD to throw in the towel for releasing a less-than-stellar GPU, nor am I relishing AMD's failure to showcase a 16c/12c mainstream CPU. I have no doubt they will release 12 and 16-core variants, but I also don't think those products will matter much in the grand scheme of things, as there just isn't a use case for the majority of users to have that many cores. The 8c/16t chip is going to be the big mover for enthusiasts/gamers, as it will likely be the best performer for the applications they use, much like the 9900K is against Intel's own HEDT lineup.

As of right now, I'm mostly interested in what they could achieve by using that vacant chiplet spot for a GPU. It doesn't look like an HBM stack would fit there, though... so any decently powerful GPU would be severely bottlenecked by the DDR4...


----------



## Majin SSJ Eric

Ultracarpet said:


> He was COMPLETELY wrong about Vega 2 and Navi. Like the absolute opposite of what he was betting on happened.
> 
> He was COMPLETELY wrong about any ryzen 3000 chips being announced.
> 
> Only 2 cells out of 60 in his table have been confirmed (actually only 1, but I'm giving tdp a pass) as of right now. 6 months from now when the first ryzen 3000 release, no one will even remember his videos (much like his polaris videos) so actual comparisons won't be made.
> 
> He was COMPLETELY wrong about what ryzen 2 chip they would demo. He thought AMD was going to showcase a 16c32t chip running at 5.1 GHz.
> 
> Literally the ONLY thing he got right was the chiplet design, which was actually just deduced based on its use for epyc.
> 
> Why is everyone defending a clickbait youtuber? Was it not too long ago that everyone would laugh at wccf and videocardz articles for reporting on stuff like this? Also, I found it funny that in Jim's own words he says if he doesn't commit to believing the whole leak, he might as well believe none of it. That was actually his reasoning in doubling down in NAVI being announced at CES and Vega 2 being scrapped. All of you defending him and these leaks might be sad if he follows that logic for his next video and scraps every theory and leak he was leaning on for his theories...
> 
> To be clear, I'm not even disappointed by what AMD showed, I'm just pointing out that this guy got almost nothing right about CES....
> 
> Also, please don't lump me in the same pool as Scotty. I'm not telling AMD to throw in the towel for releasing a less than stellar GPU, nor am I relishing over AMD's failure to showcase a 16c/12c mainstream CPU. I have no doubt they will release a 12 and 16 core variant, but I also don't think those products are going to matter that much in the grand scheme of things, as there just isn't a usecase for the majority of users to have that many cores. The 8c16t chip is going to be the big mover for enthusiasts/gamers as it will likely be the best performer for the applications they use, much like the 9900k is against Intel's own HEDT lineup.
> 
> As of right now, I'm mostly interested in what they can achieve if they use that vacant chiplet spot for a GPU. Doesn't look like an HBM stack would fit on there though... So any decently powerful GPU would be severely bottlenecked by the ddr4...


Speaking of what he actually said in the leak video, I believe he prefaced the entire thing by telling his viewers to take all of it with a significant grain of salt. I already stated that the announcement part of the leak was wrong, as they didn't announce any SKUs. The content of the video, though, was about what AMD is going to eventually release later this year (I predicted an April or May launch of at least the 6 and 8-core CPUs, but it might not be till Computex), and it is still entirely possible that all the actual meat of the leak will pan out. Either way, we should eventually know definitively whether this leak video was correct, partially correct, or totally made up. The only part that was definitely wrong was the announcement; all of the SKUs could still end up being right (though I doubt that), and you can bet people will remember this video in six months. In fact, I bet this same thread will be bumping around whenever we do get confirmation of Zen 2's final configs...


----------



## rdr09

andre02 said:


> Bleah, this announcement leaves room for speculation until the summer.
> 
> Am i the only one who heard 2500 Cinebench score for the AMD chip and 2050 for Intel ?
> 
> What was the score of the AMD chip ?



You can watch the video again. If I read it correctly, it's 2048 for Intel and 2057 for AMD. 

A stock R7 2700X (3.7 GHz base, 4.3 GHz boost) will run Cinebench R15 at about 4.1 GHz on all cores (I think) and score about 1820. At 4.4 GHz on all cores using PBO, it scores about the same 2060. So, if the demo chip was the R5 3600X (4 GHz base with boost to 4.8 GHz according to the leak), I would assume it was running around 4.4 GHz all-core, assuming similar IPC.
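rdr09's back-of-the-envelope logic can be sketched in a few lines (a rough sketch only: the 2060-point/4.4 GHz reference, the 2057 demo score, and the linear score-vs-clock scaling are all assumptions taken from the posts in this thread):

```python
# Rough Cinebench R15 nT scaling, assuming the score scales linearly
# with all-core clock at fixed IPC (a simplification).
REF_SCORE, REF_CLOCK = 2060.0, 4.4   # R7 2700X with PBO, all-core (assumed)
DEMO_SCORE = 2057.0                  # Zen 2 ES score from the CES demo (assumed)

def clock_for_score(score, ipc_gain=0.0):
    """All-core clock needed to hit `score`, given a fractional
    IPC gain over the 2700X reference point."""
    points_per_ghz = REF_SCORE / REF_CLOCK
    return score / (points_per_ghz * (1.0 + ipc_gain))

print(f"{clock_for_score(DEMO_SCORE):.2f} GHz at Zen+ IPC")          # ~4.39
print(f"{clock_for_score(DEMO_SCORE, 0.10):.2f} GHz with +10% IPC")  # ~3.99
```

In other words, the same score is consistent with either ~4.4 GHz at unchanged IPC or ~4.0 GHz with a ~10% IPC gain, which is exactly why the demo alone pins down so little.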


----------



## gt86

Ultracarpet said:


> The Vega 7 being 25-40% faster than the Vega 64, and using 25% less power at the same performance should spell it out for you that Vega 7 will likely use about the same power as Vega 64. [...]
> 
> Worse- It's going to be a 300w card.





nolive721 said:


> because I am a bit surprised they mention they could achieve VEGA 64 with 25% less power but did design a quite beefy cooler to support it.
> 
> I know that the VEGA 64 runs hot, speaking from my own short experience with the product, but does that make sense for you?


You're getting it wrong: the claim is that Vega 7 will draw 25% less power than Vega 64 at the same performance. 

The facts:
RTX 2080 is ~39% faster than Vega 64.
RTX 2080 draws ~229W, while Vega 64 pulls ~303W.
Source: https://www.computerbase.de/2018-10...st/3/#abschnitt_messung_der_leistungsaufnahme

Vega 7 is supposedly 25% more efficient (according to AMD), so it would need something like 337W for the same performance as a 2080. Nearly 50% more... 
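That arithmetic, written out (every input is the poster's assumption: the computerbase.de power figures, the ~39% performance gap, and the reading of AMD's claim as 25% better performance per watt):

```python
# Projected power for Vega-64-class silicon at RTX 2080 performance.
VEGA64_POWER = 303.0       # W, measured (assumed, from computerbase.de)
RTX2080_POWER = 229.0      # W, measured (assumed, from computerbase.de)
PERF_GAP = 1.39            # RTX 2080 ~39% faster than Vega 64 (assumed)
PERF_PER_WATT_GAIN = 1.25  # one reading of AMD's "25% more efficient" claim

power_at_2080_perf = VEGA64_POWER * PERF_GAP              # ~421 W at old perf/W
vega7_estimate = power_at_2080_perf / PERF_PER_WATT_GAIN  # ~337 W at new perf/W

print(round(vega7_estimate))                              # 337
print(round((vega7_estimate / RTX2080_POWER - 1) * 100))  # 47 (% above the 2080)
```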


After getting these numbers from AMD, I doubt that Navi will compete against the 3-year-old Pascal chips. Poor AMD.


----------



## FlanK3r

Pinnacle Ridge (2700X) needs around 4400 MHz on all cores for a similar score.
https://hwbot.org/submission/4024276_handrox_cinebench___r15_ryzen_7_2700x_2060_cb

That means this ES must have been running at a lower all-core boost frequency than Pinnacle Ridge (my guess is around 4.1-4.2 GHz), assuming Zen 2 brings some IPC gain.


----------



## ToTheSun!

Ultracarpet said:


> Why is everyone defending a clickbait youtuber?


The AdoredTV adoration is very silly. You should see AMD's sub-reddit.

He's made some decent commentary recently, but that's about it. He might be right about specs for Zen 2 (let's hope...), but he was clearly wrong about its announcement, Navi, and AMD's CES presentation in general.


----------



## umeng2002

The only "disappointing" thing about Ryzen 1 and 2 is the lack of OC headroom without LN2. So if the default base and boost clocks of Ryzen 3 aren't much higher than Ryzen 2's, better OC headroom can make the chips more "fun", on top of the IPC increase. Lisa Su did seem genuinely concerned about the Ryzen 3 demo crashing, so that chip must really be early silicon - maybe overclocked hard to run where they think final silicon will land.


----------



## JackCY

umeng2002 said:


> The only "disappointing" thing about Ryzen 1 and 2 is the lack of OC headroom without LN2. So if the default base and boost clocks of Ryzen 3 aren't that much more than Ryzen 2, better OC headroom can make the chips more "fun." Plus, add on to that the increase in IPC. Lisa Su did seem genuinely concerned about the Ryzen 3 demo crashing, so that chip must really be early silicon. - maybe OC'd the piss out of to make it run to where they think final silicon will land.


The same lack of OC headroom can be said of Intel CPUs and of Nvidia and AMD GPUs. It's all about how hard they push the clocks out of the factory, and they've gotten more aggressive with that in recent years. Long gone are the days when you could OC +35% by bumping up the voltage and clocks; smaller nodes also like to run at a certain level fairly efficiently, and past that point power explodes terribly fast, with more voltage barely buying any higher clocks.

AMD has always been more aggressive than others in clocking their products higher out of the box, leaving smaller OC headroom.

The concern about crashing was over the racing game demo, not the product they were presenting, LOL. OMG, people.


----------



## Raghar

Frankly, what I saw in the image of the Ryzen chip was just a multi-die design. I wouldn't call it a chiplet. For me, a chiplet is this.








Also this chiplet kinda sucks, but what can you do. Intel hired Keller, which should improve stuff like this by a lot.

Ryzen 3 looks like it uses a normal 14-12nm GF/Samsung process for the cache, PCIe lanes, schedulers, and memory controllers, and a 7nm TSMC process for the 8-core die fed by the 14nm die. Intel did something similar with the Q6600, which was two E6600-class dies in one package.








As you can see, it's a "simpler" design; the PCIe and memory controllers were on the chipset.

The important part of the AMD announcement is the empty space for a second 8-core die. I assume this would create latency problems similar to Threadripper's: 120-220 ns die-to-die communication. But it also means AMD plans affordable 12 to 16-core CPUs. 


I wonder if AMD is using the GF/Samsung process because it's more mature, or if AMD has a licensing agreement with GF and this is a way to avoid penalties for manufacturing the full die at TSMC. Another reason might be the price of 7nm: making only half of the CPU on 7nm might save some money.


----------



## EniGma1987

umeng2002 said:


> I don't think the 12 to 16 core parts will even launch in the summer. They'll probably wait until the fall.
> 
> That extra die space is probably just big enough for a decent, small Navi GPU... or another Zen 2 chiplet.


An ~80mm2 area to play with is actually enough for a beast of an iGPU, almost an RX 580 scaled down to 7nm. In fact, since the memory controllers would come off (system memory is handled through the I/O die), it's roughly enough space to fit an RX 580's worth of logic on a 7nm process. Of course, something that powerful would be insanely held back by system memory bandwidth, so I doubt AMD will waste the transistors on such an overbuilt design that could never be taken advantage of, but it is possible.
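The die-area claim can be sanity-checked with very rough numbers (every figure here is an assumption: the Polaris die size from public spec listings, an idealized ~2x logic density gain from 14nm to 7nm, and a guessed share of the die spent on memory PHYs):

```python
# Very rough check of whether an RX-580-class GPU minus its memory
# interface could fit in the ~80 mm^2 vacant chiplet spot.
POLARIS_AREA = 232.0   # mm^2, RX 580 die on 14nm (assumed)
DENSITY_GAIN = 2.0     # idealized 14nm -> 7nm logic scaling (assumed)
MEM_PHY_SHARE = 0.25   # fraction of die used by GDDR5 PHYs/controllers (guess)

shrunk = POLARIS_AREA / DENSITY_GAIN        # ~116 mm^2 on 7nm
without_mem = shrunk * (1 - MEM_PHY_SHARE)  # ~87 mm^2 without memory interface

print(round(without_mem))  # 87, in the ballpark of the vacant spot
```

Analog blocks like memory PHYs scale far worse than logic, so moving them to the 14nm I/O die is exactly what makes this arithmetic plausible at all.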


----------



## Dogzilla07

@Ultracarpet

The AdoredTV leaks were wrong about the timing and about what would be shown, but he is 3 for 3 on specifics.

1) The chiplet design. (As soon as the Epyc die was shown at the New Horizon presentation, a month before the leak video, I speculated, correctly, that it was going to be a chiplet design: 8-core dies plus one I/O die, with only one extra hop to RAM. At that point, before he got the leaks, AdoredTV himself was still speculating about a monolithic die, but it turns out Ian from AnandTech, the Videocardz guy, and I were right about the chiplets from the moment they were announced.)

2) That more than 8 cores are possible from the start.

3) That clocks would be higher than 4.4 GHz even at the ES stage. (The ES Ryzen in that test vs the 9900K was confirmed running at up to 4.6 GHz at an unconfirmed ~75W, vs the 9900K's ~125W.) That tells us that, after binning, the non-ES samples have enough headroom within 105W/135W to reach at least 5.0 GHz with PBO, and possibly an 8-core at 5.0 GHz.

Nothing that happened either proves or disproves that whoever supplied him with the leaks is wrong, just that they were wrong about some of it. 
Prices get finalized about a week before we learn of them, and depending on the state of 7nm production, some or many of the planned announcements could have been changed or delayed. None of this is set in stone; there is no black and white here, only shades of gray depending on economics, physics, and how your product is coming along.
I still think the leaks are correct for everything except prices and final clocks within each price segment. Before the final announcement, all this information is Schrödinger's cat: you need to look at the percentage deviation of the leaks from reality, not demand absolute accuracy.

P.S. Like I mentioned in other posts, AMD only told the people at the conference that it runs up to 4.6 GHz, not whether that's PBO boost or an all-core OC. The lower scores at the pre-release event kinda point to either a last-minute PBO optimization or a manual OC (quite possibly a manual OC, which is still insane considering the ~75W footprint).
The unconfirmed power consumption values are approximations courtesy of the AnandTech article.


----------



## LancerVI

As I've said before, Jim @AdoredTV said very clearly to take these leaks with a grain of salt.

He was also correct on numerous items as delineated above by @Dogzilla07 


Where people seem to be getting hung up is on the announcement timing, prices, and availability portions of the leaks. All of those things, even with inside information, are nearly impossible to get right, as they can change on a whim. I never put much weight on those aspects of the leak, because what competitors do, process yields, floods in Taiwan, etc. can change all of it in a heartbeat. Anyone expecting AMD, or Nvidia and Intel for that matter, to play their entire hand at any given time is a fool.

I guarantee you that after Nvidia signed off on their presentation on Sunday, AMD was hard at work deciding what to reveal and what not to, based on what they knew about their competitors. 

This is chess, not checkers people.


----------



## SuperZan

ToTheSun! said:


> The AdoredTV adoration is very silly. You should see AMD's sub-reddit.
> 
> He's made some decent commentary recently, but that's about it. He might be right about specs for Zen 2 (let's hope...), but he was clearly wrong about its announcement, Navi, and AMD's CES presentation in general.



To be fair, some of us are neither "defending" nor "adoring" so much as wondering why his videos provoke such a visceral emotional response relative to other speculative editorial, and why certain parties are so willing to absolve the viewing public of any "hype train" culpability. Yes, some of what Jim said was clearly wrong. No, he was not wrong about everything. Yes, he discussed some specifics that panned out better than the tech sites that trend towards pure journalism. No, one should never treat an editorial source like hard-nosed journalism.

I'm fine with calling him on things that were concretely disproved. We should do that with editorial writers and youtubers. I'm less interested in a pitchforks and torches campaign because a guy who speculates got idiots wildly excited, or in lambasting said guy for things which remain neither proven nor disproved.


----------



## bigjdubb

It's kinda funny how we can all watch the same thing and come away feeling totally different. I didn't watch all of the Adored stuff (can't stand the guy), but I watched enough of it to get the picture, and I don't think that picture was false.

I am pretty upbeat after yesterday's keynote. The outlook looks promising for sticking with AMD for my processor, and it looks like I will be able to switch to AMD for my video card. I doubt I will make the switch to the VII (availability might make up my mind), but it makes me feel good about them being able to offer something worthwhile with the next architecture.

Maybe things are going to heat up between Nvidia and AMD, though; aren't both of them going to use the same process for their next-gen cards? I don't know if Nvidia's next design will be another rehash, but their chips already clock well. What are they going to be able to do on TSMC 7nm...


----------



## xTesla1856

Aren't those traces?


----------



## ibb27

bigjdubb said:


> I don't know if Nvidias next design will be another rehash but they already clock well, what are they going to be able to do on tsmc 7nm...


There are rumors that NVIDIA will prefer Samsung's 7nm EUV over TSMC's, and that their new 7nm GPUs won't arrive until 2020...


----------



## Raghar

xTesla1856 said:


> Aren't those traces?


Sure they are. They are mass-producing that PCB, and it was designed for one I/O die and two small CPU dies.

However, they might add a dummy silicon die to reduce the chance of the PCB warping where that empty space is. Edit: wait, AM4 is a ZIF socket, so they probably wouldn't bother with a dummy die. Sockets like AM4 let the CPU lie on a flat surface, which makes them warp-resistant.


----------



## Dogzilla07

LancerVI said:


> because based on what their competitors do, process yields, floods in Taiwan, etc can change all of this in a heartbeat. Anyone expecting AMD, or nVidia and Intel for that matter, to play their entire hand at any given time is a fool.
> 
> I guarantee you that after nVidia signed off their presentation on Sunday, AMD was hard at work deciding what to reveal and what not too based on what they knew about their competitors.
> 
> This is chess, not checkers people.


Exactly. There are a dozen physics, yield, defect, binning, and real-life considerations (save the best dies for Epyc 2 / Threadripper 3) influencing the decision of what to show, and then there's the economics side, and that's only half or two-thirds of it.
Someone in the Twitch chat mentioned that Lisa Su was uncharacteristically nervous in her voice and body language, as if she was either disappointed in what she was saying, or very excited about what she was going to say or could not say. I think we can safely say it was a combination of all three. The Fnatic guy was obviously a last-minute addition (that was painful to watch); I think he was a last-minute stand-in for an early Navi preview and/or more new Ryzen info.



bigjdubb said:


> It's kinda funny how we can all watch the same thing and come away feeling totally different.


That's the way things are: some of us approach this over-emotionally, kicking the logical part of our brains like an abusive partner; some of us approach it cold-heartedly with pure logic; and some of us balance the two, adjusting our preconceptions and expectations as we go along. 

How many times have I witnessed others (or argued myself) fighting over the same underlying point, only because we couldn't get past the semantics of the words and the way we each presented our arguments...


----------



## tajoh111

Majin SSJ Eric said:


> Speaking of what he actually said in the leak video, I believe he prefaced the entire thing by telling his viewers to take all of it with a significant grain of salt. I already stated that the announcement part of the leak was wrong as they didn't announce any sku's. The content of the video though was about what AMD is going to eventually release later this year (I predicted an April or May launch of at least the 6 and 8-core CPU's but it might not be til Computex) and it is still entirely possible that all the actual meat of the leak is going to pan out. Either way, we should know definitively whether or not this leak video was correct, partially correct, totally made up, whatever. The only part of the leak that was definitely wrong was the announcement part; all of the sku's could well end up being right (though I doubt that) and you can bet that people will remember this video in just 6 months. In fact I bet this same thread will be bumping around whenever we do get confirmation of Zen 2's final configs....


He got many things wrong again. It was not only the announcement; it's the absence of Vega VII from his leaks that undoes his credibility.

Vega VII is the product closest to launch, yet it was somehow missing from this so-called product list. That should be very telling: it's the product nearest to announcement and release, so it should have been on there. 

It's not entirely his fault, though. These leaks are likely coming from AMD as a misinformation/guerrilla-marketing plan to get people to wait for their products. I consider Kyle's information network better than Adored's, and he said he basically heard the same thing as Adored. In addition, AMD was not interested in the source of the leak, but rather in how people were reacting to it. That, people, is marketing research.

What this leads me to believe is that someone is leaking stuff for AMD, very much in the same vein as Vega: hype and stall. Leaks don't need to be real. You don't think these leaks held off some of the competition's sales during the Christmas season?

The thing is, AdoredTV has such a rabid fanbase that even when he gets stuff wrong, which is quite often for video cards, people make excuses, and he partially covers his bases with "grain of salt" disclaimers or his latest "targets and goals". People don't remember when he gets stuff wrong because of the anger towards the competition, and because people like him prop up AMD as the good guy even though AMD does very sneaky marketing, much akin to the competition. His credibility doesn't take the hit it should when he gets stuff wrong (as I predicted it would) because of AMD's underdog status and the partisan bias AMD fans tend to have (people believe what they want to hear, and don't want to believe that the company they are defending is out to make money off of them). 

Look at how negative the reactions are and how many videos are made against the RTX 2060(like adored for example) and you will see how the media is biased against Nvidia now. People now say positive reviews for the RTX 2060 are shills now and thus discrediting the press. But taking into account it's priced strong versus the competition even with current street pricing(enough to cause price drops) and the relatively large die size, its not a bad product considering the lack of competition. It's certainly a better product than any GPU AMD has released on 14nm aside from perhaps Polaris. 

The only thing that made it kind of look like a bad product was the so called leaks of a Navi chip at $249 at 20% faster or comparison to dies that are half the size of the RTX 2060(double size die + Finfet = more than 2x cost to produce vs those cards). This is what people kept propping up. But considering that Vega VII is going to be around this level of performance at $699, those rumors can be flushed away. 

I said Zen 2 was not going to be released initially as a 8 core on this forum and again I am right. This is because we have to realize AMD is not this friend of the consumers and wants to make money like anyone else. Giving 2x 7nm CPU chips + a big 12/14nm chip does not make sense to be sold at $499 when the competition has nothing and costs are going up for AMD vs their last Zen architecture. AMD cares about their margins like anyone else. 

I think people need to question more why these leaks are only coming from AMD side and they typically occur when AMD sales begin to slow or they don't have a product that competes with the competition.


----------



## Ultracarpet

Dogzilla07 said:


> @Ultracarpet
> 
> AdoredTV leaks were wrong about the timing, and what would be shown, but he is 3 for 3 for specifics.
> 
> 1) Chiplet (i speculated and was right that it was gonna be chiplet design 8 cores + 1 I/O die with only 1 extra hop to RAM as soon as new horizons presentation Epyc die is shown, so a month before the leak video), at that point before he got the leaks AdoredTV himself was speculating about monolithic die instead, but turns out the Ian guy from Anandtech, the guy from Videocardz and me were right about the chiplets since the moment they got announced).
> 
> 2) That from the start it's possible for more than 8 cores
> 
> 3) That the clocks are gonna be higher than 4.4 GHz even at the ES stage (the ES Ryzen in that test vs the 9900K was confirmed running at up to 4.6 GHz at an unconfirmed ~75 W vs the 125 W of the 9900K). This tells us that after binning of the non-ES samples there is enough room within 105 W/135 W to get to 5.0 GHz PBO at least, and possibly an 8-core 5.0 GHz.
> 
> Nothing that happened either proves or disproves that whoever supplied him with the leaks is absolutely wrong; just that they were relatively wrong about a bunch of stuff.
> Prices get finalized a week before we know of them, and depending on the current state of 7nm production, some or a lot of the planned announcements could have been changed or delayed (none of this is set in stone; there is no black and white here, only various shades of gray depending on economics, physics, and how your product is coming along).
> I still think the leaks are correct for everything except price and final clocks, depending on the price segments. Before the final announcement all the information is a Schrödinger's cat; you need to look at the % deviation from the actual accuracy of the leaks, not the absolute accuracy.
> 
> P.S. Like I mentioned in the other posts, AMD only told the people at the conference that it's up to 4.6 GHz, not whether that's PBO boost or whether they OC'ed all the cores to that speed. The lower scores in the pre-release event kinda point to either a last-minute PBO optimization or a manual OC (it's quite possible it was a manual OC, which is still insane considering the ~75 W footprint).
> The unconfirmed power consumption values are approximations courtesy of the Anandtech article.


1. He deduced the chiplet design would be used on Ryzen because Epyc was already confirmed to use it. Oh my, I think I can almost feel his spiritual energy!

2. If they are using the chiplet design, more than 8 cores was a "no duh". Even if you didn't take that into account, it would have been the 3rd generation in a row that AMD stayed with 8 cores on their mainstream platform even after 2 process shrinks. Much like point number three, adding more cores was a CERTAIN eventuality. That's why it makes me laugh when people say "well the leaks aren't DISPROVEN yet"... if you wait long enough of course it will come true. 

3. Saying that the clocks are going to be higher is about as genius as guessing after playstation 2 would come playstation 3.

Everything he provided was information already kicking around forums/Reddit; he bet on the leaks being correct and made clickbait opinion pieces treating them as true, even doubling down after many tech news outlets dismissed them. 99% of it was wrong as it related to CES, which was basically the entire point of his videos, because he was trying to ride the hype wave for views leading up to the event. $$$$

As for the rest of what you said: unfortunately, I don't think that CPU was running at 75 watts. I think their test systems had very low power consumption outside of the CPU. Most reviews have the 9900K pulling around 200 watts for the whole system with all cores loaded, and I remember one putting the rest of the system at around 60 watts. This test had the 9900K peak at 179 watts. If AMD's test systems were pulling around 20 watts less than that 60-watt review's, I'm guessing the platforms were drawing closer to 40-45 watts and the CPUs were doing the rest. My guess is that the Ryzen ES was pulling closer to 90 watts. Also, as you may already know, Ryzen doesn't have a great overclocking track record. If they are saying up to 4.6 GHz, I wouldn't expect much more than a couple hundred MHz of headroom beyond that before a voltage wall is hit. For example, the 2700X has single-core boost all the way up to 4.5 GHz, which only the best 2700Xs seem to be able to actually achieve with all-core overclocks.
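The subtraction behind that guess can be sketched in a couple of lines. The 179 W peak and the 40-45 W platform figure are the post's own estimates; the ~132 W reading for the Ryzen ES system is a hypothetical value chosen only to illustrate how the ~90 W guess falls out.

```python
# Back-of-the-envelope CPU power estimate from whole-system readings.
# All figures are rough estimates from the discussion above; the 132 W
# Ryzen ES system reading is a hypothetical illustration, not a measurement.
def cpu_power(system_w, platform_w):
    """Estimate CPU draw by subtracting the non-CPU platform draw."""
    return system_w - platform_w

i9_w = cpu_power(179, 42.5)     # 9900K system peaked at ~179 W -> ~136.5 W for the CPU
ryzen_w = cpu_power(132, 42.5)  # hypothetical ES system reading -> ~89.5 W, near the ~90 W guess
```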


----------



## white owl

Ultracarpet said:


> If they are saying up to 4.6 GHz, I wouldn't expect much more than a couple hundred MHz of headroom beyond that before a voltage wall is hit. For example, the 2700X has single-core boost all the way up to 4.5 GHz, which only the best 2700Xs seem to be able to actually achieve with all-core overclocks.


I don't think anyone said "up to". 4.6 GHz is just what that chip was running; she implied it's clocked lower than the final parts will be: "Not final frequency, early sample."
Furthermore, you can't really assume it will OC poorly based on Ryzen+. That's like assuming Coffee Lake would OC exactly the same as Skylake, when really Coffee Lake OC'd really well. Actually, that's not a great comparison because those were the same arch... maybe Sandy Bridge to Haswell is a better one. I personally don't think they will have a lot of OC room from the start (possibly not until Zen 2+), but I'm also not just going to assume that Zen 2's behavior mirrors that of the current Zen parts.


----------



## Ultracarpet

white owl said:


> I don't think anyone said "up to". 4.6 GHz is just what that chip was running; she implied it's clocked lower than the final parts will be: "Not final frequency, early sample."
> Furthermore, you can't really assume it will OC poorly based on Ryzen+. That's like assuming Coffee Lake would OC exactly the same as Skylake, when really Coffee Lake OC'd really well. Actually, that's not a great comparison because those were the same arch... maybe Sandy Bridge to Haswell is a better one. I personally don't think they will have a lot of OC room from the start (possibly not until Zen 2+), but I'm also not just going to assume that Zen 2's behavior mirrors that of the current Zen parts.


The person I was quoting said that, actually... I was just assuming they heard something from CES that I didn't. Regardless, I still think it was running around 4.5-4.6 GHz. 4.5 would make more sense, though, IMO, as 4.6 GHz would imply they made almost zero IPC gain.


----------



## tpi2007

tajoh111 said:


> He got many things wrong again. It was not only the announcement; it was the lack of Vega VII in his leaks that undermines his credibility.
> 
> Vega VII is the product closest to launch, yet it is somehow missing from this so-called product list. That should be very telling, because the product closest to being announced and released should have been on the list.
> 
> It's not entirely his fault though. These leaks are likely coming from AMD as a misinformation/guerrilla marketing play to get people to wait for their products. I consider Kyle's information network better than Adored's, and he said he basically heard the same thing as Adored. In addition, AMD was not interested in the source of the leak, but rather in how people were reacting to it. That, people, is marketing research.
> 
> What this leads me to believe is that someone is leaking stuff for AMD, very much in the same vein as Vega: hype and stall. Leaks don't need to be real. You don't think these leaks held off some of the competition's sales during the Christmas season?
> 
> The thing is, AdoredTV has such a rabid fanbase that even when he gets stuff wrong, which is quite often for video cards, people make excuses, and he partially covers his bases with "grain of salt" caveats or his latest "targets and goals". People don't remember when he gets stuff wrong because of the anger towards the competition, and because people like him prop up AMD as the good guy even though AMD does very sneaky marketing, much akin to the competition. His credibility does not take the hit it should when he gets stuff wrong (as I predicted) because of AMD's underdog status and the tribal bias AMD fans tend to have (i.e. people tend to believe what they want to hear and don't want to believe the company they are defending is out to make money off of them/use them).
> 
> Look at how negative the reactions are and how many videos are made against the RTX 2060 (like Adored's, for example) and you will see how the media is biased against Nvidia now. People now call positive RTX 2060 reviews shilling, which discredits the press. But taking into account that it's priced strongly versus the competition even at current street pricing (enough to cause price drops), and given the relatively large die size, it's not a bad product considering the lack of competition. It's certainly a better product than any GPU AMD has released on 14nm, aside from perhaps Polaris.
> 
> The only thing that made it look like a bad product was the so-called leaks of a Navi chip at $249 and 20% faster, or comparisons to dies half the size of the RTX 2060's (double the die size + FinFET = more than 2x the cost to produce versus those cards). This is what people kept propping up. But considering that Vega VII is going to be around this level of performance at $699, those rumors can be flushed away.
> 
> I said Zen 2 was not going to be released initially as an 8 core on this forum, and again I am right. This is because we have to realize AMD is not the friend of consumers and wants to make money like anyone else. Giving away 2x 7nm CPU chips + a big 12/14nm chip at $499 does not make sense when the competition has nothing and costs are going up for AMD versus their last Zen architecture. AMD cares about their margins like anyone else.
> 
> I think people need to question why these leaks only come from AMD's side, and why they typically occur when AMD sales begin to slow or when AMD doesn't have a product that competes with the competition.



I don't doubt that AMD is doing sketchy marketing behind the scenes, but I doubt it influenced GPU shopping during the holiday season. They had just publicly announced and sampled 7nm Vega for professionals, and we knew quite well what it was capable of: a die-shrunk, power-hungry Vega that moves up a single tier in the same power envelope. Realistically, if Navi were around the corner and not at least 6+ months away, AMD would have thrown 7nm professional Vega in the trash can and moved on with Navi. So, at best, people shouldn't have expected anything in terms of GPUs from AMD in the first half of 2019.

On the other side of the pond and speaking about the RTX 2060, look at what Nvidia did with TechSpot / Hardware Unboxed for the RTX 2060 launch: https://www.techspot.com/news/78184-no-soup-you-where-our-rtx-2060-review.html

As to Zen 2, did you put a "not" in there by mistake? It seems so given the context of what you write afterwards. In any case, given AMD's market position, doing exactly what you say they won't is what they absolutely need to do when Intel "has nothing". Intel, in a much better market position in 2006-2007, released the Core 2 Duos and Core 2 Quads in quick succession and iterated so fast that by July 2007 the Q6600 was on the famous G0 revision and was priced at an astonishingly low $266. And this was Intel, the behemoth. And you know what AMD had back in July of 2007? _They had nothing._ Why did Intel bother then? Giving two full C2D 4 MB L2 cache dies away, eating into their profit margins, right? You seriously think that AMD can afford to just call it a day by releasing an 8 core chip that beats the 9900K by 5%?


----------



## white owl

Ultracarpet said:


> The person I was quoting said that, actually... I was just assuming they heard something from CES that I didn't. Regardless, I still think it was running around 4.5-4.6 GHz. 4.5 would make more sense, though, IMO, as 4.6 GHz would imply they made almost zero IPC gain.


 Oh I see it now lol
Yeah it was 4.6Ghz according to a thing I saw posted in one of these threads.
TBH if they can beat a stock 9900k (4.7Ghz) with their own comparable CPU -100Mhz I'd say they did a good job. Perhaps there isn't a large IPC bump and we'll see even higher clocks? We can dream can't we?


----------



## Dogzilla07

courtesy of Videocardz comment section.


----------



## lightsout

Dogzilla07 said:


> courtesy of Videocardz comment section.


Wasn't Forza part of the demo and she said it was running at 1080p??


----------



## Ultracarpet

white owl said:


> Oh I see it now lol
> Yeah it was 4.6Ghz according to a thing I saw posted in one of these threads.
> TBH if they can beat a stock 9900k (4.7Ghz) with their own comparable CPU -100Mhz I'd say they did a good job. Perhaps there isn't a large IPC bump and we'll see even higher clocks? We can dream can't we?


Another thing to consider is that AMD's SMT is actually a bit better in Cinebench than Intel's, I believe. A 2600X at 4 GHz scores slightly above the 8700K at 4 GHz in the multi-threaded test, but loses slightly in single-thread IPC. Source: https://www.techspot.com/article/1616-4ghz-ryzen-2nd-gen-vs-core-8th-gen/

Taking that into account, I don't actually think there was much of an IPC bump at all; sub-5%, I'm guessing. A 2700X at 4.4 GHz could already almost touch 2000 pts in Cinebench. What we are seeing is a big reduction in power and slightly higher clocks, the latter of which has really been the only thing holding Zen back from the very beginning.
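That per-clock reasoning can be made explicit. The ~2000 pts at 4.4 GHz is the 2700X figure above; the 2040-point demo score at 4.6 GHz is a purely hypothetical number for illustration, since no exact score is given here.

```python
# Implied per-clock (IPC) change between two Cinebench multi-thread results,
# assuming the score scales roughly linearly with all-core clock.
# The 2040-pt / 4.6 GHz pair is a hypothetical example, not a confirmed result.
def implied_ipc_gain(score_new, clock_new, score_old, clock_old):
    return (score_new / score_old) / (clock_new / clock_old) - 1

gain = implied_ipc_gain(2040, 4.6, 2000, 4.4)  # ~ -2.4%: no real IPC uplift at face value
```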

I'm not saying AMD didn't do a good job with this chip. I'm extremely excited, and this will probably be my next upgrade. I have a placeholder 8400 right now and was planning on dropping in a better Intel chip later, but given the price of 9700s and 9900s I think I can wait it out for these new Ryzen chips. For the price of an upgrade to one of those chips, I could probably buy the new AMD chip AND a new AMD motherboard. Though I HIGHLY doubt the $229 price from these leaks; it is much more likely to be around $350-400 if the performance really is that close to a 9900K.

All I'm hoping now is that they actually can clock higher than the demo yesterday so that the single threaded advantage the 9900k holds can be thrown out. Like I said before CES, if they can come within 5-10% of a max OC 9900k in the WORST scenarios (which usually means single threaded applications/games), they will have succeeded massively IMO.


----------



## miklkit

So the 3xxx will have 9900k performance at much less watts? Cool! That leaves when and how much. Since I won't be able to buy anything until next fall when is irrelevant. That leaves how much. Worst case I only replace the 1700 in my rig. Best case I buy an X570 motherboard too. Pie in the sky I buy new ram too but don't expect it unless the price drops 100% by then. Looking forward to Black Friday.


----------



## mouacyk

I think the revealed Radeon VII pricing is also indicative of how AMD will price the Ryzen Matisse high-end. MSRP is going to suck.


----------



## KyadCK

mouacyk said:


> I think the revealed Radeon VII pricing is also indicative of how AMD will price the Ryzen Matisse high-end. MSRP is going to suck.


Unlikely. The products are not comparable that way;

Vega II is a large 7nm monolithic die on an interposer with four expensive HBM2 stacks.

Ryzen 3 is one or two very small (<100mm^2) dies on 7nm and a slightly larger die on nice and cheap 14nm.


----------



## lightsout

mouacyk said:


> I think the revealed Radeon VII pricing is also indicative of how AMD will price the Ryzen Matisse high-end. MSRP is going to suck.


I agree, bummer, hope I am wrong.


KyadCK said:


> Unlikely. The products are not comparable that way;
> 
> Vega II is a large 7nm monolithic die on an interposer with four expensive HBM2 stacks.
> 
> Ryzen 3 is one or two very small (<100mm^2) dies on 7nm and a slightly larger die on nice and cheap 14nm.


I don't think they will ultimately care about that. If they are beating Intel or equaling, and then crushing them in core count, they are going to want to get paid.


----------



## andrews2547

lightsout said:


> Wasn't Forza part of the demo and she said it was running at 1080p??



I'm not sure about in the actual demo, but I can run Horizon 4 at a mix of medium and high (mostly high) 1080p and I get 50-70fps using an HD7950. I would be seriously surprised if a Radeon VII runs it maxed out at 4K without being able to average 100fps.


----------



## KyadCK

lightsout said:


> I agree, bummer, hope I am wrong.
> 
> 
> I don't think they will ultimately care about that. If they are beating Intel or equaling, and then crushing them in core count, they are going to want to get paid.


The 2700X loses by ~10-15% to a 9900K in most tests. Does the 2700X cost only 10-15% less than a 9900K?

The 1800X, when it launched, was busy trashing Intel's entire HEDT lineup. Did it cost as much as a 5960X?

The Epyc 7601 ties the Xeon 8180 in most tests. Does it cost as much?

The entire point of Ryzen's CCX/multi-die design is that the chips are cheap to produce with stupidly high yields. So cheap that AMD can charge $100 for the same silicon that goes into the $300+ chip. Their margins are already fine. The 16-core will cost a lot, possibly even $600, but no more than Threadripper already costs.


----------



## bigjdubb

Ultracarpet said:


> Did you miss that NVidia is opening up it's drivers to all A-sync monitors?


Wait a minute. What I read was a list of 12 monitors (out of the hundreds) that got approved and will have driver support. When did it become all of them? I missed that follow up story.


----------



## white owl

You didn't miss much. They just mentioned that they'll allow you to try it yourself in NVCP.


----------



## airisom2

NV approved 12 of them, but there is an option you can select in the drivers to force enable adaptive sync on untested freesync monitors or monitors that didn't fully meet NV's criteria.


----------



## nolive721

bigjdubb said:


> Wait a minute. What I read was a list of 12 monitors (out of the hundreds) that got approved and will have driver support. When did it become all of them? I missed that follow up story.


There are more to come.

Quoting NVIDIA:

G-SYNC Compatible monitor support will begin later this month with the launch of our first 2019 Game Ready driver. Already, 12 monitors are G-SYNC Compatible, and we'll continue to evaluate monitors and update our support list going forward.

In the end, there is hope that VRR will work on any FreeSync monitor through NVIDIA's driver implementation. They just couldn't say that too openly, to avoid killing off G-Sync, their sort-of failure.


----------



## bigjdubb

Well that's good to hear. Anyone want to buy a lightly used 32" 1440p 144hz G-Sync monitor...


----------



## lightsout

andrews2547 said:


> I'm not sure about in the actual demo, but I can run Horizon 4 at a mix of medium and high (mostly high) 1080p and I get 50-70fps using an HD7950. I would be seriously surprised if a Radeon VII runs it maxed out at 4K without being able to average 100fps.


That's what I was thinking. Maybe she misspoke or maybe I misheard, but I could have sworn she said in the demo it was running at 1080p and sitting around 100 fps. I remember thinking: 1080p? Who cares?


----------



## Kpjoslee

Looks like it will be another 4-5 months before we get more info. If anything, I'm interested in their new line of Threadrippers (non-NUMA 32 core) and a new X599 platform. Can't wait to get rid of this X399 Gigabyte, which has given me more trouble than it's worth.


----------



## Ultracarpet

lightsout said:


> That's what I was thinking. Maybe she misspoke or maybe I misheard, but I could have sworn she said in the demo it was running at 1080p and sitting around 100 fps. I remember thinking: 1080p? Who cares?


Yeah, based on TechSpot's benchmark of the Vega 64 in that game at 4K (75 fps), 25% more would be ~93 fps at 4K ultra quality with 2x MSAA... much closer to what we were seeing on screen. They were probably just in a slightly easier-to-run part of the game, or used reduced AA.
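That estimate is just linear scaling of the cited benchmark result (the 75 fps baseline is TechSpot's Vega 64 number mentioned above):

```python
# Scale a baseline benchmark result by a claimed relative performance uplift.
def scaled_fps(baseline_fps, uplift):
    return baseline_fps * (1 + uplift)

estimate = scaled_fps(75, 0.25)  # Vega 64's 75 fps at 4K + 25% -> 93.75 fps
```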


----------



## coupe

lightsout said:


> I agree, bummer, hope I am wrong.
> 
> 
> I don't think they will ultimately care about that. If they are beating Intel or equaling, and then crushing them in core count, they are going to want to get paid.


I think it's just because the card is so expensive to make. With 16 GB of HBM2, the memory alone probably costs $200-300. The ONLY reason they made this card is that Nvidia raised prices so high that this AMD product became viable to make some money.


----------



## ozlay

lightsout said:


> Wasn't Forza part of the demo and she said it was running at 1080p??


Yeah, she said 1080p.



Dogzilla07 said:


> courtesy of Videocardz comment section.


I wonder how much of that power draw is from the IGPU.


----------



## ZealotKi11er

ozlay said:


> Yeah, She said 1080p.
> 
> 
> 
> I wonder how much of that power draw is from the IGPU.


100W.


----------



## Majin SSJ Eric

tajoh111 said:


> Spoiler
> 
> 
> 
> He got many things wrong again. It was not only the announcement; it was the lack of Vega VII in his leaks that undermines his credibility.
> 
> Vega VII is the product closest to launch, yet it is somehow missing from this so-called product list. That should be very telling, because the product closest to being announced and released should have been on the list.
> 
> It's not entirely his fault though. These leaks are likely coming from AMD as a misinformation/guerrilla marketing play to get people to wait for their products. I consider Kyle's information network better than Adored's, and he said he basically heard the same thing as Adored. In addition, AMD was not interested in the source of the leak, but rather in how people were reacting to it. That, people, is marketing research.
> 
> What this leads me to believe is that someone is leaking stuff for AMD, very much in the same vein as Vega: hype and stall. Leaks don't need to be real. You don't think these leaks held off some of the competition's sales during the Christmas season?
> 
> The thing is, AdoredTV has such a rabid fanbase that even when he gets stuff wrong, which is quite often for video cards, people make excuses, and he partially covers his bases with "grain of salt" caveats or his latest "targets and goals". People don't remember when he gets stuff wrong because of the anger towards the competition, and because people like him prop up AMD as the good guy even though AMD does very sneaky marketing, much akin to the competition. His credibility does not take the hit it should when he gets stuff wrong (as I predicted) because of AMD's underdog status and the tribal bias AMD fans tend to have (i.e. people tend to believe what they want to hear and don't want to believe the company they are defending is out to make money off of them/use them).
> 
> Look at how negative the reactions are and how many videos are made against the RTX 2060 (like Adored's, for example) and you will see how the media is biased against Nvidia now. People now call positive RTX 2060 reviews shilling, which discredits the press. But taking into account that it's priced strongly versus the competition even at current street pricing (enough to cause price drops), and given the relatively large die size, it's not a bad product considering the lack of competition. It's certainly a better product than any GPU AMD has released on 14nm, aside from perhaps Polaris.
> 
> The only thing that made it look like a bad product was the so-called leaks of a Navi chip at $249 and 20% faster, or comparisons to dies half the size of the RTX 2060's (double the die size + FinFET = more than 2x the cost to produce versus those cards). This is what people kept propping up. But considering that Vega VII is going to be around this level of performance at $699, those rumors can be flushed away.
> 
> I said Zen 2 was not going to be released initially as an 8 core on this forum, and again I am right. This is because we have to realize AMD is not the friend of consumers and wants to make money like anyone else. Giving away 2x 7nm CPU chips + a big 12/14nm chip at $499 does not make sense when the competition has nothing and costs are going up for AMD versus their last Zen architecture. AMD cares about their margins like anyone else.
> 
> 
> 
> *I think people need to question why these leaks only come from AMD's side, and why they typically occur when AMD sales begin to slow or when AMD doesn't have a product that competes with the competition.*


Do you seriously not understand WHY it is that so many people root for AMD and want them to succeed against Intel and Nvidia??? I mean, it has literally NOTHING to do with people thinking AMD is the "good guy" or that they are some kind of "gamer charity" and I refuse to believe that you actually believe that either. The reason AMD is supported the way they are (and that so many people pin their hopes and optimism on them) is simply because they are the ONLY company in the entire world that has any hope at all of keeping Intel and Nvidia honest. You yourself constantly go on and on about how "lack of competition" basically excuses Nvidia for whatever underhanded and anti-consumer thing they do (and Intel USED to do before Ryzen). Well there you go, THAT's why all these PC gamers, hobbyists, builders, etc are rooting for AMD to succeed. 

We have all witnessed first hand what Intel and Nvidia do each and every time they are left unopposed in their respective markets, and we are all beyond sick and tired of it. The thing is, I have no doubt whatsoever that if the reverse were true and AMD had the monopoly on performance that Nvidia does (or that Intel did before Ryzen), they would be doing the same thing, because (as you love to mention) they are a business whose sole purpose is to make as much money as possible. And you know what? If that were the case, you'd see these same reasonable people rooting for Intel or Nvidia to succeed so that the balance of competition was restored. This is not about simple fanboys being fanboys (well, obviously there are fanboys, but I'm talking about the majority here); it's about us consumers wanting all of these companies to fight tooth and nail to EARN the privilege of getting our money, rather than any one of them having the ability to extort it from us because we simply have no other choice.



mouacyk said:


> I think the revealed Radeon VII pricing is also indicative of how AMD will price the Ryzen Matisse high-end. MSRP is going to suck.


Try again. The price reveal of Radeon VII is indicative of the fact that it has 16GB of HBM2 and has nothing to do with Ryzen. I read somewhere that even at $699 AMD will be taking at least a $50-per-unit loss on Radeon VII. It is speculated that they shoehorned 16GB into it to at least give it one major advantage over RTX (even if that advantage is not really all that relevant for gaming). Zen 2 will not be facing anything like that kind of a cost hurdle and AMD has already been extremely aggressive with their CPU pricing since Ryzen first released. I don't expect that to change at all, even if they do outright beat Intel's performance. As TPI said, AMD is simply in no position to charge a performance tax like Intel and Nvidia have been the past few years; they need market share and the best way for them to get it is to remain aggressive while they have the upper hand and crucially BEFORE Intel can respond.

But we'll see how they play things later this year...


----------



## umeng2002

Yeah, the only reason I want AMD to succeed is to provide competition. In fact, all my GPU purchases in my entire life have been nVidia cards... not because I purposely wanted to buy nVidia, but because they had the better product every time I was looking for a new card. But right now, nVidia is just competing with themselves. Jensen is talking smack about the Radeon VII because he knows it'll eat into his RTX 2080 sales.

There was something clearly broken with Vega and a lot of the features introduced with it never made it into software. It seems AMD has fixed those issues, and Radeon VII is what Vega should have been at launch.

Look at what Ryzen did to Intel. No way Intel would be releasing parts with more cores if Ryzen was a dud.


----------



## Kpjoslee

Majin SSJ Eric said:


> Try again. The price reveal of Radeon VII is indicative of the fact that it has 16GB of HBM2 and has nothing to do with Ryzen. I read somewhere that even at $699 AMD will be taking at least a $50-per-unit loss on Radeon VII. It is speculated that they shoehorned 16GB into it to at least give it one major advantage over RTX (even if that advantage is not really all that relevant for gaming). Zen 2 will not be facing anything like that kind of a cost hurdle and AMD has already been extremely aggressive with their CPU pricing since Ryzen first released. I don't expect that to change at all, even if they do outright beat Intel's performance. As TPI said, AMD is simply in no position to charge a performance tax like Intel and Nvidia have been the past few years; they need market share and the best way for them to get it is to remain aggressive while they have the upper hand and crucially BEFORE Intel can respond.
> 
> But we'll see how they play things later this year...


Um....Source? If they are taking $50-per-unit loss on that thing, that shouldn't even exist as a product.


----------



## Majin SSJ Eric

bigjdubb said:


> Wait a minute. What I read was a list of 12 monitors (out of the hundreds) that got approved and will have driver support. When did it become all of them? I missed that follow up story.


AFAIK, the driver will attempt to make Adaptive Sync work with any monitor. The list of "approved" monitors are just monitors that Nvidia has tested and guarantees satisfactory performance with.



Kpjoslee said:


> Um....Source? If they are taking $50-per-unit loss on that thing, that shouldn't even exist as a product.


Apparently it was from a WCCFTech rumor back in early December speculating about the departure of then-Senior VP of RTG Mike Rayfield. Basically, according to the rumor, Rayfield was ultimately responsible for Radeon VII in the first place, and there was rancor among the rest of the AMD executives because it wasn't a feasible product: it would cost $750 to build and only tie a GTX 1080 Ti. As I said, it was just something I had read regarding the pricing of Radeon VII (and with the cost of 16GB of HBM2, it's at least plausible).

https://wccftech.com/exclusive-mike-rayfield-amd-retires/


----------



## white owl

umeng2002 said:


> Yeah, the only reason I want AMD to succeed is to provide competition. In fact, all my GPU purchases in my entire life have been nVidia cards... not because I purposely want to buy nVidia, but because they have the better product every time I'm looking for a new card. But right now, nVidia is just competing with themselves. Jensen is talking smack about the Radeon VII because he knows it'll eat into his RTX 2080 sales.
> 
> There was something clearly broken with Vega and a lot of the features introduced with it never made it into software. It seems AMD has fixed those issues, and Radeon VII is what Vega should have been at launch.
> 
> Look at what Ryzen did to Intel. No way Intel would be releasing parts with more cores if Ryzen was a dud.


Similar story here.
Always Nvidia cards, BUT I refuse to buy them new.
I want AMD to succeed for the same reasons, but there's another, more personal reason: Intel's "buy us or be damned" strategy. Paying off and threatening OEMs to use their CPUs for their server and OEM needs, keeping us on 4 cores for the last several years, the whole chiller "misunderstanding", and the list goes on and on.
As for Nvidia... I can't cite exact cases because I'll get the details wrong, but there are several instances of Nvidia titles that ran very poorly on AMD, not because AMD hardware was inferior but because Nvidia did arbitrary things to make the game run worse. Even as recently as The Witcher 3 with the crazily tessellated hair, and the 3.5GB 970. They could have sold it as a 3.5GB card and it would have been fine, but AMD had 4GB on the 290s, so they lied about 500MB.
I can think of more on both sides, but I'm not typing it all out.



I'm fine with friendly competition, but holding the industry back, even sabotaging it, and lying to customers just so you can appear superior is childish and downright evil.



The only comparable thing AMD has done, that I know of, is some GPU that was advertised as one thing but sold with... well, something less; I can't quite remember.


----------



## gt86

umeng2002 said:


> In fact, all my GPU purchases in my entire life have been nVidia cards... not because I purposely want to buy nVidia, but because they have the better product every time I'm looking for a new card.


What did you do after AMD launched the 5850/5870 series?


----------



## Kpjoslee

Majin SSJ Eric said:


> Apparently it was from a WCCFTech rumor back in early December speculating about the departure of then-Senior VP of RTG Mike Rayfield. Basically, according to the rumor, Rayfield was ultimately responsible for Radeon VII in the first place, and there was rancor among the rest of the AMD executives because it wasn't a feasible product: it would cost $750 to build and only tie a GTX 1080 Ti. As I said, it was just something I had read regarding the pricing of Radeon VII (and with the cost of 16GB of HBM2, it's at least plausible).
> 
> https://wccftech.com/exclusive-mike-rayfield-amd-retires/


Well, I don't think it would have been greenlit if they were taking that much of a loss per unit. I'm sure the profit margin is really thin, but there's no way they'd be taking a loss.


----------



## guttheslayer

Ramad said:


> So you two would rather see an Intel CPU running at 4.7GHz losing to a future Ryzen 3 R5 (at an unknown frequency) than losing at 3.6GHz.
> 
> Are you two fanboys? If you are, then I have nothing more to say and will not reply. I don't like fanboys; AMD, Intel and Nvidia fanboys are all the same, and I would rather leave them alone.


You're not even a fanboy, you're an epic troll. This is one of the worst comments on OCN, with absolutely zero knowledge of the i9-9900K.

EVERY REVIEW has confirmed that a score of 2040 is the stock performance of the 9900K with all cores at 4.7GHz.

Don't try to mislead others when a simple Google search shows you otherwise. Or are you saying everything on Google (and thus the internet) is wrong because you personally tested a 9900K at 3.6GHz and got that 2040 score? In that case, prove it.

Seriously, stop behaving like a misinformed troll. Wake up and admit AMD has finally caught up with and exceeded Intel (for now).


----------



## kyrie74

Kpjoslee said:


> Well, I don't think it would have been greenlit if they were taking that much of a loss per unit. I'm sure the profit margin is really thin, but there's no way they'd be taking a loss.


For some reason, I think AMD is using dies that couldn't make it into the Pro cards, building these out of dies that would otherwise have been junked.


----------



## Cherryblue

kyrie74 said:


> For some reason, I think AMD is using dies that couldn't make it into the Pro cards, building these out of dies that would otherwise have been junked.


Like the 970 is a junked 980 die, the 1070 a junked 1080 die, the 2080 a junked 2080 Ti die... and so on.

That is the natural order of things around here. So if they are able to do it and still make a profit at $699, good.

I will never buy a GPU this expensive, but I still wish them good health, for competition's sake.


----------



## umeng2002

I think the reason they're 16GB is that they're just recycled Radeon Instinct MI50s. Had there been an 8GB MI40, we would have a cheaper version for gaming.

I doubt AMD even has 8GB 7nm Vega cards in the lab...


----------



## FlanK3r

Dogzilla07 said:


> courtesy of Videocardz comment section.


If it's right, it's possible only for max XFR boost, not for an all-core load, because Zen 2 IPC can't be worse than Zen+. A Ryzen 7 2700X at 4425MHz with 3466MHz CL15 memory hits 2050 points in R15. (I got 2010 points at 4350MHz with my AIO setup, a 2700X, and the CB11 bias.)
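A back-of-the-envelope way to sanity-check this IPC argument is points per GHz. A minimal sketch, where the 2700X figures come from the post and the Zen 2 clock is purely a hypothetical placeholder:

```python
# Crude IPC proxy from the Cinebench R15 figures in the post:
# a 2700X at a 4.425 GHz all-core clock scores ~2050 points.

def points_per_ghz(score: float, all_core_ghz: float) -> float:
    """R15 points per GHz of all-core clock (rough throughput-per-clock proxy)."""
    return score / all_core_ghz

zen_plus = points_per_ghz(2050, 4.425)

# Hypothetical: if a Zen 2 sample scored ~2040 at a 4.0 GHz all-core
# clock, its points-per-GHz would be clearly higher, i.e. higher IPC.
zen2_guess = points_per_ghz(2040, 4.0)

print(round(zen_plus), round(zen2_guess))  # 463 510
```

So the rumored score only implies worse IPC if the all-core clock during the run was at least as high as the 2700X's.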


----------



## Ramad

guttheslayer said:


> you are not fan boy even, you are a epic troll. This is one of the worst comment in OCN with absolute zero knowledge about I9-9900K.
> 
> EVERY REVIEW have confirm that score of 2040 is the stock performance of 9900K AT all cores with 4.7GHz.
> 
> Don't try to misled others when simple google shows u otherwise, or you trying to say everything on google (and thus the internet) is wrong because u personally tested 9900K at 3.6GHz and that 2040 score is what you achieved. In that case prove it.
> 
> Seriously stop behaving like a mis-informed troll. Wake up and admit AMD has finally caught up and exceed Intel (for now).


I feel sad for you and your manners. You didn't even read my earlier posts, so you have no idea what I meant, because you are "ordering" me to "admit" a fact that I wrote myself. Writing this from an AMD system with a Ryzen 1600 and an X370 motherboard, by the way. Have a nice day.


----------



## NightAntilli

Why are so many giving AdoredTV a hard time? Sure, he was incorrect on the GPU side, but I didn't expect otherwise, ever since the VII logo leaked, and Navi sounded way too ideal anyway. Additionally, it could very well be that Radeon VII was not planned, but after seeing the RTX series they decided to do it anyway. It's quite obvious that Radeon VII uses the salvaged chips that didn't make it into the Radeon Instinct parts; it wouldn't be hard at all to come up with that card at the last minute. And what's the deal with people bashing this card? Oh right, it's an AMD graphics card. I forgot. Never mind.

But regarding the CPUs, nothing here really disproves AdoredTV's leaks. Maybe they were supposed to announce it at CES but later decided to change course? It's not as if a CES presentation is set in stone months in advance.
And think about it: why would they announce performance and pricing of the whole line-up and kill sales of their current CPUs? It seems a better idea to reveal those things much closer to launch so current sales remain relatively high. We also don't know whether they had to delay the launch for whatever reason and thus showed less.

There are many variables, and to say that AdoredTV was more wrong than right is reaching at this point. The chiplet design is confirmed, and the 16C/32T part is practically confirmed if we look at the layout of the currently announced CPU. The clock speeds don't seem too far off... There is no reason the leaks would not be possible. And obviously, since everyone said it was all too good to be true when everything leaked, the prices will no longer hold, because AMD knows they can charge more and thus will.

AMD did what they needed to do: beat the most important Intel CPU with the same number of cores and threads at lower power. The other details would be great for us to know in advance, but not so great for AMD, giving Intel more time to adapt and consumers a bigger incentive to wait rather than buy something now.


----------



## Raghar

FlanK3r said:


> If it's right, it's possible only for max XFR boost, not for an all-core load, because Zen 2 IPC can't be worse than Zen+. A Ryzen 7 2700X at 4425MHz with 3466MHz CL15 memory hits 2050 points in R15. (I got 2010 points at 4350MHz with my AIO setup, a 2700X, and the CB11 bias.)


C'mon. The Ryzen 2700X is a single die with two groups of 4 cores and silicon connecting them. Obviously, when they split the IO and cores into separate dies, it can introduce latency and other inefficiencies. I assume AMD made improvements that compensated for the IO/core split, but better IPC isn't guaranteed.

Also, RAM speed should no longer affect Infinity Fabric.


----------



## FlanK3r

I remember AMD said IPC is better, and they also said the latency of the IMC will be lower than on 2nd-gen Ryzen. So...


----------



## Lee Patekar

FlanK3r said:


> I remember AMD said IPC is better, and they also said the latency of the IMC will be lower than on 2nd-gen Ryzen. So...


They talked about latency on Threadripper/Epyc, if I remember correctly, not Ryzen.


----------



## FlanK3r

If IPC is worse (clock for clock) than Pinnacle Ridge, it's a fail, especially on 7nm. But I don't believe the IPC of Ryzen 3000 could be worse than Pinnacle Ridge.


----------



## Telimektar

https://www.youtube.com/watch?v=g39dpcdzTvk

New video from AdoredTV about the keynote.


----------



## 99belle99

Just finished watching it. Sounds like his original leaks were true. I'd say he's kicking himself: he originally stated there would be chiplet designs on desktop, then some people twisted his arm, insisting it costs too much to do on desktop.

Looking good for AMD overall. Half the power, and that was AMD's lowest hand. Hopefully we see the true potential closer to launch: 16 cores, 32 threads.


----------



## EastCoast

I really don't know what to say about the AdoredTV video on that one. If it turns out to be true that AMD beat the 9900K with a 65W midrange CPU, then I just don't understand why AMD wouldn't say so in their keynote when they know investors are watching.

I'm sure we'll see chiplets in consoles as well. I also believe you could see a GPU die or 2 CPU dies in that package. The power consumption alone would work in consoles' favor, allowing next-gen consoles to be on par with current mid-to-high-end CPUs at release. That further suggests they'll be more than capable of true 4K HDR gaming.


----------



## Jarhead

Telimektar said:


> https://www.youtube.com/watch?v=g39dpcdzTvk
> 
> New video from AdoredTV about the keynote.


If even half of that is true, Intel is in serious trouble, not so much in the consumer space (maybe there too) as in the data center market. If AMD could merely match Intel's performance at half the power consumption, that would catch some attention, but this looks like AMD being better in every single way: less power, better IPC, cheaper platform, cheaper per-chip cost, more cores. This is AMD stomping Intel way harder than when the Athlon took the performance crown from the Pentium line. I was worried this would never happen.


----------



## 113802

gt86 said:


> What did you do after AMD launched the 5850/5870 series?


Bought a GTX 470 because I wanted to be able to play DirectX 11 games. It still has a lifetime warranty on it.

People are still rocking their GTX 480s to this day, since they support DirectX 12.


----------



## 99belle99

I had a GTX 580 before I jumped ship to AMD; since then I've had a 290, a Fury X, and now a Vega 56. Yes, that 290 was a MASSIVE upgrade over the GTX 580.


----------



## kd5151

Let's hope AdoredTV was right. Let's hope what we saw was a 65W TDP Ryzen 5 3600.


----------



## EastCoast

kd5151 said:


> Let's hope AdoredTV was right. Let's hope what we saw was a 65W TDP Ryzen 5 3600.


IMO, yeah, this is one of those all-of-it-has-to-be-true-or-none-of-it kind of things.
Or else AMD used a golden center-of-the-wafer die that would be kept internal for life.

But I must reiterate that AMD should have said so in their own keynote. There is no way Intel doesn't know by now, and keeping that info from customers/investors/etc. just doesn't make sense to me.


----------



## Ultracarpet

Telimektar said:


> https://www.youtube.com/watch?v=g39dpcdzTvk
> 
> New video from AdoredTV about the keynote.


haahahahahahahahahaha
@FlanK3r you are getting called out, buddy, looool. The funniest part is that Adored doesn't even know why you were calling his videos/leaks embarrassing compared to CES; you really struck a nerve with him!

Notice how Adored gave himself a pass on everything he got wrong, then proceeded to double down on a bunch more garbage to throw on the hype train? 2x the power efficiency of the i9-9900K? He just took AMD's carefully crafted in-house "30% more efficient" figure (likely optimistic compared to real-world testing) and turned it into a 2x power-efficiency claim.
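For what it's worth, the arithmetic is easy to check. A sketch using the approximate system-level figures widely reported from the CES demo (assumed here: ~2057 CB R15 at roughly 133W wall power for the Zen 2 ES rig vs ~2040 at roughly 180W for the 9900K rig; wall power also understates the CPU-only gap):

```python
# Perf-per-watt ratio from assumed CES-demo wall-power readings.

def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

zen2_es = perf_per_watt(2057, 133)   # assumed demo figures
i9_9900k = perf_per_watt(2040, 180)

ratio = zen2_es / i9_9900k
print(f"{ratio:.2f}x")  # ~1.36x at the wall, well short of 2x
```

Under those assumptions the demo supports "significantly more efficient", but not a clean 2x at the system level.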

Also, I don't know why everyone is making such a huge deal about the 12c and 16c chips. Latency will be worse, dual-channel RAM will probably hold them back a bit, and they will be harder to cool. You might get a 10-20% improvement over what Threadripper is already capable of today. The 8 core 16 thread chip AMD showed is going to be the best chip for 99% of mainstream use cases, in exactly the same fashion as the 9900K compared to the rest of Intel's product stack.

Like, for example, Adored's claim right at the end about the 16 core variant being 2x as fast as a 9900K... No, it really won't be. It likely won't clock as high, it will not scale perfectly, and the few programs that would actually see best-case scaling across that many cores serve a VERY small niche of users.

Also, what happened to Adored's philosophy of believing only bits and pieces of leaks? Shouldn't he technically throw all of it out, since nothing was announced at CES and Vega 2 was announced instead of Navi?

Super informative, though; it must have taken him a good 30 minutes to scour Reddit and other tech forums to come up with all of his theories for the future. Gotta keep the hype train rolling, can't miss out on that good $$$$$.


----------



## gammagoat

This thread is better than the Springer show.

Hold up for a min, gonna pop more corn!


----------



## bigjdubb

So what are we arguing about? Are we saying the YouTube guy was wrong about things being announced at CES, or wrong about things existing at all? He was definitely wrong about a 16 core CPU being announced at CES, that's pretty obvious. I guess it's a wait-and-see sort of thing as far as the parts' existence is concerned, but it still seems _possible_ (there's space for it) that we will see 12 and 16 core chips. Most of the other information in the leaks won't be known until the products actually launch and we have prices/specs, so I can't see how any of it is relevant at this point.

I'm not sure who came up with the all-or-none concept, but it seems like a fairly stupid rationale. Every hardware leak is a combination of information and speculation; it's entirely possible for either to be correct or incorrect on its own.


----------



## ozlay

Perhaps Navi was delayed, so they dropped a Vega 2 in its place. It does seem odd that they would drop a card so suddenly; we heard nothing about Vega 2 until about a month ago, and now they rush it out the door. Maybe Navi was going to be at CES but got delayed, and this fills the gap the way the Titan V did. Actually, it should destroy the Titan V at FP64.


----------



## tpi2007

Cherryblue said:


> Like the 970 is a junked 980 die, the 1070 a junked 1080 die, *the 2080 a junked 2080 Ti die*... and so on.
> 
> That is the natural order of things around here. So if they are able to do it and still make a profit at $699, good.
> 
> I will never buy a GPU this expensive, but I still wish them good health, for competition's sake.



Bold for emphasis.

Correction: the 2080 is made from a binned TU104 die, or, if you prefer, a "junked" Quadro RTX 5000, and the 2080 Ti is made from a binned TU102 die, a "junked" Quadro RTX 8000 / 6000 / Titan RTX. That's to say, this time around not even the 2080 is made from the flagship die like last time. The 2070, though, is made from a full die: it has its own TU106, from which the 2060 is also cut.





EastCoast said:


> IMO, yeah, this is one of those all-of-it-has-to-be-true-or-none-of-it kind of things.
> Or else AMD used a golden center-of-the-wafer die that would be kept internal for life.
> 
> But I must reiterate that AMD should have said so in their own keynote. There is no way Intel doesn't know by now, and keeping that info from customers/investors/etc. just doesn't make sense to me.



The only thing they wanted to highlight at this point is that they have a competitor to the 9900K, because that affects Intel but not AMD's own current portfolio until we know the pricing; saying more at this point in time, around 5-6 months before release, could affect their own product portfolio, so leaving things in the air is the best policy. Will the space for the second chiplet be occupied at launch or later down the road? And with what? Another CPU chiplet, or a GPU die for APUs? Or will they provide both options? And if it's a second CPU chiplet, how many cores will be enabled? 12? 16? And at what price? If they announced 12 cores now, it would logically mean it couldn't go over $500, as the 1920X is already selling below that and the 2920X at around $650; besides, we're talking about the mainstream platform. If they announced a 16 core, it would make even less sense, because the asking price of the 8C/16T would logically go down, at the very least in the minds of consumers, who would hold off on buying the current CPUs, both mainstream and Threadripper, expecting major price drops. As you can see, it would be counter-productive.

With all these questions unanswered, but the potential there and clearly shown, AMD got the job done considering how early it is. Why mention Zen 2 at all, then? Well, what else did they have to present at their first-ever CES keynote, in the year that marks AMD's 50th birthday (founded May 1, 1969) and the year in which they will most probably surpass Intel in both IPC and general performance? They had to mark the moment, even if they couldn't share everything because it's too early. Going on stage to deliver Radeon VII alone would have been underwhelming, to say the least.


----------



## guttheslayer

Ultracarpet said:


> haahahahahahahahahaha
> 
> Also, I don't know why everyone is making such a huge deal about the 12c and 16c chips. Latency will be worse, dual-channel RAM will probably hold them back a bit, and they will be harder to cool. You might get a 10-20% improvement over what Threadripper is already capable of today. The 8 core 16 thread chip AMD showed is going to be the best chip for 99% of mainstream use cases, in exactly the same fashion as the 9900K compared to the rest of Intel's product stack.


A 16C CPU is a very, very big deal if it's released on the AM4 socket, and simple logic tells you why:


1) It means the 8C/16T, which you said is so awesome for everyone to grab, will be available on the cheap. Instead of charging us $329 to $379, it could come in $100 cheaper, since it would be considered a Ryzen 5 (instead of an R7).
2) AMD gains market share as fast as possible before Intel catches up, which could be quick considering Jim Keller is working with them now.
3) Reputation. A halo 16C that is almost twice as fast as what the competitor offers in the $500 segment shows AMD has true leadership in CPUs (don't overlook this; it's the exact reason NV is popular).



For point 3, reputation is extremely vital. AMD doesn't need the 16C to sell well; they just need to do exactly what Nvidia does with their high-end cards. In graphics, AMD is extremely competitive in the mid-range/mass segment, but why is NV always able to charge such a premium and get away with it? Because their high-end cards are so much better and more energy efficient. They use the high end to promote how great Pascal/Turing is, then add that premium onto the entire lineup. 16C/32T lets AMD do the same in the future as they gain more and more of a foothold in market share.

We keep complaining that AMD offers no competition at the high end for GPUs, hence Nvidia can always charge us sky-high prices. Put that situation into CPUs in the near future, and it's Intel offering no competition in the high-end CPU space. Just imagine the advantage AMD will have at that point.


----------



## Ultracarpet

guttheslayer said:


> A 16C CPU is a very, very big deal if it's released on the AM4 socket, and simple logic tells you why:
> 
> 
> 1) It means the 8C/16T, which you said is so awesome for everyone to grab, will be available on the cheap. Instead of charging us $329 to $379, it could come in $100 cheaper, since it would be considered a Ryzen 5 (instead of an R7).
> 2) AMD gains market share as fast as possible before Intel catches up, which could be quick considering Jim Keller is working with them now.
> 3) Reputation. A halo 16C that is almost twice as fast as what the competitor offers in the $500 segment shows AMD has true leadership in CPUs (don't overlook this; it's the exact reason NV is popular).
> 
> 
> 
> For point 3, reputation is extremely vital. AMD doesn't need the 16C to sell well; they just need to do exactly what Nvidia does with their high-end cards. In graphics, AMD is extremely competitive in the mid-range/mass segment, but why is NV always able to charge such a premium and get away with it? Because their high-end cards are so much better and more energy efficient. They use the high end to promote how great Pascal/Turing is, then add that premium onto the entire lineup. 16C/32T lets AMD do the same in the future as they gain more and more of a foothold in market share.
> 
> We keep complaining that AMD offers no competition at the high end for GPUs, hence Nvidia can always charge us sky-high prices. Put that situation into CPUs in the near future, and it's Intel offering no competition in the high-end CPU space. Just imagine the advantage AMD will have at that point.


1) There is a very good chance they won't release the 12 and 16 core chips at the same time as the 8 core. I'm guessing they wait a couple of months (fall), and if that happens, the 8 core Ryzen will be priced closer to its relative performance against the 9900K and won't get a price drop until those 12 and 16 core chips hit the market.

2) The 8 core and 6 core variants are likely to do most of the work picking up market share against Intel on the consumer side.

3) CPUs are different from GPUs. GPUs don't have the added complication of single-threaded performance being just as important, often more important, than parallel performance. AMD already leads in multithreaded CPU workloads; you can literally go buy a consumer Threadripper part with 32 cores and 64 threads, and if an application scales perfectly, nothing comes close. Guess where it loses, though? Single-threaded, and most things that don't scale perfectly, like video games. That is why the 8 core (and probably even the 6 core), which will likely have lower latency and higher achievable clocks than the dual-chiplet designs, will actually be the processor to earn that reputation. Having enough cores is just a checkbox; what actually swings consumers is king-of-the-hill single-threaded performance. I still don't think AMD will topple Intel here, as I don't believe AMD's chips will clock high enough to dethrone a 5+ GHz Coffee Lake chip. They WILL, however, be close enough (5-10% max OC vs max OC) for the product to gain significant market share, even at just a slight price undercut to Intel's lineup. No need to price them like they belong on a McDonald's value menu.



bigjdubb said:


> So what are we arguing about? Are we saying the YouTube guy was wrong about things being announced at CES, or wrong about things existing at all? He was definitely wrong about a 16 core CPU being announced at CES, that's pretty obvious. I guess it's a wait-and-see sort of thing as far as the parts' existence is concerned, but it still seems _possible_ (there's space for it) that we will see 12 and 16 core chips. Most of the other information in the leaks won't be known until the products actually launch and we have prices/specs, so I can't see how any of it is relevant at this point.
> 
> I'm not sure who came up with the all-or-none concept, but it seems like a fairly stupid rationale. Every hardware leak is a combination of information and speculation; it's entirely possible for either to be correct or incorrect on its own.


I've been over this a million times lol, but he got basically everything relating to CES wrong, which was almost the entire focus of his videos. The only thing he got right was the use of a chiplet design, but since that was already shown on Epyc, there wasn't much clairvoyance in that theory.

As for the existence of everything... again, guessing that they would have higher clocks, lower power consumption, and, by association with the chiplet design, more cores is not far off guessing that Sony would release a PlayStation 3 after the PlayStation 2.

The all-or-nothing concept? Oh, AdoredTV came up with that one lol. It's actually that exact concept that made him throw out the possibility of Vega 2 being shown at CES lool.

Here you go- https://youtu.be/MG-onUm__c8?t=923


----------



## coelacanth

I think AMD just doesn't want to tip its hand. They will launch a CPU that beats the 9900K, Intel will counter with something, and just at that moment AMD will launch something else that beats whatever Intel releases. It wouldn't help AMD to let Intel and the world know exactly what they have waiting in the wings. If AMD has Intel's best desktop CPU beat, there's no need to release a 12C or 16C monster when people will gladly buy the 8C CPUs.


----------



## Majin SSJ Eric

EastCoast said:


> I really don't know what to say about the AdoredTV video on that one. *If it turns out to be true that AMD beat the 9900K with a 65W midrange CPU, then I just don't understand why AMD wouldn't say so in their keynote* when they know investors are watching.
> 
> I'm sure we'll see chiplets in consoles as well. I also believe you could see a GPU die or 2 CPU dies in that package. The power consumption alone would work in consoles' favor, allowing next-gen consoles to be on par with current mid-to-high-end CPUs at release. That further suggests they'll be more than capable of true 4K HDR gaming.


If I had to guess, I would say they didn't want to release any more info about this product right now than they absolutely had to. Obviously, specific SKU segmentations (core counts, designations, clock speeds, etc.) are still a good ways out from being finalized if the chips won't be launching until Computex. There is also the danger of hurting existing Ryzen 2000 sales (if the 8-core that just beat a 9900K is revealed to be only a midrange R5, that's a pretty compelling reason to hold off on buying a current R7 right now). They also probably don't want specifics about how they are going to segment Zen 2 getting into Intel's hands now, 6 months from availability, to keep Intel from preemptively responding with price cuts or what have you. You also have to consider that the lack of actual numbers (while showing off the performance) is a great way to fuel further speculation and discussion about upcoming Ryzen products and keep people's attention squarely on AMD over the coming months.

I'm as disappointed as anybody that AMD provided next to no concrete info about Zen 2 but, having seen their demo with an early ES chip beating a 4.7GHz 9900K with the same core and thread count while also using significantly less power, I am overall much more excited about AMD's new chips than I was before CES (and that's the whole point of a product demo at a show like this).


----------



## gt86

WannaBeOCer said:


> Bought a GTX 470 because I wanted to be able to play DirectX 11 games. It still has a lifetime warranty on it.
> 
> People are still rocking their GTX 480s to this day, since they support DirectX 12.
> 
> https://www.youtube.com/watch?v=3asFouAVpfs


The HD 58x0 supports DX11.
Can't believe there are people who still use a Fermi GPU. I mean, come on, this is the Nvidia equivalent of a Vega GPU:
draws like 240W and gets beaten by a GTX 1050 or so.

And what's the argument that the GTX 480 "supports" DX12 when it performs worse than on DX11?


----------



## Majin SSJ Eric

gt86 said:


> The HD 58x0 supports DX11.
> Can't believe there are people who still use a Fermi GPU. I mean, come on, this is the Nvidia equivalent of a Vega GPU:
> draws like 240W and gets beaten by a GTX 1050 or so.
> 
> And what's the argument that the GTX 480 "supports" DX12 when it performs worse than on DX11?


Eh, I'm almost as bad. I'm still rocking TWO original Titans that I bought brand new in March 2013! And you know what? They still work great for me in all my games on my 1440p monitors! Granted, my game library consists mostly of games that also came out around that time (or before), but I still play stuff like Crysis 3 on Ultra, and GTA V and BF4 maxed, just fine.

I even flashed the BIOSes and managed to get both of them above 1300MHz back in the day, and they really do fly at that speed, even now.


----------



## white owl

gt86 said:


> The HD 58x0 supports DX11.
> Can't believe there are people who still use a Fermi GPU. I mean, come on, this is the Nvidia equivalent of a Vega GPU:
> draws like 240W and gets beaten by a GTX 1050 or so.
> 
> And what's the argument that the GTX 480 "supports" DX12 when it performs worse than on DX11?


Can I have your contact info?
In a few years I'll need to run my hardware by you to make sure it's still up to your standards. I wouldn't want to be using something you don't deem appropriate.

FYI, your comparison is whack; Vega in any flavor is much faster than a 1050. As smart as you are, I'm sure you knew that already. I bet your keyboard just types BS on its own.


----------



## gt86

Maybe I should add that we pay €0.27/kWh (about $0.31) here, so energy efficiency isn't "just" about noise levels and room temperature, but also about the energy bill. So in most cases there is no point in using such a heater with an HDMI output. 

But forget about the energy costs; I can't stand noisy PCs. Under gaming load my CPU fan runs at 600rpm and my GPU fan at 900rpm, so it's virtually silent to me, and that's no coincidence.


----------



## VeritronX

WannaBeOCer said:


> Bought a GTX 470 because I wanted to be able to play DirectX 11 games. It still has a life time warranty on it.
> 
> People are still rocking their GTX 480 till this day since it supports DirectX12
> 
> https://www.youtube.com/watch?v=3asFouAVpfs


I have a GTX 470, two GTX 480s and a GTX 580 that are all dead and have been for a few years, and not because they weren't looked after. Most of the people I know who had a full-sized Fermi chip have had them suddenly die 4-5 years after they were bought new.


----------



## Eusbwoa18

*NEW LEAK ON THREADRIPPER*

Look at the size of that die! That could fit like 100 chiplets. Wow!


----------



## white owl

pgdeaner said:


> Look at the size of that die! That could fit like 100 chiplets. Wow!


Photoshop skill level 3/10 haha


----------



## Diffident

white owl said:


> Photoshop skill level 3/10 haha



I don't think the size of that is photoshopped. In older episodes of PCWorld's "The Full Nerd", they had one sitting on the desk in the background... it looked about 10 inches tall.


----------



## Eusbwoa18


Would my jibe have been more credible if I said it came from AdorableTV?



white owl said:


> Photoshop skill level 3/10 haha


----------



## Hwgeek

Can you imagine 8 chiplets like this in TR3? We could expect 15K~16K in Cinebench R15 in PBO mode!


----------



## delerious

ozlay said:


> Perhaps Navi was delayed, so they dropped a Vega 2 in its place. It does seem odd that they would drop a card so quickly; I mean, we heard nothing about Vega 2 until about a month ago, and now they rush it out the door. Maybe Navi was going to be at CES but got delayed, so this is like a Titan V to fill the gap. Actually, it should destroy the Titan V at FP64.


If you really want to go out on a limb - Navi will be released when X570 is ready because it will be a PCIe 4.0 card.


----------



## gt86

Isn't Vega 2 also PCIe 4.0, and isn't it going on sale next month?


----------



## ozlay

gt86 said:


> Isn't Vega 2 also PCIe 4.0, and isn't it going on sale next month?


Yes.


----------



## white owl

Diffident said:


> white owl said:
> 
> 
> 
> Photoshop skill level 3/10 haha
> 
> 
> 
> 
> I don't think the size of that is photoshopped. In older episodes of PCWorld's "The Full Nerd", they had one sitting on the desk in the background....it looked about 10 inches tall.

One might exist but that picture is photoshopped.


----------



## ozlay

pgdeaner said:


> Look at the size of that die! That could fit like 100 chiplets. Wow!


That is a box of chocolates, basically filled with a bunch of chocolate chiplets.


----------



## Majin SSJ Eric

gt86 said:


> Isn't Vega 2 also PCIe 4.0, and isn't it going on sale next month?


No, it's not PCIe 4.0, it's 3.0.

https://www.anandtech.com/show/13832/amd-radeon-vii-high-end-7nm-february-7th-for-699


----------



## tpi2007

2nd gen Vega does have PCIe 4.0 capability (see below), but for some reason AMD apparently isn't making it available for the consumer version Radeon 7. Maybe they don't want to give the arch too many accolades (it already has the 1 TB/s memory bandwidth one), given its lacklustre nature, reserving it for Navi instead. Not that it technically needs it anyway. There are supposed to be some active and idle power optimizations in the spec, but maybe they are not leveraging them yet. You could speculate that because of this AMD is fairly confident that they will come out with 7nm Navi ahead of Nvidia's 7nm next gen cards, in order to be able to claim the accolade for consumer GPUs.


----------



## gt86

tpi2007 said:


> AMD is fairly confident that they will come out with 7nm Navi ahead of Nvidia's 7nm next gen cards, in order to be able to claim the accolade for consumer GPUs.


I doubt that. Nvidia's current GPUs are like 100% more efficient than Polaris. So 7nm will give AMD 25% more efficiency... that means the new Navi architecture itself needs to boost efficiency by another 60%! 
(1.25 × 1.60 = 2.0, i.e. +100%)
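Spelled out, the two gains compound multiplicatively rather than additively: with the process node contributing a factor of 1.25, the architecture has to supply the rest of the 2x (+100%) target on its own.

$$(1 + 0.25)\,(1 + x) = 2 \;\Rightarrow\; x = \frac{2}{1.25} - 1 = 0.60$$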

Let alone what happens when Nvidia switch to 7nm.


----------



## tpi2007

gt86 said:


> I doubt that. Nvidia's current GPUs are like 100% more efficient than Polaris. So 7nm will give AMD 25% more efficiency... that means the new Navi architecture itself needs to boost efficiency by another 60%!
> (1.25 × 1.60 = 2.0, i.e. +100%)
> 
> Let alone what happens when Nvidia switch to 7nm.



They boosted IPC on first gen Ryzen by 52% compared to Excavator, so it's not impossible.

Anyway, I was talking about being the first with a consumer PCIe 4.0 GPU out, I wasn't making any performance speculations.


----------



## magnek

pgdeaner said:


> Look at the size of that die! That could fit like 100 chiplets. Wow!


LGA 40940 LOL


----------



## Hwgeek

How do you think isolating the CPU cores from the rest of the I/O will affect OCing?


----------



## Mygaffer

It's been a long time since I've done an AMD build for my primary machine but when I do a new build this year I think I'm going to base it around Ryzen.


----------



## NightAntilli

The perfect video for the ones who want to dismiss things on tiny little irrelevant details. Yes, we all know who we're talking about here.

https://www.youtube.com/watch?v=c8EONokJTdU


----------



## speed_demon

Mygaffer said:


> It's been a long time since I've done an AMD build for my primary machine but when I do a new build this year I think I'm going to base it around Ryzen.


I was in the same position and am quite happy with the decision to go Ryzen. I've been gaming on Intel since the Windows XP days and this was enough to sway my dollars to AMD.

As for the video, I'm sorry but I don't have the attention span to watch it all through at the moment. Can anybody give me a quick recap?


----------



## gt86

speed_demon said:


> I've been gaming on Intel since the Windows XP days


You chose a P4 over an A64? :thinking:


----------



## ku4eto

gt86 said:


> You chose a P4 over an A64? :thinking:


Lots of people did. Blame Intel's illegal tactics. My friends had P4s, while I was the only one with an Athlon 64, purely because a friend of my father built our rig at the time.


----------



## Ultracarpet

NightAntilli said:


> The perfect video for the ones that want to dismiss things on tiny little irrelevant details. Yes, we all know who we're talking about here.
> 
> https://www.youtube.com/watch?v=c8EONokJTdU


It actually blows my mind the number of people who listen to this guy's crap and just eat it up.

He just said in that video that anyone who thinks he is wrong basically isn't smart enough to comprehend what he is showing them; that their "comprehension" isn't good enough. Then he immediately gives himself a pass for writing down "launch" in his table. CLEARLY people trashing him are looking TOO CLOSE, and are using TOO MUCH comprehension. He demands that his viewers listen to and read what he says perfectly, and then ignore everything he says and writes that ends up not fitting the right narrative. That is PATHETIC.

I honestly wouldn't even care so much if he just said "look, I'm going to be wrong a ton, I'm reporting on really early leaks, it is what it is". Instead he goes on tirades trying to lambaste anyone who ever said he was wrong. He makes money from clickbait leaks and then demands to be taken seriously. You don't get to have your cake and eat it too. There is a reason that mainstream tech news outlets don't report on every single leak that crosses their desks. He is playing a game where he is going to be wrong a lot of the time. Instead of accepting that, he and his die-hard fans look for every little pathetic excuse to make it appear that is not the case.

This is a guy who was claiming that Polaris was going to be more efficient than Pascal, and that they were going to bring out a big Polaris "12" chip (he got Polaris 10 and 11 backwards) that would give Titan X performance, which would have been around GTX 1070 level. Oh, and he was saying that big Pascal could end up being another "Fermi". Time isn't particularly on Adored's side regarding the truth of his videos. Yet all of his fans seem to have short memories, so it doesn't matter.


----------



## NightAntilli

Ultracarpet said:


> It actually blows my mind the amount of people that listen to this guys crap and just eat it up.
> 
> He just said in that video that anyone who thinks he is wrong basically isn't smart enough to comprehend what he is showing them; that their "comprehension" isn't good enough. Then he immediately gives himself a pass for writing down "launch" in his table. CLEARLY people trashing him are looking TOO CLOSE, and are using TOO MUCH comprehension. He is demanding that his viewers listen and read what he is saying perfectly, and then ignore everything that he says and writes that ends up not fitting the right narrative. That is PATHETIC.


He admitted that that was a mistake. Additionally, either you're being deliberately obtuse, or it simply went way over your head.



Ultracarpet said:


> I honestly wouldn't even care so much if he just said "look, I'm going to be wrong a ton, I'm reporting on really early leaks, it is what it is". Instead he goes on tirades trying to lambaste anyone who ever said he was wrong.


He generally says to take his leaks with a grain of salt, for starters. Additionally, people complaining about his credibility because he got a price, an announcement date, or a launch date wrong, while he's getting stuff like chip names, the RTX brand, and chiplet designs right, have practically zero credibility themselves, because obviously they don't understand the market.



Ultracarpet said:


> He makes money from clickbait leaks and then demands to be taken seriously. You don't get to have your cake and eat it too. There is a reason that mainstream tech news outlets don't report on every single leak that crosses their desks. He is playing a game where is going to be wrong a lot of the time. Instead of accepting that, he and his die-hard fans look for every little pathetic excuse to make it appear that is not the case.


You mean his haters nitpick every little irrelevant detail to dismiss all his claims.



Ultracarpet said:


> This is a guy who was claiming that Polaris was going to be more efficient than Pascal, and they were going to bring out a big Polaris "12" chip (he got Polaris 10 and 11 backwards) that would give Titan X performance, which would have been around GTX 1070 level. Oh, and he was saying that big Pascal could end up being another "Fermi".


And in this exact video he said he was wrong about Polaris... Did you even watch it? 



Ultracarpet said:


> Time isn't particularly on Adoreds side regarding the truth of his videos. Yet all of his friends seem to have short memories, so it doesn't matter.


He's been more right than wrong. Who else predicted chiplets? Who else predicted RTX? Who else predicted Vega to be a failure way before launch? 

Can't help but think that his haters are simply short-sighted. He doesn't need to be 100% accurate, considering his content is premature leaks.


----------



## Ultracarpet

NightAntilli said:


> He admitted that that was a mistake. Additionally, either you're being deliberately obtuse, or it simply went way over your head.
> 
> He generally says to take his leaks with a grain of salt, for starters. Additionally, people complaining about his credibility because he is getting a wrong price, or a wrong announcement date, or a wrong launch date, while he's getting stuff like chip names, RTX brand, chiplet designs right, have practically zero credibility, because obviously they don't understand the market.
> 
> You mean his haters nitpick every little irrelevant detail to dismiss all his claims.
> 
> And in this exact video he said he was wrong about Polaris... Did you even watch it?
> 
> He's been more right than wrong. Who else predicted chiplets? Who else predicted RTX? Who else predicted Vega to be failure way before launch?
> 
> Can't help but think that his haters are simply short-sighted. He doesn't need to be 100% accurate, considering his content is premature leaks.


Yea, you're actually right, this stuff is too complicated for me; it all went over my head. Little irrelevant details like what the actual performance of stuff is, and the actual release dates... those are just little nitpicky things. Especially when they are basically the only reason people are clicking on your videos. It's not like he released 3 clickbait videos both fueling and feeding off the hype leading up to the CES announcement, and then claimed he was still right after basically 0% of it happened, because some of the crap he threw at the wall will maybe still be sticking after half a year. His follow-up videos like "Zen 2 faster than i9 9900k at half the power" are definitely not clickbait lies either.

Not that I don't have free time, but it would take me (literally) years to go through all of his videos and analyze all the dumb stuff/claims he has made over the past few years. Which I'm sure is part of the defense for all of his claims; people who don't like him will not spend that amount of time to properly critique him. The only people who can actually stand to watch his 30-minute videos all the way through are people who love him. Echo chambers. As far as this new video goes, though, I did watch most of it. I particularly liked the part where he mentioned how much these leakers from AMD loved him and his videos, and that that's why he believes everything they say. Definitely good logic, and he definitely does not have an overinflated ego that is very fragile at the same time. I also liked that he told his viewers to stop calling out people/news outlets specifically on Twitter, even though he was just fresh off calling out flanker as one of his points in his previous video. 

I have said this a million times, but after it was revealed that chiplets were being used by Epyc, it was not a very big leap to guess they would be adopted by the consumer lineups. The RTX thing makes me laugh, because suddenly the brand name of the product is super important, yet if he gets it wrong with Ryzen or Navi it is no big deal because it is just "small details". Lots of people predicted Vega to be a failure way before launch, and actually that is something I want to bring up: most everything that Adored brings up in his videos is crap and rumors that people on forums and Reddit talk about on a daily basis. He scrapes it all up, puts it into a video, and suddenly it was his idea all along. I'm not even sure he actually claims responsibility for a lot of the stuff that he "discovers", but his fans sure do.

No duh he doesn't need to be 100% accurate, but he clearly has a big issue with being called out for being wrong. If your income depends on reporting clickbait leaks, being wrong comes with the territory, and you don't really have the right to be upset about it. If he doesn't want to be called out, he should just stick to reporting actual industry news. He is just another videocardz/wccftech, except instead of just publishing the leak, he gives 30 minutes of his own speculative commentary on it. Which is often complete BS.

Anyways, continue gulping down his content, I'm done talking about him.


----------



## tpi2007

NightAntilli said:


> The perfect video for the ones that want to dismiss things on tiny little irrelevant details. Yes, we all know who we're talking about here.
> 
> https://www.youtube.com/watch?v=c8EONokJTdU



I agree with him on some things, but on others he is not seeing the full picture - or doesn't want to. As to the Turing line-up, as I had mentioned before, his specific leaks usually contained some good stuff but with other things off, and that was the case this time too. It's not surprising: 100% accurate leaks way before their time are very rare. VideoCardz, for example, got some 100% correct slides for Broadwell-E and other Intel CPUs, but that was just 3 or so days before the NDA lifted, not weeks or months before. So take everything you learn from very early leaks with a grain of salt, because not only can things change, but there is a very high chance that the leaks contain errors on purpose. 

Either the leaker is working on behalf of the employer, making it more of a controlled marketing stunt: the audience is meant to get excited, but too many accurate details would hand too much information to the competition, so that's a no-go. Or the leaker is acting alone, and to cover their tracks the best choice is to introduce some noise into the information they got, because that information may already contain deliberate errors designed to catch leakers in the first place. So, all in all, don't expect 100% accurate information from very early leaks. In any case, if the employee likes their job, leaking overly accurate information too far ahead of time could be detrimental to their own job security and the company overall, so, again, there is little reason for very early leaks to be 100% accurate.

In any case, he got the basics right, namely the chips, including the fact that for the first time the x70 card would not be a cut-down x80 card, but instead be based on what is usually the chip that powers the x60 card, and the fact that there would be a non RTX x60 card based on an also novel TU116 chip. He didn't know about the RTX 2060, although I suspect that if he - and we - had thought about it some more, we could have predicted it since the 2070 was based on the full chip, and being rather large, a cut-down version of sorts was predictable.

As to not seeing the full picture, I sense some AMD bias perhaps. He didn't say a word about AMD's overstock of Polaris cards and that may explain some of his video from a completely different angle. Why did AMD release the RX 590? In my opinion it was to flush out the stock of RX 580 and 570 cards by drawing attention to their new, lower, saner, after mining prices, given the [intentionally] lackluster price of the RX 590. I mean, his leaks say that AMD is planning on being aggressive with the Navi pricing, but those are new cards based on a new and expensive 7nm node, with new and expensive GDDR6 memory, yet they couldn't price the RX 590 on a super mature 12nm (which is just a very lightly refined 14/16nm) with the same old 8 Gbps GDDR5 at RX 580 pricing levels? It doesn't add up.

So, was there really a problem with Navi and they needed to tape it out again as his leaker says, or AMD has a big stock of RX 580 and 570's to sell? Announcing Navi at CES would surely slow down those sales a lot. If Nvidia postponed the Turing launch, and as of today still has a GTX 1060 stock problem for several more weeks as Jensen himself said, why can't AMD use the same strategy?

More:

18:13 - "Navi 10 (RX 3080) will square off against the RTX 2070", ok makes sense, and Radeon VII doesn't conflict with that as it's at 2080 level, even if it will be less efficient relatively speaking. The naming though, as he mentions right at the end of the video may not line up, it may as well be called Radeon 5. And frankly, it probably won't be called the RX 30*8*0 if it will compete against the RTX 20*7*0 (unless it manages to beat it reasonably).

But then he mentions the RX 3070 and RX 3060, and that's the problem I mentioned above - let's assume that the 3070 would compete with the RTX 2060, ok, seems plausible. But where does the RX 3060 fit then? It will go against the rumoured 1660 Ti, which is set to displace the RX 590. So the 590 will be a card for 6 months, the true stopgap, 580/570 flusher, way more stopgap than the Radeon 7. The justification for the 590's existence given in the video makes no sense.

But there is one more problem with the leak: the part where supposedly 3 Vega parts were cancelled because Navi's performance was better than expected. This makes no sense. If Navi had performed as expected (read: worse than now), where would those Vega parts have landed in the line-up? It just so happens that, as they are, the cards line up perfectly where they should. If not, why would they have been a consideration to begin with? This doesn't make sense, and Jim didn't put much thought into this when relaying the information to us.

As to Radeon VII's existence, that's a big mess. Supposedly they ramped up production for consumers, then dropped it because Turing was underwhelming, but then, as it turns out, apparently they didn't, because they feared being without anything at that tier for six months. What a mess his video is at that point. 

Let's just say it like it is: 7nm is new, expensive and yields are probably still not at the optimal level and production is still ramping up in terms of volume / fab capacity. Not even Nvidia, the market leader, has any 7nm cards out. The Navi chips Jim spoke about will likely be 200mm2 and smaller as the leaker also suggested, thus more viable in volume this early on 7nm. Radeon VII on the other hand is based on a known Vega design and is a small volume and much higher price part, of an even higher priced Instinct part, and even then it's still relatively manageable with 331mm2. So maybe for a big Navi they need more time to make on 7nm with good yields and acceptable prices. And if the RX 3080 competes with the RTX 2070, they probably need a bigger 400mm2+ chip to compete with both the 2080 Ti and a binned version to compete with (read: beat) the 2080 and along the way replace Radeon VII. And then they need even bigger parts to compete with Nvidia's 7nm line-up. 

None of that (except for smaller 200mm2 and below Navi as I said) is ready because it can't be. 7nm is too new for such volume endeavours and the price points they need to hit. Jim didn't say a word about that.


----------



## Shatun-Bear

Ultracarpet said:


> It actually blows my mind the amount of people that listen to this guys crap and just eat it up.
> 
> He just said in that video that anyone who thinks he is wrong basically isn't smart enough to comprehend what he is showing them; that their "comprehension" isn't good enough. Then he immediately gives himself a pass for writing down "launch" in his table. CLEARLY people trashing him are looking TOO CLOSE, and are using TOO MUCH comprehension. He is demanding that his viewers listen and read what he is saying perfectly, and then ignore everything that he says and writes that ends up not fitting the right narrative. That is PATHETIC.
> 
> I honestly wouldn't even care so much if he just said "look, I'm going to be wrong a ton, I'm reporting on really early leaks, it is what it is". Instead he goes on tirades trying to lambaste anyone who ever said he was wrong. He makes money from clickbait leaks and then demands to be taken seriously. You don't get to have your cake and eat it too. There is a reason that mainstream tech news outlets don't report on every single leak that crosses their desks. He is playing a game where is going to be wrong a lot of the time. Instead of accepting that, he and his die-hard fans look for every little pathetic excuse to make it appear that is not the case.
> 
> This is a guy who was claiming that Polaris was going to be more efficient than Pascal, and they were going to bring out a big Polaris "12" chip (he got Polaris 10 and 11 backwards) that would give Titan X performance, which would have been around GTX 1070 level. Oh, and he was saying that big Pascal could end up being another "Fermi". Time isn't particularly on Adoreds side regarding the truth of his videos. Yet all of his friends seem to have short memories, so it doesn't matter.


Yep, people cite that Turing leak as evidence that Adored is legit, but in his own table of what came true, half of it was wrong. Kudos for getting the RTX stuff right, but the rest is just guesswork.

The Ryzen 3000-series full spec and price 'chart' he made, again, is likely half wrong and partly pure guesswork. Prices and specs are not going to be correct: 6-7 months from launch (at the time of that leak), AMD wouldn't have known these details, as the chips were still engineering samples, as Lisa Su demonstrated on the 9th. So how can he be serious about base and boost clocks for every single SKU? Mind-boggling.

He also said he believed a 3000-series CPU would be revealed on the 9th, and that didn't happen either, although that was only his feeling.


----------



## ejb222

I find most of the "is he legit or not" discussion/argument a bit sophomoric. Look, it's "leaks" and entertainment. Even if AMD showed off a 16c/32t Ryzen at 5GHz at CES, you'd still be skeptical of performance until third parties bench them, at the very least. I mean, until chips are for sale and actively tested, it's all the same: sit back, pop some popcorn, and take it ALL with a dump truck load of salt. Be excited for legit competition in the CPU market and hope for the same in GPUs... that's all you can do, whether Adored is 100% or 0% accurate. That's it.


----------



## ozlay

Hopefully the motherboard manufacturers release BIOS updates to add PCIe 4.0 support to the top x16 slot on X370-X470 boards.


----------



## Kpjoslee

ozlay said:


> Hopefully the motherboard manufacturers release BIOS updates to add PCIe 4.0 support to the top x16 slot on X370-X470 boards.


While it seems technically possible, I don't expect that to happen. Manufacturers never promised future 4.0 support for X370-X470 boards, and they would all need to get recertification from PCI-SIG for 4.0 support as well, which I don't think they will bother with for existing boards.


----------



## ku4eto

ozlay said:


> Hopefully the motherboard manufacturers release BIOS updates to add PCIe 4.0 support to the top x16 slot on X370-X470 boards.


That's not how it works.


----------



## Dogzilla07

ku4eto said:


> Thats not how it works.


In this situation, yes, absolutely, it's completely possible.
It's the probability that makes it unlikely; we're hoping for a fiscally irresponsible goodwill move from the motherboard manufacturers.


----------



## ku4eto

Dogzilla07 said:


> In this situation yes, absolutely 100%, it's completely possible.
> It's the probability of it that makes it unlikely, and we're hoping for a fiscally irresponsible in-good-will move from the motherboard manufacturers


After digging around a bit for information, it turns out I was wrong.

https://www.tomshardware.com/news/amd-ryzen-pcie-4.0-motherboard,38401.html


Specifically, Ryzen CPUs CAN support PCIe 4.0, so a BIOS update could tell the motherboard that it's OK to use PCIe 4.0.


----------



## elina08

https://www.reddit.com/r/Amd/comments/ahxxpg/der_8auer_thinks_5_ghz_on_ryzen_3000_is_very/

In his Q&A (German) live stream at 1:04:50, der8auer said he thinks 5GHz on Ryzen 3000 is very, very realistic. He claims to have industry sources.

"Transcript translated to English:

I think those 5 GHz boost on gen 3 are very, very realistic. I have to say that. I've heard some rumors from the industry and those sounded very good to me. Those 48 cores are theoretically possible if AMD makes a big enough die, but I've not heard a lot about gen 3 in general."


----------



## Streetdragon

elina08 said:


> https://www.reddit.com/r/Amd/comments/ahxxpg/der_8auer_thinks_5_ghz_on_ryzen_3000_is_very/
> 
> In his Q&A (ger) live stream 1:04:50 der8auer told that he thinks 5 ghz on ryzen 3000 is very very realistic. He claims that he got industry sources.
> 
> "Transcript translated to English:
> 
> I think those 5 GHz boost on gen 3 are very, very realistic. I have to say that. I've heard some rumors from the industry and those sounded very good to me. Those 48 cores are theoretically possible if AMD makes a big enough die, but I've not heard a lot about gen 3 in general."


Nice, nice. I don't think der8auer would say something like that WITHOUT a trustworthy source.
BUT it could mean that 5GHz is only possible with a high-end board and watercooling or better.

Time will tell us more!


----------



## Asmodian

Streetdragon said:


> Nice nice. Dont think that der8auer would say something like that WITHOUT a trustfull source.
> BUT that can mean, that 5Ghz are only possible with a high end board and watercooling or better.


Even that would be great though. 5GHz on an 8+ core Zen 2 CPU with anything short of a chiller would be great news and would allow near parity with Intel's best across all workloads (except possibly AVX-512). A max OC of 4.2GHz versus 5.2GHz is a big difference, but 5.0GHz instead of 5.2GHz matters a lot less.

Still not quite there but getting very close. I want to see AMD CPUs at the top of the Timespy or Superposition leader boards.


----------



## AlphaC

https://www.gamersnexus.net/news-pc/3429-hw-news-ryzen-3000-pcie-4-dram-prices


> *Update on AMD Ryzen 3000, X570*
> 
> A quick update on our previous X570/PCIe story: First off, as pointed out previously, "chipset" wasn't really the right language to use when referring to the X570's induction of "parts" of Epyc -- it's just PCIe 4.0, more or less, that's moving over. We had a few people reach out to us and confirm that the chipset will almost certainly be running PCIe 4.0, responsible for the power requirement increase and for potential logistical challenges.
> Separately, on core counts, our engineering contacts within the industry have informed us that we should expect 16C and 12C CPUs with Ryzen 3000, in addition to the usual 8-core parts. It's just a question of if those launch altogether or independently.



Also 5nm: https://www.overclock3d.net/news/mi..._tap_tsmc_and_samsung_when_they_move_to_5nm/1


----------



## EniGma1987

AlphaC said:


> Also 5nm: https://www.overclock3d.net/news/mi..._tap_tsmc_and_samsung_when_they_move_to_5nm/1



lol. "Rumored to tap"? In other news, the sky is blue and water is wet.
How can it be a "rumor" when neither GF nor Intel will have 5nm anywhere close to the near future? TSMC and Samsung are literally the only two fabs on earth that will, so they are the only two options available.


----------



## Shatun-Bear

https://www.tomshardware.co.uk/amd-matisse-third-gen-ryzen-benchmark,news-59832.html

12-core engineering sample with a 3.3GHz base clock and 4.7GHz 'peak boost'. More evidence that this AdoredTV leak was BS: he had the 12-core at a 4.2GHz BASE clock and 5GHz boost, or the non-X model at 3.8/4.6GHz.

We now know these chips are still engineering samples and final clocks are not set; certainly no one would have known the clock speeds of every SKU at the time of the leak, a month ago.


----------



## agatong55

Shatun-Bear said:


> https://www.tomshardware.co.uk/amd-matisse-third-gen-ryzen-benchmark,news-59832.html
> 
> 12-core engineering sample with 3.3Ghz base clock and 4.7Ghz 'peak boost'. More evidence that this AdoredTV leak was BS. He had the 12-core at 4.2Ghz BASE clock and 5Ghz boost. Or the non-X model 3.8/4.6Ghz.
> 
> We know now that these chips are still engineering samples and final clocks are not known, certainly no-one would know clockspeeds of every SKU at the time of this leak 1 month ago.


It even says in the article, "but early silicon typically comes with dialed back frequencies as vendors fine-tune the design. In other words, these results likely aren't representative of the final clock speeds." So saying his leak was BS is not completely fair, seeing as these are not the final clock speeds, just a test.


----------



## Cuthalu

Shatun-Bear said:


> https://www.tomshardware.co.uk/amd-matisse-third-gen-ryzen-benchmark,news-59832.html
> 
> 12-core engineering sample with 3.3Ghz base clock and 4.7Ghz 'peak boost'. More evidence that this AdoredTV leak was BS. He had the 12-core at 4.2Ghz BASE clock and 5Ghz boost. Or the non-X model 3.8/4.6Ghz.
> 
> We know now that these chips are still engineering samples and final clocks are not known, certainly no-one would know clockspeeds of every SKU at the time of this leak 1 month ago.


Did he claim the specs as 100% confirmed retail numbers? I thought it's pretty obvious they would be internal projections if the leak is correct. Of course AMD wouldn't know every single detail at that point in time, but they would have performance and price targets with preliminary info about the capabilities of their chips.


----------



## reqq

Where does Tom's Hardware get the 4.7 boost from? This is how someone on Reddit translated that code:

2D3212BGMCWH2_37/34_N
2 = ES1
D = Desktop
321 = This was base clock, but as we see below the base clock is 3.4GHz.
2 = Revision 2
BG = 105W
M = AM4
C = 12 cores
W = Unknown cache configuration
H2 = Matisse
37 = 3.7GHz boost
34 = 3.4GHz base
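To make that decode concrete, here's a quick hypothetical Python sketch of the same field breakdown. The positions and meanings are just what the Reddit post claims, not an official AMD format:

```python
# Hypothetical decoder for the ES OPN string above, following the field
# breakdown from the Reddit post. The field positions and meanings are
# that post's guesses, not an official AMD format.
def decode_opn(opn: str) -> dict:
    code, _, tail = opn.partition("_")      # "2D3212BGMCWH2" / "37/34_N"
    boost, _, rest = tail.partition("/")
    base = rest.split("_")[0]
    return {
        "sample": code[0],            # '2' -> ES1
        "segment": code[1],           # 'D' -> Desktop
        "revision": code[5],          # '2' -> revision 2
        "tdp": code[6:8],             # 'BG' -> 105W
        "socket": code[8],            # 'M' -> AM4
        "cores": code[9],             # 'C' -> 12 cores
        "cache": code[10],            # 'W' -> unknown cache config
        "family": code[11:13],        # 'H2' -> Matisse
        "boost_ghz": int(boost) / 10, # '37' -> 3.7GHz
        "base_ghz": int(base) / 10,   # '34' -> 3.4GHz
    }

print(decode_opn("2D3212BGMCWH2_37/34_N"))
```

Point being: nothing in the string itself says 4.7GHz; the decoded boost field reads 3.7.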


----------



## guttheslayer

AlphaC said:


> https://www.gamersnexus.net/news-pc/3429-hw-news-ryzen-3000-pcie-4-dram-prices
> 
> 
> 
> Also 5nm: https://www.overclock3d.net/news/mi..._tap_tsmc_and_samsung_when_they_move_to_5nm/1



Well from GN, so 16C is still true. That is all we need to know.


----------



## agatong55

https://www.userbenchmark.com/UserRun/14076820 Here is the link to the actual benchmark.

A couple of weird things from this benchmark:

it says "6.1 GB free of 4 GB @ 1.3 GHz"

Then down at the bottom it says:

Hynix HMA851U6JJR6N-VK 1x4GB
1 of 4 slots used
4GB DIMM DDR4 clocked @ 1333 MHz

So the RAM they are using is either wrong or not posting correctly.


----------



## LancerVI

Claiming a 13% performance increase over the 2700X at the same clocks.

Interesting.

As to the leaks....Adored commented on the GN vid saying....



> AdoredTV
> 1 day ago (edited)
> @blitzwing1 My analysis has been better than the leaks so far lol, as the leaks suggested there would be no I/O die.
> 
> I get what you mean though, tbh I wouldn't say it was fantastic when Lisa Su held up the Ryzen package because to me it was just obvious that 16C would happen.
> 
> I actually put more stock in 5GHz than I do 16C and if we don't see a 5GHz chip at launch I'll consider it a failed leak.﻿


----------



## Shatun-Bear

agatong55 said:


> It even says in the article " but early silicon typically comes with dialed back frequencies as vendors fine-tune the design. In other words, these results likely aren't representative of the final clock speeds." so you saying his leaks was BS is not completely true seeing as these are not the final clock speeds and just a test.


Oh come on that's a bit of a strawman argument. Of course he didn't say 'these are 100% accurate specs'. But the whole point of the video was 'Hey guys how's it going, I've got a good one for you - every Ryzen 3000-series CPU unveiled'. Now it turns out the clockspeeds are all wrong in his chart, the prices are almost certainly all wrong, and you're telling me 'but he didn't say any of it was 100% accurate'. Gee that's great, that.


----------



## GHADthc

Shatun-Bear said:


> Oh come on that's a bit of a strawman argument. Of course he didn't say 'these are 100% accurate specs'. But the whole point of the video was 'Hey guys how's it going, I've got a good one for you - every Ryzen 3000-series CPU unveiled'. Now it turns out the clockspeeds are all wrong in his chart, the prices are almost certainly all wrong, and you're telling me 'but he didn't say any of it was 100% accurate'. Gee that's great, that.


You must have misheard the part, moments after he said "Hey guys how's it going", where he says "If you are still here, I would ask that you grab yourself a pinch of salt"... as in, take what he's about to say with skepticism and not as gospel. Not a hard concept to grasp, is it?

He was never 100% on any of the details of the leaks (and it's clearly obvious that some of what he was apparently told is not true); he was even second-guessing himself about a separate I/O die at one point (even though his prediction turned out to be right).

I cannot believe people are still trying to burn him at the stake, still up in arms and thrusting their pitchforks into the air, when as time goes on more and more of the details he leaked are coming true. It's some sort of bizarre denial happening...

As for the clocks people have been discussing, it's an ES; it's not final silicon and not final specs. There could be room to push to a 5GHz peak clock (with XFR3 or whatever the OC feature is going to be named); clearly AMD are still binning chips and biding their time until Computex mid-year.

Even if they don't reach 5GHz clocks (which I admit I will be a bit disappointed about, but life goes on), AMD have already demonstrated better IPC than Intel has to offer. What seems to have been an R5 3600 just beat out a 9900K essentially clock for clock (allegedly 4.6GHz boost for the Zen 2 chip against 4.7GHz for the 9900K) and core for core; anything beyond that is just icing on the cake, so what is there for people to still be arguing about? Intel is in for a world of hurt in the next few months, especially when they just announced yet another delay for 10nm due to chipset issues with PCI-E 4.0... even though PCI-E 4.0 is going to be backwards compatible on X470 with some motherboards, depending on the manufacturer (according to AMD).

I don't know about you guys, but I am looking forward to the prospect of dropping an R9 3850X into my CH7 and calling it a day, when and if that comes to pass... all this circle-jerking and speculation seems pretty superfluous to me.


----------



## LancerVI

Shatun-Bear said:


> Oh come on that's a bit of a strawman argument. Of course he didn't say 'these are 100% accurate specs'. But the whole point of the video was 'Hey guys how's it going, I've got a good one for you - every Ryzen 3000-series CPU unveiled'. Now it turns out the clockspeeds are all wrong in his chart, the prices are almost certainly all wrong, and you're telling me 'but he didn't say any of it was 100% accurate'. Gee that's great, that.


Clearly, you didn't watch the whole video or series of videos or worse, you chose to ignore some critical points. At the very beginning he says to take it all with a grain of salt.



GHADthc said:


> You must of misheard the part moments after he said "Hey guys how's it going" where he says "If you are still here, I would ask that you grab yourself a pinch of salt"...As in take what hes about to say with skepticism, and not as gospel..not a hard concept to grasp is it?
> 
> He was never 100% on any of the details of the leaks (And it's clearly obvious that some of what he was apparently told, is not true), he was even second guessing himself about a separate IO die at one point (even though his predictions turned out to be right).
> 
> I cannot believe people are still trying to burn him at the stake, and still up in arms, thrusting their pitchforks into the air..when as time goes on, more and more of the details he leaked are coming true, it's some sort of bizarre denial happening...
> 
> As for the clocks people have been discussing, its an ES, its not final silicon, not final specs, there could be room to push to 5ghz peak clock (with XFR3 or whatever the OC ability is going to be named), clearly AMD are still binning chips, and bidding their time till Computex in the mid year.
> 
> Even if they don't reach 5Ghz clocks (which I admit I will be a bit disappointed about, but life goes on), AMD have already demonstrated better IPC than Intel has to offer..what seems to have been an R5 3600, just beat out a 9900K, essentially clock for clock (allegedly 4.6ghz boost for the Zen 2 chip, against 4.7Ghz for the 9900K), and core for core, anything beyond that is just icing on the cake, what is there for people to still be arguing about? Intel is in for a world of hurt in the next few months, especially when they just announced yet another delay for 10nm due to chipset issues with PCI-E 4.0...even though PCI-E 4.0 is going to be backwards compatible on X470 with some motherboards, depending on the manufacturers (according to AMD).
> 
> I don't know about you guys, but I am looking forward to the prospect of dropping a R9 3850X into my CH7 and calling it a day, when and if that comes to pass...all this circle-jerking and speculation seems pretty superfluous to me.


QFT


----------



## ejb222

Shatun-Bear said:


> Oh come on that's a bit of a strawman argument. Of course he didn't say 'these are 100% accurate specs'. But the whole point of the video was 'Hey guys how's it going, I've got a good one for you - every Ryzen 3000-series CPU unveiled'. Now it turns out the clockspeeds are all wrong in his chart, the prices are almost certainly all wrong, and you're telling me 'but he didn't say any of it was 100% accurate'. Gee that's great, that.


Go troll somewhere else. Maybe you should watch the video again and note that he says CLOCK SPEEDS CAN CHANGE BECAUSE IT IS A MARKETING DECISION. And this is an engineering sample, which is not going to be at final clock speeds.
When will people actually listen and comprehend?


----------



## battlenut

ejb222 said:


> When will people actually listen and comprehend?


They can't, and it's only a handful of people that want to burn him at the stake. All they ever wanted to do during this whole thread is complain about AMD and this Jim guy. But if you tell these people they are being fanboys, or tell them to stop being negative, they cry to an admin.


----------



## martinhal

Shatun-Bear said:


> https://www.tomshardware.co.uk/amd-matisse-third-gen-ryzen-benchmark,news-59832.html
> 
> 12-core engineering sample with 3.3Ghz base clock and 4.7Ghz 'peak boost'. More evidence that this AdoredTV leak was BS. He had the 12-core at 4.2Ghz BASE clock and 5Ghz boost. Or the non-X model 3.8/4.6Ghz.
> 
> We know now that these chips are still engineering samples and final clocks are not known, certainly no-one would know clockspeeds of every SKU at the time of this leak 1 month ago.


We know no such thing... only what is in the link.


----------



## Grin

12-core Eng sample compared to the 2700X. ;( The quad-core results make me cry, it's a huge penalty.


----------



## agatong55

Grin said:


> 12-cores Eng sample compared to 2700x. ;( quad core results make me cry, it's a huge penalty


The benchmark is also version 2 of their prototype; gen 4 was shown at CES. But again, take those speeds with a grain of salt since they will change a lot before then.


----------



## ajc9988

Shatun-Bear said:


> Oh come on that's a bit of a strawman argument. Of course he didn't say 'these are 100% accurate specs'. But the whole point of the video was 'Hey guys how's it going, I've got a good one for you - every Ryzen 3000-series CPU unveiled'. Now it turns out the clockspeeds are all wrong in his chart, the prices are almost certainly all wrong, and you're telling me 'but he didn't say any of it was 100% accurate'. Gee that's great, that.


Yeah, do you know where the 4.5GHz came from? The press taking a guess. That is all. Not people with actual info, but the guesses from people at the event. So let's do some math, shall we?

First, let's figure out what the estimated IPCs are to work the solution backwards. We start with Zen, Zen+, and Intel's IPC. Intel has 7% IPC over Zen and 3-4% over Zen+. Now, we look at Epyc having a range over Zen chips in first gen Epyc of 11-15%, but averaging around 13%. That would be approximately 6% IPC on average over Intel chips. Then, we have to acknowledge that Su has said Zen 2 for mainstream is targeting 15%, meaning that we should be looking at the middle to high end of the IPC gains, not the low end. 

So now some time for some math.

4.7 × 0.94 = 4.418 (reducing the speed by 6%, representing a roughly 6% increase in IPC compensating for the slower clock to achieve equal performance).

So, if the IPC is up 6%, then it is around a 4.4GHz clock that would equal Intel's 9900K at stock 4.7GHz all core, which was seen. Another way to do that math is to divide 4.4 by 4.7, which equals 93.6%, or roughly 6.x% IPC. If you add the 7% IPC Intel has over Zen, you arrive at 13.x%. Now that seems right. But let's examine 4.5GHz, shall we?

4.5/4.7 = 95.7%, or about a 4.3% IPC gain. Now add 7% IPC to that and you get an 11.3% IPC gain over Zen, which is the low end of all rumors on IPC dating back to around August or September of 2018. That seems a bit low, while moving much farther from the stated 15% IPC goal; gaining 4% IPC so late in the game, after tape-out, is a *really* hard ask! A couple percent, sure, but mid-single digits is pushing it. Look how little IPC gain was had with Skylake, Kaby, and Coffee. If IPC gains were so easy with late-game tweaks, wouldn't Intel have stepped up their game over the past 3 years?

Meanwhile, the 4.6GHz number was pulled from WCCFTech's estimate when talking with PCWorld during a YouTube interview, which he didn't base on anything while alleging it was back-of-napkin math. What I just showed you is back-of-napkin math, and the IPC gain at 4.6 would be 2.2%, which added to 7% is 9.2%, falling far short of the IPC already SHOWN by Epyc, meaning it is instantly removed from consideration.

So, the 4.5 number is in range, and the 4.4GHz on the 65W is in range (also note that aside from AdoredTV discussing PSU inefficiencies to declare it a 65W chip, since then we have discovered AMD's new chipset will use about 14W (see GN coverage), which is about 9W HIGHER than the ASMedia chipset controller's 5W, meaning that if the in-house chipset was used, it easily slides into the 65W chip category).

Because of this, and changes potentially happening with clocks up until release, I would ignore his claim and others of 4.5GHz being used for that event UNTIL better evidence is provided. And one person's post is not enough. As such, I'd say his leak is fairly accurate, within margins. So troll on!
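If you want to sanity-check that napkin math yourself, here it is in a few lines of Python. The ~7% Intel-over-Zen-1 IPC lead is the assumption from above, and this uses the clock-ratio form consistently (the post mixes two nearly equivalent forms):

```python
# Reproduction of the back-of-napkin IPC math above, using the clock
# ratio consistently. The ~7% Intel-over-Zen-1 IPC lead is the post's
# assumption, not a measured figure.
INTEL_CLOCK = 4.7  # 9900K all-core boost

def zen2_ipc_gain_over_zen1(zen2_clock: float, intel_lead: float = 0.07) -> float:
    """IPC gain over Zen 1 needed to match the 9900K at the given clock."""
    gain_over_intel = INTEL_CLOCK / zen2_clock - 1
    return gain_over_intel + intel_lead

for clock in (4.4, 4.5, 4.6):
    print(f"{clock} GHz -> {zen2_ipc_gain_over_zen1(clock):.1%} over Zen 1")
```

Same conclusion: 4.4GHz lands near the ~13% rumors, 4.5GHz at the low end, and 4.6GHz below anything Epyc has already shown.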


----------



## ajc9988

Grin said:


> 12-cores Eng sample compared to 2700x. ;( quad core results make me cry, it's a huge penalty


You really suck at math, don't you. It was an engineering sample run with a 3.6GHz average boost, going against a CPU with a 3.7GHz base and 4.3GHz single-core boost, meaning the quad-core test was likely running around 4.2GHz. As such, look at how close it was relative to the speed: with the 2700X most likely running 4.2GHz for quad testing, the new chip was about 14% slower on clocks during the test. Then, as also pointed out, that chip was an earlier revision than later chips, which will be clocked higher. What I'd like to see is the 12-core compared to the 1920X and the 2920X. Yet people instead just focus on mainstream or HEDT.


----------



## Grin

agatong55 said:


> the benchmark is also version 2 of there prototype, gen 4 was shown at ces, but again take those speeds at a grain of salt since they will change a lot before than.


My sadness is not about speed, it's about the ratio. It seems like 4 cores demonstrated the speed of 3, because of data transmission between two different chiplets. It looks like the equivalent of a dual-socket system: the same level of penalty as between two processors, except in our case it's between two chiplets.

Speed itself looks promising, especially 130 floating point at this clock.


----------



## ajc9988

Grin said:


> My sadness is not about speed, it’s about ratio. It seems like 4 cores demonstrated a speed of 3, because of data transmission between two different chiplets. It’s looks like equivalent of dual socket system, the same level of penalties between two processors in our case it’s between two chiplets.
> 
> Speed itself looks promising especially 130 floating on this clock.


First, you have to remove the effects of speed to try to standardize to see what can be attributed to the effects of multiple chiplets. Let's start with speed deltas. 

So, due to the difference in speed, the engineering sample is running at 85.7% the speed of the 4.2GHz chip. Now, you need to know the IPC difference for full parity, but there are no hard numbers on that. Many would assume the 13% IPC estimate should be in play, so that should negate it, but that isn't precisely correct. [edit: also, the 13% IPC is over first gen, and this is being compared to a second-gen chip, meaning only around 9-10% should be used for IPC relative to the 2700X, if the second variant of the H stepping was in fact able to achieve that IPC at the time. That means they cannot fully negate it with a 14% slower speed, so at least around 5% of the deficit may be IPC alone in this example]. IPC figures often fold in other information like cache utilization and memory bandwidth and latencies. Here, we know they are running single channel instead of dual channel, which already cut bandwidth in half, followed by RAM speed being less than half of what the 2700X was likely utilizing. That means a good chunk of that deficit comes from the memory hit to IPC alone.

Now, let us look at the percentage difference between the two chips for each of the four categories:

87.34%
73.76%
79.8%
79.9%

So, due to the uncontrolled variables, INCLUDING RAM SPEED, your analysis is found lacking (coming from a 1950X owner who understands the inter-CCX, inter-die, and memory latencies of first-gen Threadripper well). Now, you can say that there is a hit, but for quad-core processing, if it is being done right, the work never leaves a single chiplet, as the scheduler would often keep it on the same die. Now, if you are arguing that because the CPUs have an I/O die which blinds the system to NUMA nodes, the scheduler may split work to the second die, I'd have to give more credence to your comment. Instead, it looks like you are wildly speculating that purely because there is a second die, the second die must be utilized. That isn't necessarily the case, as just explained.

Because of the above, before just wildly spewing, please fully frame your statement and argument so that others may better engage with the underlying assertions; that way it saves everyone time and effort.
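To show what I mean by removing the effects of speed, here's a minimal normalization sketch. The scores (130 ES vs 134 2700X single-core float) are the UserBenchmark numbers being discussed in this thread, and the 3.6GHz vs ~4.2GHz clocks are this thread's estimates, not confirmed figures:

```python
# Minimal clock normalization for the single-core comparison above.
# 130 (ES) vs 134 (2700X) are the UserBenchmark float scores being
# discussed; 3.6GHz vs 4.2GHz are this thread's clock estimates,
# not confirmed figures.
def per_ghz(score: float, clock_ghz: float) -> float:
    """Score per GHz, to strip the clock-speed difference out."""
    return score / clock_ghz

es_score, es_clock = 130.0, 3.6
r2700x_score, r2700x_clock = 134.0, 4.2

ratio = per_ghz(es_score, es_clock) / per_ghz(r2700x_score, r2700x_clock)
print(f"clock-normalized single-core ratio (ES / 2700X): {ratio:.2f}")
```

Normalized per clock, the ES comes out ahead of the 2700X on single core, which is exactly why the raw scores alone say nothing about the chiplet layout.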


----------



## Grin

It's absolutely not a NUMA case; it's a dual-socket-like processor consisting of a northbridge, a.k.a. the I/O die, which contains the memory controller, with two processors, a.k.a. chiplets, attached. Memory bandwidth is an important thing, but not for this benchmark. 130 floating point for single core and only 388 for quad on the ES, compared to 134/542 on the 2700X, is sad news. They need to do a lot of work on the Windows scheduler for common SOHO programs to avoid running threads of one program on both chiplets simultaneously.


----------



## ajc9988

Grin said:


> It’s absolutely not a NUMA case, it’s a dual socket like processor consisting from North aka I/O which contains a memory controller and two processors aka chiplets attached. Memory bandwidth is an important thing but not for this benchmark. 130 floating for single core and only 388 for quad in Eng compared to 134/542 in 2700x it is a sad news. They need to do a lot of work with the windows scheduler for common soho programs to avoid running threads on both chiplets simultaneously.


Dual socket IS NUMA! NUMA is Non-Uniform Memory Access, meaning that you must jump to another node to reach memory on the memory channels of the other node, whether on the same socket or not. Threadripper used NUMA nodes and two dies, just on the same socket. So you are saying that it isn't a NUMA case, which I already explained, but then say it is dividing the work between the dies without adopting my explanation of how that would occur: i.e. the scheduler, blinded to the multi-die layout by the single UMA presented by the I/O die, schedules to both dies instead of one, rather than keeping the four cores on the same die to use that die's L3 cache and decrease core-to-core latencies. Instead, you ignore that clear explanation of the likely issue (we've already seen Microsoft's scheduler follies related to chiplet-designed CPUs) and blame the CPU rather than the scheduler that is not keeping work on the same die. Very interesting on your part.

Then you go into memory speed, which affects memory bandwidth AND, arguably more important here, memory latencies, which very well CAN affect the benchmark unless the working set fully fits within the cache, requiring zero memory calls (caveat: if the only way it fits in cache is storing part on the second core die, then there is the inherent latency of a four-segment round trip to the other die and back through the I/O die, which could increase latency ABOVE a memory call, potentially, depending on the latencies of IF gen 2).

As to saying it is sad news without taking into account the speed deficit, the difference in IPC, the memory bandwidth and latencies, the hit with only a single memory channel populated, etc., says you don't have any clue what the hell you are talking about! 

Finally, I do agree the Windows scheduler needs a major overhaul. But you are assuming here that both dies on the package are being utilized, while ignoring all other data, including that this chip was the second minor revision of the H major stepping; they have already said that the 4th minor revision was shown at CES and that they are working on even newer revisions currently targeting even higher IPC. This means you are either being intentionally obtuse, or you are here solely to troll. If the latter, move on. If the former, please state in your next response IN DETAIL why the multitude of factors I have explained do not factor in here. Because what we are seeing on single core is around 3% slower, while the speed deficit is 14% and the IPC improvement is likely around 9%. For quad core, we are seeing a 29% performance deficit, which does point to a scheduler issue at least in part, but that shows the chiplet design isn't the issue; rather it is the **** scheduler of Windows, which was exposed even further in the research on the performance regression of the 2990WX, after a 32-core Epyc 7551 on the same setup showed the same regression, ruling out memory bandwidth as the culprit, and it later emerged that Windows was thread thrashing, using half of the full load just to move processes around to different threads. So there is merit, to a degree, there. But Microsoft have also stated they are working on fixing that, which stems from lazy fixes in the scheduler dating from when Intel Xeons had two nodes with one memory controller, so they just allowed a single overflow so that Xeons could utilize the extra cores. But I digress.

Also, games and programs are no longer capped at quad-core performance. Instead, you could point to the full FP score being only 25% more than the 2700X's while having 50% more cores; but then you would have to acknowledge the speed, the IPC, and the memory latencies and bandwidth from the low memory speed and single channel, etc., etc., etc. Instead, you are trying to play let's-****-on-this-leak while not acknowledging ANY other factors, except a nod of the head to the scheduler issue I mentioned.


----------



## Grin

“Dual socket IS NUMA!”

No, because only one memory controller exists, in the I/O die, and both chiplets have equal connection to memory. It is like dual socket from the Pentium era. NUMA appeared when the memory controller moved from the northbridge to the processor die. A two-die TR is a NUMA processor: it has two separate memory controllers, one per die. Ryzen 30XX is UMA; it has equal memory access from both chiplets. The only problem here is when the Windows scheduler runs threads of one program on both chiplets simultaneously. It would cause the same time penalty as a dual-socket UMA system, where a core on the first processor/chiplet waits on data from a core on the second processor/chiplet.


----------



## ajc9988

Grin said:


> “Dual socket IS NUMA!”
> 
> No, because the only one memory controller exists in I/O and both chiplets have the equal connection to memory. It is the dual socket from the pentium era. NUMA appeared when memory controller was moved from the North to processor die. Two-die TR is NUMA processor, it has two separate memory controllers one per die. Ryzen 30XX is UMA, it has the equal access to the memory for both chiplets. The only one problem here when windows scheduler runs different threads on both chiplets simultaneously for one program. It would cause the same time penalty as for dual socket UMA system, when a core on the first processor/chiplet waiting a data from another core placed in the second processor/chiplet.


You are forcing me to be pedantic. Go reread my posts. 

Dual socket IS NUMA because on a dual socket board, half the memory is traced to one socket, the other half to the other socket. So, unless you can show me an example of DUAL SOCKET WITHOUT NUMA, you need to back off right now.

Further, I explained that the I/O now controls the memory channels which makes it UMA for the socket and the processor is seen as a single node. 

Instead, as I already discussed, the OS does not see the separate chips DUE TO THE I/O DIE. 

Meanwhile, why does no one discuss dual socket from the Pentium era? BECAUSE IT IS 20 YEARS OLD. The scheduler should have evolved much further by now; I've read the white papers on scheduler awareness greatly increasing efficiencies, and I went into more detail on the various issues than you did. Go read my damn posts above. You are literally repeating me in slightly different ways. F*ing idiots, man.


----------



## Ultracarpet

ajc9988 said:


> You are forcing me to be pedantic. Go reread my posts.
> 
> Dual socket IS NUMA because on a dual socket board, half the memory is traced to one socket, the other half to the other socket. So, unless you can show me an example of DUAL SOCKET WITHOUT NUMA, you need to back off right now.
> 
> Further, I explained that the I/O now controls the memory channels which makes it UMA for the socket and the processor is seen as a single node.
> 
> Instead, as I already discussed, the OS does not see the separate chips DUE TO THE I/O DIE.
> 
> Meanwhile, why does no one discuss dual socket from the pentium era? BECAUSE IT IS 20 YEARS OLD. The scheduler should have evolved much further by now, as I've read the white papers related to scheduler awareness to greatly increase efficiencies, then discussed in more detail that you didn't discuss what various issues are. Go read my damn posts above. You are literally repeating me in slightly different ways. F*ing idiots, man.



Dang, I feel like I come across a bit abrasive sometimes... but you need to chill out, man.


----------



## ajc9988

Ultracarpet said:


> Dang, I feel like I come across a bit abrasive sometimes... but you need to chill out, man.


When I say something, then see it in a slightly different form, while that person selectively takes parts out of longer posts thereby making it devoid of context, I really get peeved. Have you read my other posts for context?


----------



## delerious

Grin said:


> It’s absolutely not a NUMA case, it’s a dual socket like processor consisting from North aka I/O which contains a memory controller and two processors aka chiplets attached. Memory bandwidth is an important thing but not for this benchmark. 130 floating for single core and only 388 for quad in Eng compared to 134/542 in 2700x it is a sad news. They need to do a lot of work with the windows scheduler for common soho programs to avoid running threads on both chiplets simultaneously.


That doesn't seem right. With perfect scaling the ES should score around 520 for 4 cores (130x4), and the 2700X 536 (134x4). At 388, the ES is only getting around 75% of perfect scaling.
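Spelling that scaling math out on the quoted UserBenchmark numbers:

```python
# Scaling check on the quoted UserBenchmark numbers: fraction of a
# perfect 4x speed-up each chip actually achieved in the quad test.
def quad_scaling(single_score: float, quad_score: float) -> float:
    return quad_score / (single_score * 4)

print(f"2700X: {quad_scaling(134.0, 542.0):.0%}")  # slightly past 4x
print(f"ES:    {quad_scaling(130.0, 388.0):.0%}")
```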


----------



## Majin SSJ Eric

Shatun-Bear said:


> https://www.tomshardware.co.uk/amd-matisse-third-gen-ryzen-benchmark,news-59832.html
> 
> 12-core engineering sample with 3.3Ghz base clock and 4.7Ghz 'peak boost'. More evidence that this AdoredTV leak was BS. He had the 12-core at 4.2Ghz BASE clock and 5Ghz boost. Or the non-X model 3.8/4.6Ghz.
> 
> We know now that these chips are still engineering samples and final clocks are not known, certainly no-one would know clockspeeds of every SKU at the time of this leak 1 month ago.


He said right in the video that final clocks, pricing, and release dates are always the most subject to change, as they are oftentimes not finalized until close to release. The main thrust of the videos was the chiplet design, separate I/O die on 14nm (which he speculated about but the leak was wrong on), and core counts / configs. All the rest of it was down to educated speculation but that does not make the whole leak "fake".


----------



## epic1337

Grin said:


> They need to do a lot of work with the windows scheduler for common soho programs to avoid running threads on both chiplets simultaneously.


No, not quite; they had the same issues with Bulldozer's modules, so Windows already supports that.
E.g. putting each thread on a different module is better than putting everything on one module.

now they just need to apply the same scheduler sequence with the modules being chiplets.


----------



## ajc9988

epic1337 said:


> no not quite, they had the same issues with bulldozer's modules, so windows already supports that.
> e.g. putting each thread on two different modules is better than putting everything on one module.
>
> now they just need to apply the same scheduler sequence with the modules being chiplets.


I actually have to disagree with you here. Bulldozer presented its own challenges for scheduling due to the shared FPU, IIRC. Here, the reason we bring up node/die awareness is that the I/O die makes the entire CPU a single node by masking the cores behind the I/O chip with a single set of memory controllers. That means it cannot determine which cores are on which die for the chiplets. Because of this, it cannot schedule taking into account the round trip latency of jumping twice on the IF gen 2 to go from chiplet -> I/O die -> chiplet 2, then back. We also do not know if they put an independent IF controller in the I/O die or if it is still tied to memory. 

But, due to the inherent latency of going from one chiplet to the other for using cores, there can be penalties for the jump. With TR and Epyc, due to the NUMA awareness of the scheduler, they could keep certain tasks local rather than using the secondary node (whether they did so effectively is up for debate).

Now, AMD was correct that there isn't a *need* to add new optimizations to get the chips to work for end consumers. It is x86 and seen as a single UMA processor, meaning some of the issues with NUMA go away (a huge benefit). But this is where I argue that the scheduler does need to be optimized to prevent lightly multithreaded workloads from being divided across both chiplets, to avoid the latency penalty of inter-die comms. IF2 doubles the bandwidth and reduces latency over gen 1 IF, but there is still higher latency than was found in inter-CCX comms.

But too often people focus on those issues instead of examining overall performance, where those effects are generally baked into IPC. That means that even with this discussion of schedulers, etc., if the IPC is higher than Intel's, any latency effects on performance are already baked in, and addressing such nuances would only act to further increase IPC over time.

I hope that explains a little better what the difference is between them and why we both brought up scheduler awareness.
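To make the argument concrete, here is a toy sketch of the chiplet-aware placement I'm arguing the scheduler should do: fill one chiplet completely before spilling to the second, so lightly threaded work never pays the chiplet -> I/O die -> chiplet round trip. The core layout and chiplet IDs below are made up for illustration; nothing about Zen 2's real topology was public at this point.

```python
# Hypothetical 2-chiplet, 16-core layout (not confirmed hardware).
CHIPLETS = {
    0: [0, 1, 2, 3, 4, 5, 6, 7],        # assumed chiplet 0
    1: [8, 9, 10, 11, 12, 13, 14, 15],  # assumed chiplet 1
}

def place_threads(n_threads):
    """Fill one chiplet before spilling to the next, so lightly
    threaded work avoids cross-die communication entirely."""
    placement = []
    for chiplet_id in sorted(CHIPLETS):
        for core in CHIPLETS[chiplet_id]:
            if len(placement) == n_threads:
                return placement
            placement.append((chiplet_id, core))
    return placement

# A 4-thread workload stays entirely on one chiplet:
assert all(c == 0 for c, _ in place_threads(4))
```

A naive round-robin scheduler would instead alternate threads across both chiplets, paying the inter-die latency on every shared-data access.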


----------



## epic1337

ajc9988 said:


> I actually have to disagree with you here. Bulldozer presented its own challenges for scheduling due to the shared FPU, IIRC. Here, the reason we bring up node/die awareness is that the I/O die makes the entire CPU a single node by masking the cores behind the I/O chip with a single set of memory controllers. That means it cannot determine which cores are on which die for the chiplets. Because of this, it cannot schedule taking into account the round trip latency of jumping twice on the IF gen 2 to go from chiplet -> I/O die -> chiplet 2, then back. We also do not know if they put an independent IF controller in the I/O die or if it is still tied to memory.
> 
> But, due to the inherent latency of going from one chiplet to the other for using cores, there can be penalties for the jump. With TR and Epyc, due to the NUMA awareness of the scheduler, they could keep certain tasks local rather than using the secondary node (whether they did so effectively is up for debate).
> 
> Now, AMD was correct that there isn't a *need* to add new optimizations to get the chips to work for end consumers. It is x86 and seen as a single UMA processor, meaning some of the issues with NUMA go away (a huge benefit). But, this is where I argue that the scheduler does need optimized to prevent lightly multithreaded workloads from being divided across both chiplets to avoid the penalty of inter-die comms in latency. IF2 doubles the bandwidth and reduced latency over gen 1 IF, but there is still a higher latency than what was found in inter-CCX comms.
> 
> But too often people will focus on those issues instead of examining the overall performance, which is where those effects are baked into IPC performance, generally. That means that even with this discussion of schedulers, etc., if the IPC is higher than Intel, it is already baked in any effects of latencies on performance, meaning discussing such nuances and learning to change these things would only act to further increase IPC over time.
> 
> I hope that explains a little better on what the difference is between them and why we both brought up scheduler awareness.


i see, so you're saying that the cores aren't exposed so the OS-level scheduler can't differentiate, that would indeed be a problem.



on a side note, has anyone seen this?
https://www.phoronix.com/scan.php?page=article&item=windows-coreprio-linux&num=1


----------



## ku4eto

epic1337 said:


> i see, so you're saying that the cores aren't exposed so the OS-level scheduler can't differentiate, that would indeed be a problem.
> 
> 
> 
> on a side note, has anyone seen this?
> https://www.phoronix.com/scan.php?page=article&item=windows-coreprio-linux&num=1


Yup, it was referenced a while ago (or at least that's when I first saw it).


----------



## Majin SSJ Eric

epic1337 said:


> i see, so you're saying that the cores aren't exposed so the OS-level scheduler can't differentiate, that would indeed be a problem.
> 
> 
> 
> on a side note, has anyone seen this?
> https://www.phoronix.com/scan.php?page=article&item=windows-coreprio-linux&num=1


Personally I am only interested in the 8C / 16T variants so the scheduling conflicts between multiple chiplets won't affect me any as far as I know. I'm also sure that AMD is well aware of any conflicts that may be caused by the chiplet approach and are working right now to address them. Could actually be the reason they didn't show the 12 or 16-core variants at CES.


----------



## VeritronX

Really this is more like the multi-ringbus high core count Intel chips, but with more latency. You have two or more groups of cores and an I/O / memory controller, and the OS knows there are physical threads and logical threads, but not where they are on the inside. This means AMD could in theory manipulate what is run where internally if they want to, similar to how SSDs work: you can tell them to write to a certain sector, but that can be whatever block of flash the controller wants, and it can be moved around internally.
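That SSD-style indirection could be sketched like this. To be clear, this is purely hypothetical; nothing suggests AMD actually built such a remapping controller, it just illustrates the logical-to-physical split being described.

```python
class LogicalCoreMap:
    """Hypothetical indirection table: the OS schedules on logical
    core IDs, while the hardware is free to remap them to any
    physical core, like an SSD's flash translation layer."""

    def __init__(self, n_cores):
        # Start with an identity mapping: logical i -> physical i.
        self.table = list(range(n_cores))

    def physical(self, logical):
        return self.table[logical]

    def remap(self, logical, new_physical):
        # Migrate a logical core without the OS ever noticing,
        # like an FTL relocating a sector to a different flash block.
        self.table[logical] = new_physical

cores = LogicalCoreMap(16)
cores.remap(3, 12)   # hardware moves logical core 3 to physical core 12
# The OS still schedules on "core 3"; the silicon runs it on core 12.
```

The same trick is what lets SSDs do wear leveling invisibly; here it would let the I/O die steer work away from a busy or hot chiplet.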


----------



## epic1337

Majin SSJ Eric said:


> Personally I am only interested in the 8C / 16T variants so the scheduling conflicts between multiple chiplets won't affect me any as far as I know. I'm also sure that AMD is well aware of any conflicts that may be caused by the chiplet approach and are working right now to address them. Could actually be the reason they didn't show the 12 or 16-core variants at CES.


Well, there's still the question of whether the IMC is on the I/O die or within the chiplet.
If the IMC is in the I/O die, then even if the package has only one 8C/16T chiplet, there might still be a significant latency penalty.


And seeing how much smaller the chiplets are compared to Ryzen dies, it doesn't look like the IMC is in there.


----------



## Majin SSJ Eric

epic1337 said:


> well theres still one issue of whether the IMC is on the I/O or within the chiplet.
> if IMC is in the I/O then even if the package only has one chiplet 8C/16T, there might still be some significant latency penalty.
> 
> 
> and seeing how much smaller the chiplets are when compared to Ryzen dies, it doesn't look like the IMC is in there.


I imagine AMD hasn't been sitting on their hands since Zen 1 first launched, and that the Infinity Fabric is now much more optimized for Zen 2 than it was two years ago. But maybe I'm just misunderstanding where the latency comes in?


----------



## VeritronX

It wouldn't surprise me if both the IO die and the chiplets are the same between epyc 2 and ryzen 3000. This would mean that the IMC is on the IO die.

At a guess there are probably 32 lanes worth of configurable connects (pci-e 4 or IF 2) on the chiplets and 128 lanes worth on the IO die.. so on epyc 2 it's probably half the chiplet lanes going to the IO die and half to the mobo, while on ryzen 3000 i'd guess that the chiplets connect all lanes to the IO die and then the IO die connects 24 lanes to the motherboard.

edit:

This would also allow for good binning.. maybe with chiplets that aren't using all the cores you turn off some of the connections to the IO die, which lets you use chiplets with some cores or some links defective.. and also the IO dies that have defective links and ram controllers in ryzen 3000. IO dies that have all their links but are missing some of the ram controllers could go into threadripper. You'd just have to have different versions of the interposer depending on which part of which chip was defective.

edit2: Maybe on ryzen 3000 half the chiplet links go to the IO die, and half to the other chiplet? maybe only on dual chiplet interposers, could be all to the IO die on single ones.. will be interesting to see how they connected everything.


----------



## ajc9988

epic1337 said:


> i see, so you're saying that the cores aren't exposed so the OS-level scheduler can't differentiate, that would indeed be a problem.
> 
> 
> 
> on a side note, has anyone seen this?
> https://www.phoronix.com/scan.php?page=article&item=windows-coreprio-linux&num=1


It is and I believe that is being worked on, as others mentioned (discussed more below).



Majin SSJ Eric said:


> Personally I am only interested in the 8C / 16T variants so the scheduling conflicts between multiple chiplets won't affect me any as far as I know. I'm also sure that AMD is well aware of any conflicts that may be caused by the chiplet approach and are working right now to address them. Could actually be the reason they didn't show the 12 or 16-core variants at CES.


Agreed.



VeritronX said:


> Really this is more like the multi ringbus high core count intel chips, but with more latency. You have two or more groups of cores and an IO / memory controller, and the OS knows there are physical threads and logical threads, but not where they are on the inside. This means AMD could in theory manipulate what is ran where on the inside if they want to, similar to how ssds work where you can tell them to write to a certain sector but that can be whatever blocks of flash the controller wants, and can be moved around internally.


That is a great analogy, with the latency of IF2 being more than the on-chip communications. Now, as to the manipulation, that would require a hardware scheduler on the I/O die, and I don't know that they have created such a thing yet, although it is a very interesting idea, to be honest. Removing the problem by creating a hardware-optimized scheduler to minimize off-die traffic really would be awesome, but there is also balancing the thermal density and heat distribution through scheduling, etc. Still able to be balanced, but it got my brain churning. Loving it.



epic1337 said:


> well theres still one issue of whether the IMC is on the I/O or within the chiplet.
> if IMC is in the I/O then even if the package only has one chiplet 8C/16T, there might still be some significant latency penalty.
> 
> 
> and seeing how much smaller the chiplets are when compared to Ryzen dies, it doesn't look like the IMC is in there.


Yeah, these are the same chiplets as Epyc, so the I/O chip contains the IMC. That is how you make the system see it as a UMA chip package.

As to the latency, I wonder if the I/O chip has a clock controller for IF gen 2 or if it is still tied to the memory. It is double the bandwidth and lower latency, but we don't have details on the latency yet. But if it is wired like Epyc, then according to a Mark Papermaster interview by Ian Cutress of AnandTech, there is no direct die-to-die communication between the core chiplets like in TR and Epyc 1; instead, every memory call or die-to-die communication has to go through the I/O die. That means the latency going to memory would be less than going to memory on the second die of my 1950X, but higher than if the IMC was on the core die itself.

Instead, what is more interesting is the changes made to the store/retire algorithms, along with standardizing IF gen 2 traces to the same length to try to equalize latency on Epyc chips. This was done to help with the problem of stale data. Basically, due to uneven processing depending on which die the IMC was connected to, a non-directly-connected core could make a call, get hit with the latency on the data call, process it, then get hit with the latency of comparing the result against work done by the other chips, making the data stale and unusable. That means you would have done all that work and gotten nothing for the effort. I believe this compounded the scheduler problem found on the TR 2990WX, which would explain why AMD went with an Epyc design that standardized the latency to each core chiplet. It lowers overall average latency, although increasing latency in some instances, but it completely helps with the off-die issue of stale data and wasted cycles. It is EXACTLY what I argued for at one point in time, and with changes to the prefetch, etc., it seems they are more aware of those efficiencies through controlled inefficiency this time around.



Majin SSJ Eric said:


> I imagine AMD hasn't been sitting on their hands since Zen 1 first launched, and that the Infinity Fabric is now much more optimized for Zen 2 than it was two years ago. But maybe I'm just misunderstanding where the latency comes in?


No, that was a good part of it. They used IF to tie together both CCX complexes, which is where the inter-CCX latency shown by PCPer came from, with inter-die communications having a higher latency still, as shown through the SiSoft Sandra 2018 Lite bench.
http://ranker.sisoftware.net/show_u...e8d5e4c2aa97a284fcc1f0d6b3d6ebdbfd8eb383&l=en (my scores with my 1950X)

Now, as mentioned, Zen 2 will use Infinity Fabric gen 2. That lowered the latency (although we don't know by how much) and doubled the bandwidth. What we don't know is whether the speed of IF2 is still tied to memory speed and timings. If it is, then the 1333 MT/s RAM speed used in the 12-core test shown would severely kneecap IF2 transfer speeds and latency (which is why everyone with Zen and Zen+ wanted to hit AT LEAST 3200 MT/s on their DDR4). Now, we know AMD is targeting 3200 MT/s stock for the RAM speed. That is important because the stock memory speed is what the RAM defaults down to, like 2133 or 2400, with the rare 2666 chips on some sticks. Samsung, though, produces a base 3200 MT/s RAM chip. To be clear, the reason you see higher speeds is that XMP-rated RAM has been factory overclocked. But with the higher base RAM speed support coming from AMD, it says they are confident that 3200 will run out of the box, whether the RAM is factory overclocked or uses the 3200 base Samsung chips. That is a bold statement considering Intel's upcoming chips list 2933 as the supported speed for their IMC.

This is likely achieved through binning of the I/O chips, which are produced on the 14nm node. By removing the cores and cache from the I/O die, you don't have to accept non-critical defect hits to the IMC just to get the yields you need on the cores. In other words, when the IMC was tied to the cores, you had to accept a lower-performance IMC because you needed the cores intact. By disintegrating the CPU into smaller components, you create smaller dies, which increases yields; you can then bin those dies to move performance up in different areas of the stack, while also not taking as much of a hit from defects, where one critical defect in the I/O section would otherwise ruin a perfectly good 8-core chip (same with IMC critical defects). That is the genius of chiplets.
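The yield argument can be illustrated with a standard Poisson defect model. The defect density and die areas below are made-up numbers for illustration, not AMD data:

```python
import math

def die_yield(defect_density_per_cm2, die_area_cm2):
    """Fraction of dies with zero defects under a Poisson defect
    model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D = 0.2  # assumed defects per cm^2

monolithic = die_yield(D, 2.0)   # one big ~200 mm^2 die (assumed)
chiplet    = die_yield(D, 0.8)   # one small ~80 mm^2 core die (assumed)

# Smaller dies -> a larger fraction come out clean, and a defect
# scraps far less silicon; flawed chiplets can also be binned down
# (6-core parts, etc.) instead of being tossed outright.
print(f"monolithic: {monolithic:.1%}, chiplet: {chiplet:.1%}")
```

The same model is why the 14nm I/O die is cheap insurance: a mature node with low defect density carrying the parts that don't benefit from 7nm.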

But going back to IF, I just need to know if they put a clock gen on the I/O die to control the speed and performance of the IF2 or not. If not, ram speed still matters a lot. If so, then it makes getting faster ram less relevant to a degree.
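For reference, on Zen/Zen+ the fabric clock ran at MEMCLK, i.e. half the DDR4 data rate. If IF2 keeps that coupling (still unknown at this point, which is the whole question above), the relationship is simply:

```python
def if_clock_mhz(ddr_rating_mts):
    """Fabric clock under the Zen/Zen+ coupling, where the IF runs
    at MEMCLK = half the DDR data rate. Whether IF2 keeps this tie
    is exactly what we don't know yet."""
    return ddr_rating_mts // 2

assert if_clock_mhz(3200) == 1600  # the speed everyone chased on Zen/Zen+
assert if_clock_mhz(1333) == 666   # why slow test RAM would kneecap the IF
```

If the I/O die instead carries its own clock generator for IF2, the second line stops mattering for fabric speed, though slow RAM would still hurt on its own.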


----------



## Grin

Majin SSJ Eric said:


> Personally I am only interested in the 8C / 16T variants so the scheduling conflicts between multiple chiplets won't affect me any as far as I know. I'm also sure that AMD is well aware of any conflicts that may be caused by the chiplet approach and are working right now to address them. Could actually be the reason they didn't show the 12 or 16-core variants at CES.


Agreed. That's exactly what I'm talking about. With two chiplets AMD gets the same problems as dual-socket UMA. I'm still using one of those oldies, and I know well that without affinity management the speed in common programs will be reduced significantly.


----------



## Shatun-Bear

New leak on the 12-core Ryzen:

https://www.techspot.com/news/78452-amd-next-12-core-cpu-appears-benchmark-database.html

3.4 GHz base clock, 3.7 GHz boost, 105W TDP. Slower than a 2700X, likely because of a wonky setup and memory.


----------



## ajc9988

Shatun-Bear said:


> New leak on the 12-core Ryzen:
> 
> https://www.techspot.com/news/78452-amd-next-12-core-cpu-appears-benchmark-database.html
> 
> 3.4Ghz base clock, 3.7Ghz boost, 105W TDP. Slower than a 2700X likely because of wonky set-up and memory.


Read my comments over the past couple pages. That was posted and launched a long chain of discussion.

Edit: There is the fact that it is an H2 stepping (which is closer to October silicon, IIRC) whereas AMD showed off an H4 stepping at CES; it was likely a single memory stick instead of dual channel while clocked low; and the memory may affect Infinity Fabric 2 the way it did Infinity Fabric, meaning lower memory speeds would slow down the IF and increase latency for comms over it, etc.

Then there was talk of the OS scheduler not being aware of the two separate dies on the chip, which means there could be a performance hit from assigning a quad-threaded workload to cores on the other die, etc.

The speeds should not even be considered due to the age of the stepping. They would have no bearing on final clock speed.


In other news, another AdoredTV video:


----------



## tpi2007

Shatun-Bear said:


> New leak on the 12-core Ryzen:
> 
> https://www.techspot.com/news/78452-amd-next-12-core-cpu-appears-benchmark-database.html
> 
> 3.4Ghz base clock, 3.7Ghz boost, 105W TDP. Slower than a 2700X likely because of wonky set-up and memory.



Yeah, testing with single-channel RAM, under the rated speed (for Zen+, let alone Zen 2), and with non-final clocks (lower than the current 2920X).


----------



## VeritronX

I haven't watched any of his ryzen 3000 videos but if it's similar to my last few posts in this thread with no credit to ocn I'll be mad.


----------



## ajc9988

tpi2007 said:


> Yeah, testing with single channel RAM, under the rated speed (for Zen+, let alone Zen 2) and with non final clocks (lower than the current 2920X).


What I found interesting is that in that last video I posted, Jim said that IF2 has been divorced from the memory speed altogether. If true, that rules out low RAM speed causing the IF to run slower and increasing IF latency; not to say RAM running that slow won't impact performance, because it will!

There was an interesting point in the techspot article that we don't know if the RAM is being reported at the double rate or the single rate (think of how CPU-Z reports the RAM speed at half of what it is running). That is a fair point, but it goes to show we don't know the factors of the setup, so drawing too many inferences at this point may not be fair or accurate.

Also, I'm surprised at the lack of talk about the version variant. Everyone is referencing the first number of the string, but giving short shrift to the stepping designation. It is as if people forgot AMD showed off F2, F3, and F4 variants of Zen. Here, this is an H2 stepping, but H4 was shown at CES. The first designation (the letter) refers to major revisions, while the second (the number) refers to minor revisions. There were rumors that more minor revisions were being worked on after the H4 variant, though the veracity of those rumors is not yet known. But given the difference between H2 and the H4 shown at CES, this chip may well have been an older one given to a third party (like a board mfr.) to work on the firmware changes needed or something similar. The truth is, there are a lot of unknowns.

Also, Jim, in that last video, mentioned that the mainstream chips will have direct die-to-die comms, unlike the statements about Epyc. That means a single jump rather than two jumps each way, if true. Yes, he mentioned the latency was standardized and overall lower, but that means some of my above analysis of the IF and die-to-die latency, if his leaks are correct, is wrong, although keeping the quad test on one die would surely mean less latency than being scheduled across the two dies (see PCPer on the inter-CCX latencies, even though they later misrepresented the Ryzen gen 1 latencies compared to Intel's mesh on the 7900X to a degree).


----------



## epic1337

ajc9988 said:


> Also, Jim, in that last video, mentioned that the mainstream chips will have direct die to die comms, unlike the statements about Epyc. That means a single jump rather than two jumps each way, if true. Yes, he mentioned the latency was standardized and overall lower, but that means some of my above analysis, if his leaks are correct, is incorrect in regards to the IF and latency for going die to die, although keeping it on die for the quad testing would surely be less latency than being scheduled between the two dies (see PCPer on the inter-CCX latencies, even though they later misrepresented the Ryzen gen 1 latencies compared to Intel Mesh on the 7900X to a degree).


This is very likely. An IF I/O block itself isn't big; having two on each chiplet would allow them to form a full ring bus (I/O <-> chiplet1 <-> chiplet2 <-> I/O).
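A quick way to see what that extra chiplet-to-chiplet link would buy, comparing the ring against the everything-through-the-I/O-die star topology discussed earlier. Both topologies are hypothetical; neither was confirmed at this point.

```python
# Star: all traffic routes through the I/O die.
STAR = {"io": {"c1", "c2"}, "c1": {"io"}, "c2": {"io"}}
# Ring: chiplets also get a direct link to each other.
RING = {"io": {"c1", "c2"}, "c1": {"io", "c2"}, "c2": {"io", "c1"}}

def hops(links, src, dst):
    """Breadth-first search: minimum link crossings from src to dst."""
    frontier, seen, n = {src}, {src}, 0
    while dst not in frontier:
        frontier = {nxt for node in frontier for nxt in links[node]} - seen
        seen |= frontier
        n += 1
    return n

print(hops(STAR, "c1", "c2"))   # 2: chiplet -> I/O die -> chiplet
print(hops(RING, "c1", "c2"))   # 1: direct die-to-die link
```

Halving the hop count is exactly the "single jump rather than two jumps each way" that the leak describes.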


----------



## Majin SSJ Eric

Well, none of us can know right now what the ultimate endgame for Zen 2 is going to be, as most of what we do know is based on limited facts with a ton of added speculation/leaks. But what I have been sensing lately is a LOT of confidence coming out of AMD (and not the blustering, beat-you-over-the-head sort we saw from JF-AMD in the run-up to BD's launch), combined with cautious optimism from several fairly neutral and respected sources like der8auer and Ian at AnandTech. It's silly season for sure, but it's been a ton of fun keeping up with the daily updates and speculation surrounding what I truly believe will be a smashing success for AMD with Zen 2's release. Ryzen is already really good as-is, so all Zen 2 needs to do is solidly improve its minor weaknesses (such as IPC and clock speed), and with its new 7nm node and architecture (especially if it's true that the chiplets will be connected to each other as well as the I/O chip) it should address those concerns nicely.

But again, all pure speculation at this point, though the CES keynote provided a very solid demonstration of Zen 2's performance parity with Intel at this still-very-early stage. Exciting stuff!


----------



## octiny

AMD's Lisa Su tweeted @ Jims latest video. Nooiiicee.


----------



## guttheslayer

Majin SSJ Eric said:


> Well, none of us can know right now what the ultimate endgame for Zen 2 is going to be as most of what we do know is based on limited facts with a ton of added speculation/leaks. But what I have been sensing lately is a LOT of confidence coming out of AMD (and not the blustering, beat-you-over-the-head sort we saw from JF-AMD in the run-up to BD's launch) combined with cautious optimism from several fairly neutral and respected sources like DeBauer and Ian at AnanandTech. Its silly-season for sure, but its been a ton of fun keeping up with the daily updates and speculation surrounding what I truly believe will be a smashing success for AMD with Zen 2's release. Ryzen is already really good as-is, so all Zen 2 needs to do is provide solid improvement to its minor weaknesses (such as IPC and clock speed), and with its new 7nm node and architecture (especially if its true that the chiplets will be connected to each other as well as the I/O chip) it should address those concerns nicely.
> 
> But again, all pure speculation at this point, though the CES keynote provided a very solid demonstration of Zen 2's performance parity with Intel at this still-very-early stage. Exciting stuff!




There is no need to doubt a lot of the speculation from AdoredTV's Jim when Lisa herself has tweeted at him. That is a good sign that what Jim said has some truth to it.


----------



## ToTheSun!

octiny said:


> AMD's Lisa Su tweeted @ Jims latest video. Nooiiicee.


I don't care much for Jim, but having Su tweet at him directly is pretty neat. Goals, right there.


----------



## ibb27

octiny said:


> AMD's Lisa Su tweeted @ Jims latest video. Nooiiicee.


Definitely, she's having fun with his "predictions"! LOL


----------



## NightAntilli

The main thing I want to know is whether we'll be getting AM4+ motherboards or if it will remain AM4. I made the mistake of buying an AM3 motherboard just before AM3+ was released, which meant that I had to upgrade my motherboard again. I don't want to make that same mistake twice.

I am definitely ready to upgrade. My FX-8320 has served me well, and still does, but the differences have become big enough to warrant an upgrade to Ryzen.


----------



## agatong55

NightAntilli said:


> The main thing I want to know is whether we'll be getting AM4+ motherboards or if it will remain AM4. I made the mistake of buying an AM3 motherboard just before AM3+ was released, which meant that I had to upgrade my motherboard again. I don't want to make that same mistake twice.
> 
> I am definitely ready to upgrade. My FX-8320 has served me well, and still does, but the differences have become big enough to warrant an upgrade to Ryzen.


Here is one report: 

https://hothardware.com/news/amd-confirms-am4-socket-support-future-ryzen-processors-2020

It looks like current AM4 motherboards will support it via BIOS updates, so it's up to the manufacturers to decide which boards they want to support the new chips on.


----------



## Particle

ajc9988 said:


> There was an interesting point in the article at techspot that we don't know if the ram is being reported at the double rate or single rate (think of how CPU-Z reports the ram speed at half of what it is running).


For the record, CPU-Z reports the real memory IO bus clock speed. 400 MHz memory clock = 1600 MHz IO clock = 3200 MT/s effective transfer rate = DDR4-3200

In case anyone is interested, these memory:IO:transfer ratios work like this:
DDR1: 1:1:2
DDR2: 1:2:4
DDR3: 1:4:8
DDR4: 1:4:8
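Those ratios as a quick sanity-check script (just restating the table above, nothing new assumed):

```python
# memory clock : IO bus clock : transfers per cycle, per DDR generation
RATIOS = {
    "DDR1": (1, 1, 2),
    "DDR2": (1, 2, 4),
    "DDR3": (1, 4, 8),
    "DDR4": (1, 4, 8),
}

def clocks(generation, mem_clock_mhz):
    """Return (memory clock MHz, IO bus clock MHz, transfer rate MT/s)."""
    m, io, t = RATIOS[generation]
    return (mem_clock_mhz, mem_clock_mhz * io // m, mem_clock_mhz * t // m)

print(clocks("DDR4", 400))   # (400, 1600, 3200) -> DDR4-3200
```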


----------



## EniGma1987

NightAntilli said:


> The main thing I want to know is whether we'll be getting AM4+ motherboards or if it will remain AM4. I made the mistake of buying an AM3 motherboard just before AM3+ was released, which meant that I had to upgrade my motherboard again. I don't want to make that same mistake twice.
> 
> I am definitely ready to upgrade. My FX-8320 has served me well, and still does, but the differences have become big enough to warrant an upgrade to Ryzen.





AMD has said they will support AM4 until 2020, so the socket is over halfway through its life already. Many people assume that means they are promised "Zen 3" support on the socket, but that isn't necessarily true. AMD could release this Zen 2 / Ryzen 3000 series and then follow it up next year with a Zen 2+ built on the new EUV-enhanced 7nm node, or skip a node-updated design and go straight to a 16-core model sometime next year. So just be aware that this may *possibly* be the last architecture generation available on the AM4 socket, even if a 7nm+ design does bring small speed improvements.



2020 is when we should be getting an "AM5" socket if everything stays on track, which will have DDR5 memory and probably an upgrade to PCIe 5.0 as well. The release of that socket will probably depend more on whether the memory vendors are on track than on AMD. DDR5 was supposed to release last year, and it was delayed until this year; now there is talk of delays until 2020. With DDR5 being pushed back so much, AMD may be forced to make a Zen 3 design and support AM4 through 2021-2022.


----------



## AlphaC

NightAntilli said:


> The main thing I want to know is whether we'll be getting AM4+ motherboards or if it will remain AM4. I made the mistake of buying an AM3 motherboard just before AM3+ was released, which meant that I had to upgrade my motherboard again. I don't want to make that same mistake twice.
> 
> I am definitely ready to upgrade. My FX-8320 has served me well, and still does, but the differences have become big enough to warrant an upgrade to Ryzen.


Ryzen+ would have been a great upgrade for you, let alone whatever the Ryzen 3rd-gen 7nm products bring to market.


There are supposedly X570 boards coming out for whatever CPUs are more power-hungry than the existing 8 cores.


----------



## bigjdubb

ToTheSun! said:


> I don't care much for Jim, but having Su tweet at him directly is pretty neat. Goals, right there.


That's a pretty amazing accomplishment. What's left to shoot for once you've been tweeted directly?


----------



## Ultracarpet

bigjdubb said:


> That's a pretty amazing accomplishment. What's left to shoot for once you've been tweeted directly?


The payroll


----------



## ToTheSun!

Ultracarpet said:


> The payroll


>implying


----------



## NightAntilli

AlphaC said:


> Ryzen + would have been a great upgrade for you , let alone whatever the Ryzen 3rd gen 7nm products bring to market.
> 
> 
> There's supposedly X570 boards coming out for whatever CPUs are more power hungry than the existing 8 cores.


After looking at current prices, I think I'm going for the R7 1700. At $160 that's simply too good a deal to pass up, even if it is an older CPU. I will get either an X470 Taichi or an X470 ROG Strix. That way, I can (hopefully) at a later date update to the most powerful CPU available on the socket. If that's the 3000 series, so be it. If it's the 4000 series, even better. I'll be stuck with DDR4 for a while, but I can live with that. And PCI-E 5.0 won't likely give me any performance boosts anyway.


----------



## ibb27

First mention of X570 motherboards, 9 from ASRock:
ASRock X570 Phantom Gaming X
ASRock X570 Phantom Gaming 6
ASRock X570 Phantom Gaming 4
ASRock X570 Extreme4
ASRock X570 Taichi
ASRock X570 Pro4
ASRock X570 Pro4 R2.0
ASRock X570M Pro4
ASRock X570M Pro4 R2.0


----------



## Majin SSJ Eric

NightAntilli said:


> After looking at current prices, I think I'm going for the R7 1700. At $160 that's simply too good a deal to pass up, even if it is an older CPU. I will get either an X470 Taichi or an X470 ROG Strix. That way, I can (hopefully) at a later date update to the most powerful CPU available on the socket. If that's the 3000 series, so be it. If it's the 4000 series, even better. I'll be stuck with DDR4 for a while, but I can live with that. And PCI-E 5.0 won't likely give me any performance boosts anyway.


Dude, you're coming from an 8320; that 1700 is going to absolutely light your hair on fire, regardless of how old it is!! Ryzen 1 was a revolutionary leap forward for AMD's CPU division and, if you don't care much about high OCs, should provide you with all the performance you could possibly need, especially at the kind of price you are getting it for. An OC'd 1700 will basically net you performance parity with something like a 5960X, which is still insanely great performance for all applications even today...


----------



## ToTheSun!

bigjdubb said:


> That's a pretty amazing accomplishment. What's left to shoot for once you've been tweeted directly?


The sky's the limit!

I mean literally - let's shoot Jim into open space and leave him there.


----------



## SpacemanSpliff

NightAntilli said:


> After looking at current prices, I think I'm going for the R7 1700. At $160 that's simply too good a deal to pass up, even if it is an older CPU. I will get either an X470 Taichi or an X470 ROG Strix. That way, I can (hopefully) at a later date update to the most powerful CPU available on the socket. If that's the 3000 series, so be it. If it's the 4000 series, even better. I'll be stuck with DDR4 for a while, but I can live with that. And PCI-E 5.0 won't likely give me any performance boosts anyway.


If you live reasonably close to a Micro Center, they have the 1700X for $149.99 right now. One catch: if you're running a version of Windows 7 or 8 and you don't want to upgrade to 10 yet, you should anticipate doing some extra legwork with your existing OS install and ensure you have the latest AMD USB 3.0 drivers before installing the new CPU and motherboard. Ultimately, you'll still eventually need to upgrade to 10, as Microsoft will not allow further Windows updates on the older versions once you install a Ryzen CPU and motherboard. And it has officially been announced that after January 14, 2020, Microsoft will no longer provide security updates or support for PCs running Windows 7.

Also, don't rule out a B450 motherboard. The only real reasons I have seen for getting the X-series boards are if you intend to use 5 or more SATA devices AND want an NVMe boot drive, or if you want to run an SLI rig. B450 supports CrossFire, but not SLI. Yet with multi-GPU configs now entirely at the mercy of each individual title's development team, CrossFire and SLI are essentially obsolete. As far as NVMe support, the X-series boards tend to sacrifice only one SATA port per NVMe device, while the B-series boards usually sacrifice two ports per device. Outside of that, the B-series boards still handle overclocking very well, provided your case has reasonable airflow.

Likely reasons we're going to see the release of and push for X570 boards are more streamlined native PCIe 4.0 support and beefed-up power delivery, as it seems all but certain that there will be 12 and/or 16 core variants of the Ryzen 3000 series. In all honesty, I think we'll likely see the Zen 2 refresh (Ryzen 4000 series) and the 600 series motherboards announced and released about the same time as PCIe 4.0 GPUs, so that really seems like an entirely subjective upgrade, at least based off of what AMD has confirmed so far.



Majin SSJ Eric said:


> Dude, you're coming from an 8320; that 1700 is going to absolutely light your hair on fire, regardless of how old it is!! Ryzen 1 was a revolutionary leap forward for AMD's CPU division and if you don't care much about high OC's should provide you with all the performance you could possibly need, especially at the kind of price you are getting it for. An OC'd 1700 will basically net you performance parity with something like a 5960X which is still insanely great performance for all applications even today...


What Majin said for sure. As a stop gap upgrade (and it's what will eventually be the guts of my NAS tower, so why not buy it and enjoy a short term boost till my 4K build this fall lol?) I went from a Haswell i5 to a Ryzen 5-1600 and gained an average of 25 fps in more modern titles like AC Odyssey, Battlefield V, and Far Cry 5... and that Haswell stomped my old FX-8120 into the dirt.


----------



## ryan92084

EniGma1987 said:


> AMD has said they will support AM4 until 2020. So the socket is over halfway through its life already. Many people assume that means they are promised "Zen3" support on the socket but that isnt necessarily true. AMD could release this Zen2/Ryzen3000 series and then next year follow it up with a Zen2+ built on the new EUV enhanced 7nm node, or even not release a node updated design and just go straight with a 16 core model sometime next year. So just be aware that this may *possibly* be the last arch generation available on the AM4 socket, even if a 7nm+ design does bring small speed improvements.
> 
> 
> 
> 2020 is when we should be getting "AM5" socket if everything stays on track, which will have DDR5 memory and probably upgraded to PCI-E 5.0 as well. The release of this socket will probably have more to do with if the memory vendors are on track or not, rather than if AMD is on track. DDR5 was supposed to release last year, and it was delayed until this year. Now there is talk of delays till 2020. Which with DDR5 being pushed back so much, AMD may be forced to make a Zen3 design and support AM4 through 2021-2022


Zen 3 is still "on track" to be done before the end of 2020, with no sign of any + generation in between. Of course they could still pull an AM4+ and have it work on old boards except for xxx features, or save the Ryzen version of Zen 3 for 2021; we'll see.


----------



## NightAntilli

Majin SSJ Eric said:


> Dude, you're coming from an 8320; that 1700 is going to absolutely light your hair on fire, regardless of how old it is!! Ryzen 1 was a revolutionary leap forward for AMD's CPU division and if you don't care much about high OC's should provide you with all the performance you could possibly need, especially at the kind of price you are getting it for. An OC'd 1700 will basically net you performance parity with something like a 5960X which is still insanely great performance for all applications even today...


I tend to avoid getting overly excited when I'm making my purchases, simply to try and make as rational a decision as possible. I'm not really interested in OC'ing this time though. I OC'd my FX because I really needed to. I will be getting fast RAM though, in order to get the most out of the CPU out of the box.



SpacemanSpliff said:


> If you live reasonably close to a MicroCenter, they have the 1700X for $149.99 right now. One catch, if you're running a Version of Windows 7 or 8, and you don't want to upgrade to 10 yet, you should anticipate having to do some extra legwork with your existing OS install and ensure that you have the latest AMD USB 3.0 drivers in advance of installing the new CPU and motherboard. Ultimately, you'll still eventually need to upgrade to 10, as it will not allow you to do further Windows updates on the older versions once you install a Ryzen CPU and motherboard. That and it has officially been announced that after January 14, 2020, Microsoft will no longer provide security updates or support for PCs running Windows 7.


I run Windows 10, so that won't be an issue. I don't live anywhere near a MicroCenter, considering I live in Curaçao lol. But I generally order from the US and have it shipped to my country, so obviously I'm limited to online shopping & shipping.



SpacemanSpliff said:


> Also, don't rule out a B450 motherboard. The only real reason I have seen for getting the X-x70 series boards is if you intend to use 5 or more SATA devices AND want to have an NVMe boot drive, or if you want to run an SLI rig. B450 supports Crossfire, but not SLI. Yet with multi-GPU configs now entirely at the mercy of each individual title's development team, Crossfire and SLI are essentially obsolete. As far as NVMe support goes, the X series boards tend to only sacrifice one SATA port per NVMe device, while the B series boards usually sacrifice 2 ports per device. Outside of that, the B series boards still handle overclocking very well, provided your case has reasonable air flow.


Actually, I'm simply trying to get a board with an extremely good VRM, which will increase the chances of it having better compatibility with newer CPUs on the AM4 socket. There's no B450 board that has a VRM quality equivalent to the X470 Taichi, or X470-F ROG Strix, or Gigabyte Gaming 7. I think it's going to be the Taichi. It costs a bit, but it should be fine. 



SpacemanSpliff said:


> Likely reasons we're going to see the release of and push for X570 boards are more streamlined native PCIe 4.0 support and beefed-up power delivery, as it seems all but certain that there will be 12 and/or 16 core variants of the Ryzen 3000 series. In all honesty, I think we'll likely see the Zen 2 refresh (Ryzen 4000 series) and the 600 series motherboards announced and released about the same time as PCIe 4.0 GPUs, so that really seems like an entirely subjective upgrade, at least based off of what AMD has confirmed so far.


Those features don't really interest me that much at this point. I think AMD will include something else to keep motherboard makers happy though, because why would anyone buy a 500 series chipset instead of the 400 series if those are the only upgraded features? StoreMI is something that could push some people to get a 400 series over the 300 series, for example.



SpacemanSpliff said:


> What Majin said for sure. As a stop gap upgrade (and it's what will eventually be the guts of my NAS tower, so why not buy it and enjoy a short term boost till my 4K build this fall lol?) I went from a Haswell i5 to a Ryzen 5-1600 and gained an average of 25 fps in more modern titles like AC Odyssey, Battlefield V, and Far Cry 5... and that Haswell stomped my old FX-8120 into the dirt.


Are you planning to upgrade to a 3000 series CPU? Or is the 1600 more than sufficient for you at this point? I'm actually in doubt if I should go for the 1700 or the 2600. They're practically the same price.


----------



## LancerVI

NightAntilli said:


> Are you planning to upgrade to a 3000 series CPU? Or is the 1600 more than sufficient for you at this point? I'm actually in doubt if I should go for the 1700 or the 2600. They're practically the same price.


I believe memory support is better with the 2600. My 2700X can get my 3600 kit to 3400 with no problems. 1st gen Ryzen had more extensive problems with high-speed kits, IIRC.


----------



## nesham

My R7 1800X has worked great with a G.Skill Ripjaws V [email protected]@[email protected] kit since September 2017. Before that, from March to September, I had a Corsair 3000 kit with Hynix chips; in the first two months it was a problem to get it higher than 2933, but after August it worked at 3200. The first Corsair kit had MFR chips, and after 2 months one stick died. After the RMA I got another kit with AFR chips, and that one was better and worked at [email protected]@1.38V.

Sent from my SM-G965F via Tapatalk


----------



## SpacemanSpliff

NightAntilli said:


> Are you planning to upgrade to a 3000 series CPU? Or is the 1600 more than sufficient for you at this point? I'm actually in doubt if I should go for the 1700 or the 2600. They're practically the same price.


I will be getting a Ryzen 7 3000 series for the 4K build I'm doing later this year. I had already planned on doing a 65W Ryzen 5 platform for my data storage server / photo-editing tower, which is why I went ahead and pulled the trigger on my 1600 when the Christmas deals were going on. The Haswell i5 was really showing how long in the tooth it's getting for gaming by struggling to even push mid-50s FPS in AAA titles. For 1080p, the Ryzen 5s do just great, but if you are considering moving up to 1440p or 4K, or even high refresh rates at 1080p in the future, I think the Ryzen 7 will give you a stronger base to upgrade from.

In regards to memory performance, even on 1st gen Ryzen, the BIOS and chipset driver support for memory controllers has greatly improved. That being said, lots of folks are able to hit the 3400-3600 range with 2nd gen Ryzen. I plugged my kit in and only had to enable the SPD profile, and it ran at rated speeds on 1.35v with no adjustments needed. I think I hit the upper end of the silicon lottery for a first gen chip though. My 1600 will run stable at 4025 MHz @ 1.375v, and with a 240mm AIO it peaks at ~68C at full load. I have it dialed back to 3.9GHz at 1.325v for everyday use and gaming and it never gets above 60C. I was also able to achieve a decent RAM OC and tighten up the timings some. The memory was factory profiled to run at 16-18-18-38 at 3000 and it's stable at 15-15-15-30 at 3333, but it took bumping the voltage up from 1.35 to 1.45.


----------



## Streetdragon

It's in German, so maybe you need a translator:
https://www.pcbuildersclub.com/en/2019/03/ryzen-3000-appeared-at-dealer-in-singapore-16-core-ryzen-9-3850x-costs-560/

Tl;dr: A list of 10 Ryzen 3000 CPUs got "leaked" on a Singapore site. It's more or less the same information as AdoredTV gave. Maybe the site just copied the information. Don't know, but it would be nice if it's true.


----------



## agatong55

Streetdragon said:


> It's in German, so maybe you need a translator:
> https://www.pcbuildersclub.com/en/2019/03/ryzen-3000-appeared-at-dealer-in-singapore-16-core-ryzen-9-3850x-costs-560/
> 
> Tl;dr: A list of 10 Ryzen 3000 CPUs got "leaked" on a Singapore site. It's more or less the same information as AdoredTV gave. Maybe the site just copied the information. Don't know, but it would be nice if it's true.


I would still take everything with a grain of salt, but doesn't AMD have a press conference sometime this month dedicated to the new Ryzen chips? I forgot the dates, but I am sure we will get all the info then. If the leaks are true and the 3700X can do what it actually says at that price point, that would be major.

Here are the prices translated into USD:
Price: $99, $129, $129, $178, $229, $199, $299, $329, $449


----------



## keikei

If the Ryzen 9 exists, I won't have to upgrade my cpu for a decade...


----------



## ibb27

agatong55 said:


> I would still take everything with a grain of salt, but doesn't AMD have a press conference sometime this month dedicated to the new ryzen chip? I forgot the dates of it, but I am sure we will get all the info them, but if the leaks are true and the 3700x can do what it actually says for that price point that would be major.


Yep - preview of Zen 2 arch at GDC 2019 (18-22 March).

https://www.guru3d.com/news-story/preview-of-zen-2-architecture-in-at-amd-gdc-2019-presentation.html

Edit: Actually conference will be on 20 March.

https://schedule.gdconf.com/session...software-optimization-presented-by-amd/864865


----------



## rancor

agatong55 said:


> I would still take everything with a grain of salt, but doesn't AMD have a press conference sometime this month dedicated to the new ryzen chip? I forgot the dates of it, but I am sure we will get all the info them, but if the leaks are true and the 3700x can do what it actually says for that price point that would be major.
> 
> Here are the prices translated into USD.
> Price: $99, $129, $129, $178, $229, $199, $299, $329, $449


GDC 2019 (March 18-22, 2019): AMD Ryzen Processor Software Optimization (Presented by AMD), with a glimpse of the next generation "Zen 2" x86 core architecture. We're probably only going to get some more details on architecture, not a launch; it is a game developers conference, after all.


----------



## EniGma1987

agatong55 said:


> I would still take everything with a grain of salt, but doesn't AMD have a press conference sometime this month dedicated to the new ryzen chip? I forgot the dates of it, but I am sure we will get all the info them, but if the leaks are true and the 3700x can do what it actually says for that price point that would be major.
> 
> Here are the prices translated into USD.
> Price: $99, $129, $129, $178, $229, $199, $299, $329, $449



I saw an article yesterday that said prices were leaked online translating to $549 for the top chip. So yeah, lots of rumors out there and we can't really trust any of them just yet. I doubt AMD will launch a 16 core processor this gen for under $500 though.


----------



## AlphaC

I feel that the 12 core might launch at $550 or something like that, but not a 16 core. Performance-wise, a 12 core would surely match an i9-9900K, so there'd be no reason to undercut it.

The TR 2920X is still $650 right now, the 16-core TR 1950X is nearly $600, and the 12-core TR 1920X is ~$430.


Maybe the top end part Ryzen 9 is actually 12 cores at launch.


There's still a need for low-end parts with graphics that are leaky mobile chips, so that's likely to be Ryzen 3.


It could be 

Ryzen 3 = 4c/8t (both failed mobile chips and 2 cores working per 7nm chiplets) --> sub $200 , competition for i3s (hence need iGPU)

Ryzen 5 = 6c/12t (3 cores working per 7nm chiplet or 6 cores due to 8 cores in one chiplet + graphics) --> sub $300 , competition for i5s (hence need iGPU)

Ryzen 7 = 8c/16t (either 8 core chiplet or two 4 cores) --> sub $400 , competition for i7s

Ryzen 9 = 12c/24t (6 cores working per chiplet with 2 chiplets, no GPU) --> competition for i9s, so around $500-600


instead of the overly optimistic 

$99-130 launchday hexcores (this isn't likely unless they're using two chiplets and 2+4 or 3+3 cores)
$180-230 octocores (4+4, 6+2 , 8+0 reasonable for iGPU-less ones but if you factor in its launch day pricing it would mean it doesn't match Intel clocks / performance per core)
$300-330 12-cores (6+6, 8+4)
$450-500 16-cores (8+8)


----------



## mouacyk

Let it rain on Intel.


----------



## Hwgeek

Correct me if I am wrong: over 90% of the PC market is below 6 cores, so AMD has a high chance of getting most of those users to upgrade to its platform with the new Ryzen launch (it doesn't necessarily need to be Ryzen 3000; maybe even Ryzen 1X00 or Ryzen 2X00, depending on the discounts after the new gen launches).
And after that, those new AMD owners won't have any good reason to upgrade to Intel's 10nm CPUs in 2020. I think this is AMD's goal with the Ryzen 3000 launch: to get all the old 2-4 core Intel PCs updated to new AMD platforms. Until Ryzen, they had no reason to upgrade, since all new Intel CPUs were just tick-tock.


----------



## Asterox

mouacyk said:


> Let it rain on Intel.


Well, it has been raining since the first Ryzen release; Intel is as wet as a bottle at the bottom of the sea. I know it is only MicroCenter, but I hope nobody in the US has any objections.

https://www.microcenter.com/product...-am4-boxed-processor-with-wraith-spire-cooler

https://www.microcenter.com/product/485473/amd-ryzen-7-1700x-34-ghz-8-core-am4-boxed-processor

https://www.microcenter.com/site/stores/default.aspx

AMD's 6c/12t CPUs today, and the Ryzen 2 6c/12t models in the future, will no doubt continue as the best-selling AMD CPUs. I can't imagine (my head hurts) Intel ever selling even some old 6c/6t CPU for $100, for example an i5-8400.


----------



## ajc9988

You are wrong for many reasons in thinking it is overly optimistic. Have you seen the retired engineer's wafer calculation of the price per chip for both the 7nm core die and the 12/14nm I/O die? Under $17 for the core and $13 and change for the I/O. Add to that binning costs, packaging costs, etc., and you still wind up in a good enough situation that the 6 core chips can be in the $100 range and you could easily do 16 cores on the mainstream chips at that (edit: meaning the rumored $500-560) price. What we are likely to see is a deck slide.
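For what it's worth, that kind of per-die estimate is easy to sanity-check. Here's a rough sketch in Python; the wafer price, defect density, and edge-loss factor are my own ballpark assumptions, not figures from the video.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300, edge_loss=0.9):
    """Gross die count, with a crude scribe/edge-loss factor."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2 * edge_loss)

def cost_per_good_die(wafer_cost, die_area_mm2, defect_density_per_cm2):
    """Poisson yield model: fraction of defect-free dies Y = exp(-A * D)."""
    yield_frac = math.exp(-(die_area_mm2 / 100) * defect_density_per_cm2)
    good_dies = dies_per_wafer(die_area_mm2) * yield_frac
    return wafer_cost / good_dies

# Assumed: ~$10k per 7nm wafer, ~80 mm^2 chiplet, 0.2 defects/cm^2
print(round(cost_per_good_die(10_000, 80, 0.2), 2))
```

With those assumptions a ~80 mm² chiplet comes out around $15 per good die, in the same ballpark as the quoted sub-$17 figure.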

Also, remember mainstream chips only have 2 memory controllers for dual channel and only have a limited PCIe lane count compared to HEDT.

Then you need to consider the price of the 8 core 1800X being $500 on release.

Considering the 3950X is rumored at around $1400, that will likely be the 32-core chip, while the one closer to $2400 or so is likely a 64-core chip for HEDT. That means the 16-core HEDT chip, if made, would become the entry level chip, like the 8-core 1900X was.

So your argument seems flawed.

Also, here is this:






Sent from my SM-G900P using Tapatalk


----------



## Hwgeek

Yep, those small chiplets and the separate I/O die will improve the yields and the cost per CPU; AMD is going to make very good $$$ on them while Intel keeps building their large monolithic CPUs with an iGPU.
Just think about it: Intel could have failed silicon in the:
*I/O area
*CPU core area
*iGPU area

*AMD has a separate small CPU core chiplet and a separate I/O die, and no iGPU to worry about.*
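The yield argument can be made concrete with the classic Poisson defect model (yield ≈ exp(-area × defect density)). The die areas and defect density below are illustrative guesses, not actual Intel or AMD numbers.

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies with zero fatal defects: Y = exp(-A * D)."""
    return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

D = 0.2  # assumed defects per cm^2 on a maturing process

# One big monolithic die (cores + iGPU + I/O) vs a small core-only chiplet
monolithic = poisson_yield(180, D)  # ~0.70
chiplet = poisson_yield(80, D)      # ~0.85

print(f"monolithic: {monolithic:.2f}, chiplet: {chiplet:.2f}")
```

And since the I/O die sits on a mature 12/14nm process, its defects cost even less; the small chiplet also lets the best-binned dies be spread across the whole product stack.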


----------



## mouacyk

Hwgeek said:


> Yep, those small chiplets and the separate I/O die will improve the yields and the cost per CPU; AMD is going to make very good $$$ on them while Intel keeps building their large monolithic CPUs with an iGPU.
> Just think about it: Intel could have failed silicon in the:
> *I/O area
> *CPU core area
> *iGPU area
> 
> *AMD has a separate small CPU core chiplet and a separate I/O die, and no iGPU to worry about.*


And AMD has pretty much caught up in IPC. They just have to find ways to clock them like Intel.


----------



## Hwgeek

If the Radeon VII (a larger and more complex piece of silicon) can OC to 2.2GHz, then I am sure TSMC's 7nm will solve this problem for AMD.
Also, it's very interesting to see how a separate CPU core chiplet could affect the OC and power usage when over-volting to get higher clocks.


----------



## EniGma1987

ajc9988 said:


> You are wrong for many reasons on thinking it is overly optimistic. Have you seen the retired engineer's die wafer calculation on price per chip of both 7nm core die and 12/14nm I/O die. Under $17 for core and $13 and change for the I/O. Add to that binning costs, packaging costs, etc., you wind up in a good situation so that the 6 core chips can be in the $100 range and you could do 16-cores on the main chips easily at that price. What we are likely to see is a deck slide.



Doesn't work that way. They have to price their lowest end product around cost, and then require enough price difference between models going up for proper price segmentation. AMD cannot release an 8 core model for ~$120 or it destroys the whole product stack, since no one will buy anything less than that top end one. Then a 16 core is just 1 extra die, so they price it at $140? Nope, not going to happen, because then no one would buy anything but a 16 core CPU. They MUST segment with enough price difference between models to make each SKU relevant and not destroy sales of the whole product stack, as well as make enough profit to fund the next generation.

Part of that segmentation will be doing things like adding $40-50 over the last SKU each time the core count jumps up by 2. So if the lowest end quad core is around $80 (which is probably around their cost + profit margin), then we will probably see the 16 core at $500 minimum, most likely even higher, since I only added bare-minimum product pricing segmentation when going up models in my estimates. And in those prices AMD also has to think about not destroying their own Ryzen product stack with too close or too low pricing; they also have to think about destroying their whole Threadripper product stack if they undercut its pricing by too large a margin.
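That segmentation ladder can be sketched numerically. The base price and step sizes below are my own illustrative numbers: a flat ~$45 step gives the bare floor, while widening the steps at higher core counts (as both vendors typically do) is what pushes a 16-core SKU toward the $500+ range.

```python
def price_ladder(base, steps):
    """Build SKU prices from a base price and per-tier increments."""
    prices = [base]
    for step in steps:
        prices.append(prices[-1] + step)
    return prices

cores = [4, 6, 8, 10, 12, 14, 16]

# Flat minimum segmentation: +$45 per extra 2 cores
flat = price_ladder(80, [45] * 6)                      # tops out at $350

# Widening steps (assumed) that protect margin at the high end
widened = price_ladder(80, [40, 50, 60, 75, 95, 120])  # tops out at $520

for c, f, w in zip(cores, flat, widened):
    print(f"{c:2d} cores: ${f} (floor) / ${w} (widened)")
```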


----------



## TheHorse

I'm already planning on buying a 3700X. X because binning. I wanna OC the **** out of it, and I do as much encoding as I do gaming now, so more cores will always help, even though not a lot of games will use 8 cores/16 threads. Also tired of the crap IMC on the 1600 I have now.


----------



## ajc9988

EniGma1987 said:


> Doesnt work that way. They have to price their lowest end product around cost, and then require enough price difference between models going up for proper price segmentation. AMD cannot release an 8 core model for $120~ or it destroys the whole product stack since no one will buy anything less than that top end one. Then a 16 core is just 1 extra die so they price it at $140? Nope not going to happen because then no one would buy anything but a 16 core CPU. They MUST segment with enough price difference between models to make each SKU relevant and not destroy sales of the whole product stack, as well as make enough profit to go into next generation. Part of that segmentation will be doing things like adding $40-50 over the last SKU each time the core count jumps up by 2. So if the lowest end quad core is around $80 (which is probably around their cost + profit margin), then we will probably see the 16 core at $500 minimum, most likely even higher since I only added on bare minimum product pricing segmentation when going up models in my estimates. And in those prices AMD also has to think about not only destroying their own Ryzen line product stack with too close or too low of pricing, they also have to think about destroying their whole Threadripper product stack when they undercut their pricing by too large a margin.


Edit: I went and reread and see the ambiguity. I meant 16-core at $500 easy with the 6 core being able to be done easily within the estimate of $130 or so. My bad.

Edit 2: I edited the post with a parenthetical to avoid future ambiguity. I thought it would be obvious I meant the 16-core within the rumored price range of $500-560, but can see how people could misunderstand that sentence. The problem on clarity was all mine.


----------



## CynicalUnicorn

Streetdragon said:


> Is in German, so maybe you need a translator:
> https://www.pcbuildersclub.com/en/2019/03/ryzen-3000-appeared-at-dealer-in-singapore-16-core-ryzen-9-3850x-costs-560/
> 
> Tldr.:
> *snip*
> 
> A list of 10 Ryzen 3000 Cpus got "leaked" on a Singapore side. More or less are the same informations as AdoredTV said. *Maybe the side just copied the informations.* Dont know, but would be nice if its true


It's the same source. Clockspeeds aren't finalized that many months in advance.


----------



## YaGit(TM)

These listed prices are in Singapore dollars...
The retailer these were "leaked" from is pretty much spot on with pricing for unreleased items.
Planning to get a 3700X.


----------



## Hwgeek

*AMD: 3rd Gen Ryzen Threadripper in 2019*
https://www.anandtech.com/show/14059/amd-3rd-gen-ryzen-threadripper-in-2019


----------



## KyadCK

AlphaC said:


> I feel that the 12 core might launch at $550 or something like that but not a 16 core. *Performance-wise a 12 core would match a i9-9900k* surely so there'd be no reason to undercut it.
> 
> TR 2920X is still $650 right now , TR1950X 16 core is nearly $600 and TR 1920X 12 core ~$430.
> 
> 
> Maybe the top end part Ryzen 9 is actually 12 cores at launch.
> 
> 
> There's still a new for low end parts with graphics that are leaky mobile chips so that's likely to be Ryzen 3.
> 
> 
> It could be
> 
> Ryzen 3 = 4c/8t (both failed mobile chips and 2 cores working per 7nm chiplets) --> sub $200 , competition for i3s (hence need iGPU)
> 
> Ryzen 5 = 6c/12t (3 cores working per 7nm chiplet or 6 cores due to 8 cores in one chiplet + graphics) --> sub $300 , competition for i5s (hence need iGPU)
> 
> Ryzen 7 = 8c/16t (either 8 core chiplet or two 4 cores) --> sub $400 , competition for i7s
> 
> Ryzen 9 = 12c/24t (6 cores working per chiplet with 2 chiplets, no GPU) --> competition for i9s, so around $500-600
> 
> 
> instead of the overly optimistic
> 
> $99-130 launchday hexcores (this isn't likely unless they're using two chiplets and 2+4 or 3+3 cores)
> $180-230 octocores (4+4, 6+2 , 8+0 reasonable for iGPU-less ones but if you factor in its launch day pricing it would mean it doesn't match Intel clocks / performance per core)
> $300-330 12-cores (6+6, 8+4)
> $450-500 16-cores (8+8)


It would do much better than tie with it.

There is zero reason to only sell crippled parts. The entire point of doing it the chiplet way is high yields.

The 1800X didn't need to significantly undercut the 5960X, let alone the 1700 coming in at one third the price. They didn't need to charge only $1000 for the 16-core Threadripper either. AMD's design is cheaper to make, thus they charge less, even when they have the performance (and security) lead.


----------



## AlphaC

I know, but if AMD has the performance lead they'd probably capitalize on it, especially with the AVX improvements. Ryzen has a power efficiency lead out of the box as well. As far as I am aware, the 8 core chiplets would probably be prioritized for EPYC and Threadripper first.



At $500 for 12 cores they'd still undercut the i9-9900K, i9-9900X, i9-9820X, i9-7920X, and i9-7900X.


If it's performing on par 8 core vs 8 core as they demoed, AMD can still claim that their 12 core is 50% less money per core at ~$500, or 50% more threads, vs the i9-9900K.


A 16 core will probably come in response to Intel's next CPUs rather than as a launch-day Ryzen 9.


----------



## ajc9988

AlphaC said:


> I know, but if AMD has the performance lead they'd probably capitalize on that especially with the AVX improvements. Ryzen has a power efficiency lead out of the box as well. As far as I am aware the 8 core chiplets would probably be prioritized for EPYC and Threadripper first.
> 
> 
> 
> At $500 for 12 cores they'd still undercut i9-9900K , i9-9900X, i9-9820X, i9-7920X, and i9-7900X.
> 
> 
> If it's performing on par 8 core vs 8 core as they demoed , AMD can still claim that their 12 core is 50% less money per core at ~$500 or 50% more threads vs i9-9900k.
> 
> 
> There is probably going to be 16 cores in response to Intel's next CPU rather than a launch-day Ryzen 9.


You confuse their goal. Market share and mind share are first and foremost. History has already shown that even when they have a better product, that alone doesn't turn into market share. Good pricing by undercutting the competition does.

With 7nm and chiplets being used to get smaller dies, you get higher yields per wafer, meaning eating less silicon cost and getting to market with relatively good speed.

Then we look at their being willing to undercut Intel's offerings by up to 40% or so for the first couple of Ryzen generations. The data on units sold from Mindfactory in Europe shows this has helped AMD outsell Intel on volume since around November or December, even though revenue is similar between the two companies. There are multiple reasons why, from recessionary indicators, to social unrest related to inequality, to the Intel chip shortage, to the overpricing of Nvidia cards (affecting buying trends for new PC builds). Now, I'm pointing mostly at DIY information, so the data may suggest something different in OEM and ODM channels as well as boutiques.

I agree that holding the 16-core mainstream back to see what may come with comet lake or ice lake makes sense, and may come later. The only thing I don't agree with is pricing.

By taking market share and showing the performance, they can break through Intel's mind share, which has been a huge issue due to the Piledriver/Steamroller/Excavator epoch of the company's history.

Instead, by releasing an equivalent 8 core at half the price, or double the cores of Intel at the same price, it becomes difficult to argue Intel offers much of anything over AMD except burning money. It knocks entry level up to six cores, basically striking at Intel's dual and quad core low end chips; it turns the 8 core into a mid-range mainstream chip; and a 12 core at where Intel was selling their 6 core completely undermines looking at Intel anywhere in that range.

In other words, these are the opening shots in a pricing war, something Intel had been trying to avoid. With the gap on single core and IPC closing, Intel will be left with little to argue for the price premium. That is how you take market share. If Intel cuts prices, investors get rattled; if they don't, they get priced out of the market.

Businesses regularly make the mistake of trying to maximize income in lieu of market share, pricing roughly in line with competitors instead. It is a balancing act, as you need to balance making revenue through volume vs. margin, and that balance point varies by industry. The semiconductor industry is pretty cutthroat. But I posit this to you: AMD, at current pricing, is doing 40% margins, while Intel, priced higher per core, is at 60%. AMD will maintain that 40%, but if they undercut Intel this deep on pricing, especially with the narrowing of the performance gap, how much can Intel cut prices and not harm shareholder value? In other words, and in summary, AMD is giving Intel a choice: cut prices or lose volume. When faced with that choice, it really speaks to what AMD is trying to accomplish on competition.

Sent from my SM-G900P using Tapatalk


----------



## kingduqc

So many of you keep saying Ryzen 9 is going to be a 12-core part. They've put two chiplets in those CPUs; of course they are going to be 16 cores. Why would they not release their best offering? Just because it's too good to be true? They have their R3/R5/R7 lineup to sell binned/defective dies already.


----------



## Hwgeek

"You don't kick someone while they are down, because they've already been hurt. Hurting them again, when they have not risen to challenge you, is disrespectful." .


----------



## rluker5

Hwgeek said:


> Yep, those small chiplets and the separate I/O die will improve the yields and the cost per CPU; AMD is going to make very good $$$ on them while Intel keeps building their large monolithic CPUs with an iGPU.
> Just think about it: Intel could have failed silicon in the:
> *I/O area
> *CPU core area
> *iGPU area
> 
> *AMD has a separate small CPU core chiplet and a separate I/O die, and no iGPU to worry about.*


Intel already has a separate I/O die, it is called the chipset. This new/old I/O die is the northbridge. They did away with that when they came out with the Athlon64 and Nehalem, respectively. Both of those architectures were huge advancements. There were ofc other big improvements, but the integration of the northbridge was one too. Hopefully the rebirth of the northbridge isn't too costly in terms of performance. It may even help, idk maybe they've figured out a way to lose some performance bottleneck. Reviews, benchmarks and comparison posts will let us know the details.

I'm glad we have such an open internet.

Edit: To me, an igpu is worth about $50. That is what I would pay extra over a non igpu i7/R7 to have one.


----------



## ajc9988

Hwgeek said:


> "You don't kick someone while they are down, because they've already been hurt. Hurting them again, when they have not risen to challenge you, is disrespectful." .


"Colonel Graff:
Tell me why you kept on kicking him. You had already won.

Ender Wiggin:
Knocking him down won the first fight. I wanted to win all the next ones, too."

Sent from my SM-G900P using Tapatalk


----------



## Hwgeek

*IMO: Ryzen 3000 will be announced before Computex.*

Judging by the delta between the day X370 motherboards got a new BIOS with "new upcoming processors support" and the actual Ryzen 2000 announcement, it was ~75 days ahead for Asus and less for others.
This week X470 boards got the same "new upcoming processors support", so looking at May 7th for the Ryzen 3000 announcement could be reasonable!

Also, for the 2400G, Asus released the BIOS two months ahead for X370.

So the question is: if they announce Ryzen 3000 on May 7th, when are they going to release it for reviews/pre-orders? During Computex or after?


----------



## LancerVI

Hwgeek said:


> *IMO: Ryzen 3000 will be announced before Computex.*
> 
> Judging by the delta between the day X370 motherboards got a new BIOS with "new upcoming processors support" and the actual Ryzen 2000 announcement, it was ~75 days ahead for Asus and less for others.
> This week X470 boards got the same "new upcoming processors support", so looking at May 7th for the Ryzen 3000 announcement could be reasonable!
> 
> Also, for the 2400G, Asus released the BIOS two months ahead for X370.
> 
> So the question is: if they announce Ryzen 3000 on May 7th, when are they going to release it for reviews/pre-orders? During Computex or after?


Yep. I noticed that too when I went to check for updates for my Asus X470-Pro. "AGESA 0070 for the upcoming processors"


----------



## AlphaC

Maybe May for the 50th anniversary of AMD?


----------



## Majin SSJ Eric

AlphaC said:


> Maybe May for the 50th anniversary of AMD?


This is what Jim speculated on in his latest Youtube video. He is estimating a soft/paper launch well ahead of Computex, with real availability coming online after Computex (from what I remember of his video).


----------



## speed_demon

Wow I haven't been this excited for new gen hardware in a long time. Here's hoping it meets or exceeds our expectations! :thumb:


----------



## dubldwn

Usually not big on paper launches, but this is a special circumstance. If AMD can get just enough copies out to reviewers and sell out on Newegg, I say go for it, because they haven't had the performance crown for at least 13 years. That would really light a fire under Intel.


----------



## Cyrious

rluker5 said:


> Hopefully the rebirth of the northbridge isn't too costly in terms of performance. It may even help, idk maybe they've figured out a way to lose some performance bottleneck.


The 3 biggest issues with the classic northbridge design were bandwidth, the fact that it wasn't full duplex (it could send or receive, but not both at the same time), and that all external communication had to pass through the socket-northbridge link before getting anywhere else. There was also the issue of latency.

The I/O die solves the bulk of these issues:

1. Infinity Fabric, being significantly newer technology, is inherently faster than any of the old FSB/QPI/HT links could ever be. It's wider and clocked higher than any of them. This helps bandwidth.
2. It's full duplex and can send and receive at the same time. This will help latency, as the CPU or northbridge doesn't have to wait for the other to finish sending before doing its own thing. It also doubles the overall bandwidth.
3. Communications, while still having to leave the cores to be routed, don't have to leave the socket until they're sent directly to the destination PHYs. This saves power, as the environment between the cores and the communications hardware is always the same and thus can be far more tightly tuned, resulting in lower latency and higher bandwidth.

There's also the rumor currently going around that AMD has decoupled the IF/mem clocks. If true, AMD (or the end user) could clock the core-I/O links much higher, further helping bandwidth and latency. It also helps that cores nowadays have access to much larger cache pools than what was available back in the day.

Will there be a penalty for going the non-integrated route with the I/O die? Most likely. But in AMD's eyes the bonuses of doing it this way outweigh the downsides, and so they're doing it.
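As a toy illustration of point 2: under a balanced read/write mix, a full-duplex link roughly doubles effective throughput over a half-duplex one, because the two directions no longer contend. The 32 GB/s link figure below is arbitrary, not an actual IF spec:

```python
# Sketch: effective throughput of a link under a given read/write mix.
def effective_throughput(link_gbps: float, read_frac: float, full_duplex: bool) -> float:
    """Peak payload throughput (GB/s) for a traffic mix.
    Half duplex: reads and writes time-share one channel, so the link rate caps the total.
    Full duplex: each direction gets the full link; the busier direction is the bottleneck."""
    if full_duplex:
        return link_gbps / max(read_frac, 1.0 - read_frac)
    return link_gbps

print(effective_throughput(32.0, 0.5, False))  # 32.0  (half duplex, 50/50 mix)
print(effective_throughput(32.0, 0.5, True))   # 64.0  (full duplex doubles it)
```

The doubling only holds for balanced traffic; an all-read workload sees no gain, which is why the "doubles the overall bandwidth" claim is a best case.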


----------



## rluker5

Cyrious said:


> The 3 biggest issues with the classic northbridge design were bandwidth, the fact that it wasn't full duplex (it could send or receive, but not both at the same time), and that all external communication had to pass through the socket-northbridge link before getting anywhere else. There was also the issue of latency.
> 
> The I/O die solves the bulk of these issues:
> 
> 1. Infinity Fabric, being significantly newer technology, is inherently faster than any of the old FSB/QPI/HT links could ever be. It's wider and clocked higher than any of them. This helps bandwidth.
> 2. It's full duplex and can send and receive at the same time. This will help latency, as the CPU or northbridge doesn't have to wait for the other to finish sending before doing its own thing. It also doubles the overall bandwidth.
> 3. Communications, while still having to leave the cores to be routed, don't have to leave the socket until they're sent directly to the destination PHYs. This saves power, as the environment between the cores and the communications hardware is always the same and thus can be far more tightly tuned, resulting in lower latency and higher bandwidth.
> 
> There's also the rumor currently going around that AMD has decoupled the IF/mem clocks. If true, AMD (or the end user) could clock the core-I/O links much higher, further helping bandwidth and latency. It also helps that cores nowadays have access to much larger cache pools than what was available back in the day.
> 
> Will there be a penalty for going the non-integrated route with the I/O die? Most likely. But in AMD's eyes the bonuses of doing it this way outweigh the downsides, and so they're doing it.


I would hope the new northbridge interconnects are better than the old ones; Ryzen needs far more communication capability than pre-Athlon 64 CPUs since it is many times faster. My concern is that it could be enough of a downgrade from what Ryzen currently has to create a new bottleneck. But I don't know this; the details will be shown when it comes out. Also, on-package but off-die latency isn't that much better than off-package latency, at least with my eDRAM CPUs.

But we will see. I imagine it will be easier to raise core clocks with simplified core dies. Maybe there will be a larger cache that makes RAM latency much less relevant. It is nice that we are seeing some creativity again. I hope good things come from it.


----------



## Cyrious

rluker5 said:


> I would hope the new northbridge interconnects are better than the old ones; Ryzen needs far more communication capability than pre-Athlon 64 CPUs since it is many times faster. My concern is that it could be enough of a downgrade from what Ryzen currently has to create a new bottleneck. But I don't know this; the details will be shown when it comes out. Also, on-package but off-die latency isn't that much better than off-package latency, at least with my eDRAM CPUs.
> 
> But we will see. I imagine it will be easier to raise core clocks with simplified core dies. Maybe there will be a larger cache that makes RAM latency much less relevant. It is nice that we are seeing some creativity again. I hope good things come from it.


Well, I think it's already been (semi-)confirmed that the L3 cache has received a significant increase per CCX, so that should help mask RAM latency.

The bigger boost is that Threadripper and EPYC are both going to see their memory latencies get significantly flatter, as all memory accesses only have to pass through the I/O die in and out instead of having to slingshot across a bunch of different nodes back and forth, especially nodes that don't have a direct link to the requesting node. That, and bandwidth, as any one CCX can now pull from all 8 memory channels simultaneously instead of being restricted to the local 2 or having to pull across IF. We'll see when the juicier details drop.


----------



## tpi2007

rluker5 said:


> Intel already has a separate I/O die; it is called the chipset. This new/old I/O die is the northbridge. They did away with that when they came out with the Athlon64 *and Nehalem, respectively*. Both of those architectures were huge advancements. There were ofc other big improvements, but the integration of the northbridge was one too. Hopefully the rebirth of the northbridge isn't too costly in terms of performance. It may even help, idk, maybe they've figured out a way to remove some performance bottleneck. Reviews, benchmarks and comparison posts will let us know the details.
> 
> I'm glad we have such an open internet.
> 
> Edit: To me, an igpu is worth about $50. That is what I would pay extra over a non igpu i7/R7 to have one.



Minor clarification on the part in bold: Intel didn't get rid of the northbridge with Nehalem right away, and even later it depended on the model, including the 32nm die shrink, Westmere. When Nehalem came out on the X58 platform, the northbridge was still there and hosted the 36-lane PCIe controller. What they did do was integrate the triple-channel memory controller on the CPU die, and this naturally also applies to the 32nm hexacores released later for the X58 platform. They only got rid of the northbridge later, for the mainstream P55 platform, and even then not quite: the 45nm quad-core Core i5s and i7s had the northbridge fully integrated, but the 32nm Celerons, Pentiums, Core i3s and dual-core Core i5s had a dual-chip arrangement on the CPU package, a 32nm CPU die with the PCIe controller on board plus a separate 45nm iGPU chip that also hosted the memory controller, which resulted in increased memory latency on those models.


----------



## rluker5

Thank you for the detailed clarification. 

I'm not saying the chiplet I/O is a bad thing, just that it might have some drawbacks, like in single-core games. But everything CPU is moving towards better utilization of more cumulative processing power, and the chiplet approach goes along with that. There is only so long that single-core and memory-latency-dependent games can mask that a 5775C has 80% the processing power of a 7700K, 53% of an 8700K, 50% of a 2700, and 40% of a 9900K. And breaking up the cores is a great way to lower heat density and price. There will just be some fodder for trolling, is all. I don't think it will be some unilateral victory in the short term like AdoredTV implies.


----------



## Hwgeek

Looks like the Ryzen 3000 announcement is closer than we thought:
A lot of motherboards got a new BIOS with Ryzen 3000 support (even A320 on ASUS!), a user on Reddit asked MSI and they confirmed it's for the new Ryzen, and the TPU DB listed the lineup:
https://www.reddit.com/r/Amd/comments/b08h4p/techpowerup_listed_and_matched_the_predicted/eid8zd2/
https://www.techpowerup.com/cpudb/?codename=Zen 2&sort=generation (note that only Ryzen 7/9 are active!)


----------



## speed_demon

Wow, a 5GHz clock out of the box is quite the accomplishment. Things are looking more and more enticing.


----------



## keikei

Hwgeek said:


> Looks like the Ryzen 3000 announcement is closer than we thought:
> A lot of motherboards got a new BIOS with Ryzen 3000 support (even A320 on ASUS!), a user on Reddit asked MSI and they confirmed it's for the new Ryzen, and the TPU DB listed the lineup:
> https://www.reddit.com/r/Amd/comments/b08h4p/techpowerup_listed_and_matched_the_predicted/eid8zd2/
> https://www.techpowerup.com/cpudb/?codename=Zen 2&sort=generation (note that only Ryzen 7/9 are active!)


A 16-core/32-thread CPU @ 125W TDP sounds amazing. March release?! Take my $$$.


----------



## battlenut

I thought these chips were not supposed to be here till June. This comes directly from TechRadar, plus I know the AMD CEO said this as well.

AMD is hard at work on Zen 2, the architecture behind AMD Ryzen 3rd Generation. And, according to the latest internet rumors, we could see core counts rising up to 16 and clock speeds up to 5.0GHz. If any of this is true, the desktop processor landscape is going to be extremely compelling when these next-gen chips release sometime in mid 2019.


----------



## Streetdragon

I thought the 3800X would boost/turbo to 5GHz too. If not, I will go for the 12-core monster^^.

But still, nice, and I'll keep my hopes (and wallet) high


----------



## King4x4

/Look 5930k
/SadFace
/Bury 5930k
/Buy AMD


----------



## ajc9988

Streetdragon said:


> I thought the 3800X would boost/turbo to 5GHz too. If not, I will go for the 12-core monster^^.
> 
> But still, nice, and I'll keep my hopes (and wallet) high


Yeah, you are referencing the wrong SKU. The 3700X is the 12-core and the 3600X is the 8-core, according to rumors. The fastest 8-core is rumored to be 4.8GHz, while the fastest 12-core is 5GHz and the fastest 16-core is 5.1GHz.

With that being the rumor, I cannot wait to see the specs on TR!


battlenut said:


> I thought these chips were not supposed to be here till june. This comes directly from TechRadar, plus I know that AMD CEO said this also.
> 
> AMD is hard at work on Zen 2, the architecture behind AMD Ryzen 3rd Generation. And, according to the latest internet rumors, we could see core counts rising up to 16 and clock speeds up to 5.0GHz. If any of this is true, the desktop processor landscape is going to be extremely compelling when these next-gen chips release sometime in mid 2019.


Correct, plus the slide shows mid-year, which places it June to August, although all rumors point to a Computex announcement and a July release.

Either way, we'll know more in a week at GDC, where more info on Zen 2 is set to be publicized.


keikei said:


> A 16 core/ 32 thread cpu @ 125 tdp sounds amazing. March release?! Take my $$$.


March isn't happening. But that is OK, as Intel is said to have a deepening shortage, this time in mobile chips, possibly ceding up to 8% market share in mobile.

Sent from my SM-G900P using Tapatalk


----------



## zealord

Yeah, I would love for the next AMD CPUs to have high clocks, because that is certainly the area where they lag most and is (probably) responsible for the worse single-threaded performance compared to Intel.
The 3600X with a 4.8GHz boost sounds really good. With a mild overclock, 5.0GHz should be no problem, I hope.

Game performance is all that matters to me


----------



## ZealotKi11er

zealord said:


> Yeah, I would love for the next AMD CPUs to have high clocks, because that is certainly the area where they lag most and is (probably) responsible for the worse single-threaded performance compared to Intel.
> The 3600X with a 4.8GHz boost sounds really good. With a mild overclock, 5.0GHz should be no problem, I hope.
> 
> Game performance is all that matters to me


Clock speed is not where AMD lacks. It's part of the reason, but in games it's mostly architecture. Look at something like the 8400 vs. the 2600X: the 2600X has higher clock speeds but still can't beat the 8400 in gaming.


----------



## Nizzen

ZealotKi11er said:


> Clock speed is not where AMD lacks. It's part of the reason, but in games it's mostly architecture. Look at something like the 8400 vs. the 2600X: the 2600X has higher clock speeds but still can't beat the 8400 in gaming.


The main key is latency.

I tested a 2700X at max OC vs. a 9900K at max OC in the AIDA memory benchmark.

The 2700X OC had ~70% higher latency than the 9900K. It was like 63ns vs. 37ns or so... it was in that range...
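For what it's worth, those quoted AIDA figures do check out as roughly 70%:

```python
# 63 ns vs. 37 ns works out to roughly 70% higher latency for the 2700X.
ryzen_ns, intel_ns = 63.0, 37.0
print(round((ryzen_ns / intel_ns - 1) * 100))  # 70
```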


----------



## ajc9988

ZealotKi11er said:


> Clock speed is not where AMD lacks. It's part of the reason, but in games it's mostly architecture. Look at something like the 8400 vs. the 2600X: the 2600X has higher clock speeds but still can't beat the 8400 in gaming.


At stock, that is correct; overclocked, that is false.

What that is about is the mix of frequency and IPC, as well as a program's utilization of threading. Intel has higher IPC and higher stock frequency; frequency controls cycles, so it is multiplied by the instructions per cycle to get the performance.

After overclocking, a 2600X achieves a higher frequency than the 8400, which closes the gap and in fact edges out the 8400.

This is already widely known, so I don't know why you don't add the caveat of overclocking or an explanation.
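A toy sketch of that performance = IPC x clock argument. The IPC ratio and all-core clocks below are illustrative guesses, not measured values:

```python
# Toy model: performance ~ IPC x clock (GHz). Figures are assumptions for illustration:
# ~4% IPC edge for Coffee Lake over Zen+, ~3.8 GHz all-core for the locked 8400,
# ~4.2 GHz all-core for an overclocked 2600X.
def perf(ipc: float, ghz: float) -> float:
    return ipc * ghz

i5_8400  = perf(1.04, 3.8)   # locked part: higher IPC, lower clock
r5_2600x = perf(1.00, 4.2)   # overclocked: lower IPC, higher clock

print(i5_8400 < r5_2600x)  # True: the clock advantage outweighs the IPC deficit
```

On these assumed numbers the overclocked 2600X comes out ahead, which is the point being made: IPC deficits can be overcome through clock speed on locked competition.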

Sent from my SM-G900P using Tapatalk


----------



## mouacyk

ZealotKi11er said:


> Clock speed is not where AMD lacks. It's part of the reason, but in games it's mostly architecture. Look at something like the 8400 vs. the 2600X: the 2600X has higher clock speeds but still can't beat the 8400 in gaming.


So this is what Ryzen+ is capable of when the IMC is unleashed (not 24/7):

src: https://www.reddit.com/r/Amd/comments/8dwn83/4000mhz_ram_with_2700x/

That is still about 40% higher latency than the average 24/7 Coffee Lake setup, which I'm pegging at 40ns (the best 24/7 setups can reach down to 34ns). Against core count and clock speed differences, this is overlooked and will affect gaming more often than people expect (compounded by inefficient scheduling of Ryzen CCX threads in Windows that invalidates cache). Most of the games tested in the press have large binaries and/or assets that will not fit into L3 cache, either. Prefetching only does so much to smooth it out, and will not magically reduce the true memory fetch latency.

IF is great for adding cores, but beyond 4-6 cores it's irrelevant for most gaming anyway, and then it ends up adding latency to other subsystems. Tuned daily Vishera systems on DDR3-2400 had already reached the low-52ns range for memory fetch latency. IF seems a step in the wrong direction for gamers... it doesn't affect every frame, but it will have the maximum effect on the worst frame.


----------



## ajc9988

mouacyk said:


> So this is what Ryzen+ is capable of when the IMC is unleashed (not 24/7):
> 
> 
> 
> src: https://www.reddit.com/r/Amd/comments/8dwn83/4000mhz_ram_with_2700x/
> 
> 
> 
> That is still about 40% higher latency than the average 24/7 Coffee Lake setup, which I'm pegging at 40ns (the best 24/7 setups can reach down to 34ns). Against core count and clock speed differences, this is overlooked and will affect gaming more often than people expect (compounded by inefficient scheduling of Ryzen CCX threads in Windows that invalidates cache). Most of the games tested in the press have large binaries and/or assets that will not fit into L3 cache, either. Prefetching only does so much to smooth it out, and will not magically reduce the true memory fetch latency.
> 
> 
> 
> IF is great for adding cores, but beyond 4-6 cores it's irrelevant for most gaming anyway, and then it ends up adding latency to other subsystems. Tuned daily Vishera systems on DDR3-2400 had already reached the low-52ns range for memory fetch latency. IF seems a step in the wrong direction for gamers...


I'd argue you are missing the forest for the trees. IPC is influenced by, and rarely separated from, the IF speed, the latency of memory and cache, etc. Since IPC compounds those, focusing on the independent values doesn't necessarily give you a representative value for performance. Instead, it can give you areas for improvement to increase IPC, which varies by task (meaning every program's ability to use the specific hardware for a specific task is different).

This is why I argue that examining IPC for the task, together with the frequency of the hardware (which controls cycles), is still the most accurate way to look at this. Zen 2 allegedly has 11-15% higher IPC than Zen, and up to almost 30% in floating point depending on the task. Intel has a 7% IPC advantage over Zen and 3-4% over Zen+. That means current Intel processors will have lower IPC than Zen 2.

But IPC can be overcome through clock speed; for locked Intel chips, that is why the 2600X, when overclocked, can beat the 8400. And then there are SMT vs. HT efficiencies, meaning single-thread and multi-thread performance scale differently depending on architecture.

Now, that means with the IPC and clock speed increases, the last piece of the puzzle is software optimization by software vendors to better utilize the hardware.
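Putting those rumored deltas on a common baseline makes the claim concrete. All figures here are just the rumors above (Zen normalized to 1.00), not measurements:

```python
# Normalizing the rumored IPC deltas to a Zen = 1.00 baseline.
zen   = 1.00
intel = zen * 1.07   # "Intel has a 7% IPC advantage over Zen"
zen2  = zen * 1.13   # midpoint of the rumored 11-15% uplift over Zen

# Implied Zen 2 IPC lead over current Intel cores:
print(round(zen2 / intel - 1, 3))  # 0.056, i.e. roughly a 6% lead
```

So if both the 7% and the 11-15% figures hold, Zen 2 would flip the IPC gap to roughly 4-7% in AMD's favor rather than merely closing it.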

Sent from my SM-G900P using Tapatalk


----------



## rluker5

ajc9988 said:


> I'd argue you are missing the forest for the trees. IPC is influenced by, and rarely separated from, the IF speed, the latency of memory and cache, etc. Since IPC compounds those, focusing on the independent values doesn't necessarily give you a representative value for performance. Instead, it can give you areas for improvement to increase IPC, which varies by task (meaning every program's ability to use the specific hardware for a specific task is different).
> 
> This is why I argue that examining IPC for the task, together with the frequency of the hardware (which controls cycles), is still the most accurate way to look at this. Zen 2 allegedly has 11-15% higher IPC than Zen, and up to almost 30% in floating point depending on the task. Intel has a 7% IPC advantage over Zen and 3-4% over Zen+. That means current Intel processors will have lower IPC than Zen 2.
> 
> But IPC can be overcome through clock speed; for locked Intel chips, that is why the 2600X, when overclocked, can beat the 8400. And then there are SMT vs. HT efficiencies, meaning single-thread and multi-thread performance scale differently depending on architecture.
> 
> Now, that means with the IPC and clock speed increases, the last piece of the puzzle is software optimization by software vendors to better utilize the hardware.
> 
> Sent from my SM-G900P using Tapatalk


I still don't see a scenario where Zen 2 makes RAM speed and latency irrelevant.
Processing power is more important, but it is different: the processor is fed its work through RAM. Just like using an SSD instead of an HDD, RAM latency will matter in some uses. It is part of the whole package.
Processing power will be more important when it is inadequate. We haven't seen a separate RAM controller of this nature, or in a processor this fast, before, so its effects on modern games are not yet known. But I'm guessing it will be like a very fast processor with crap RAM; kind of like Ryzen 1 vs. Intel, but more so. But since then we also have the whole latency-sapping Spectre stuff happening to Intel chips, which seems to have no end, so idk. Hope for the best?


----------



## ajc9988

rluker5 said:


> I still don't see a scenario where Zen 2 makes RAM speed and latency irrelevant.
> Processing power is more important, but it is different: the processor is fed its work through RAM. Just like using an SSD instead of an HDD, RAM latency will matter in some uses. It is part of the whole package.
> Processing power will be more important when it is inadequate. We haven't seen a separate RAM controller of this nature, or in a processor this fast, before, so its effects on modern games are not yet known. But I'm guessing it will be like a very fast processor with crap RAM; kind of like Ryzen 1 vs. Intel, but more so. But since then we also have the whole latency-sapping Spectre stuff happening to Intel chips, which seems to have no end, so idk. Hope for the best?


IPC INCLUDES RAM speed and latency, as those keep the processor fed with data. If the processing power of the core becomes bottlenecked by RAM speed or latency, the core waits for the data to arrive before it can process it. In other words, although RAM speed and latency impact IPC, if IPC can be increased, that matters more than focusing solely on the latencies that affect it.

What you are missing is that the Infinity Fabric speed on Zen 1 and Zen+ was tied to the RAM speed, which also affected IF latency. For Zen 2, the IF controller, and thereby the speed at which the IF runs, was divorced from the speed of the RAM, meaning RAM will influence the ability to move data less on Zen 2 than on Zen 1 and Zen+, for a start. That is likely where a large part of the claimed IPC increase for Zen 2 comes from. In addition, Infinity Fabric 2 doubled the bandwidth while lowering the latency of the fabric. The final question is whether they have a separate clock gen for overclocking the IF independently, similar to overclocking Intel's mesh.

To get back to your point, we do know how it affects performance. A 2700X vs. a 9900K, both stock, resulted in an 11% deficit on average in games, according to some reviewers, and a 22% deficit in productivity workloads. If we assume the 2700X was boosting a single core to 4.35GHz, or 4.2GHz when multithreaded, against Intel's 5GHz single-core and 4.7GHz all-core (unless the VRM caused it to throttle below 4.7; I don't remember the exact setup for the benchmark), then we can assume the increase in IPC and frequency for Zen 2 will fully close the gaming gap this gen, or make it relatively meaningless, if the rumors are true.

Backing up further to your comment overall, I think you are confusing processing power with IPC. IPC changes by task because we have no way to separate it from the rest of the components that influence that metric: cache speeds, branch prediction algorithms, pipeline widths, store and retire, etc. So, you can calculate relative IPC by measuring the instructions executed (if doing a fixed number), the number of cycles elapsed, etc. Frequency determines the cycles completed in a time period, so what we usually do is clock the CPUs to the same frequency so that cycles are held relatively constant. After that, we have them perform the exact same task, thereby controlling the instruction stream sent for processing. That gives us the IPC values we then compare to get the relative IPC of one processor to another. Averaged across different tasks (IPC varies by task due to a host of factors, including instruction sets, flag optimizations in compiling, etc.), the advantage Intel CPUs had over Zen 1 was 7%, and 3-4% over Zen+. These values ALREADY include the latencies of the RAM, cache, and IF.

As such, you cannot look at those latencies in a vacuum to conclude performance. Take, for example, cache size. If you have a higher latency, but the cache is both faster and larger, you may have fewer memory calls, thereby going outside the cache less often, which may make up part of the deficit caused by that extra latency on a memory call. This is why I am calling the numbers you point to "the trees." They are important and contribute to the function of the CPU, but IPC is the "forest," meaning it already includes those trees and others, all of which make up the whole of performance. Even benchmarks generally measure the IPC of the CPU, not the individual components, although the individual data can often also be measured (although not fully correlated to IPC).
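The measurement recipe described above (equal clocks, same workload, IPC = instructions / cycles) boils down to something like this; the instruction and cycle counts are made up purely for illustration:

```python
# Relative-IPC sketch: both CPUs locked to the same clock, running the same
# fixed instruction stream; compare retired instructions per elapsed cycle.
def ipc(instructions: int, cycles: int) -> float:
    return instructions / cycles

cpu_a = ipc(8_000_000_000, 5_000_000_000)   # 1.6 IPC on this workload
cpu_b = ipc(8_000_000_000, 4_200_000_000)   # same work in fewer cycles

print(round(cpu_b / cpu_a, 2))  # 1.19 -> B has ~19% higher IPC on this task
```

Since the instruction stream and clock are held constant, everything else (cache latency, branch prediction, RAM, IF) is baked into the cycle count, which is exactly why those latencies are already "inside" the IPC number.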

Does that make more sense?


----------



## mouacyk

ajc9988 said:


> As such, you cannot look at those latencies in a vacuum to conclude performance. Take for example the cache size. If you have a higher latency, but the cache is both faster and larger, you may have fewer memory calls, thereby reducing going outside of cache as often, which may make up part of the deficit caused from that extra latency on a memory call. This is why I am calling those numbers you point to "the trees." They are important and contribute to the function of the CPU, but IPC is the "forest," meaning it already includes those trees and other trees, all of which make up the whole of performance. Even benchmarks, generally, measure the IPC of the CPU, not the individual components, although the individual data can often also be measured (although not fully correlated to IPC).
> 
> Does that make more sense?


Yes. Except IF is a rather big tree in the Ryzen/Ryzen+ forest. Is that not why Ryzen 2 is de-coupling it, for the potential to increase its memory clock speed and lower the latency? If this is achieved, its worst-case performance will be uplifted, shifting its average performance upward -- even before any new clocking potential is added. It's definitely the tree to look for in the new Ryzen 2 forest.

In a similar fashion, SSD improved the general performance of a computer.


----------



## ajc9988

mouacyk said:


> Yes. Except, IF is a rather big tree in the Ryzen/Ryzen+ forest. Is that not why Ryzen 2 is de-coupling it for the potential to increase its memory clock speed and lower the latency? If this is achieved, its worst case performance will be uplifted, and in turn, shift its average performance upward -- even before any new clocking potential is added. It's definitely the tree to look for in the new Ryzen 2 forest.
> 
> In a similar fashion, SSD improved the general performance of a computer.


Yes, I even admitted earlier in the post that the decoupling was likely a large contributor to the IPC increase. But it is not the SOLE contributor. AMD is using nearly 2x the L3 victim cache, which will reduce memory calls, thereby reducing the impact of going off-die to memory, as well as increasing the L2 cache, widening pipelines so that wider instructions can be used, etc. Each of those changes contributes to the IPC. How much each contributes is what I am saying cannot easily be separated out. Each is important in its own way to the overall performance uplift.

Also, by moving the memory controller to the I/O die instead of the core CPU die, you are able to bin those components separately. Before, let's say you had great-performing cores, but a non-critical defect in the IMC meant you could get 2666 memory stable, but not much above that. By moving those components off-die, you no longer have to "deal with" the lower-performing IMC in order to use that die. Instead, you can bin the core dies and the I/O dies and match them accordingly, charging a premium for the better-performing memory controller paired with the faster core dies.

So there are many variables in play. This is why I'm arguing NOT to put it all on one tree, even a large one, as it is not the only change happening. With that said, I will reiterate what I said at the beginning of this post: it may be a large contributor to the IPC increase. The truth is, we don't know what percentage, and that is something we all should remember. The changes, and how the components work together, do not happen in a vacuum.

https://wccftech.com/amds-next-gen-zen-2-7nm-cpu-core-to-feature-double-l3-cache-size/
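The yield side of the chiplet argument can be sketched with the classic Poisson yield model, Y = exp(-D * A). The defect density and die areas below are purely illustrative; in reality the mature-node I/O die would have a lower D than the 7nm chiplet, strengthening the effect:

```python
import math

# Poisson yield model: probability a die of a given area has zero defects.
def yield_rate(d_per_mm2: float, area_mm2: float) -> float:
    return math.exp(-d_per_mm2 * area_mm2)

def cost_per_good_mm2(d: float, area: float) -> float:
    # One defect scraps the whole die, so effective silicon cost scales with 1/yield.
    return area / yield_rate(d, area)

D = 0.005                                    # defects per mm^2 (assumed)
monolithic = cost_per_good_mm2(D, 180.0)     # one big die: a defect kills all 180 mm^2
chiplets   = cost_per_good_mm2(D, 75.0) + cost_per_good_mm2(D, 105.0)  # core die + I/O die

print(round(monolithic / chiplets, 2))  # ~1.54x: a defect scraps a small die, not the whole chip
```

Same total area, same defect density, yet the split parts come out roughly a third cheaper per good package, before even counting the binning flexibility described above.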


----------



## bigjdubb

I haven't been keeping up with all the rumors surrounding the next Ryzen processors. Has there been anything mentioned regarding its memory speed capabilities? What about PCIe lanes? Will we be able to run 3 NVMe drives?

I love my 2700X, but the platform's PCIe situation could be better, and motherboard manufacturers really need to step up to the plate with a product stack for X570 that rivals their Intel product stack.


----------



## ajc9988

bigjdubb said:


> I haven't been keeping up with all the rumors surrounding the next Ryzen processors. Has there been anything mentioned regarding its memory speed capabilities? What about PCIe lanes? Will we be able to run 3 NVMe drives?
> 
> I love my 2700X, but the platform's PCIe situation could be better, and motherboard manufacturers really need to step up to the plate with a product stack for X570 that rivals their Intel product stack.


Memory speed is rumored to officially support 3200MHz, up from 2666MHz. I would take it the ability to overclock the memory will also be higher this time around as well, helping to level the field. 

As to PCIe lanes available, no clue. Intel gets around this by running two NVMe drives off the PCH, which then is bottlenecked with only 4 lanes to the CPU. That means only 1 NVMe is direct to CPU and the graphics card. This is something I am keeping an eye out for, as I'm hoping that they increase the mainstream to 32 PCIe lanes, half of TR, which is half of Epyc. Really, the PCIe lanes and quad-channel memory is the reason I stepped up to HEDT in the first place.

With that said, instead of utilizing the lanes better, many MB mfrs for TR basically ported over designs from X299. Now, with market growth in mainstream, HEDT, and servers, along with customer feedback, the second gen boards (X470) have gotten better, along with the MSI MEG Creation board, the Zenith Extreme Alpha, and the Giga board. So, I assume that since AMD is first to market with PCIe 4.0, more attention will be given to AMD boards for Zen 2 than was seen for Zen and Zen+. Especially since PCIe 4.0 NVMe drives are already being designed, and AMD may see their first use rather than Intel, unless they don't arrive until next year (by which point Intel will likely also support PCIe 4.0).


----------



## rluker5

ajc9988 said:


> IPC INCLUDES ram speed and latency as those keep the processor fed with data. If the processing power of the core becomes bottlenecked by the ram speed or latencies, then the core waits for the data to arrive to process it. In other words, although ram speed and latency impact IPC, if IPC is able to be increased, that is more important than focusing solely on the latencies that affect the IPC.
> 
> What you are confusing is that the Infinity Fabric speed on Zen 1 and Zen+ was tied to ram, and that speed also affected IF latency. For Zen 2, the IF controller, and thereby the speed at which the IF is run, was divorced from the speed of the ram, meaning that ram will influence the ability to move data less on Zen 2 than Zen 1 and Zen+, for a start. That is likely where a large part of the IPC increase claimed for Zen 2 comes from. In addition, Infinity Fabric 2 doubled the bandwidth while lowering the latency of the infinity fabric. The final question is whether they have a separate clock gen for overclocking the IF independently, similar to overclocking Intel's mesh.
> 
> To get back to your point, we do know how it effects performance. A 2700X vs a 9900K, both stock, resulted in an 11% deficit on average for games, according to some reviewers, and 22% deficit on productivity workloads. If we assume that the 2700X was boosting single core to 4.35GHz or when multithreaded using only 4.2GHz, and Intel's 5GHz with 4.7GHz all core, except if the VRM caused it to throttle below 4.7 (don't remember the exact setup for the benchmark), then we can assume the increase in IPC and Frequency for Zen 2 will fully close the gaming gap this gen, or make it relatively meaningless, if rumors are true.
> 
> Backing up further to your comment overall, I think you are confusing processing power for IPC. IPC changes by task because we have no way to separate it from the rest of the components that influence that metric. That includes cache speeds, predictive branch algorithms, pipeline widths, store and retire, etc. So, you can calculate relative IPCs by measuring the instructions sent (if doing a fixed number), the number of cycles elapsed, etc. The frequency affects the cycles done in a time period. So, what we usually do is clock CPUs to the same frequency so that the cycles are relatively held constant. After that, we have them perform the exact same task, thereby trying to control the instruction set sent for processing. That gives us the IPC values we then compare to get the relative IPC of one processor to another. Hence the average advantage that Intel CPUs had over Zen 1 CPUs, from comparing performance at different tasks (IPC varies by task due to a host of factors, including instruction sets and flag optimizations in compiling, etc.), was 7%, and 3-4% over Zen+. These values ALREADY include the latencies of the ram, cache, and IF.
> 
> As such, you cannot look at those latencies in a vacuum to conclude performance. Take for example the cache size. If you have a higher latency, but the cache is both faster and larger, you may have fewer memory calls, thereby reducing going outside of cache as often, which may make up part of the deficit caused from that extra latency on a memory call. This is why I am calling those numbers you point to "the trees." They are important and contribute to the function of the CPU, but IPC is the "forest," meaning it already includes those trees and other trees, all of which make up the whole of performance. Even benchmarks, generally, measure the IPC of the CPU, not the individual components, although the individual data can often also be measured (although not fully correlated to IPC).
> 
> Does that make more sense?


I think you are confusing theoretical ipc and practical ipc. People use theoretical ipc from compared, canned benchmarks and try to extrapolate the practical ipc they would get if they had that cpu instead of their own. Also if they are looking for bottlenecks. Your ipc is too far reaching and encompasses all variables. Nobody tests all of those things while eliminating all other bottlenecks. Reviewers are still figuring out how to run a game with a cpu bottleneck. Canned cinebench is different than online multiplayer, or no loading screen open world games. And so is some universal omniscient average of everything.

I suspect the separate northbridge will add between 15 and 20ns of latency to ram access and I don't know if AMD has some sort of silver bullet that can make up the difference. They haven't shown us one yet, but that doesn't mean they won't. I just think it will hurt the performance of Zen 2 in the same type of (mostly gaming) scenarios that Ryzen 1 had difficulties with. Otherwise Zen 2 should be faster. But I don't know this, I have to wait and see like everyone else.

For someone claiming to see the forest, you sure spend a lot of time talking about trees. I say go empirical. If you are increasing clocks and adding ram latency at the same time, find some game benchmark that used fast core clocks and worse ram timings, but same ram frequency to compare it to. They did a bunch of ram quality comparisons when Ryzen came out, maybe there is one like this.

Usually core clocks are much more important, but sometimes aspects of performance can be held back by high latency ram. That's the forest I'm talking about in one sentence.


----------



## bigjdubb

Half of TR would be awesome. I would much prefer that over Intel's solution, but I prefer Intel's solution to AMD's current implementation. I would like for MSI, Asus, Gigabyte and Asrock to offer a halo product for AM4 like they do with threadripper and Intel. It feels like they assume only penny pinchers would buy AM4 products.


----------



## tpi2007

keikei said:


> A 16 core/ 32 thread cpu @ 125 tdp sounds amazing. March release?! Take my $$$.



I don't believe in 16 cores just yet, there is no TDP headroom for that in my opinion, while keeping competitive clocks. But there is for 12 cores. My bet is that they are reserving the 16 core version for the Zen 2+ refresh next year, which they will probably do on 7nm EUV, which will give them the headroom that they need to pull it off.


----------



## technodanvan

bigjdubb said:


> I would like for MSI, Asus, Gigabyte and Asrock to offer a halo product for AM4 like they do with threadripper and Intel.


Frankly, I have yet to see an impressive TR4 board either.


----------



## ajc9988

rluker5 said:


> I think you are confusing theoretical ipc and practical ipc. People use theoretical ipc from compared, canned benchmarks and try to extrapolate the practical ipc they would get if they had that cpu instead of their own. Also if they are looking for bottlenecks. Your ipc is too far reaching and encompasses all variables. Nobody tests all of those things while eliminating all other bottlenecks. Reviewers are still figuring out how to run a game with a cpu bottleneck. Canned cinebench is different than online multiplayer, or no loading screen open world games. And so is some universal omniscient average of everything.
> 
> I suspect the separate northbridge will add between 15 and 20ns of latency to ram access and I don't know if AMD has some sort of silver bullet that can make up the difference. They haven't shown us one yet, but that doesn't mean they won't. I just think it will hurt the performance of Zen 2 in the same type of (mostly gaming) scenarios that Ryzen 1 had difficulties with. Otherwise Zen 2 should be faster. But I don't know this, I have to wait and see like everyone else.
> 
> For someone claiming to see the forest, you sure spend a lot of time talking about trees. I say go empirical. If you are increasing clocks and adding ram latency at the same time, find some game benchmark that used fast core clocks and worse ram timings, but same ram frequency to compare it to. They did a bunch of ram quality comparisons when Ryzen came out, maybe there is one like this.
> 
> Usually core clocks are much more important, but sometimes aspects of performance can be held back by high latency ram. That's the forest I'm talking about in one sentence.


Your distinction between theoretical and practical is meaningless here. You are mistaking synthetic for theory. Synthetic is still practical IPC, theory is the mathematical computation of what is likely going to be the highest achievable IPC on paper. There is a difference. 

Now, to address the synthetic comment, I addressed that by saying directly that IPC changes by task, task meaning program. What that means is that performance varies by program, which is already seen from game play benchmarks and variations due to the coding and compiling of the program. The easiest example of this is Intel's compiler, which purposely gimped AMD CPUs by making them run a slower instruction set and not optimizing for them, which Intel got slapped with an antitrust violation for doing. That means specific program-level optimizations for CPUs can and do make a difference. This is also why AMD has sued some benchmark companies over stacking the deck with tasks that don't necessarily mirror real-life workloads, favoring tasks that Intel CPUs do well in while reducing tasks that AMD does well in. This is different than the compiling optimizations and selection of instruction sets, etc.

That means that IPC varies program to program, and that seeing the performance on the specific program you use is the only way to get a deeper understanding of the difference in the hardware. That is why some reviewers measure it by game, bottlenecking the CPU by pairing the most powerful GPU with a lower resolution, although below 1080p is NOT realistic in any sense of the word. The settled practices have already been teased out; it was Intel that suggested reviewers use 720p, which only a couple of places actually adopted, one being PCPer, whose Ryan Shrout now works for Intel in a marketing position where he designs tests to over-inflate the performance of Intel CPUs for a living. Hmmm....

As to talking latency again, maybe you should go look up the rumored reduction in latency. You like to speculate on the increase of going off core die, surely you trying to find that value would solve your question whether AMD has a silver bullet or not. Same with an examination of trying to equalize the latency from the I/O die to the core chiplets on TR and Epyc (likely helps with the stale data potential). 

As to IPC being too general, how? No one fully will use peak memory throughput or memory latency to determine the optimal settings for their use. They will either use performance on their specific software suite while tweaking settings or a synthetic benchmark which is similar to their use for setting up the system. How much can you change the values on each of the settings? Cache, can't change that. Pipeline and bandwidth on cache, generally no. So, what do you do? You overclock the ram to find stable, then measure INCREASE IN IPC to determine whether those settings work for your needs. So how is that too general?


Also, as to the impact of ram: until Zen, whose IF relies on memory speed, the impact of ram on performance on Intel's side was generally a 2% (at the outside, 5%) increase in performance. With AMD, because the ram clocks affect IF speed, bandwidth, and latency, you could see a wider margin, like 10%. As you increased ram speed, so long as the latency was equal to or lower than that of the slower ram, you got a knock-on effect where the IF was clocked higher, which lowered IF latency. Teasing out how much of the performance uplift is attributable to each is an impossible task.

Because of that, it isn't getting lost or focusing on a lot of trees, it is explaining the role the various trees have in the forest, which explains how the forest is created and functions. It is attempting to show the interconnected nature and how differences in architecture would change the interactions between the parts. 

Meanwhile, it's only a couple months between now and release, which is the true empirical analysis. So that will come soon enough.



tpi2007 said:


> I don't believe in 16 cores just yet, there is no TDP headroom for that in my opinion, while keeping competitive clocks. But there is for 12 cores. My bet is that they are reserving the 16 core version for the Zen 2+ refresh next year, which they will probably do on 7nm EUV, which will give them the headroom that they need to pull it off.


Let me run some numbers by you, which I think may assuage the skepticism on the TDP headroom. 

1950X had a TDP of 180W. The 2950X had a TDP of 180W. 

Epyc was said to have an isopower/isoperformance curve of either a 50% reduction in power at the same performance or a 25% increase in performance at the same power draw. If we assume the performance metric refers solely to clock speeds, then you would have to have a 180W TDP for somewhere in the high 4GHz to 5GHz range. Binning may help, but going off of the rumor of 5GHz for the 16-core, I'd say either AMD is calculating TDP like Intel now, or it doesn't make sense.

Now, if you just look at whether it is possible, sure it is. 135W and 125W are very doable, considering 135W is a 25% reduction of the prior TDP, which may come with closer to a 12.5% or so increase in speed. We also have to remember that the curve is not linear: past a certain point, power demand rises significantly for smaller increases in frequency, meaning the 12.5% speed increase would actually be conservative. 
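The arithmetic above can be sketched quickly. This is a minimal back-of-the-envelope check, assuming the rumored isopower/isoperformance figures (none of which are confirmed specs):

```python
# Back-of-the-envelope check of the rumored 7nm scaling claims.
# All inputs are rumors/estimates from the discussion, not confirmed specs.

base_tdp_w = 180.0           # 1950X / 2950X TDP
iso_perf_power_scale = 0.5   # rumored: same performance at half the power
iso_power_perf_gain = 0.25   # rumored: +25% performance at the same power

# Port the same 16-core part to 7nm at unchanged clocks:
tdp_same_clocks = base_tdp_w * iso_perf_power_scale   # 90.0 W

# Or split the benefit: cut TDP by 25% and spend the rest on clocks,
# leaving roughly half of the 25% performance headroom (~12.5%):
tdp_reduced = base_tdp_w * 0.75                       # 135.0 W
approx_speed_gain = iso_power_perf_gain / 2           # 0.125

print(tdp_same_clocks, tdp_reduced, approx_speed_gain)
```

On those assumptions, a 90W 16-core at 2950X clocks, or a 135W part with a ~12.5% clock bump, both fit the rumored TDPs; the non-linear tail of a real power curve would eat into those margins.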

Beyond the isopower/performance curve, we also have binning of chiplets. The lowest tier will go into the 3300 series and lower. That also matches the rumor with how slow these CPUs are. Then, moving to the r5 series, you get better binned, but not the top bins. Then you get the 12-core with the rumored premium for the faster variant (which supposedly is 5.0, over the lower variant of 4.7GHz). The best binned for mainstream will be the 16-core, with 4.8GHz and 5.1GHz. Then you get the bins reserved for Threadripper above that. 

With removing the I/O from the core die, you then do not have to compromise as to what elements constitute the "best" bin for performance. You can focus solely on speeds. You don't have to match the better performance of an IMC, the impact that has on IF, which then had a larger overall impact on performance. It frees that binning and allows for binning of the I/O, while not having to try to shrink the I/O, which can sometimes not scale as well, all while being on a mature node.

So, if arguing the speeds seem a bit high, I'd say yes. If arguing the TDP doesn't make as much sense for the stated speeds from the rumor, I could get behind that, depending on whether those are single core or all core boost speeds (forgot to mention that, but if that is single core boost, subtract 200-400MHz from the rumored speeds, then compare that to 12.5-17% over current all core speeds of Zen and Zen+ and see if that is possible). If saying there isn't a way to make it work at all, I'd find that dubious. 

With that said, I don't think it will be released at the same time as the 12-core and below. They may even be pulling the binned CPU core dies from the TR stack (top 2-5%). If they do that for the 16-core, then it will have a fair amount more headroom. 

But, theoretically, Zen 2 should make a 1950X be possible at 90W for stock. Same with the 2950X. With up to 45W over that on TDP, we could see a decent speed 16-core within that envelope, although whether it is the rumored speeds is a different story.


----------



## bigjdubb

technodanvan said:


> Frankly, I have yet to see an impressive TR4 board either.


I really haven't looked at them in depth, there are certainly some expensive TR4 boards. 

There's not really anything wrong with the AM4 boards but they got no pizazz. I wanted Asus to make an X470 extreme with one of those DIMM.2 slots/riser cards, the little waste of an LCD screen on the IO cover and a honkin' power delivery system. What we get instead is the CHVII that costs more than its peers but doesn't actually offer anything more, not even any added fluff on the darn thing.


----------



## LancerVI

bigjdubb said:


> I really haven't looked at them in depth, there are certainly some expensive TR4 boards.
> 
> There's not really anything wrong with the AM4 boards but they got no pizazz. I wanted Asus to make an X470 extreme with one of those DIMM.2 slots/riser cards, the little waste of an LCD screen on the IO cover and a honkin' power delivery system. What we get instead is the CHVII that costs more than its peers but doesn't actually offer anything more, not even any added fluff on the darn thing.


Completely agree here. My Asus Prime X470 Pro is a good board, but it isn't a great board. I want a real, high-end option for AMD boards. Hopefully the mobo makers will come around. Ryzen is an unmitigated success. Time for full-on support with full product trees.


----------



## rluker5

ajc9988 said:


> Your distinction between theoretical and practical is meaningless here. You are mistaking synthetic for theory. Synthetic is still practical IPC, theory is the mathematical computation of what is likely going to be the highest achievable IPC on paper. There is a difference.
> 
> Now, to address the synthetic comment, I addressed that by saying directly that IPC changes by task, task meaning program. What that means is that performance varies by program, which is already seen from game play benchmarks and variations due to the coding and compiling of the program. The easiest example of this is Intel's compiler, which purposely gimped AMD CPUs by making them run a slower instruction set and not optimizing for them, which Intel got slapped with an antitrust violation for doing. That means specific program-level optimizations for CPUs can and do make a difference. This is also why AMD has sued some benchmark companies over stacking the deck with tasks that don't necessarily mirror real-life workloads, favoring tasks that Intel CPUs do well in while reducing tasks that AMD does well in. This is different than the compiling optimizations and selection of instruction sets, etc.
> 
> That means that IPC varies program to program, and that seeing the performance on the specific program you use is the only way to get a deeper understanding of the difference in the hardware. That is why some reviewers measure it by game, bottlenecking the CPU by pairing the most powerful GPU with a lower resolution, although below 1080p is NOT realistic in any sense of the word. The settled practices have already been teased out; it was Intel that suggested reviewers use 720p, which only a couple of places actually adopted, one being PCPer, whose Ryan Shrout now works for Intel in a marketing position where he designs tests to over-inflate the performance of Intel CPUs for a living. Hmmm....
> 
> As to talking latency again, maybe you should go look up the rumored reduction in latency. You like to speculate on the increase of going off core die, surely you trying to find that value would solve your question whether AMD has a silver bullet or not. Same with an examination of trying to equalize the latency from the I/O die to the core chiplets on TR and Epyc (likely helps with the stale data potential).
> 
> As to IPC being too general, how? No one fully will use peak memory throughput or memory latency to determine the optimal settings for their use. They will either use performance on their specific software suite while tweaking settings or a synthetic benchmark which is similar to their use for setting up the system. How much can you change the values on each of the settings? Cache, can't change that. Pipeline and bandwidth on cache, generally no. So, what do you do? You overclock the ram to find stable, then measure INCREASE IN IPC to determine whether those settings work for your needs. So how is that too general?
> 
> 
> Also, as to the impact of ram: until Zen, whose IF relies on memory speed, the impact of ram on performance on Intel's side was generally a 2% (at the outside, 5%) increase in performance. With AMD, because the ram clocks affect IF speed, bandwidth, and latency, you could see a wider margin, like 10%. As you increased ram speed, so long as the latency was equal to or lower than that of the slower ram, you got a knock-on effect where the IF was clocked higher, which lowered IF latency. Teasing out how much of the performance uplift is attributable to each is an impossible task.
> 
> Because of that, it isn't getting lost or focusing on a lot of trees, it is explaining the role the various trees have in the forest, which explains how the forest is created and functions. It is attempting to show the interconnected nature and how differences in architecture would change the interactions between the parts.
> 
> Meanwhile, it's only a couple months between now and release, which is the true empirical analysis. So that will come soon enough.


So, with a 5ghz core clock and an increased ram latency of about 25%, do you think Zen2 anything will consistently beat the 7700k (stock) in these games in clearly cpu limited scenarios?: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/9
And how do you explain ipc when the cores don't scale? 
I chose Anand because it was the first thing to come to mind, and the 9900k because the focus was neither on Ryzen or Kaby Lake.
And I'm comparing with gaming because 1. that is the most important reason for most people who put together computers for a cpu to be fast, 2. that is the one big thing people bring up about Zen not being as good as core with, and 3. it is ram latency related. 

Ryzen wins in compute based productivity because of the number of cores that are good at it and Zen2 should be faster and have more cores. But that means little to an average guy like me who has no reason to upgrade any quad core cpu since Z97 other than video games.

But maybe I'm worrying over nothing. I hope Zen 2 is better. Intel doesn't seem to be coming out with much better anytime soon.


----------



## Grin

Do not mix an overall cpu performance with IPC. IPC calculations are based on the number of machine-level instructions required to complete a piece of code and the number of clock cycles required to complete it on the actual cpu. Processors with similar IPCs may have a different performance in different scenarios.
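The distinction above can be made concrete with a toy calculation. This is purely illustrative: the instruction counts, cycle counts, and clock speeds below are invented, not measurements of any real CPU.

```python
# Toy illustration: IPC is instructions retired per clock cycle, so two
# chips with identical IPC can still differ in throughput once frequency
# enters the picture. All numbers here are invented for illustration.

def ipc(instructions: int, cycles: int) -> float:
    """Instructions per cycle."""
    return instructions / cycles

def throughput(instructions: int, cycles: int, freq_ghz: float) -> float:
    """Instructions per second = IPC * frequency."""
    return ipc(instructions, cycles) * freq_ghz * 1e9

# Same code, same cycle count -> same IPC of 2.0...
a = ipc(1_000_000, 500_000)
b = ipc(1_000_000, 500_000)

# ...but different clocks -> different real-world performance.
fast = throughput(1_000_000, 500_000, 5.0)   # hypothetical 5.0 GHz part
slow = throughput(1_000_000, 500_000, 4.35)  # hypothetical 4.35 GHz part
print(a, b, fast / slow)
```

And since the cycle count for the same code differs by microarchitecture and workload, even equal-frequency comparisons only give you IPC for that particular task.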


----------



## ajc9988

rluker5 said:


> So, with a 5ghz core clock and an increased ram latency of about 25%, do you think Zen2 anything will consistently beat the 7700k (stock) in these games in clearly cpu limited scenarios?: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/9
> And how do you explain ipc when the cores don't scale?
> I chose Anand because it was the first thing to come to mind, and the 9900k because the focus was neither on Ryzen or Kaby Lake.
> And I'm comparing with gaming because 1. that is the most important reason for most people who put together computers for a cpu to be fast, 2. that is the one big thing people bring up about Zen not being as good as core with, and 3. it is ram latency related.
> 
> Ryzen wins in compute based productivity because of the number of cores that are good at it and Zen2 should be faster and have more cores. But that means little to an average guy like me who has no reason to upgrade any quad core cpu since Z97 other than video games.
> 
> But maybe I'm worrying over nothing. I hope Zen 2 is better. Intel doesn't seem to be coming out with much better anytime soon.


For measuring IPC, I did mention that frequency needs to be held constant between the tested hardware and that it varies by program. Looking at tests of processors with different frequencies and IPC is not doing an IPC comparison, but looking at overall performance.

Then, where are you getting an increase of 25%? From your estimate of what you think latency will be going to the I/O die, while ignoring the lower latency of IF, changes to the IMC, etc.? You are building that statement on assumptions for which you possess NO INFORMATION. Let's say that IF lowered the latency by 15ns and the IMC changes lowered it as well. You'd wind up with zero change. I'm pulling numbers out of my butt here because NO ONE KNOWS, unless you are a company insider. That is my point: you are making bad assumptions without grounds. Sure, it is reasonable to estimate there will be some latency in going off the core die to the I/O die versus staying on die and straight out to memory, but you have to compare the latencies to the prior X950X processors, as that is the only comparable product with two chiplets. Reason I say that is because half the time on a 1950X or 2950X, you would have to first use IF to go to the other die, then out to memory, then make the return trip. With Zen 2, the memory controllers accessed by the CPUs are centrally located. So although the latency is higher for the first die, where the IMC is no longer a direct link, it is WAY shorter than having to jump to the IMC on the second die and then go out. Instead, both chiplets go to the I/O die and then out on IF2, with double the bandwidth and much lower latency than previously, and with IF2 connecting the core dies directly (something not present on EPYC, and which may or may not be present on TR chips).

Next, you assume memory is the issue, but compare the score of the 2700X to the 9900K in World of Tanks (taken before the AMD optimization for WoT went in over the past month or two, so those numbers are WORTHLESS as a practical matter because AMD CPUs received a significant uplift in performance with that update, but I digress): you have a 30% delta, which is now much lower, and which is an outlier among the games benchmarked. The full range runs from a 1-2% performance difference up to nearly 20%, with most titles settling into single digits or low double digits (think low teens), once again getting back to my statement earlier in this thread showing an 11% overall performance deficit in gaming and 22% in overall productivity when comparing a 2700X and a 9900K. This means that even though Intel has lower memory latency AND higher frequency AND higher IPC (into which memory latency is compounded in regards to its effects), the delta for gaming is now easier to close than for productivity suites designed to make better use of multithreaded workloads. It also shows the variance by program in tasks for the CPU. 

So you are making unfounded assumptions about latency and its impact, using outdated, cherry-picked data to try to prove a point (as bad as anecdotal evidence), misunderstanding the meaning of IPC and its interplay with frequency and other factors in overall performance, using overall performance instead of an IPC calculation, etc. In other words, you don't control for anything and are using improper data to argue a point the data does not support. Intel's top frequency alone is over 10% higher than AMD's fastest, meaning the same relative speed increase on a Zen or Zen+ design should, mostly, make the 11% average difference disappear, if one were being rational. When you multiply the projected percent frequency increase by the projected IPC increase over first or second gen, you get an estimate of the overall uplift in performance, which should be 25%+ (not a 25% IPC gain: a 13% IPC increase multiplied by a 10% frequency increase gives an approximate 24% uplift in performance). 
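For what it's worth, the compounding in that last parenthetical checks out, taking the 13% IPC and 10% frequency figures as the rumored projections they are:

```python
# IPC and frequency gains multiply rather than add.
# 13% and 10% are the projected (rumored) gains cited in the post above.
ipc_gain = 0.13
freq_gain = 0.10

overall = (1 + ipc_gain) * (1 + freq_gain) - 1
print(round(overall, 3))  # 0.243, i.e. roughly a 24% overall uplift
```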

What are you trying to accomplish? Are you truly trying to understand this better, or trying to prove me wrong? That last paragraph suggests the latter, but I could be wrong... If truly trying to wrap your mind around this, let's continue.



Grin said:


> Do not mix an overall cpu performance with IPC. IPC calculations are based on the number of machine-level instructions required to complete a piece of code and the number of clock cycles required to complete it on the actual cpu. Processors with similar IPCs may have a different performance in different scenarios.


Thank you for the direct statement here. Although I did explain this a couple posts back, I really like this straightforward statement's wording, which is accurate and to the point.


----------



## nonametoclaim

didnt feel like searching to see if this was already posted or not(pretty sure i have the flu)

but some more leaks have surfaced, this one shows march release which would be awesome but im sure will be debunked 

https://www.guru3d.com/news-story/overview-of-zen-2-ryzen-3000-processors-spotted-online.html


----------



## ajc9988

nonametoclaim said:


> didnt feel like searching to see if this was already posted or not(pretty sure i have the flu)
> 
> but some more leaks have surfaced, this one shows march release which would be awesome but im sure will be debunked
> 
> https://www.guru3d.com/news-story/overview-of-zen-2-ryzen-3000-processors-spotted-online.html


Yeah, there won't be a March release. At GDC on March 20th, AMD is presenting more information on Zen 2 and how to code for it, as well as potential info on Navi coming to the GPU optimization presentation later in the day. As such, I wouldn't get too excited yet. Just wait a week to find out more about Zen 2.


----------



## nonametoclaim

ajc9988 said:


> Yeah, there won't be a March release. At GDC on March 20th, AMD is presenting more information on Zen 2 and how to code for it, as well as potential info on Navi coming to the GPU optimization presentation later in the day. As such, I wouldn't get too excited yet. Just wait a week to find out more about Zen 2.


honestly im in no need of an upgrade, my current $800(at build) rig is over twice the performance of my rig from 2012 that cost me a fortune to build. but i absolutely love watching AMD casually toss out bang for the buck CPUs that creep on intel's money milkers and watch team blue yank stuff off the production line and toss it into the market just to claim top performer.

but a 5ghz 12/24 c/t does sound quite nice

just hoping navi can bring in something for my SG13 that rivals a 1080 ti


----------



## rluker5

ajc9988 said:


> For measuring IPC, I did mention that frequency needs to be held constant between the tested hardware and that it varies by program. Looking at tests of processors with different frequencies and IPC is not doing an IPC comparison, but looking at overall performance.
> 
> Then, where are you getting an increase of 25%? From your estimate of what you think latency will be going to the I/O die, while ignoring the lower latency of IF, changes to the IMC, etc.? You are building that statement on assumptions for which you possess NO INFORMATION. Let's say that IF lowered the latency by 15ns and the IMC changes lowered it as well. You'd wind up with zero change. I'm pulling numbers out of my butt here because NO ONE KNOWS, unless you are a company insider. That is my point: you are making bad assumptions without grounds. Sure, it is reasonable to estimate there will be some latency in going off the core die to the I/O die versus staying on die and straight out to memory, but you have to compare the latencies to the prior X950X processors, as that is the only comparable product with two chiplets. Reason I say that is because half the time on a 1950X or 2950X, you would have to first use IF to go to the other die, then out to memory, then make the return trip. With Zen 2, the memory controllers accessed by the CPUs are centrally located. So although the latency is higher for the first die, where the IMC is no longer a direct link, it is WAY shorter than having to jump to the IMC on the second die and then go out. Instead, both chiplets go to the I/O die and then out on IF2, with double the bandwidth and much lower latency than previously, and with IF2 connecting the core dies directly (something not present on EPYC, and which may or may not be present on TR chips).
> 
> Next, you assume memory is the issue, but if you compare the score of the 2700X to the 9900K in World of Tanks (which this is before the AMD optimization went in place for WoT in the past month or two, so those numbers are WORTHLESS as a practical matter because AMD CPUs received a significant uplift in performance with that update, but I digress), you have a 30% delta (which is now much lower, and of which, this is an outlier on game performance comparing frames out of the entire selection of games benchmarked, which ranges from 1-2% performance difference up to nearly 20%, with many settling into single digit and low double digit (think low teens) for the most common distribution, once again getting back to my statement earlier in posts showing 11% overall performance deficit in gaming and 22% at overall productivity when comparing a 2700X and a 9900K. This means even though Intel has lower memory latency and higher frequency AND higher IPC (which memory latency is compounded into in regards to its effects), the delta for gaming is now easier to close than the productivity suites designed for better use of multithread workloads. It also shows the variance by program on tasks for the CPU.
> 
> So you are using wild aspersions on latency and the impact thereof, using outdated, cherry picked data to try to prove a point (as bad as anecdotal evidence), misunderstanding the meaning of IPC and interplay with frequency and other factors to give overall performance, using overall performance instead of an IPC calculation, etc. In other words, you don't control for anything and are using improper data to try to prove a point that the data does not support. Intel's frequency alone is over 10% of AMD's fastest, meaning having that same relative speed increase on a Zen or Zen+ design should allow, mostly, for the 11% average difference to disappear, if one were being rational. When you multiply the projected percent frequency increase by the projected IPC increase over first gen or second gen, you get an estimate of the overall uplift in performance, which should be 25%+ (not 25% IPC, using 13% multiplied by a 10% frequency increase gives you 24% approximate uplift in performance).
> 
> What are you trying to accomplish? Are you truly trying to understand this better, or trying to prove me wrong? That last paragraph suggests the latter, but I could be wrong... If truly trying to wrap your mind around this, let's continue.
> 
> 
> Thank you for the direct statement here. Although I did explain this a couple posts back, I really like this straightforward statement's wording which is very accurate and poignant.


That 25% was from the ram latency on a leaked Zen 2 UserBenchmark run, where the latency was 15 or 20ns higher than the crappy Hynix ram's spec; my laptop with Kaby Lake and very similar crappy Hynix runs 5ns less. 15ns/60ns = 0.25, 20/80 = 0.25. A very rough approximation of a plausible range, but it's all I have heard so far.
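As a trivial restatement of that arithmetic (the 60ns/80ns baselines and 15-20ns deltas are the rough figures from the leak discussion, not measured values):

```python
# Extra observed latency as a fraction of a baseline latency.
# Baselines and deltas are the rough figures quoted above, nothing more.
for extra_ns, base_ns in [(15, 60), (20, 80)]:
    print(f"{extra_ns}/{base_ns} = {extra_ns / base_ns:.2f}")  # 0.25 both times
```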

I didn't cherry pick; Anand was my first pick. It was a guess. And I was going by the average of averages and mins of all of the games at 720p. Guru3D was second, and Ryzen looked worse for gaming there. Maybe you have a better CPU-limited source (not 4K ultra, or 1080p ultra with a 470, so there is a spread in performance per CPU larger than random testing noise) that isn't a review of either of the compared CPUs.

And I'm trying to understand better because I'm hoping for an upgrade. But I want my use case - gaming, web browsing, streaming, shopping, light office, etc. - to justify it. That's all. I plan on getting a single GPU of the next Nvidia 80 series when it comes out as well, if it is a bit over 2080 Ti perf at about 200W and $800. Still on Z97-era stuff, and I would like it if something came out on the CPU side that could overpower 4K60 without running over 80C on an AIO or a big air cooler. I have a concern over ram latency because I've seen its effects, but you just dismiss them, even though the main reason the 2nd Ryzen series was faster than the first was latency improvement. My stuff already games like a 7700K and I don't feel like spending $1k on a new mobo, CPU, and ram for something barely over a sidegrade that will be all hot and noisy unless I go for watercooling.
Not getting any new info so I guess I can wait like everyone else.


----------



## ajc9988

rluker5 said:


> That 25% was from the ram latency on a leaked Zen 2 UserBenchmark run, where the latency was 15 or 20ns higher than the crappy Hynix ram's spec; my laptop with Kaby Lake and very similar crappy Hynix runs 5ns less. 15ns/60ns = 0.25, 20/80 = 0.25. A very rough approximation of a plausible range, but it's all I have heard so far.
> 
> I didn't cherry pick; Anand was my first pick. It was a guess. And I was going by the average of averages and mins of all of the games at 720p. Guru3D was second, and Ryzen looked worse for gaming there. Maybe you have a better CPU-limited source (not 4K ultra, or 1080p ultra with a 470, so there is a spread in performance per CPU larger than random testing noise) that isn't a review of either of the compared CPUs.
> 
> And I'm trying to understand better because I'm hoping for an upgrade. But I want my use case - gaming, web browsing, streaming, shopping, light office, etc. - to justify it. That's all. I plan on getting a single GPU of the next Nvidia 80 series when it comes out as well, if it is a bit over 2080 Ti perf at about 200W and $800. Still on Z97-era stuff, and I would like it if something came out on the CPU side that could overpower 4K60 without running over 80C on an AIO or a big air cooler. I have a concern over ram latency because I've seen its effects, but you just dismiss them, even though the main reason the 2nd Ryzen series was faster than the first was latency improvement. My stuff already games like a 7700K and I don't feel like spending $1k on a new mobo, CPU, and ram for something barely over a sidegrade that will be all hot and noisy unless I go for watercooling.
> Not getting any new info so I guess I can wait like everyone else.


OK, now that I know where you got the information from, I can address it.

First, I'll post AdoredTV addressing it:
https://youtu.be/lM-21GySlso?t=1407

It's a good video. Excellent explanation of caches at the start of the video as well. 

Now, on to the issue of it being a single channel of 4GB SR ram clocked at 2666 CL19, which would be glacial compared to my [email protected] in quad channel SR sticks. If you think that won't add some latency to you, I don't know what you are thinking. Further, as shown, the cache behavior varies in three examples, one of which had tight latency values. It shows that none of them are trustworthy, but that two may have been outliers compared to the other bench that had tight latencies. 

I already explained 720P was rejected as unrealistic, because NO ONE USES 720P anymore! Your adherence to a point INTEL fished to tech journalists is really.... And wanting to embellish performance benefits through making it look more spread out than any experience likely is ... sad! Especially when I said use a 2080 Ti to try to get the CPU to bottleneck at 1080P is proper, as you want the MOST powerful GPU you can use to remove the GPU being the limit to see how much the CPU is hitting performance. So why did you bring up a 470? For someone saying earlier that I needed to focus on non-synthetics, but real-world use, that seems ... yeah.

As to what you want, you just have to wait until June or July and it will be tested in all scenarios. You can make your choice on empirical data. Also, AMD graphics cards are loud, but the CPUs are not bad. And are you talking rendering on the CPU or on the GPU? At 4K60, anything below a 2080 Ti is GPU-limited rather than CPU-limited. As you increase resolution, you increase the GPU load, which decreases the relative load on the CPU. That is why, as you go to 1440p and 4K, you see the frame rates get closer, or sync to within a frame or so of each other.

So just wait for reviews and make your choice on the hard data in front of you.

Edit:
Here is what my latency is in that benchmark on a 1950X:
https://www.userbenchmark.com/UserRun/15351327 (58.7ns latency)
Here is a person with a similar setup, except running 3.95GHz instead of 4.2GHz and ram running at 3200CL14 (likely stock XMP) (85.8ns latency)
https://www.userbenchmark.com/UserRun/15477346

Part of the difference, about 20ns, is due to setting up the interleaving of the ram channels and ranks to lower the latency. When you select channel interleaving, 512B size, etc., it knocks off that amount. So that is the difference between the chips most comparable to a 16-core mainstream part. If you then look at what real-time latency 2666CL19 has, you get 14.25ns, going against 3200CL14 giving 8.82ns or 3466CL14 at 8.12ns, and that doesn't go into the other timings of the ram that affect the total memory latency.
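That clocks-to-nanoseconds conversion is easy to sketch. A minimal first-word CAS latency calculator (this ignores tRCD, row misses, and fabric hops; the small differences from the figures above presumably come from a slightly different effective clock):

```python
def cas_latency_ns(transfer_rate_mts: int, cl: int) -> float:
    """First-word CAS latency in ns for DDR memory.

    DDR transfers twice per clock, so a module rated at N MT/s runs a
    N/2 MHz memory clock, and one CL cycle lasts 2000/N nanoseconds.
    """
    return 2000.0 * cl / transfer_rate_mts

print(round(cas_latency_ns(2666, 19), 2))  # 14.25
print(round(cas_latency_ns(3200, 14), 2))  # 8.75
print(round(cas_latency_ns(3466, 14), 2))  # 8.08
```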

So the settings for memory and the timings can easily add 10ns or more latency to the score, especially using a single channel, single rank dimm with no channel or rank interleaving, which would give the 20ns or so penalty I showed with my own rig and someone else's rig above. Put them together, it looks like you would be in the territory of these new chips, doesn't it? 

That is why trying to read too much into one leaked benchmark regarding the latency will NOT give you a true picture of what is going on with the capabilities of the silicon. Engineering Samples created and tested 5 months before a product launches will not give you the final results you can expect. You don't know what is and isn't tuned yet, what all is being tested, etc. This is why I ignored anyone at the time the benches were making the rounds screaming their head off about this latency. The majority can be explained and shown to be similar to what current chips have, after making the above adjustments. 

Here is that first one with the peak of 96.92ns (about 11ns slower than the 3200CL14 example of the 1950X)
https://www.userbenchmark.com/UserRun/14076820

Here is the bugged run with the rise too early in cache latency which hit 100ns
https://www.userbenchmark.com/UserRun/14273098

Also, with knowing the cache was separated from the memory speed, we have ZERO idea what they had the IF2 speed tuned to, meaning we do not know if the engineering sample had a purposely tuned down IF2 to test something else with the chips, trying to remove errors that can occur due to faster IF2 speeds that can cause cache errors or something similar.

So don't buy too heavily into writing off the chip on things easily explainable, especially if the rumor of officially supporting 3200MHz is true and that is achievable with CL14. Between that and interleaving, getting memory latency to the 60ns levels seems to be in the cards without too much effort.


----------



## rluker5

ajc9988 said:


> OK, now that I know where you got the information from, I can address it.
> 
> First, I'll post AdoredTV addressing it:
> https://youtu.be/lM-21GySlso?t=1407
> 
> It's a good video. Excellent explanation of caches at the start of the video as well.
> 
> Now, on to the issue of it being a single channel of 4GB SR ram clocked at 2666 CL19, which would be glacial compared to my [email protected] in quad channel SR sticks. If you think that won't add some latency to you, I don't know what you are thinking. Further, as shown, the cache behavior varies in three examples, one of which had tight latency values. It shows that none of them are trustworthy, but that two may have been outliers compared to the other bench that had tight latencies.
> 
> I already explained 720P was rejected as unrealistic, because NO ONE USES 720P anymore! Your adherence to a point INTEL fished to tech journalists is really.... And wanting to embellish performance benefits through making it look more spread out than any experience likely is ... sad! Especially when I said use a 2080 Ti to try to get the CPU to bottleneck at 1080P is proper, as you want the MOST powerful GPU you can use to remove the GPU being the limit to see how much the CPU is hitting performance. So why did you bring up a 470? For someone saying earlier that I needed to focus on non-synthetics, but real-world use, that seems ... yeah.
> 
> As to what you want, you just have to wait until June or July and it will be tested in all scenarios. You can make your choice on empirical data. Also, AMD graphics cards are loud, but the CPUs are not bad. And are you talking rendering on the CPU or on the GPU? At 4K60, anything below a 2080 Ti is GPU-limited rather than CPU-limited. As you increase resolution, you increase the GPU load, which decreases the relative load on the CPU. That is why, as you go to 1440p and 4K, you see the frame rates get closer, or sync to within a frame or so of each other.
> 
> So just wait for reviews and make your choice on the hard data in front of you.
> 
> Edit:
> Here is what my latency is in that benchmark on a 1950X:
> https://www.userbenchmark.com/UserRun/15351327 (58.7ns latency)
> Here is a person with a similar setup, except running 3.95GHz instead of 4.2GHz and ram running at 3200CL14 (likely stock XMP) (85.8ns latency)
> https://www.userbenchmark.com/UserRun/15477346
> 
> Part of the difference, about 20ns, is due to setting up the interleaving of the ram channels and ranks to lower the latency. When you select channel interleaving, 512B size, etc., it knocks off that amount. So that is the difference between the chips most comparable to a 16-core mainstream part. If you then look at what real-time latency 2666CL19 has, you get 14.25ns, going against 3200CL14 giving 8.82ns or 3466CL14 at 8.12ns, and that doesn't go into the other timings of the ram that affect the total memory latency.
> 
> So the settings for memory and the timings can easily add 10ns or more latency to the score, especially using a single channel, single rank dimm with no channel or rank interleaving, which would give the 20ns or so penalty I showed with my own rig and someone else's rig above. Put them together, it looks like you would be in the territory of these new chips, doesn't it?
> 
> That is why trying to read too much into one leaked benchmark regarding the latency will NOT give you a true picture of what is going on with the capabilities of the silicon. Engineering Samples created and tested 5 months before a product launches will not give you the final results you can expect. You don't know what is and isn't tuned yet, what all is being tested, etc. This is why I ignored anyone at the time the benches were making the rounds screaming their head off about this latency. The majority can be explained and shown to be similar to what current chips have, after making the above adjustments.
> 
> Here is that first one with the peak of 96.92ns (about 11ns slower than the 3200CL14 example of the 1950X)
> https://www.userbenchmark.com/UserRun/14076820
> 
> Here is the bugged run with the rise too early in cache latency which hit 100ns
> https://www.userbenchmark.com/UserRun/14273098
> 
> Also, with knowing the cache was separated from the memory speed, we have ZERO idea what they had the IF2 speed tuned to, meaning we do not know if the engineering sample had a purposely tuned down IF2 to test something else with the chips, trying to remove errors that can occur due to faster IF2 speeds that can cause cache errors or something similar.
> 
> So don't buy too heavily into writing off the chip on things easily explainable, especially if the rumor of officially supporting 3200MHz is true and that is achievable with CL14. Between that and interleaving, getting memory latency to the 60ns levels seems to be in the cards without too much effort.


I did err in my comparison of measured stock latency vs. manufacturer spec. I compared what I am assuming is stock Ryzen 3 vs. Intel, when a comparison of Ryzen 3 to Ryzen 2 would have been more appropriate to see the change in performance across Ryzen generations. It could be that Ryzen always adds 10 or 20ns of latency vs. manufacturer spec and that aspect isn't actually getting worse. I don't have a Ryzen to compare, which probably contributed to my overlooking it. I hope this is the case and the discrete northbridge is adding minimal penalty.


----------



## ajc9988

rluker5 said:


> I did err in my comparison of measured stock latency vs. manufacturer spec. I compared what I am assuming is stock Ryzen 3 vs. Intel, when a comparison of Ryzen 3 to Ryzen 2 would have been more appropriate to see the change in performance across Ryzen generations. It could be that Ryzen always adds 10 or 20ns of latency vs. manufacturer spec and that aspect isn't actually getting worse. I don't have a Ryzen to compare, which probably contributed to my overlooking it. I hope this is the case and the discrete northbridge is adding minimal penalty.


That is an understandable mistake. Here is a link to the Ryzen DRAM Calc thread:
https://www.overclock.net/forum/13-...1-overclocking-dram-am4-413.html#post27892566
Here, you can see a user of a 2700X with around 60ns using 3466MHz CL14 ram. Above it is a 1700 CPU with 2933 CL14 at around 80ns memory latency. Once again, that is likely due to the interleave settings for the ram. But that isn't all. In my own benchmarking, I have found that the interleaving settings that lower latency improve some benches, like CB20, but lessen performance on others, like CB15, which does slightly better for me with the interleaving settings left on auto. Granted, this is within 1-2% performance variance, but it is measurable and repeatable. That is why even memory timings, and the latency that results from them, may not be the final story.

But all of this has unknown factors, like what they had the IF2 clocked to for testing on the engineering sample. If you tweak only the speed of the IF, not its timings, then, generally, as the speed increases while the latency value (in clock cycles) stays constant, the real-life ns value of the latency will be reduced. Since we have no information on the IF2 clocks used, or how speed affects IF2, the latency related to it is a big unknown. So even though I showed this latency to be roughly in line with Zen 1 and Zen+ on both TR and mainstream Ryzen, there is a possibility of it being even lower in the final product.
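The speed-vs-cycles point works out like this; the 60-cycle figure below is invented purely for illustration, since nobody outside AMD knows the real IF2 cycle counts:

```python
def fabric_latency_ns(cycles: int, clock_mhz: float) -> float:
    # If the fabric's latency in cycles is fixed by the design, raising
    # the clock shrinks what each cycle costs in wall-clock time.
    return cycles * 1000.0 / clock_mhz

# Hypothetical 60-cycle hop at two possible fabric clock settings:
print(round(fabric_latency_ns(60, 1600), 1))  # 37.5 ns at 1600MHz
print(round(fabric_latency_ns(60, 1800), 1))  # 33.3 ns at 1800MHz
```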

Another thing to note, the latency values are roughly in line between mainstream Ryzen and TR. There are other tests, such as Sisoft Sandra, which allow for measuring the latency for inter-core on the same CCX, inter-core on a different CCX, inter-die, then inter die on a different CCX, as well as jumping to the other die and out to memory. Each of those has a different latency value. For TR, when you jump dies, you jump to the mirrored core on the mirrored CCX of the other die. That means if you need to go to the second CCX on the second die, you would need to use the IF again to move to that core, then do the return trip. 

What is being done with Epyc, and likely TR, is standardizing the length, and latency, of those die jumps to try to prevent the data becoming stale (meaning useless or redundant). Because of the variable latency depending on the travel route, you can wind up in a situation where, by the time the information is fetched from the other CCX on the other die and used in processing on the core that requested it, another core with lower latency has already found that core stalled waiting on the data, made its own request, and completed it before the first core did, meaning you waste cycles and computation power. Standardizing the latency lowers stale-data occurrence and wasted cycles, even if the latency is higher in some instances, and the average latency is lower overall. It is controlled inefficiency to create synergistic efficiencies that outweigh the drawbacks of the additional latency introduced. That is why looking at the latency value in a vacuum is dangerous and misleading at times.

Now, what is interesting is that the additional potential latency was also added to the single-core-die mainstream processors. With dual dies, moving to a centralized I/O die makes sense to create consistent latency, which can aid with what I just described. But the single core die, theoretically, takes the hit without as much benefit as integration may allow. Arguably, since the values seem to have been designed to make latency similar to current Ryzen and Ryzen 2000 chips, it may be a push on that, with the benefits showing up in IPC due to other changes to the microarchitecture. Also, on the mainstream chips, it seems they are using IF2 to directly connect the two core dies, meaning the latency going inter-core to the second die will be less than round-tripping I/O-->other die-->I/O-->back. Why this is done here and not on Epyc may be a case of scaling. Due to doubling the number of core dies on the highest-end offering, making it a 64-core CPU, Epyc may be more sensitive to creating stale data due to the variances of connecting IF2 to each core die separately, which would mean standard latency to all would be preferred. This is something I hope AMD will talk about and address at some point in the future, as this is a design decision that I would love to see tested.

This is all before discussing the effects of going to an I/O die to mask the separate nodes so that, in theory, the system sees the CPU as a UMA rather than a NUMA situation. As such, you get uniform latency for every memory call, programs won't need to be optimized for NUMA on the same package, and it will avoid, to a degree, the scheduler issues which created the need for coreprio on the 24- and 32-core TR CPUs. That doesn't mean software will need no optimization; it means that current optimizations will help with workloads spread over multiple dies. But the Windows scheduler only allows overflow to 1 NUMA node, which is why in some cases the 32-core basically matches the 16-core Threadripper CPUs. And if the scheduler isn't architecture-aware, it may put interdependent calculations on cores on different dies, which can introduce its own latency, even if the NUMA restriction is lifted.

So, there is concern on some elements of performance, like that last one I just mentioned. But, being honest, latency is the least of it. It is like the community saying memory bandwidth was why the TR 2990WX was not performing. That was the go-to statement of the general public, that it was memory. Turns out, a person replicated the behavior with an Epyc 7551P, showing it wasn't a bandwidth limitation (thanks to Wendell at Level1Techs, Bitsum, and Ian Cutress). They eventually narrowed it down to threads being moved around without need, with the scheduler being part of the issue (but not completely just a scheduler issue). That is why it is always good to dig deeper. Never stop questioning!

I do hope this helps a bit. And, yes, it is widely known Intel has lower memory latency than Zen based designs. Empirical fact. But I hope I have shown that memory latency isn't the final word on performance, also.


----------



## AlphaC

Zen only has a substantial latency hit when it is cross CCX. The other biggest difference is Data In-Page Random Latency.


https://www.sisoftware.co.uk/2018/0...arks-2-channel-ddr4-cache-memory-performance/


----------



## ajc9988

AlphaC said:


> Zen only has a substantial latency hit when it is cross CCX. The other biggest difference is Data In-Page Random Latency.
> 
> 
> https://www.sisoftware.co.uk/2018/0...arks-2-channel-ddr4-cache-memory-performance/


Yes, but on threadripper, there are intricacies beyond the simple look at cross-CCX communications. What I mean is they use Infinity Fabric to connect each core to its mirror core on a second die when used. They also use Infinity Fabric to connect one CCX on the same die to the other CCX. This creates a knock-on effect so that latency increases as follows (lowest to highest in the list):

1) Inter-core on the same CCX
2) Inter-CCX on the same die
3) Mirrored CCX on a second die
4) opposite CCX on a second die

The reason is that the inter-die IF latency seems higher than the on-die inter-CCX latency. Going to the other CCX on the same die is lower latency than any route that has to go off die. Going to the mirrored CCX on the second die is one hop. For the fourth option, the data must first travel the distance of number 3, then go over the IF as in number 2 to reach the second CCX, and the latency adds up with each hop. So there are knock-on effects with that design that need to be understood.
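The four-tier ordering can be sketched as additive hop costs. The per-hop numbers below are made up purely for illustration (no real per-hop Zen figures are public in this detail); only the ordering they produce is the point:

```python
SAME_CCX = 0    # stays inside one CCX (hypothetical cost, ns)
CCX_HOP = 40    # IF hop between CCXs on the same die (hypothetical)
DIE_HOP = 70    # IF hop to the mirrored CCX on the other die (hypothetical)

routes = {
    "1) inter-core, same CCX": SAME_CCX,
    "2) inter-CCX, same die": CCX_HOP,
    "3) mirrored CCX, second die": DIE_HOP,
    "4) opposite CCX, second die": DIE_HOP + CCX_HOP,  # two hops chained
}

for name, ns in sorted(routes.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~{ns} ns extra")
```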

Of course, at the moment, that is primarily a concern related to Threadripper and Epyc, not Ryzen. But, with Ryzen now having two dies, understanding how it hops along the IF traces may become more important.


----------



## AlphaC

That also boils down to how many cores make up a CCX for Zen 2 and how fast Infinity Fabric can run. If it's 8 cores per CCX rather than 8 cores per chiplet it will be more interesting than 4 cores per CCX and 2 CCX per chiplet.


The Infinity Fabric is analogous to Intel HEDT mesh or the ringbus speed on mainstream Intel desktop. Right now most people run mainstream uncore at 4GHz+ , with typical 4.3-4.7GHz uncore which is much higher than a 1600-1800MHz Infinity Fabric on Zen+.


----------



## ajc9988

AlphaC said:


> That also boils down to how many cores make up a CCX for Zen 2 and how fast Infinity Fabric can run. If it's 8 cores per CCX rather than 8 cores per chiplet it will be more interesting than 4 cores per CCX and 2 CCX per chiplet.
> 
> 
> The Infinity Fabric is analogous to Intel HEDT mesh or the ringbus speed on mainstream Intel desktop. Right now most people run mainstream uncore at 4GHz+ , with typical 4.3-4.7GHz uncore which is much higher than a 1600-1800MHz Infinity Fabric on Zen+.


Although I hear ya, and higher speed is planned when they eventually do active interposers, aside from speed, I'd argue it is better to look at bandwidth and latency. The example is comparing the wide bus with low speed of HBM2 versus the higher frequency, smaller bus GDDR5 and GDDR6. How exactly that compares, though, I cannot remember, in regards to the ringbus or the mesh on bandwidth.


----------



## Grin

AlphaC said:


> That also boils down to how many cores make up a CCX for Zen 2 and how fast Infinity Fabric can run. If it's 8 cores per CCX rather than 8 cores per chiplet it will be more interesting than 4 cores per CCX and 2 CCX per chiplet.
> 
> 
> The Infinity Fabric is analogous to Intel HEDT mesh or the ringbus speed on mainstream Intel desktop. Right now most people run mainstream uncore at 4GHz+ , with typical 4.3-4.7GHz uncore which is much higher than a 1600-1800MHz Infinity Fabric on Zen+.


This is the main reason why Zen is losing in gaming scenarios despite having similar IPC: the bus speed.


----------



## ogider

New rumors
https://www.overclock3d.net/news/cp..._rumours_-_amd_s_ces_demo_was_power-limited/1


----------



## Hwgeek

Maybe 300-series A320/B350 support depends on the specific model?
Here are Asus A320/B350 boards with the latest BIOS update for Ryzen 3000:
https://www.asus.com/Motherboards/PRIME-A320M-F/HelpDesk_BIOS/
https://www.asus.com/Motherboards/PRIME-B350M-A-CSM/HelpDesk_BIOS/


----------



## CynicalUnicorn

ogider said:


> New rumors
> https://www.overclock3d.net/news/cp..._rumours_-_amd_s_ces_demo_was_power-limited/1


If that's the case, then AMD definitely went for the best perf/watt when they demoed the CPU at CES. Just fast enough to beat Intel, but slow enough to be running efficiently.

Given the sample rate on power consumption - about once per second - it's difficult to see where efficiency lies exactly.


----------



## ajc9988

CynicalUnicorn said:


> If that's the case, then AMD definitely went for the best perf/watt when they demoed the CPU at CES. Just fast enough to beat Intel, but slow enough to be running efficiently.
> 
> 
> 
> Given the sample rate on power consumption - about once per second - it's difficult to see where efficiency lies exactly.


It seemed like it was a 65W chip, where the full-powered chip will be 95W or 105W - just like the lower-powered 65W 8-core versus the full-TDP parts in the 2000 and 1000 series.

Sent from my SM-G900P using Tapatalk


----------



## AlphaC

https://www.anandtech.com/show/14160/amd-ceo-dr-lisa-su-to-deliver-computex-2019-lead-keynote


Computex keynote May 28


----------



## delboy67

5GHz 12-core; the 16-core @ 4.2GHz scores 4278 in CB R15.


----------

