# [AMDFX] A10-7850K (R5-200M) + R7-240 in Hybrid Crossfire - over 500% the GPU performance of Iris! 800% the performance of Richland!



## polyzp

*UPDATE*: This is actually an R5-200M (which could very well be renamed) + R7-240 in Hybrid CrossFire.

I thought this would be obvious by now simply from the GPU core counts: 512 + 320.

Showing up to 800% the performance of the little old Richland 8670D!!



This speaks for itself!

Intel's HD 5200 Iris Pro with eDRAM now looks pathetic next to a DDR3-1600 Kaveri system. Watch hUMA and HSA at work with that memory bandwidth: General Purpose Cryptography reaches up to nearly 500% the performance (+400%) of Iris 5200, and up to ~840% the performance (~+740%) of Richland. Both systems are running 1600 MHz DDR3.

Kaveri's GPU + R7-240 is essentially matching an underclocked 7790 (7750-class) on DDR3 working against GDDR5. This means Kaveri's GPU will sport 832 stream processors, overshooting the original estimates and rumours of 512 stream processors by a lot! The stock clock will be 600 MHz; there is no word on whether a turbo clock will be implemented, but my guess is YES!









And now that Kaveri motherboards have entered the enthusiast field, I'd love to see 2400-2600 MHz DDR3 testing! This is a GREAT time to buy 2400 MHz kits as well, as the price is only marginally higher than 1866 or 2133 MHz options. Asus claims a +30% increase in performance on their Kaveri FM2+ page going from 1333 to 2133 MHz! So compare the above results at 1600 MHz to a theoretical system with 2600 MHz DDR3!
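For context on why faster DDR3 matters so much here: peak bandwidth scales linearly with the transfer rate. A quick sketch, assuming the usual dual-channel, 64-bit-per-channel configuration (`ddr3_bandwidth_gbs` is just an illustrative helper, not anything from the benchmark):

```python
# Peak theoretical DDR3 bandwidth: transfer rate (MT/s) x 8 bytes per
# 64-bit channel x number of channels, reported in GB/s.
def ddr3_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    return mt_per_s * bus_bytes * channels / 1000

for rate in (1333, 1600, 2133, 2400, 2600):
    print(f"DDR3-{rate}: {ddr3_bandwidth_gbs(rate):.1f} GB/s")
```

Going from 1333 to 2133 is a ~60% bandwidth jump, which makes the +30% figure Asus quotes at least plausible for bandwidth-starved iGPU workloads.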

Source (AMDFX)
Source (Kaveri benchmarks)


----------



## 161029

This belongs in Rumors and Unconfirmed Articles.


----------



## EliteReplay

Are there any CPU benchmark leaks? I mean, it's good to have a good iGPU, but I'd like an FM2+ CPU at least able to fight a 2500K, which is still rocking.


----------



## zalbard

I don't even see 400% anywhere.


----------



## gamer11200

If this is true, then Kaveri was worth the wait!


----------



## Yeroon

Why does it say 256-bit for the second-to-last benchmark shown (in the source link, not the quoted chart)?
http://www.planet3dnow.de/cms/wp-content/gallery/cache/647__x_kv-spectre-8cu-crypto.png


----------



## <({D34TH})>

If it really does have more than the 512 cores we were suspecting, then consider me sold.


----------



## nitrubbb

mother of god


----------



## MrJava

You have to take the given "benchmark numbers" with loads of salt, for the reasons below:

1. 13 CUs and 832 SPs would make for a huge die - we are expecting 8 CUs and 512 SPs.
2. One possible conclusion is that this is a 7 CU (448 SP) Kaveri attached to a 6 CU (384 SP) Hainan discrete GPU.
3. The result could be spoofed.

Point 2 raises some interesting questions. Can the hardware, along with AMD driver support, allow applications to view the Kaveri iGPU + discrete GPU as one device for OpenCL/DirectCompute?

Edit: Also SiSoft could be doing something stupid as well.


----------



## iamhollywood5

If that's true, good for AMD









However, I'm still not on board with the direction hUMA and HSA are taking the industry. I don't want my choice of CPU to be locked to my choice of GPU. If my CPU goes bad, I don't want to have to replace my totally fine GPU. I don't want my CPU and GPU to share the same thermal capacity. And most of all, there's a reason DDR3 is used for system memory and GDDR5 is used for graphics memory: they're specialized for different things. It would make sense if serial processors and parallel processors worked best with the same kind of memory, but they don't.


----------



## AlphaC

Quote:


> Originally Posted by *gamer11200*
> 
> If this is true, then Kaveri was worth the wait!


yes, I agree









Please let the rumor be true .

Quote:


> Originally Posted by *iamhollywood5*
> 
> If that's true, good for AMD
> 
> 
> 
> 
> 
> 
> 
> 
> 
> However I'm still not on board with the direction HUMA and HSA is taking the industry. I don't want my choice of CPU to be locked to my choice of GPU. If my CPU goes bad I don't want to have to replace my totally fine GPU. I don't want my CPU and GPU to share the same thermal capacity. And most of all, there's a reason DDR3 is used for system memory and GDDR5 is used for graphics memory. They're specialized for different things. It would make sense if serial processors and parallel processors worked best with the same kind of memory, but they dont.


Stage 1 is hUMA

Stage 2 is discrete GPU integration with HSA.


----------



## Schmuckley

I'm paying attention








Only downside I see is that it's posted by AMD.
Looks like they might have done something right, though.


----------



## CynicalUnicorn

Wow, if these are accurate, then this may be able to run Crysis!







But I'm skeptical, and curious as to why they didn't use faster RAM. Anything much past 1866 MHz makes minimal difference even for a 3970X, but the faster clocks are necessary to compensate for the lower graphics bandwidth in APUs. But tied with a 7790? Yes please. I just want a dual FM2+ motherboard and I will be happy with hUMA and HSA.


----------



## xd_1771

*I'm going to keep this in hardware news, since this is actually sourced within the AMDFX article to a result that has been uploaded on SiSoftware's official live ranker. (I've added the link to that in the OP.) It looks like it's valid.*


----------



## Schmuckley

Indeed it does!







Now..Where can i get one of these? NAOW!








R5-N200!
Wait a min..GP=Graphics processing?


----------



## Hattifnatten

Mother of god


----------



## mtcn77

Quote:


> Originally Posted by *iamhollywood5*
> 
> If that's true, good for AMD
> 
> 
> 
> 
> 
> 
> 
> 
> 
> However I'm still not on board with the direction HUMA and HSA is taking the industry. I don't want my choice of CPU to be locked to my choice of GPU. If my CPU goes bad I don't want to have to replace my totally fine GPU. I don't want my CPU and GPU to share the same thermal capacity. And most of all, there's a reason DDR3 is used for system memory and GDDR5 is used for graphics memory. They're specialized for different things. It would make sense if serial processors and parallel processors worked best with the same kind of memory, but they dont.


AMD put GDDR5 in writing; they had better know what it is capable of...
Don't worry so much - there are benefits to having them inside the same package. The closer they are, the better for memory transactions.


----------



## Redwoodz

Quote:


> Originally Posted by *iamhollywood5*
> 
> If that's true, good for AMD
> 
> 
> 
> 
> 
> 
> 
> 
> 
> However I'm still not on board with the direction HUMA and HSA is taking the industry. I don't want my choice of CPU to be locked to my choice of GPU. If my CPU goes bad I don't want to have to replace my totally fine GPU. I don't want my CPU and GPU to share the same thermal capacity. And most of all, there's a reason DDR3 is used for system memory and GDDR5 is used for graphics memory. They're specialized for different things. It would make sense if serial processors and parallel processors worked best with the same kind of memory, but they dont.


So buy a discrete GPU. Your point is non-existent.


----------



## s-x

Oh sweet jesus. When can I preorder..


----------



## coachmark2

Wellllllll since Iris Pro is within 10% of a GT 640....

Then 400% better than a GT 640 would be in 660 Ti territory. Sounds a little too rosy.
50% better than the GT 640 would be about at 5770/6770 level.

Interesting. Let's see what Kaveri can do when it actually releases.


----------



## Hattifnatten

Quote:


> Originally Posted by *coachmark2*
> 
> Wellllllll since Iris Pro is within 10% of a GT640....
> 
> Then 400% better than GT640 would be in 660ti territory. Sounds a little too rosy.
> 50% better than the GT640 would be about at a 5770/6770.
> 
> Interesting. Let's see what Kaveri can do when it really releases.


Not necessarily - the benchmarks shown are compute, not 3D rendering. I suspect it won't perform as well as these results indicate, due to the memory bottleneck.


----------



## iamhollywood5

Quote:


> Originally Posted by *Redwoodz*
> 
> So buy a discrete GPU. Your point is non-existent.


And waste half of the APU? Waste of silicon, waste of money, when the whole die could have been used for CPU cores instead, and then the whole thing would be utilized. Combining an APU with a discrete GPU is not good value.


----------



## NaroonGTX

It wouldn't be wasted if you took advantage of Hybrid CrossFire/Dual Graphics... which should be greatly improved with Kaveri.

It's also a bit odd to complain about parts of the die not being utilized. People with FX chips have had some portion of the die never used, since FX is just a slightly cut-down Opteron chip. I know it's a bit of a stretch of a comparison, but still. There are people who started their rig builds with APUs and later got a powerful discrete GPU down the line. The iGPU being disabled isn't really a big deal. It's still a CPU first and foremost, and behaves like one even when the iGPU is disabled.


----------



## yawa

Yup, it's wasting die space anyway. And if the HSA gains turn out to be even half of what is claimed, the architecture changeover and shift in APIs will be worth it.

I just hope that at some point an 8-core Excavator or Steamroller chip will ship; whether it's an APU or not, I'm all for it.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *iamhollywood5*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Redwoodz*
> 
> So buy a discrete GPU. Your point is non-existent.
> 
> 
> 
> And waste half of the APU? Waste of silicon, waste of money, when the whole die could have been used for CPU cores instead, and then the whole thing would be utilized. Combining an APU with a discrete GPU is not good value.
Click to expand...

Which is one reason why I prefer FX CPUs to anything Intel, on paper at least. The extreme CPUs (e.g. the 3970X and I think the 3930K?) don't have iGPUs, but for some reason everything else does, and Intel hasn't done anything like hUMA or HSA, so I also prefer the APUs (again, on paper) to anything Intel. But AMD doesn't have great single-threaded performance or anything high-end, so past a certain price point Intel beats them despite the silicon wasted on their iGPUs. However, a discrete GPU in Dual Graphics mode isn't a waste. Sure, you don't get a CrossFire-esque performance increase, but it's the best bang for the buck for mITX. (Obviously you'll go with a 4770 and a 7990 or 690 for the uber high end, but not all of us are willing to spend $3000 on a rig.)


----------



## Awooboowoo

Did no one even read the OP? Only the GP financial analysis score is increased by 738%; all the other results are nowhere near that number - much lower, in fact. The title is pretty sensationalist and misleading.
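For what it's worth, part of the disagreement in this thread is just notation: "X% the performance of" is a ratio against the baseline, while "+Y%" is the gain over it, and the two differ by exactly 100 points. A trivial illustration, using the OP's own parentheticals:

```python
# "X% the performance of baseline" is a ratio; "+Y%" is the gain.
# They are related by Y = X - 100.
def ratio_percent_to_gain(percent_of_baseline):
    return percent_of_baseline - 100

print(ratio_percent_to_gain(500))  # 500% the performance == +400%
print(ratio_percent_to_gain(840))  # ~840% the performance == ~+740%
```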


----------



## Usario

Quote:


> Originally Posted by *coachmark2*
> 
> 
> 
> Wellllllll since Iris Pro is within 10% of a GT640....
> 
> Then 400% better than GT640 would be in 660ti territory. Sounds a little too rosy.
> 50% better than the GT640 would be about at a 5770/6770.
> 
> Interesting. Let's see what Kaveri can do when it really releases.


This is HSA compute; you can't really make that kind of comparison.

It's pretty difficult to estimate graphics performance at this point... Theoretically, if it's using GCN 1.x, it would edge out the 5750 by a bit at the reported 600 MHz _before you account for memory bandwidth constraints_ (due to the limitation of DDR3), which could have a varying impact depending on the nature of the game/workload... so it would probably come out at around, maybe, 8800 GTX performance? If it uses GCN 2.0, though, we have absolutely nothing to go by.


----------



## NaroonGTX

Intel's Extreme-series CPUs don't have iGPUs because they are just cut-down Xeon chips that are multiplier-unlocked. If you look at the die, there is no space whatsoever for an iGPU because of the massive L3 caches (and I think the L2 caches are bigger as well). Their "normal" chips have completely different dies, and thus they can have the iGPU on there.


----------



## Hattifnatten

Well, the title says *up to* 500% the performance of Iris Pro, which is correct. And I do feel that a 200-400% performance increase over Richland is a pretty big deal.


----------



## karamel

Quote:


> Originally Posted by *iamhollywood5*
> 
> And waste half of the APU? Waste of silicon, waste of money, when the whole die could have been used for CPU cores instead, and then the whole thing would be utilized. Combining an APU with a discrete GPU is not good value.


Combining an APU with a discrete GPU is the future of gaming, thanks to AMD. I created an account to say this. The key reason AMD isn't continuing the FX series is HSA. That's why I also think there will be no GPU-less Athlon series either.

What you guys don't understand is that it won't be a waste if you use your discrete GPU with an APU anymore. That's not because of Dual Graphics; that's because of HSA. The GPU in your APU will be used for compute in games and other stuff. The PlayStation 4 already uses HSA, and the Xbox One uses a custom but similar approach. So it's going to be like this:

Central Processing Unit (CPU) - will do heavy single-threaded tasks.
Parallel Processing Unit (PPU - integrated GPU) - will do multi-threaded tasks like particles in games.
Graphics Processing Unit (GPU - discrete GPU) - will do graphics, texturing, etc.

That's why AMD focuses on a maximum of four cores. If greater parallelism is needed, there is a GPU for it in the APU.


----------



## raclimja

sensationalist OP is sensationalist


----------



## NaroonGTX

@karamel

The Athlon x4 700/Athlon II x6x1 series were released for two reasons: 1) so that the silicon they produced wouldn't be wasted - why trash the whole chip when it's just the iGPU that has some sort of defect, when they can shut the iGPU off and salvage it as a cheap CPU-only part? And 2) to provide an alternate build path for people who wanted cheap quad-cores on the FM2 socket.

This doesn't really point to FX being left behind, nor to them not making any CPU-only parts. You can bet they will do some Steamroller-based, Kaveri-derived Athlon X4 variant at some point next year. Socket AM3+'s fate is dubious at best, but they could be doing an SR FX chip later in 2H 2014, or they might bring FX over to FM2+... or both. We don't really know yet. We'll just have to wait for the new roadmaps to find out for sure. Those will be available in October/November.

I agree with your other points, though. The iGPU can be used for physics calculations and other such processing in a game, while the dGPU does the rest of the work and doesn't waste cycles on those things. No more performance hit for having something like PhysX enabled. Several devs have already said their future games will be optimized for AMD's APUs; this is probably what they were talking about.


----------



## Thunderclap

I really hope the new Kaveri APUs do well in CPU performance, so they can be used with high-end graphics cards and take full advantage of HSA... It's definitely the future of gaming. Seeing how more and more games get developed for consoles first and then ported to PCs, and taking into account that both next-gen consoles run on AMD hardware - namely APUs with HSA - I see some really big success in future gaming performance for them. That's probably also the reason more and more game developers are saying upcoming games will be optimized for AMD, and specifically for AMD's APUs. Can't wait for future desktop AMD APUs with 6-8 cores and integrated GDDR5 memory.


----------



## RKTGX95

I'm just happy this baby is hitting the market, since in the worst-case scenario it will be a stronger APU than Richland, and it will have a great platform to launch on (the A88X chipset).


----------



## karamel

@NaroonGTX

Your point of defective GPU in Athlons makes sense. I didn't think of that way.









But, I still think there will be no FX. AMD puts everything on HSA in CPU development. Because in this area, they can suppress Intel in performance. Making CPU's with no GPU's purposeful will put them in Intel's area which AMD is far behind of Intel.


----------



## NaroonGTX

It's likely that FX will be left behind and simply replaced with 6/8-core APU's later on down the line. It would certainly be possible with a die-shrink and high-density cell libraries being implemented.


----------



## yawa

I just want one more 8-core FX and I'll be happy. One more, with SR cores, to get my money's worth out of my board - then they can make APUs forever.


----------



## Demonkev666

Weren't we already told it's 512 SPs and DDR3?
I think that is a discrete card, imo.


----------



## Usario

Quote:


> Originally Posted by *Demonkev666*
> 
> weren't we already told it's 512SP and DDR3
> I think that is a discrete card imo.


There's a 512SP version.

It says "integrated graphics" on the results page, and there's zero reason to bottleneck an 832SP card with DDR3, especially when they're currently shipping cards with as few as 384SPs with GDDR5.


----------



## geoxile

Maybe this is the APU that's supposed to have integrated GDDR5


----------



## NaroonGTX

If it is GDDR5, it would be an embedded solution, i.e. some mobile product. But even then, you couldn't switch between GDDR5 and DDR3 as system memory or anything. The GDDR5 would have to be stacked or soldered onto the board. They could do this in a BGA config on desktop, but nothing points to that being the case.


----------



## Thunderclap

I'm interested in seeing how much performance you actually gain from running very fast RAM (2133/2400/2600/2666 MHz, etc.) and whether it makes sense to spend more on those ultra-high-speed modules. The gains will probably top out around 2400 MHz memory, but there might be some more to be had at higher speeds. Makes me wonder how well they have utilised this aspect of the new APUs...


----------



## NaroonGTX

I suspect Kaveri will top out at around 2600 MHz RAM. I've heard murmurs that it would support 2400 MHz natively, so it's plausible. Kaveri also boasts (currently unknown) improvements to its memory controller, so that should enhance performance itself by some amount.

It would be best to buy cheaper modules and just overclock them. The modules that come out of the gate at higher clock speeds are terribly overpriced.


----------



## SuperMudkip

Quote:


> Originally Posted by *Nonehxc*
> 
> That can't be!
> 
> Why, you ask?
> 
> It's from AMD! And everybody knows AMD is crap.


No sir. I've used AMD in almost all of my systems (about six) and they run perfectly. Yes, they may not be up to snuff with a top-of-the-line i7, but what you get is decent bang for the buck. There are many people on this forum who use AMD in their systems and would say otherwise to your statement, so basing it on "everybody" is kinda stupid. Underperformer? Yes. Piece of crap? No.


----------



## iamwardicus

Interesting... I just hope I'm not at the end of my rope with my Formula-Z motherboard when it comes to Steamroller. However, if they ever release a 6-core variant of Kaveri with a non-gimped GPU on it, I'll be changing over - provided HSA becomes a widely used programming standard.


----------



## SuperMudkip

Quote:


> Originally Posted by *iamhollywood5*
> 
> And waste half of the APU? Waste of silicon, waste of money, when the whole die could have been used for CPU cores instead, and then the whole thing would be utilized. Combining an APU with a discrete GPU is not good value.


You could also just buy a compatible Radeon GPU and run Dual Graphics on it. I use it on my machine with an A8-5600K + HD 6570 (granted, the dGPU has aged a bit) and it works great.


----------



## polyzp

This chip will most likely feature a turbo mode in the 700-800 MHz range, and is capable of overclocking to around ~1 GHz.

Even with 2600 MHz DDR3 and a huge overclock, in the best-case scenario this iGPU will still be shy of a stock 7790. That is impressive given it implies the ~1600-1700 GFLOPS range (the 7790 is 1792 GFLOPS). This is pretty much on par with the GPU in the Xbox One, without any eDRAM.

It will be interesting to see how this chip stacks up against a 2500K in raw CPU performance. I estimate it will be shy of its single-core performance by ~10% and its scaling by ~15%, with the 2500K coming out on top. Overclocked, these chips will at least tie a 4670K/FX-9590 at stock in single-threaded CPU tests, and in gaming benchmarks when paired with a high-end discrete GPU.

Dual Graphics will work with up to the R7-260 or 260X GPUs and yield performance that is around the sweet spot for 1080p (~7870-7950 performance)!

If the price on Kaveri stays below 150 USD, this will be a steal! (The 7790 costs $120-130 at the moment.)
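The ~1600-1700 GFLOPS figure is easy to sanity-check: GCN stream processors execute one fused multiply-add (2 FLOPs) per cycle, so peak single-precision throughput is just SPs x 2 x clock. A quick sketch (the 832 SP and 600 MHz / ~1 GHz figures are the rumoured specs discussed above, not confirmed):

```python
# Peak single-precision throughput for a GCN GPU:
# stream processors x 2 FLOPs per cycle (FMA) x clock in GHz = GFLOPS.
def gcn_peak_gflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz

print(gcn_peak_gflops(832, 0.6))   # rumoured stock 600 MHz: ~998 GFLOPS
print(gcn_peak_gflops(832, 1.0))   # at a ~1 GHz overclock: 1664 GFLOPS
print(gcn_peak_gflops(896, 1.0))   # HD 7790 (896 SP, 1 GHz) for comparison: 1792 GFLOPS
```

So the ~1 GHz overclock scenario lands right in the quoted ~1600-1700 GFLOPS range, just under the 7790's 1792.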


----------



## geoxile

Quote:


> Originally Posted by *polyzp*
> 
> This Chip will most likely feature a turbo mode to in the 700-800 Mhz Range, and is capable of overclocking to around ~1 Ghz.
> 
> With 2600 Mhz DDR3 and a huge overclock this iGPU will even in the best case scenario will still be shy of a stock 7790. This is impressive given this implies a ~1600-1700 Gflops range (7790 is 1792 Gflops). This is pretty much on par with the GPU in the XBOX One without any eDRAM.
> 
> It will be interesting to see how this chip stacks up against a 2500k in raw cpu performance. I estimate that it will be shy of its single core performance by ~10% and scaling by ~15%, with the 2500k coming out on top. Overclocking these chips will at least tie a 4670k/FX9590 at stock in single thread cpu tests and gaming benchmarks paired with a high end discrete gpu.
> 
> If the price on Kaveri remains below 150 USD this will be a steal! (7790 costs $120-130 at the moment)


What makes you think this will be biting the heels of Sandy?


----------



## Timeofdoom

As much as I love this news, remember: this is using OpenCL acceleration (without AMD-specific flags for hUMA/HSA, I suppose?).
Theoretically speaking, performance using hUMA/HSA flags could be even higher - but coming back down to earth:
_Performance increases in non-accelerated workloads will probably be minimal._


----------



## NaroonGTX

The FX-9590 was nothing more than a factory-overclocked Vishera, so clock-for-clock Steamroller will beat it no matter what. We currently don't know how desktop Kaveri will clock, but if it can somehow clock as well as Richland could, it would put up some damned good performance... I don't think single-threaded perf will reach Sandy Bridge levels yet, but it should come close - I'm thinking at least Nehalem or better.

I think it will replace the A10-6800K (USD $150) as the flagship APU, so it'll probably be $150... in which case, if the CPU perf is great and the iGPU really is on par with the HD 7790, this chip will be flying off shelves (real and virtual ones). I don't think the iGPU will be that great, though. Realistically it will probably be around 7750 DDR3 or 7770 level. I don't expect huge strides in performance, though that's just my realistic/slightly-pessimistic outlook.


----------



## polyzp

Steamroller's IPC is an increase of ~20-25% over Piledriver, placing it ~10% behind Sandy. The overclocking delta makes the difference in single-core performance negligible.


----------



## polyzp

Even a stock FX-9590 gives a ~1.3 single-core score in Cinebench 11.5, which is dangerously close to a stock 2500K's score of ~1.4. With a 20-25% IPC increase over Piledriver, Steamroller's single-core performance will at least be close to Sandy Bridge.


----------



## TheLAWNOOB

Quote:


> Originally Posted by *polyzp*
> 
> Even a stock FX 9590 gives a ~1.3 single core score in cinebench 11.5 which is dangerously close to a stock 2500k's score of ~1.4. With 20-25% IPC increase over piledriver AMD steamroller single core performance will atleast be close to sandy bridge.


Dude, the 9590 is clocked around 50% faster.

You could OC the Sandy by 50% as well, you know.
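The clock-speed point can be made concrete by normalizing the quoted Cinebench scores per GHz. A rough sketch (assuming stock base clocks of 4.7 GHz for the FX-9590 and 3.3 GHz for the i5-2500K; turbo clocks would muddy this further):

```python
# Per-GHz Cinebench 11.5 single-core scores, using the figures quoted above.
fx9590 = 1.3 / 4.7      # ~0.28 points per GHz
i2500k = 1.4 / 3.3      # ~0.42 points per GHz

# Sandy Bridge's per-clock lead over Piledriver on these numbers:
print(f"{i2500k / fx9590 - 1:.0%}")  # prints ~53%
```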


----------



## Seronx

Quote:


> Originally Posted by *polyzp*
> 
> Even a stock FX 9590 gives a ~1.3 single core score in cinebench 11.5 which is dangerously close to a stock 2500k's score of ~1.4.


Cinebench is floating point; games/productivity use integer, which is faster than FP.


----------



## Demonkev666

Quote:


> Originally Posted by *TheLAWNOOB*
> 
> Dude, the 9590 is clocked around 50% faster.
> 
> You could OC the Sandy by 50%as well you know.


shouldn't really care about the single core bench imo.

SSE2 vs SSE4.2


----------



## kakik09

This is just too sensationalized, even for my taste :/


----------



## Seronx

Quote:


> Originally Posted by *Demonkev666*
> 
> shouldn't really care about the single core bench imo.
> 
> *Scalar SSE2* vs *Vector AVX*


I fixed it for you.


----------



## NaroonGTX

It doesn't matter if it's clocked faster; they are two completely different architectures. Bulldozer was specifically made to clock faster - the original engineers deliberately forwent single-threaded performance in favor of multi-threaded performance (hence the octo-core jazz) for parallelization. Either way, it goes without saying that an i5 at its theoretical max stable OC would be much faster than a Piledriver chip at its max OC.

There's nothing to suggest that SR will be 20-25%+ faster than Piledriver, btw. All there ever was, was that "30% ops-per-cycle improvement" figure from the Hot Chips 2012 slide, and even that was most likely in comparison to Bulldozer, not Piledriver. That would put SR about 15% faster than PD, which goes hand in hand with this:



Of course Kaveri was pushed back from its original projected release (early 2013) so who knows what potential improvements they could've made since then.


----------



## asxx

seems legit!


----------



## geoxile

On 28nm (vs Piledriver's 32nm) Kaveri should be able to clock higher right? I hope so...


----------



## NaroonGTX

Node shrinks don't really guarantee that - e.g. Llano. It used the Stars architecture (K10, i.e. Phenom II and such) for the CPU cores and was on 32nm, yet the max clock anyone could get out of it was around the 3.6 GHz mark. Deneb and Thuban, in comparison, would usually top out at around 4.0-4.2 GHz. Llano was around 6-7% faster clock-for-clock than a Phenom II with L3 cache, yet it couldn't clock higher. Some theorize this was probably due to the yield issues GloFo had with the transition to 32nm.

Some say that the transition from 32nm to 28nm (a half-node shrink at that) was the cause for the initial Kaveri push back, among other things.


----------



## Seronx

Quote:


> Originally Posted by *NaroonGTX*
> 
> Some say that the transition from 32nm to 28nm (a half-node shrink at that) was the cause for the initial Kaveri push back, among other things.


It was Haswell that caused the delay for Kaveri.


----------



## Clocknut

mother of god.jpg

Buy Kaveri, buy one more 7790 (I already have one now), triple-CrossFire it!


----------



## Dynamo11

man FM2+ is looking cooler every day, I didn't expect the iGPU to be THAT powerful!


----------



## Durquavian

come on daddy wants some power in a notebook/tablet.


----------



## robert c james

So it seems at worst this thing will be like a 4 GHz+ Phenom II with an unlocked 7770.
For $150? Yummy :}
For poor boys like me, that means a win for AMD.


----------



## 161029

Quote:


> Originally Posted by *gamer11200*
> 
> If this is true, then Kaveri was worth the wait!


That would be very interesting. Now I would have an incentive to move up from Trinity to Kaveri and still not need a discrete GPU.









Now just waiting for CPU performance...(tbh not expecting too much).


----------



## Moustache

Quote:


> Originally Posted by *Clocknut*
> 
> buy kaveri, buy 1 more 7790(I already have 1 now) triple crossfire it!


Is that even possible?


----------



## Seronx

Quote:


> Originally Posted by *Moustache*
> 
> Is that even possible?


It should be if you flash all of them with a hUMA vbios.


----------



## Moustache

Quote:


> Originally Posted by *Seronx*
> 
> It should be if you flash all of them with a hUMA vbios.


cool


----------



## sdlvx

Quote:


> Originally Posted by *karamel*
> 
> Quote:
> 
> 
> 
> Originally Posted by *iamhollywood5*
> 
> And waste half of the APU? Waste of silicon, waste of money, when the whole die could have been used for CPU cores instead, and then the whole thing would be utilized. Combining an APU with a discrete GPU is not good value.
> 
> 
> 
> Combining an APU with a discrete GPU is future of gamers thanks to AMD. I created an account to say this. The key why AMD doesn't continue to FX series is HSA. That's why I already think there will be no Athlon series with no GPU either.
> 
> What you guys don't understand that it won't be waste if you use your discrete GPU with an APU anymore. That's not because of Dual Graphics. That's because of HSA. Your GPU in APU will be used for computing in games or other stuff. Playstation 4 already uses HSA, XBox One uses custom but similarly thing. So it gonna be like this:
> 
> Central Processing Unit (CPU) - Will do single-threaded heavy tasks.
> Parallel Processing Unit (PPU - Integrated GPU) - Will do multi-threaded tasks like particles in games.
> Graphics Processing Unit (GPU - Discrete GPU) - Will do graphics, texturing etc..
> 
> That's why AMD focuses maximum 4 cores. If greater parallelism is needed, there is a GPU unit for it in APU.
Click to expand...

AMD is working on HSA for dCPU and dGPU. It was supposed to start happening with Tahiti, but it was never used - probably because AM3+ doesn't support it.

http://en.wikipedia.org/wiki/Radeon_HD_7000_Series#Graphics_Core_Next_Architecture
Quote:


> Originally Posted by *Seronx*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Demonkev666*
> 
> shouldn't really care about the single core bench imo.
> 
> *Scalar SSE2* vs *Vector AVX*
> 
> 
> 
> I fixed it for you.
Click to expand...

Friendly reminder that on Gentoo I saw over a 60% increase in LAME performance by recompiling for AVX/SSE4/etc. on my FX 8350, and more than a doubling of speed in Blender when recompiling.

Phoronix also finds massive gains in C-Ray when recompiling.

Friendly reminder that not only is Cinebench R11.5 compiled with ICC, it ALSO uses Intel performance libraries like libguide40.dll.

In short, Cinebench is absolutely useless. By playing games with compilers, I beat my friend's 4.2 GHz 3930K by 30% with my 5 GHz FX 8350.


----------



## Derp

I really want to get hyped about this but an AMD employee posted this so I just can't. I'm also up to ~500% more interested in the CPU performance.


----------



## akromatic

Sounds awesome, but hard to believe. If it's true, it could be the first platform that is truly gaming-capable with onboard graphics, and an ideal candidate to replace the A10 in my ISK100.

TBH I was expecting around a 7750 or slightly less due to the DDR3 limitation, but if it offers 7770 performance I'll be extremely happy.


----------



## yraith

Quote:


> Originally Posted by *Clocknut*
> 
> mother of god.jpg
> 
> buy kaveri, buy 1 more 7790(I already have 1 now) triple crossfire it!


The problem is... currently there aren't that many mobos out that will do more than Dual Graphics. I too have a 7790 and am waiting for the new chip to arrive. If I want CrossFire, the UP4 board from Gigabyte looks good for that.


----------



## maarten12100

Quote:


> Originally Posted by *iamhollywood5*
> 
> And waste half of the APU? Waste of silicon, waste of money, when the whole die could have been used for CPU cores instead, and then the whole thing would be utilized. Combining an APU with a discrete GPU is not good value.


I therefore hope for working CF with midrange cards; that would be great.


----------



## Artikbot

If this is legit, I'm pairing the fastest 65W APU with an ASRock FM2A88X-ITX+

To hell with being poor, I'm throwing the house outta' the window xD


----------



## maarten12100

Quote:


> Originally Posted by *sdlvx*
> 
> Friendly reminder that in Gentoo I saw over 60% increase in LAME performance by recompiling for AVX/SSE4/etc in Gentoo on my FX 8350 and I saw a more than doubling of speed in Blender when recompiling.
> 
> Phoronix also finds massive gains in C-Ray when recompiling.
> 
> Friendly reminder that not only is Cinebench r11.5 compiled with ICC, but it ALSO uses Intel performance libraries like libguide40.dll.
> 
> In short, Cinebench is absolutely useless. By playing games with compilers I beat my friends 4.2ghz 3930k by 30% with my 5ghz FX 8350.


We indeed need a new universal compiler run by neither of those two, or we need an AMD-to-GenuineIntel patcher.


----------



## Durquavian

Quote:


> Originally Posted by *maarten12100*
> 
> We indeed need a new universal compiler run by neither of those 2 or we need a AMD to genuine intel patcher.


If that were possible then I wish I knew more so I could do it.


----------



## maarten12100

Quote:


> Originally Posted by *Durquavian*
> 
> If that were possible then I wish I knew more so I could do it.


http://www.yeppp.info/index.html#
That seems to be the top candidate.


----------



## MrJava

You guys are setting yourselves up for disappointment. 13 GCN compute units in an APU is extremely unlikely for obvious reasons.


----------



## Artikbot

Quote:


> Originally Posted by *MrJava*
> 
> You guys are setting yourselves up for disappointment. 13 GCN compute units in an APU is extremely unlikely for obvious reasons.


State the obvious, please.


----------



## mtcn77

Quote:


> Originally Posted by *MrJava*
> 
> You guys are setting yourselves up for disappointment. 13 GCN compute units in an APU is extremely unlikely for obvious reasons.


Mr. Programmer, I consider that AMD has always implemented great power gating compared to Intel's Iris Pro solution, imho.
It will not be the power budget that sets the threshold, imo.
I have been buying AMD GPUs all these years precisely because they run power-efficient and silent.


----------



## Durquavian

Quote:


> Originally Posted by *maarten12100*
> 
> http://www.yeppp.info/index.html#
> That seems to be the top candidate.


You are so evil. Now I am gonna have to read and learn about that. Looks very interesting.


----------



## MrJava

Quote:


> Originally Posted by *Artikbot*
> 
> State the obvious, please.


- die size => lower yields, lower margins
- still complex and expensive if implemented as an MCM
- lack of bandwidth (even for compute)
- not clear how this would be better than running smaller GPU @ ~900mhz like previous APUs


----------



## NaroonGTX

The die size would have to be pretty big, way bigger than Trinity/Richland to accommodate these specs.


----------



## MrJava

Quote:


> Originally Posted by *Seronx*
> 
> It was Haswell that caused the delay for Kaveri.


Bugs? The northbridge to support hUMA is probably quite complex and it may have taken more time than expected to iron everything out.


----------



## flippin_waffles

Quote:


> Originally Posted by *Timeofdoom*
> 
> As much as I love these news, remember: This is using OpenCL-accelleration (without AMD specific flags for HUMa/HSA I suppose?).
> Theoretically speaking; performance, using HUMa/HSA flags, could be higher - but coming back down to earth:
> *Performance increases in non-accelerated workloads will probably be minimal.*


But there is a shift taking place. The next gen software is HSA aware and a huge portion of the industry is coding for it. intel might even be forced to join if it wants to compete. Those kinds of performance and efficiency gains can't be obtained anywhere else.


----------



## MrJava

CPU cores will be a lot better than last time around.
Quote:


> Originally Posted by *flippin_waffles*
> 
> But there is a shift taking place. The next gen software is HSA aware and a huge portion of the industry is coding for it. intel might even be forced to join if it wants to compete. Those kinds of performance and efficiency gains can't be obtained anywhere else.


----------



## Usario

MrJava, I don't think 13CU is too difficult. Remember that Intel's current top-of-the-line APU competitor, Haswell GT3, comes in at around 270mm^2. Considering perf/mm^2 and perf/watt improvements with newer iterations of GCN on top of the shrink to 28nm, it shouldn't be difficult for AMD to fit that whole thing into a die around 300mm^2 or maybe smaller. Remember, this GPU is a bit smaller than Bonaire, which with the memory controllers and all that stuff comes out to around 160mm^2. More than half of Richland's die was the GPU, and the whole thing came out to 246mm^2. That means that the CPU and uncore need less than 120mm^2. Sure, the NB will definitely need to get beefier for hUMA/HSA, and SR will almost certainly be larger than BD/PD, but even then it's uncertain if that will exceed the ~30% extra transistor count given to you in any given space by going from 32nm to 28nm. In reality this 13CU 2M Kaveri could very well come out to be about the same size as Haswell GT3.

Speaking of Haswell, AMD really needs to implement some kind of eDRAM or GDDR5 support... 13CUs on DDR3? Huge bottleneck. DDR4 won't be here fast enough, and even then I'm not sure if DDR4 will fully alleviate the problem, especially if we assume that the IGPs will be even stronger by then.


----------



## Skydragon26

I'm thinking that's with a discrete card, in crossfire. From the rumors and whatnot I've read, Kaveri has 8 SIMDs, and that would mean 512 shaders. A 7750, basically.


----------



## Artikbot

Quote:


> Originally Posted by *MrJava*
> 
> - die size => lower yields, lower margins
> - still complex and expensive if implemented as an MCM
> - lack of bandwidth (even for compute)
> - not clear how this would be better than running smaller GPU @ ~900mhz like previous APUs


This just says why it wouldn't be optimal to run them, not that it wouldn't be worth it.

Don't get me wrong, I firmly believe we'll see 8 CUs, and that's enough of a performance boost (should provide well above a 40% performance boost), but that they _can_ throw 13 CUs in an APU? They sure can. And they might, seeing how all the MB manufacturers are going with their high-end lineups for FM2+ and not AM3+!


----------



## NaroonGTX

I'm skeptical of Kaveri, but the mobo manufacturers wouldn't be doing all these high-end boards if nothing on FM2+ could put up some great performance. Don't know what's ahead for FM2+, but it just might have a bright future.


----------



## nitrubbb

I probably won't even need to buy a GPU for some lighter gaming


----------



## Artikbot

Quote:


> Originally Posted by *nitrubbb*
> 
> I probably wont even need to buy GPU for some lighter gaming


Servant here plays everything he plays on his desktop on his mobile rig as well, from the A10-5700 IGP.

Sure I have to tune down to low-medium-ish a lot of games (especially BF3/Just Cause 2/GTAIV and such), but I still keep above 30FPS at all times and at native 1920x1080.

Kaveri might provide just the right amount of horsepower to keep strong for 2-3 more years of mobile gaming


----------



## <({D34TH})>

Quote:


> Originally Posted by *Artikbot*
> 
> Servant here plays everything he plays on his desktop on his mobile rig as well, from the A10-5700 IGP.
> 
> Sure I have to tune down to low-medium-ish a lot of games (especially BF3/Just Cause 2/GTAIV and such), but I still keep above 30FPS at all times and at native 1920x1080.
> 
> Kaveri might provide just the right amount of horsepower to keep strong for 2-3 more years of mobile gaming


My 5800K can barely break 30FPS at [email protected] on BF3.. What magic dust are you using?


----------



## Durquavian

Quote:


> Originally Posted by *<({D34TH})>*
> 
> My 5800K can barely break 30FPS at [email protected] on BF3.. What magic dust are you using?


RAM is a HUGE factor with APUs, so that alone can mean 20%, as seen in one APU review: the difference was 20%, or 5 FPS, between 1866 CL9 and 2133 CL10.


----------



## nitrubbb

Luckily my GPU needs aren't huge - Trackmania 2 Stadium mostly. I'm in no hurry to get BF4.


----------



## Artikbot

Quote:


> Originally Posted by *<({D34TH})>*
> 
> My 5800K can barely break 30FPS at [email protected] on BF3.. What magic dust are you using?


Overclocked the RAM to 2GHz.

CPU at stock speed, undervolted a good chunk.


----------



## Seronx

All VI/CI chips can fuse without a crossfire configuration with the hUMA vBIOS.

^--Only in OpenCL/C++ AMP/etc workloads.


----------



## flippin_waffles

Man, Kaveri is looking even more interesting than it already was. I think there were examples of 5-8x efficiency gains in some cases, and that's something that can't be ignored. If this is just the beginning, it bodes well for the future.


----------



## roofrider

Quote:


> Originally Posted by *flippin_waffles*
> 
> But there is a shift taking place. The next gen software is HSA aware and *a huge portion of the industry is coding for it.* intel might even be forced to join if it wants to compete. Those kinds of performance and efficiency gains can't be obtained anywhere else.


Any links or quotes from software giants?


----------



## Clocknut

Quote:


> Originally Posted by *Usario*
> 
> Speaking of Haswell, AMD really needs to implement some kind of eDRAM or GDDR5 support... 13CUs on DDR3? Huge bottleneck. DDR4 won't be here fast enough, and even then I'm not sure if DDR4 will fully alleviate the problem, especially if we assume that the IGPs will be even stronger by then.


Don't need the eDRAM, it's a waste of die space; just add another memory controller and run the mobo as triple-channel DDR3.

I am OK with a budget mobo with 3 memory slots in triple channel rather than 4 slots in dual channel.


----------



## raghu78

My only question is: "*If the Kaveri APU has 13 compute units, what is AMD doing to address the bandwidth bottleneck?*" There is no point in wasting die space when bandwidth bottlenecks don't allow you to scale performance. There is a significant performance difference between an HD 7750 DDR3 card and an HD 7750 GDDR5 card, and those have only 8 compute units; at 13 compute units the bandwidth requirements are even higher. So unless there is a 256-bit DDR3 memory controller or a 128-bit GDDR5 memory controller, I don't see AMD packing in 13 compute units.


----------



## Eclipx2

If this is true it would be a paradigm shift for iGPUs. Many gamers could actually get by with this level of performance without making compromises.


----------



## Rickyyy369

Quote:


> Originally Posted by *NaroonGTX*
> 
> I'm skeptical of Kaveri, but the MOBO manufacturer's wouldn't be doing all these high-end boards if anything on FM2+ couldn't put up some great performance. Don't know what's ahead for FM2+, but it just might have a bright future.


They did the same thing before Bulldozer was released, and we all know how that turned out. Everyone thought that since AMD boards were finally getting SLI, it was going to be some sort of monster gaming platform. Not so much.

Not saying that Kaveri won't be a very nice product. Just pointing out that motherboard manufacturers making high-quality motherboards doesn't equal a showstopping CPU.


----------



## flippin_waffles

Quote:


> Originally Posted by *roofrider*
> 
> Any links or quotes from software giants?


Just had a look at the HSA foundation website.

http://hsafoundation.com/

It appears that VIA and Vivante have now joined as well. Very interesting, Charlie had a very detailed write up on the new Vivante GPU.

http://semiaccurate.com/2013/09/05/vivante-has-a-high-precision-mobile-gpu/


----------



## flippin_waffles

Quote:


> Originally Posted by *raghu78*
> 
> My only question is "*If the Kaveri APU has 13 compute units what is AMD doing to address the bandwidth bottleneck ?*" There is no point in wasting die space when bandwidth bottlenecks are not allowing you to scale performance. there is a significant performance difference between a HD 7750 DDR3 card and HD 7750 GDDR5 card . these have only 8 compute units. at 13 compute units the bandwidth requirements are even more. so unless there is a 256 bit DDR3 memory controller or 128bit GDDR5 memory controller I don't see AMD packing 13 compute units.


Maybe it has something similar to the 512 bit bus in Volcanic Islands.


----------



## NaroonGTX

Quote:


> They did the same thing before bulldozer was released and we all know how that turned out. Everyone thought that since AMD boards were finally getting SLI that it was going to be some sort of monster gaming platform. But not so much.
> 
> Not saying that Kaveri wont be a very nice product. Just pointing out that just because motherboard manufacturers are making high quality motherboards doesn't equal it being a showstopping CPU.


I see where you're coming from, but I don't go by that line of thinking, because the AMD from then is a completely different company than the AMD of now. We haven't seen any shenanigans like that *since* that happened. Also if that were true, why wouldn't there have been higher-end boards for FM1, or even FM2? Seems telling that it's specifically FM2+ that's getting this.


----------



## sumitlian

Quote:


> Originally Posted by *raghu78*
> 
> My only question is "*If the Kaveri APU has 13 compute units what is AMD doing to address the bandwidth bottleneck ?*" There is no point in wasting die space when bandwidth bottlenecks are not allowing you to scale performance. *there is a significant performance difference between a HD 7750 DDR3 card and HD 7750 GDDR5 card . these have only 8 compute units. at 13 compute units the bandwidth requirements are even more. so unless there is a 256 bit DDR3 memory controller or 128bit GDDR5 memory controller I don't see AMD packing 13 compute units.*


Absolutely right. At least now AMD really has a reason to implement a quad-channel IMC in Kaveri's case. Low memory bandwidth has long been a major weakness of AMD CPUs, and it still is.


----------



## NaroonGTX

The memory bandwidth bottleneck will be here for a while... Everything points to Kaveri still being dual-channel only, so that restriction is still in place. I think realistically Kaveri will be a pretty big step up from Richland, but nothing mind-blowing like an on-die DDR3 version of the 7790. It would probably get close to a 7770 once OC'd or something, and that's once again choked by the DDR3.

That's really why it's such a shame. Every APU since Llano has been held back by the limitations of system memory, since the APUs rely on it for graphics. Hybrid xfire will probably be a lot better this time around, but no details yet...


----------



## sumitlian

Quote:


> Originally Posted by *Thunderclap*
> 
> I'm interested in seeing how much performance you will actually gain from running very fast ram i.e. 2133-2400-2600-2666MHz, etc. and if it will make sense to spend more on those ultra high speed ram modules. The performance gains will probably top out at the 2400MHz memory, but there might be some more to be had in higher speed ones. Makes me wonder how well have they utilised this aspect of the new APUs...


2400MHz Dual Channel = 38.4 GB/s
2600MHz Dual Channel = 41.6 GB/s
2666MHz Dual Channel = 42.6 GB/s
2800MHz Dual Channel = 44.8 GB/s
3000MHz Dual Channel = 48.0 GB/s

Even if we get the Kaveri APU stable with 2800MHz memory, I don't think 44.8 GB/s would be enough for a 13 CU / 832 SP iGPU. My 7790 still shows improvement going from 96 GB/s to 108 GB/s.

Why is AMD always so behind in memory bandwidth, whether it's CPU or iGPU?
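Those dual-channel figures follow directly from transfer rate x bytes per transfer x channels. A quick sketch of the arithmetic (the function name is mine, not from any benchmark tool):

```python
def ddr_bandwidth_gbps(transfer_mts: int, channels: int = 2, bus_bits: int = 64) -> float:
    """Peak bandwidth in GB/s: transfers/s x bytes per transfer x channels.
    DDR3-2400 means 2400 MT/s on a 64-bit (8-byte) channel."""
    return transfer_mts * (bus_bits // 8) * channels / 1000

for mts in (2400, 2600, 2666, 2800, 3000):
    print(f"DDR3-{mts} dual channel: {ddr_bandwidth_gbps(mts):.1f} GB/s")
```

Single channel is half of each figure, and a hypothetical triple-channel board would be 1.5x.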


----------



## maarten12100

Quote:


> Originally Posted by *sumitlian*
> 
> 2400MHz Dual Channel = 38.4 GB/s
> 2600MHz Dual Channel = 41.6 GB/s
> 2666MHz Dual Channel = 42.6 GB/s
> 2800MHz Dual Channel = 44.8 GB/s
> 3000MHz Dual Channel = 48.0 GB/s
> 
> Even if we make Kaveri APU stable at 2800MHz, I don't think 44.8 GB/s would be enough for 13 CU / 832 SP iGPU. My 7790 still shows improvement going from 96 GB/s to 108 GB/s.
> 
> Why is AMD always so behind in Memory bandwidth whether its CPU or iGPU ?


They are working on this; Excavator will improve, and there will (probably) also be a 512-bit bus on the 290X.
What if they make the FM2+ platform able to run with on-chip GDDR5? They might be able to fit a couple of high-density chips.


----------



## GrizzleBoy

What does this mean in the real world?


----------



## NaroonGTX

^Since the benchmarks seem to be from OpenCL-enabled workloads, it means you can expect pretty big gains in those workloads. None of this really points to 3D rendering (i.e. gaming) performance right now.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> ^Since the benchmarks seem to be from OpenCL-enabled workloads, it means you can expect pretty big gains in those workloads. None of this really points to 3D rendering (i.e. gaming) performance right now.


^This !

I thought I could make my own benchmark of my 7790 to compare it to Kaveri, hence I did it: I underclocked the card to 600 MHz core and 800 MHz memory (theoretically equivalent to Kaveri's iGPU core) and ran the latest SandraLite benchmark.

And I get these results:

GP Processing:
767.47 MP/s 8.15% Faster than Kaveri

GP Memory Bandwidth:
8.87 GB/s 69.1% Slower than Kaveri

GP Cryptography
7.88 GB/s 19.2% Slower than Kaveri

GP Memory Latency:
1093.20 ns 34.43% Slower than Kaveri

GP Financial Analysis:
291.73 kOPT/s 8.97% Slower than Kaveri

Remember that the 7790 has 14 CU (896 SP) while the Kaveri APU has 13 CU (832 SP); therefore Kaveri seems to be significantly faster and more efficient than my 7790, in these OpenCL benchmarks at least.

Edit: Memory bandwidth on my 7790 at the time of benching was 2x (800 MHz, 128-bit quad channel) the bandwidth of Kaveri's iGPU (1600 MHz, 64-bit dual channel). I tried to underclock my GPU's memory to 400 MHz to match the Kaveri system, but for some reason it crashed under load every time and showed display corruption, hence I left it at 800 MHz. So the results shown above should favor Kaveri even more than they currently do.


----------



## Seronx

Quote:


> Originally Posted by *sumitlian*
> 
> 2400MHz Dual Channel = 38.4 GB/s
> 2600MHz Dual Channel = 41.6 GB/s
> 2666MHz Dual Channel = 42.6 GB/s
> 2800MHz Dual Channel = 44.8 GB/s
> 3000MHz Dual Channel = 48.0 GB/s
> 
> Even if we make Kaveri APU stable at 2800MHz, I don't think 44.8 GB/s would be enough for 13 CU / 832 SP iGPU. My 7790 still shows improvement going from 96 GB/s to 108 GB/s.
> 
> Why is AMD always so behind in Memory bandwidth whether its CPU or iGPU ?


Convert all those numbers to Gbit/s.

Gbit/s = GFlops.

With 256-pin GDDR5M:
128-bit * 4 GHz => 512 Gbit/s = 512 GFlops. Enough to power all 8 CUs.


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> Convert all those numbers to Gbit/s.
> 
> Gbit/s = GFlops.


Actually it doesn't work that way! We were talking about the maximum theoretical read/write bandwidth of memory there.

GFLOPS, on the other hand, are the floating-point performance of a CPU when using a certain instruction set.

For example, a single core running at 3.0 GHz with a 128-bit SIMD unit packs four 32-bit floats per register, so (assuming one operation per lane per cycle) it can process at 12 GFLOPS (3.0 GHz x 4 lanes). But there are various overheads in the system architecture, and you will never be able to pull that theoretical GFLOPS figure from the CPU, so you will always see lower performance. The main overheads are the CPU's cache speed and memory bandwidth: when memory bandwidth is not sufficient for a CPU that can read and write much faster than the memory can handle, we get lower performance. This efficiency is greatly affected by L3 cache speed; both Phenom II and FX have been suffering from this.
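To make the lane counting above concrete, here is a minimal sketch (the function name and parameters are mine; `flops_per_lane=2` would model an FMA unit retiring two operations per lane per cycle):

```python
def simd_peak_gflops(clock_ghz: float, simd_bits: int,
                     flops_per_lane: int = 1, cores: int = 1) -> float:
    """Peak single-precision GFLOPS: a 128-bit register holds 128/32 = 4
    floats, each lane retiring `flops_per_lane` operations per cycle."""
    lanes = simd_bits // 32
    return clock_ghz * lanes * flops_per_lane * cores

print(simd_peak_gflops(3.0, 128))                      # plain SIMD, one core
print(simd_peak_gflops(3.0, 128, flops_per_lane=2))    # with FMA
```

The same counting generalizes to 256-bit AVX (8 lanes) or to GPU SIMDs; only the lane count and clock change.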


----------



## raghu78

Quote:


> Originally Posted by *flippin_waffles*
> 
> Maybe it has something similar to the 512 bit bus in Volcanic Islands.


Hawaii is a massive 430-440 sq mm chip with a 250W TDP. Kaveri is more like 220-240 sq mm and 65-100W TDP, so 512-bit is impossible. Even 256-bit DDR3 is quite difficult as motherboard costs go up; maybe 128-bit GDDR5 is possible, but unlikely.


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> With 256-pin GDDR5M:
> 128-bit * 4 GHz => 512 Gbit/s = 512 GFlops. Enough to power all 8 CUs.


I think you should read the following very carefully.

http://en.wikipedia.org/wiki/Gflops
http://en.wikipedia.org/wiki/Memory_bandwidth


----------



## Clocknut

Quote:


> Originally Posted by *raghu78*
> 
> Hawaii is a massive 430 - 440sq mm chip with 250W TDP. Kaveri is more around 220 - 240 sq mm and - 65 - 100w TDP .so 512 bit is impossible. even 256 bit DDR3 is quite difficult as the motherboard costs go up. maybe 128 bit GDDR5 is possible but unlikely.


Which is why I said they should have gone with just triple channel + 3 DIMM slots only (instead of 4 or 6) to save cost.


----------



## Usario

192 or 256 bit DDR3 would probably work well enough.


----------



## Seronx

Quote:


> Originally Posted by *sumitlian*
> 
> I think you should read the following very carefully.
> http://en.wikipedia.org/wiki/Gflops
> http://en.wikipedia.org/wiki/Memory_bandwidth


I'm talking about the correlation.

1 Gbit/s is needed for 1 Gflop. If you go into HPC with multi-nodes it is required to have at least 2 Gbit/s for 1 Gflop.


----------



## geoxile

Quote:


> Originally Posted by *flippin_waffles*
> 
> Just had a look at the HSA foundation website.
> 
> http://hsafoundation.com/
> 
> It appears that VIA and Vivante have now joined as well. Very interesting, Charlie had a very detailed write up on the new Vivante GPU.
> 
> http://semiaccurate.com/2013/09/05/vivante-has-a-high-precision-mobile-gpu/


Those are hardware manufacturers, not software developers


----------



## Clovertail100

Did AMD's board of directors just hand over the reins to their engineering team?

Things are looking seriously good lately.


----------



## flippin_waffles

Quote:


> Originally Posted by *geoxile*
> 
> Those are hardware manufacturers, not software developers


Yes, I wasn't really responding to that part of the question because software developers don't need to be part of the HSA foundation to write code for it. There have been numerous announcements of next gen software taking advantage of heterogeneous computing. Adobe is the most recent I believe.


----------



## MrJava

Actually the iGPU has its own cache hierarchy and has a very wide bus which bypasses cache coherency with the CPU (used for graphics). Its bandwidth should not be affected by slow CPU caches.
Quote:


> Originally Posted by *sumitlian*
> 
> Actually it doesn't work that way!
> We were talking about maximum theoretical read/write bandwidth of memory at there.
> 
> While GFLOPS are the floating point performance of a CPU when using a certain instruction set.
> 
> For example a single core running at 3.0 GHz when using 128 bit instruction set (Assuming IPC = 1.0 ) can process at a speed of 48 GFLOPS [ (3.0GHz x 128 bits/cycle) / 8 ]. But there are various overheads in system architecture and you will never be able to pull that theoretical GFLOPS from CPU, resulting you will always have lower performance. Some main overheads are CPU's cache speed and memory bandwidth, when memory bandwidth is not sufficient enough for the CPU that is able to read/write at much faster than memory can handle, then we get lower performance. This efficiency is greatly affected by L3 cache speed. Both Phenom II and FX have been suffering from this.


----------



## Asterox

Quote:


> Originally Posted by *Mookster*
> 
> *Did AMD's board of directors just hand over the reigns to their engineering team?*
> 
> Things are looking seriously good lately.


Not really, but the AMD CPU architecture is now under the command of "CPU General Jim Keller", and he allegedly knows something about this area.

After all, Jim was not the only one to come back to AMD; Raja Koduri also returned from Apple, and there's the arrival of John Gustafson as well, although he did not come from Apple.

http://www.techspot.com/news/49611-apple-chip-designer-jim-keller-heads-back-to-amd.html

http://www.brightsideofnews.com/news/2013/4/19/apples-graphics-cto-raja-koduri-leaves-the-company2c-goes-back-to-amd.aspx

http://www.anandtech.com/show/6202/amd-hires-exintel-labs-architect-john-gustafson-as-chief-graphics-product-architecture


----------



## s-x

Quote:


> Originally Posted by *Mookster*
> 
> Did AMD's board of directors just hand over the reigns to their engineering team?
> 
> Things are looking seriously good lately.


They flushed some of the bad apples that led them to the predicament they were in, so overall management should be better than it has been in a couple of years.


----------



## NaroonGTX

We'll probably see Jim Keller's magic in Excavator. Jim did say after all, that they're on track to catch up on high-performance cores later down the line.


----------



## MrJava

Current management has a lot of stars:
- Jim Keller
- Raja Koduri
- Lisa Su
- John Gustafson
- Mark Papermaster

Hell, even Rory Read was a great choice because of his relationship with a big OEM - Lenovo.
Quote:


> Originally Posted by *NaroonGTX*
> 
> We'll probably see Jim Keller's magic in Excavator. Jim did say after all, that they're on track to catch up on high-performance cores later down the line.


----------



## raghu78

Quote:


> Originally Posted by *NaroonGTX*
> 
> We'll probably see Jim Keller's magic in Excavator. Jim did say after all, that they're on track to catch up on high-performance cores later down the line.


yeah by 2015 I expect AMD to match Intel's big core


----------



## MrJava

Don't discredit AMD's engineering staff. With good leadership and (consequently) the right design goals, they can work wonders. Just look at the refinement of Jaguar vs. Bulldozer/Piledriver, for example.
Quote:


> Originally Posted by *NaroonGTX*
> 
> We'll probably see Jim Keller's magic in Excavator. Jim did say after all, that they're on track to catch up on high-performance cores later down the line.


----------



## Nonehxc

Quote:


> Originally Posted by *Mookster*
> 
> Did AMD's board of directors just *hand over the reigns to their engineering team?*
> 
> Things are looking seriously good lately.


Luckily they didn't hand them to their Old Driver's Team.

Lately, current and ex-AMD/ATI engineers and managers making a comeback have been staging a coup d'etat and taking positions where it matters: the sort of engineers who made the Athlon FX and all the forward thrust in the past. But rather than sitting in a concept/engineering position, they are in an engineering/management position, so there's little above them, mainly stockholders... and those want blood in the sand, honour & valour, not some pink-skirted gladiators graciously dancing their way away from really competing year after year. And the heads also want to get dirty...


----------



## sdlvx

http://www.tomshardware.com/reviews/memory-bandwidth-scaling-trinity,3419-3.html

Glad to see you guys focusing on memory. Going from 1600MHz RAM to 2400MHz RAM is about a 30% increase in performance.

For reference, that's about 12.8 GB/s at 1600MHz to 19.2 GB/s at 2400MHz per channel; more than a 30% increase in bandwidth, but they're still good numbers.

GDDR5 is >100 GB/s.

I don't think 5x performance over Iris is that unreasonable, especially considering Iris falls apart when you turn on MSAA or increase the resolution beyond the 768p that everyone loves to bench Iris at.

I'm assuming "up to almost 500% increase" refers to those situations at 1080p when MSAA is enabled and Iris is getting like 5 FPS while Kaveri is getting 24 FPS.


----------



## Asterox

Quote:


> Originally Posted by *MrJava*
> 
> Current management has a lot of stars:
> - Jim Keller
> - Raja Koduri
> - Lisa Su
> - John Gustafson
> - Mark Papermaster
> 
> Hell, even *Rory Read* was a great choice because of his relationship with a big OEM - Lenovo.


To be more precise Rory Read was a COO at Lenovo.


----------



## MrJava

Always bugs me that reviewers show CPU memory bandwidth only in their scaling charts for APUs.
Quote:


> Originally Posted by *sdlvx*
> 
> http://www.tomshardware.com/reviews/memory-bandwidth-scaling-trinity,3419-3.html
> 
> Glad to see you guys focusing on memory. Going from 1600mhz ram to 2400mhz ram is about a 30% increase in performance.
> 
> For reference that's about 12.8GB/s @ 1600mhz to 19GB/s 2400mhz ram. More than 30% increase in bandwidth but it's still good numbers.
> 
> GDDR5 is >100GB/s
> 
> I don't think 5x performance over Iris is that unreasonable. Specially considering Iris falls apart when you turn on MSAA or increase resolution beyond 768p that everyone loves to bench Iris on.
> 
> I'm assuming "up to almost 500% increase" refers to those situations at 1080p when MSAA is enabed and Iris is getting like 5fps and Kaveri is getting 24fps.


----------



## sumitlian

Quote:


> Originally Posted by *MrJava*
> 
> Actually the iGPU has its own cache hierarchy and has a very wide bus which bypasses cache coherency with the CPU (used for graphics). Its bandwidth should not be affected by slow CPU caches.


I really had no idea about the width of those iGPU caches and buses. Thanks for the info, but even if the cache is extremely fast it doesn't have much space, and performance might still be limited by slow memory bandwidth. This is why I was expecting Kaveri to support faster memory speeds.
Though you seem to be correct about what you said, because at the same clocks Kaveri is faster than my 7790. It must be because of the cache's wider bus to the CPU and vice versa.


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> I'm talking about the correlation.
> 
> 1 Gbit/s is needed for 1 Gflop. If you go into HPC with multi-nodes it is required to have at least 2 Gbit/s for 1 Gflop.


I really know nothing about how HPC works.
But here in case of CPU and GPU, I've searched and learned something that might be useful. If anything is wrong, please correct me.

_The concept of machine balance has been defined in a number of studies as the ratio of peak floating-point operations per CPU cycle to peak memory operations per CPU cycle for a particular processor:
(Peak floating ops/cycle) / (Peak memory ops/cycle)_
http://www.cs.virginia.edu/~mccalpin/papers/balance/

512 GFLOPS = 512 billion 32-bit operations per second (since a float is 32 bits). Using the 1-bit-per-FLOP correlation above: 512 Gbit/s / 8 = 64 GigaBytes/second.

In the case of the Kaveri iGPU, one compute unit contains 64 shader processors running at 600 MHz (0.6 GHz), so max GFLOPS for 13 CUs will be 13 CU x 64 SP x 0.6 GHz x 2 FLOPs/cycle (FMA) = 998.4 GFLOPS, or 998,400 MegaFLOPS.

Since, 1 FLOPS = One floating point operation per second [or] one 32 bit operation per second.

1 MegaFLOPS = One Million 32 bit operations per second.

998400 Mega FLOPS = 9.984 x 10^11 32 bit operations per second

Now convert it to Bytes per second,
it will be 1.248 x 10^11 bytes per second.

[or] 121,875,000 KB per second

[or] 119018.5 MB per second

[or] 116 GB per second.

This should be the final maximum theoretical memory bandwidth which is perfect for Kaveri's iGPU.

I know, and we all know, that we always have GPUs capable of thousands of GFLOPS, but they might be limited by memory bandwidth. I admit not all workloads are affected by higher memory bandwidth, but in certain cases we can see the improvement from having it. That's the point: if a GPU performs better in some situations and the same in most others, then we can clearly say those certain situations are being bottlenecked by lower memory bandwidth. Hardware and software developers are probably aware of this, but I believe tweaking each model of graphics card to its best will be extremely difficult, because each GPU would have to be analyzed for how many flops it can do and how much memory bandwidth it needs for optimum performance, even in the worst case.


----------



## Darkstalker420

Quote:


> Originally Posted by *Mookster*
> 
> Did AMD's board of directors just hand over the reigns to their engineering team?
> 
> Things are looking seriously good lately.


They should have done this years ago LOL! Suits stay in the board room... let the men in white coats do their thing.

Thanx.


----------



## Seronx

@sumitlian, you don't need to convert bits to bytes. 1 bit = 1 flop, 1 byte = 8 flops.

ES Kaveri 35W TDP:
2 Modules: each with 16 32-bit FMACs. 1.8 GHz
32 SIMDs: each with 16 32-bit FMACs. 0.5 GHz

1.8 GHz * 16 * 2 => 57.6 GFlops
0.5 * 16 * 32 => 256 GFlops

57.6 + 256 => 313.6 GFlops

You would need dual 64b channels and RAM to be at 2400+ MHz to sustain that in actual compute. It only gets worse at higher clocks.
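Seronx's break-even claim can be sketched with a quick script; the unit counts and clocks are the ones in the post above, and the dual-channel DDR3 formula (two 64-bit channels, transfer rate equal to the rated "MHz") is the standard one. Per the 1 Gbit/s per GFlop rule from earlier in the thread, 313.6 GFlops wants roughly 313.6 Gbit/s:

```python
# ES Kaveri figures from the post: 2 modules @ 1.8 GHz, 32 SIMDs @ 0.5 GHz,
# each unit with 16 32-bit FMACs (counted here as 1 flop/cycle, no FMA).
cpu_gflops = 1.8 * 16 * 2        # 57.6
gpu_gflops = 0.5 * 16 * 32       # 256.0
total_gflops = cpu_gflops + gpu_gflops

# Dual-channel DDR3: two 64-bit (8-byte) channels; MT/s equals the rated "MHz".
def ddr3_bandwidth_gbit_s(mt_per_s, channels=2, bytes_per_channel=8):
    return mt_per_s * channels * bytes_per_channel * 8 / 1000  # Gbit/s

print(round(total_gflops, 1))            # 313.6 GFlops
print(ddr3_bandwidth_gbit_s(2400))       # 307.2 Gbit/s -- just under
print(ddr3_bandwidth_gbit_s(2600))       # 332.8 Gbit/s -- enough
```

So 2400 MHz dual-channel is right at the edge of sustaining the combined throughput, which is why faster RAM keeps helping.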


----------



## akromatic

Quote:


> Originally Posted by *sumitlian*
> 
> 2400MHz Dual Channel = 38.4 GB/s
> 2600MHz Dual Channel = 41.6 GB/s
> 2666MHz Dual Channel = 42.6 GB/s
> 2800MHz Dual Channel = 44.8 GB/s
> 3000MHz Dual Channel = 48.0 GB/s
> 
> Even if we make Kaveri APU stable at 2800MHz, I don't think 44.8 GB/s would be enough for 13 CU / 832 SP iGPU. My 7790 still shows improvement going from 96 GB/s to 108 GB/s.
> 
> Why is AMD always so behind in Memory bandwidth whether its CPU or iGPU ?
> 
> 
> 
> 
> 
> 
> 
> 
> Why ?


Then again, if AMD did release faster memory solutions it would have cannibalized sales of the 77XX-class GPUs. Not that I don't welcome that, though.

I'm dying for a desktop platform, Mac mini sized or smaller, that offers full desktop gaming performance at 1080p mid-high without AA and isn't a notebook.


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> @sumitlian, you don't need to convert bits to bytes. 1 bit = 1 flop, 1 byte = 8 flops.
> 
> ES Kaveri 35W TDP:
> 2 Modules: each with 16 32-bit FMACs. 1.8 GHz
> 32 SIMDs: each with 16 32-bit FMACs. 0.5 GHz
> 
> 1.8 GHz * 16 * 2 => 57.6 GFlops
> 0.5 * 16 * 32 => 256 GFlops
> 
> 57.6 + 256 => 313.6 GFlops
> 
> You would need dual 64b channels and RAM to be at 2400+ MHz to sustain that in actual compute. It only gets worse at higher clocks.


I've understood the CPU side of the calculation you've written.
But I still can't see how you arrived at 256 GFLOPS for the iGPU.
The 7790 is 1792 GFLOPS at stock and Kaveri's iGPU is only *256 GFLOPS*?








...and that ES Kaveri kills the 7790 in a GP OpenCL benchmark.









Well nice chitchatting with you







We'll come back to it when it's officially released.


----------



## maarten12100

Quote:


> Originally Posted by *sumitlian*
> 
> I've understood the CPU side of calculation what you've written.
> But its still out of my mind how you come to a conclusion to 256 Gflops of iGPU.
> 7790 is 1792 GFLOPS at stock and Kaveri's iGPU is only *256 GFLOPS* ?
> 
> 
> 
> 
> 
> 
> 
> 
> ...and that ES Kaveri kills 7790 in GP OpenCL benchmark.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Well nice chitchatting with you
> 
> 
> 
> 
> 
> 
> 
> We'll come back when it will be officially released.


That is a 35w model though


----------



## sumitlian

Quote:


> Originally Posted by *akromatic*
> 
> then again if AMD did release faster memory solutions it would have cannibalized sales of the 77XX class GPUs not that i dont welcome that though.
> 
> I'm dieing for a desktop platform mac mini sized or smaller that offers full desktop gaming performance at 1080p mid-high without AA that isnt a notebook


The upcoming APU might fulfill your needs!


----------



## DaveLT

Quote:


> Originally Posted by *maarten12100*
> 
> That is a 35w model though


It still can't be 1/7 of the 7790 either way.


----------



## maarten12100

Quote:


> Originally Posted by *DaveLT*
> 
> It still can't be 1/7 of the 7790 either way.


A high clock over a larger core can cause it


----------



## akromatic

Quote:


> Originally Posted by *sumitlian*
> 
> Upcoming APU might fulfill your needs !


I'm only hoping, as my Trinity platform didn't.


----------



## sumitlian

Quote:


> Originally Posted by *maarten12100*
> 
> That is a 35w model though


I had totally forgotten about its TDP though.









Well, here is what I get with my 7790 in Sleeping Dogs at Ultra, 768p, High AA: 896 shader processors at 510 MHz and 500 MHz memory. (Remember the 7790 has 14 CUs while the Kaveri one has 13.)
You can clearly see the power consumption in GPU-Z. Believe me, it's between 17.5 watts and 26 watts max.



So now you can understand Kaveri will be more optimized and efficient, as the CPU side is 28nm. I am fully confident it will do 900+ GFLOPS on the GPU side with the CPU at 1.8 GHz dual-module, all in one at 35 watts.


----------



## sumitlian

Quote:


> Originally Posted by *DaveLT*
> 
> It still can't be 1/7 of the 7790 either way.


*No! You are totally wrong.*
See post no. 148.


----------



## Seronx

Quote:


> Originally Posted by *sumitlian*
> 
> But its still out of my mind how you come to a conclusion to 256 Gflops of iGPU.
> 7790 is 1792 GFLOPS at stock and Kaveri's iGPU is only *256 GFLOPS* ?
> 
> 
> 
> 
> 
> 
> 
> 
> ...and that ES Kaveri kills 7790 in GP OpenCL benchmark.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Well nice chitchatting with you
> 
> 
> 
> 
> 
> 
> 
> We'll come back when it will be officially released.


7790 85W => 56 SIMDs @ 1 GHz
Kaveri ES 35W => 32 SIMDs @ 0.5 GHz

56 * 16 * 1 => 896 GFlops
32 * 16 * 0.5 => 256 GFlops

I'm not including FMA in the results, as 1 FMA/MUL/ADD GFlop requires only 1 Gbit/s. If you do count FMA it comes out to 512 GFlops for the ES Kaveri and 1792 GFlops for the 7790.
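The two counting conventions (with and without FMA) can be laid side by side in a small sketch; the SIMD counts and clocks are the ones quoted in the post:

```python
# GFlops = SIMDs x 16 lanes x clock (GHz), optionally x2 when counting FMA
# (fused multiply-add) as two operations per cycle.
def simd_gflops(simds, clock_ghz, fma=False):
    return simds * 16 * clock_ghz * (2 if fma else 1)

# 7790 (85 W): 56 SIMDs @ 1.0 GHz; ES Kaveri iGPU (35 W): 32 SIMDs @ 0.5 GHz
print(simd_gflops(56, 1.0))            # 896.0  (no FMA)
print(simd_gflops(32, 0.5))            # 256.0  (no FMA)
print(simd_gflops(56, 1.0, fma=True))  # 1792.0 -- the commonly quoted 7790 figure
print(simd_gflops(32, 0.5, fma=True))  # 512.0
```

This explains the apparent 1/7 discrepancy discussed above: the 1792 GFLOPS spec counts FMA, while the 256 GFlops figure does not, and the ES part runs half the SIMDs at half the clock.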


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> 7790 85W => 56 SIMDs @ 1 GHz
> Kaveri ES 35W => 32 SIMDs @ 0.5 GHz
> 
> 56 * 16 * 1 => 896 GFlops
> 32 * 16 * 0.5 => 256 GFlops
> 
> I'm not including FMA in the results as 1 FMA/MUL/ADD GFlop requires only 1 Gbit. If you do use FMA it will come out with 512 GFlops for the ES Kaveri and 1792 GFlops with the 7790.


Now I understand








btw I was calculating FMA all the time.









Well, the A10-6800K's iGPU had 648 GFLOPS FMA
http://en.wikipedia.org/wiki/List_of_AMD_APU_microprocessors#.22Richland.22_.282013.2C_32_nm.29
so I still don't see why the Kaveri one would only have 512 GFLOPS.


----------



## assaulth3ro911

If this is true.........


----------



## NaroonGTX

Quote:


> So now you can understand Kaveri will be more optimized and efficient as CPU side is 20nm.


Isn't Kaveri still 28nm? There haven't been any reports or anything to suggest a full shrink down to 20/22nm.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> Isn't Kaveri still 28nm? There haven't been any reports or anything to suggest a full shrink down to 20/22nm.










oh sorry, my bad








I corrected it.


----------



## Elmy

If you had bought AMD stock on Nov 15, 2012, you would have doubled your money today......


----------



## Pheesh

Quote:


> Originally Posted by *Elmy*
> 
> If you had bought AMD stock on Nov 15, 2012, you would have doubled your money today......


And if you bought AMD stock in July 2013 and sold in Sept 2013 you would have lost 30%; if you bought AMD stock any time from 2010 to mid-2012 you would have lost half your money or more. God help you if you bought between 2004 and 2007. AMD is a high-risk stock: you could double your money or more, or you could lose it all. Might as well go to Vegas.


----------



## maarten12100

Quote:


> Originally Posted by *Pheesh*
> 
> And if you bought AMD stock July 2013 and sold Sept 2013 you could have lost 30%, and if you bought AMD stock any time from 2010 to mid 2012 you would lose half your money or more. God help you if you bought between 2004-2007. AMD is a high risk stock, you could double your money or more or you could lose it all, might as well goto vegas


You only lose if you sell, or if it goes bankrupt and isn't bought out.

A bit off topic don't you think.


----------



## MrJava

Will you give up trying to convince us that Kaveri has 4 128-bit (or 2 256-bit) FMACs? We all know that's not happening. Oh, and don't start with the conspiracy theories about how we haven't seen the "real Kaveri" once it's released.
Quote:


> Originally Posted by *Seronx*
> 
> @sumitlian, you don't need to convert bits to bytes. 1 bit = 1 flop, 1 byte = 8 flops.
> 
> ES Kaveri 35W TDP:
> 2 Modules: each with 16 32-bit FMACs. 1.8 GHz
> 32 SIMDs: each with 16 32-bit FMACs. 0.5 GHz
> 
> 1.8 GHz * 16 * 2 => 57.6 GFlops
> 0.5 * 16 * 32 => 256 GFlops
> 
> 57.6 + 256 => 313.6 GFlops
> 
> You would need dual 64b channels and RAM to be at 2400+ MHz to sustain that in actual compute. It only gets worse at higher clocks.


----------



## Durquavian

Quote:


> Originally Posted by *MrJava*
> 
> Will you give up trying to convince us that kaveri has 4 128-bit (or 2 256-bit) FMACs - we all know that's not happening. Oh, and don't start with the conspiracy theories about how we haven't seen the "real kaveri" once its released.


At least they're generating some good discussion. Where is your proof it isn't happening? I don't know either way, but it's still good info whether true or not.


----------



## Usario

Quote:


> Originally Posted by *MrJava*
> 
> Will you give up trying to convince us that kaveri has 4 128-bit (or 2 256-bit) FMACs - we all know that's not happening. Oh, and don't start with the conspiracy theories about how we haven't seen the "real kaveri" once its released.


He's going by that leaked die shot supposedly of a Steamroller module that showed two 256-bit FMACs.


----------



## Pheesh

Quote:


> Originally Posted by *maarten12100*
> 
> You only lose if you sell, or if it goes bankrupt and isn't bought out.
> 
> A bit off topic don't you think.


I agree Elmy's comment was off topic. Good luck in the stock market, I'm looking forward to more leaks on kaveri


----------



## NaroonGTX

I remember when that die shot was leaked. Most likely it was Excavator, and that's what a lot of people initially said. It wasn't until later that people decided there was suddenly a Steamroller v2, and that the module die shot was it. Personally I don't think there is a Steamroller v2; Kaveri will be the same as it always was.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> I remember when that die shot was leaked. Most likely it was Excavator, and that's what a lot of people initially said. It wasn't until later that people decided that suddenly there was a Steamroller v2, and that's what the module die shot was. Personally I don't think there is a Steamroller v2, Kaveri will be the same as it always was.


I see what you're saying. Actually, the die shot on the left was of Piledriver, and the other one should obviously be Steamroller. You can easily see the width of the FPU.

Piledriver has one 128-bit FMA per cluster, meaning 2 x 128-bit per module.
But in the other shot we are seeing 2 x 128-bit FPUs per cluster, meaning 4 x 128-bit FPUs per module, or you could say 2 x 256-bit FPUs per module.

One bad thing about Zambezi/Piledriver is that they needed two cycles to dispatch one 256-bit instruction.
What I'm afraid of is that if Steamroller's core (cluster) still seems (imo) to follow 2 x 128-bit FMA (one clock for each 128-bit FPU), then it will end up like the older Zambezi/Piledriver, and the results will be much lower in 256-bit FMA or AVX than Ivy Bridge and Haswell. I hope this time they will fix it with the release of Kaveri.

Source

Quote:


> Originally Posted by *MrJava*
> 
> Will you give up trying to convince us that kaveri has 4 128-bit (or 2 256-bit) FMACs - we all know that's not happening. Oh, and don't start with the conspiracy theories about how we haven't seen the "real kaveri" once its released.


Seronx and Usario are right about the die shot supposedly of a Steamroller module. And it does look like an upgraded/refined version of Piledriver, at least.


----------



## DaveLT

Looks like the FMAs doubled, too. Many things have changed; it looks _very_ different.


----------



## NaroonGTX

When I saw the die shot, they didn't have the full pic, just the pic that had the module on the right. So it was dubious as to what the context originally was. That might be Steamroller, and if it is, it's certainly different than what was originally detailed in 2012. This could explain the original delay, if true.


----------



## Seronx

Steamroller was delayed to compete with Haswell through Skylake and beyond.

The 4 ALU / 4 AGU / 4 vALU (1 SIMD) approach is probably meant to compete with a non-hyperthreaded Haswell core. If there are any issues, though, Steamroller+ is the one to wait for.


----------



## roofrider

Lol, delayed to compete with Skylake and beyond, but realistically even after this strategic delay it'll be trading blows with Sandy at best, huh! (Minus HSA, of course.)
Noice.


----------



## NaroonGTX

I don't think it was delayed for that, and I don't see how it could compete with Skylake anyway... Carrizo would be out around that time.


----------



## roofrider

This is funny.
Not sure if by Steamroller+ he means Steamroller v2/refresh, or Carrizo, which according to him has Steamroller cores and not Excavator.

Not saying Carrizo _is_ Excavator; it might very well be a Kaveri refresh, but the details regarding Carrizo are very sparse and there's nothing solid.
Anyway, this is a topic for a different thread.


----------



## Schmuckley

Quote:


> Originally Posted by *roofrider*
> 
> Lol, delayed to compete with skylake and beyond but realistically even after this strategic delay it'll be trading blows with Sandy at best huh! (minus HSA of course)
> Noice.


I'd be almost content with Sandy-ish performance out of an AMD chip..








almost..









Then there's the overclocking
















Sandy-ish performance with a 5.3 GHz OC on water + 2400 MHz RAM? Gimme! Gimme! Gimme!


----------



## DaveLT

We all know what phenomenal things RCM does to AMD chips ... On Richland ...








They will thread the ground Sandy never ever threaded.


----------



## NaroonGTX

Don't you mean they will tread the ground?


----------



## DaveLT

Quote:


> Originally Posted by *NaroonGTX*
> 
> Don't you mean they will tread the ground?


Of course I did. Even if it only stands up to Sandy in single-thread, if AMD prices it as usual we will see Intel eating their hearts out on the i3 series.


----------



## CynicalUnicorn

If they make a pure CPU approaching Sandy performance with 4 and 6 cores for the price of an i3, Intel is in trouble. And since AMD doesn't have fake octa-core (via hyperthreading) CPUs, they'll dominate in multi-thread too.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> If they make a pure CPU approaching Sandy performance with 4 and 6 cores for the price of an i3, Intel is in trouble. And since AMD doesn't have fake octa-core (via hyperthreading) CPUs, they'll dominate in multi-thread too.


I bet you they are definitely planning on a hexa-core and octa-core, judging from the early roadmaps. There's absolutely no reason Gigabyte would make high-end FM2 boards if AMD weren't planning on that.
What do you think, Unicorn?








I really hope they can stuff in a 6-core and 8-core. Even a quad priced at i3 levels is going to make Intel cry ... Forget Broadwell, this is the future.


----------



## geoxile

Quote:


> Originally Posted by *DaveLT*
> 
> Of course i did. Even if it only stands up to Sandy in single-thread if AMD prices it as usual we will see Intel eating their hearts out on the i3 series


What makes you think Kaveri will come close to Sandy in single-threaded performance?


----------



## DaveLT

Quote:


> Originally Posted by *geoxile*
> 
> What makes you think Kaveri will come close to Sandy in single-threaded performance?


I'm just guessing. I said "even"


----------



## NaroonGTX

There's nothing stopping AMD from doing a hexa-core or octa-core part on FM2+, but don't expect them to be APUs. We will definitely see a hexa-core APU at some point in the near future (and when I say near, I mean within a year or a little over a year). With the implementation of the high-density cell libraries combined with the shrink down to 28nm, they could easily stuff another module onto the die and have a beefier GPU as well. That will probably be Excavator, though. Then again, I'm not sure, since Excavator will supposedly only have up to a 65W part maximum. If that's true, it seems the APUs would become more efficient rather than getting more cores stuffed onto them.

They could bring the FX series over to FM2+ however. We'll just have to wait and see (might as well be a catchphrase by now). Updated roadmaps will arrive around the time of the next conference in November.


----------



## MrJava

What makes you think it won't?
Quote:


> Originally Posted by *geoxile*
> 
> What makes you think Kaveri will come close to Sandy in single-threaded performance?


Still think it's 2 128-bit FMACs per core. Also still 4 ports from the INT scheduler, so 2 ALU + 2 AGU. It's possible that the AGUs are more capable, allowing more types of instructions to execute on pipes 3 and 4. Perhaps another multiplier on pipe 2 as well?


----------



## geoxile

Quote:


> Originally Posted by *MrJava*
> 
> What makes you think it won't?
> Still think its 2 128-bit FMAC's per core. Also still 4 ports from the INT scheduler so 2 ALU + 2 AGU. It's possible that the AGUs are more capable to allow more types of instructions to execute on pipes 3 and 4. Perhaps another multiplier on pipe 2 as well?


AMD's older roadmaps showed they were targeting only 10-15% performance improvement between generations.


----------



## MrJava

Improvements at the front end and through the cache hierarchy have had a larger impact than expected.
Quote:


> Originally Posted by *geoxile*
> 
> AMD's older roadmaps showed they were targeting only 10-15% performance improvement between generations.


----------



## geoxile

I guess we'll see when Kaveri actually comes out


----------



## MrJava

Indeed we will. Btw, be sure to check out Phoronix's Kaveri benches when they're out, to judge performance on a level playing field.
Quote:


> Originally Posted by *geoxile*
> 
> I guess we'll see when Kaveri actually comes out


----------



## CynicalUnicorn

Quote:


> Originally Posted by *DaveLT*
> 
> I bet you they are definitely planning on a hexa-core and octa-core from the early roadmaps. There's absolutely no reason Gigabyte will make high-end FM2 boards if AMD weren't planning on that
> What do you think, *Unicorn*?
> 
> 
> 
> 
> 
> 
> 
> 
> I really hope they can stuff a 6-core and 8-core. Even a quad priced at i3 is going to make intel cry ... Forget broadwell, this is the future


I would first like to point out that you are the first person I've seen use the noun in this username when referring to me. "Cynical" describes the personality traits of "Unicorn." Srsly, Internet, get it right.









As it stands, Haswell is only about 10% better than Sandy Bridge in IPC, which is not much improvement after that much time. Piledriver isn't too good in comparison (only about 75% of SB's IPC) but excels in multi-threading thanks to all dem jiggahertz and physical cores. Hyperthreading just doesn't compare. Assuming we get another 10% increase (like from Bulldozer to Piledriver), then it will close the gap to within 20%, though I bet it will approach within 15%. That leaves it within about 25% of Haswell, which, while not good, is good enough when you factor in the ease of overclocking these things to ridiculous levels.
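Those gap estimates can be worked through in a few lines; the numbers are the rough figures from the paragraph above, normalized to Sandy Bridge = 1.0:

```python
# Rough IPC comparison using the post's own figures.
sandy = 1.0
haswell = sandy * 1.10          # "about 10% better than Sandy Bridge"
piledriver = sandy * 0.75       # "only about 75% that of SB"
steamroller = piledriver * 1.10 # assumed ~10% uplift, like Bulldozer -> Piledriver

print(round(steamroller, 3))            # 0.825 -> within ~17.5% of Sandy
print(round(steamroller / haswell, 3))  # 0.75  -> ~25% behind Haswell
```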

Obviously there's a reason for the uber-boards, yes. I hope to see an octa-core with a bare-bones iGPU (if one is absolutely necessary for it to function properly; maybe disable the iGPU functions if none is detected?) for performance, or something on AM3+/AM4. Intel still only has hexa-core CPUs at best, and while cores certainly aren't everything, I'd rather spend the same amount for triple the cores despite two-thirds the IPC when given the choice of an FX-6000 vs. an i3. I personally don't like that Intel only offers GPU-less CPUs once you get into the Extreme series and Xeons. I'm sorry, guys, I know you hate Nvidia and AMD, but your iGPUs suck and you're wasting precious silicon. Why do you charge nearly $200 for the cheapest overclockable chips, or the cheapest quad-core chips? (Minor rant: there is a computer at my local public library that advertises "Intel i5!", has USB 3.0 ports, and is used solely as a catalog look-up. STOP WASTING MY TAX DOLLARS!)

The only problem I see these and APUs in general running into is that DDR3 sucks for graphics memory. I'm actually really curious to see how the XB1 faces off against PS4: the former uses DDR3 for graphics and the latter uses GDDR5 for system memory. GDDR5's biggest problem is its bad timings while DDR3 doesn't have the bandwidth to handle graphics. Now, if the iGPU gets used for parallel processing (e.g. use it for PhysX: a program with calculations suited for a GPU rather than a CPU) while the graphics portion is loaded onto discrete GPUs, then that might work better than basically disabling it altogether when you add a GPU unsuited for DGM (meaning a not-6670/7750). But we don't know enough about HSA (unless I missed a major announcement) to be able to predict its performance. What the architecture will allow is far more important than the architecture alone.

And finally, I want a motherboard that supports dual FM2+/FM3 sockets if/when HSA takes off. It seems like a fun system to mess with. I doubt it will happen, but I can dream. Actually, that might not be a bad idea: octa-core with no/minimal iGPU in one socket and quad-core with stripped down 7790 in the other, and a perfect world for them to function together.


----------



## NaroonGTX

Quote:


> I hope to see an octa-core with a bare-bones iGPU (if absolutely necessary for it to function properly - maybe disable the iGPU functions if none is detected?) for performance, or something on AM3+/AM4.


An octocore part with an iGPU would have a massive die size. Don't think it would be within their 95W/100W TDP targets either. There won't be an AM4. AMD said a while back that FM2 would be the last socket before they settled on a unified socket, so either that is FM2+, or an upcoming FM3. The 1090FX platform was canceled a long time ago.
Quote:


> I'm actually really curious to see how the XB1 faces off against PS4: the former uses DDR3 for graphics and the latter uses GDDR5 for system memory. GDDR5's biggest problem is its bad timings while DDR3 doesn't have the bandwidth to handle graphics.


A lot of devs have already said the PS4 is the more powerful system. I've seen a lot of fanboys talk up the X1's eSRAM, but they fail to understand that it's not gonna magically boost graphics performance tenfold or anything. The GPU in the X1's APU just isn't as powerful as the PS4's; people have calculated from the specs that the X1's GPU is around a 7750 or so, while the PS4's is around a 7850.

As for the latencies, I remember Mark Cerny saying that they chose GDDR5 early in the design phase, so they knew what they would have to deal with. As it stands, the "higher latencies" won't have any real impact whatsoever. The OS isn't some bloated POS like Windows is, and generally won't be too demanding in those regards. In terms of graphics rendering, the latencies obviously won't mean s*** because the whole point of GDDR5 is to offset the higher latencies with faster memory speeds. From what Mark and co. have said, this hasn't produced any issues with software running on the system.

In terms of how games will look, multi-plats will largely be the same. The first-party devs will be the ones to truly harness the power of the respective systems. PS4 will, by the nature of its higher performance capabilities, produce better visuals in the end.


----------



## DaveLT

Barebones iGPU, you missed that one.
With Steamroller's efficiency improvements a 90W TDP is no problem; throw in a barebones GPU that beats an HD 4600 (easy, given GCN) and I bet you it won't go over 100W.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> An octocore part with an iGPU would have a massive die size. Don't think it would be within their 95W/100W TDP targets either. There won't be an AM4. AMD said a while back that FM2 would be the last socket before they settled on a unified socket, so either that is FM2+, or an upcoming FM3. The 1090FX platform was canceled a long time ago.


I don't want another FX either.
If Steamroller is exactly what we are expecting, then the upcoming APU, even with dual modules (four cores), might beat the 8350 per clock in both single- and multithreaded integer and FP benchmarks.


----------



## DaveLT

Besides, it's been clear for a year now (I think?) that no new FX boards were coming and that AMD has dropped FX completely.
What with FM2+ having all the goodies: integrated USB 3.0 (which LGA2011 still doesn't have, haha), 8 SATA3 ports, and now PCIe 3.0.


----------



## CynicalUnicorn

Unfortunately that's probably true. Intel has, what, two sockets? One for high-end extreme CPUs (re: niche) and the other for everything else. However, my AM3+ motherboard has everything you mentioned sans PCIe 3.0. Unless all eight SATA ports are native AMD and the USB 3.0 is integrated in the chipset (is that just bonus stuff from Gigabyte/ASRock/MSI right now?), in which case, wow, what a feature set!


----------



## Artikbot

Quote:


> Originally Posted by *DaveLT*
> 
> Besides it was getting clear that there wasn't any new FX boards coming out for a year now (i think?) that AMD dropped FX completely already
> What-with that FM2 has all the goodies, Integrated USB3 (that LGA2011 still doesn't have, haha), 8 SATA3 ports and now PCIE 3.0


And I'm happy that they did.

The AMx platform has long been outdated. It lived a good life and gave us some really nice chips. Long live AMx!


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Unfortunately that's probably true. Intel has, what, two sockets? One for high-end extreme CPUs (re: niche) and the other for everything else. However my AM3+ motherboard has everything you mentioned sans PCIe 3.0, unless all eight SATA ports are native AMD and the USB 3.0 is integrated in the chipset (is that just bonus stuff from Gigabyte/ASRock/MSI right now?) in which case, wow, what a feature set!


Yes, it's all native.


----------



## Zyro71

Wait... I have a small issue with this.

Okay, sure, *IF* this article happens to come forth and be true in everything, that's just the GPU side of the APU, no?

What about the processing power? With that big of a GPU in there, the cores must not be so impressive.

That, or Steamroller has very good IPC and other tricks up its sleeve to provide great performance.

From what I'm guessing, the GPU part alone is like 50-60 watts here if maxed out, and the rest is left for the cores themselves.

Also, the performance increase with the GCN architecture seems rather impressive, from what I've seen with the A6-5200 APU, needing only 128 cores to rival an Intel HD 4000 (that being compared to my older 7640G with 256 cores).

But enough of my blabbing. I feel like they are shoving entirely too much GPU power onto the chip. This may balance it out a bit, but the memory controller... unless they can pull off some magic voodoo like they did with the lower-end Bobcat cores, where a single channel of RAM yielded great performance, things will be interesting.

Eh, either way, I want to get my hands all over this, even though I prefer the Jaguar APUs over everything else.


----------



## NaroonGTX

Quote:


> What about the processing power, With that big of a GPU in there, the cores must not be so impressive.


The GPU doesn't have any real bearing on the CPU's performance. The only thing the iGPU does die-wise is just take up an adequate amount of space to give out some good graphics/rendering/compute performance. The CPU isn't hampered by this in any way, the only "downside" there is, is the core count. They can fit two modules on there and get four cores, which is what they've been doing since the beginning. The only exception being Llano which did have four separate physical K10 cores on it since it wasn't Bulldozer-based.

Llano









Trinity/Richland









Everything on the dies were pretty much well-balanced; caches, I/O & misc, cores/modules, GPU, etc. Trinity/Richland has roughly the same performance clock-for-clock when compared to its AM3+ counterpart, the FX-4300.


----------



## roofrider

I'm guessing this Mantle thing should work on Kaveri too; its iGPU has GCN cores, after all.


----------



## nitrubbb

Quote:


> Originally Posted by *roofrider*
> 
> I'm guessing this Mantle thing should be working on Kaveri too, its iGPU has GCN cores after all.


Definitely. It could be a beast in Crossfire with an external GPU.


----------



## roofrider

Quote:


> Originally Posted by *nitrubbb*
> 
> definitely. could be a beast in crossfire with external GPU


Well, that BF4 boost will be welcomed with open arms by the APU guys then.


----------



## sumitlian

Quote:


> Originally Posted by *roofrider*
> 
> I'm guessing this Mantle thing should be working on Kaveri too, its iGPU has GCN cores after all.


Obviously! They are claiming CPU draw calls dispatch at a rate up to 900% faster with Mantle over DirectX 11. If it's true then we'll really see Kaveri running Battlefield 4 at 1080p medium settings.


----------



## Zyro71

Quote:


> Originally Posted by *NaroonGTX*
> 
> The GPU doesn't have any real bearing on the CPU's performance. The only thing the iGPU does die-wise is just take up an adequate amount of space to give out some good graphics/rendering/compute performance. The CPU isn't hampered by this in any way, the only "downside" there is, is the core count. They can fit two modules on there and get four cores, which is what they've been doing since the beginning. The only exception being Llano which did have four separate physical K10 cores on it since it wasn't Bulldozer-based.
> 
> Everything on the dies were pretty much well-balanced; caches, I/O & misc, cores/modules, GPU, etc. Trinity/Richland has roughly the same performance clock-for-clock when compared to its AM3+ counterpart, the FX-4300.


Well, I understand it may not hurt the performance of the CPU cores. I meant, they have to stay within the TDP, correct? Wouldn't the bigger GPU die reduce the CPU performance?


----------



## maarten12100

Quote:


> Originally Posted by *sumitlian*
> 
> Obviously! They are claiming CPU draw calls dispatch at a rate up to 900% faster with Mantle over DirectX 11. If it's true then we'll really see Kaveri running Battlefield 4 at 1080p medium settings.


The patch will come two months later, though. I'm totally thrilled.


----------



## sumitlian

Quote:


> Originally Posted by *Zyro71*
> 
> Well, I understand it may not hurt performance with the CPU cores, I meant, they have to stay within the TDP correct? Wouldn't this reduce the CPU performance for the bigger GPU die?


They will have 20nm by then with more GPU cores.


----------



## maarten12100

Quote:


> Originally Posted by *sumitlian*
> 
> They will have 20nm by then with more GPU cores.


And HDL


----------



## sumitlian

Quote:


> Originally Posted by *maarten12100*
> 
> The patch will be two months later though I'm totally thrilled.


Yes I know








It should already run BF4 at playable fps even without Mantle. These days 832 SPs are enough for 1080p at custom settings. Only one thing gives me a bad feeling, and that's the memory bandwidth. Well... let's see how they manage it. I've read somewhere that the CPU-to-iGPU interconnect bus is 256 bits in each direction, though the number of channels is unconfirmed. At least it's many times better than HyperTransport, which had a 16-bit bus in each direction with two channels.


----------



## sumitlian

Quote:


> Originally Posted by *maarten12100*
> 
> And HDL


Sorry







Didn't understand !


----------



## roofrider

Quote:


> Originally Posted by *sumitlian*
> 
> Sorry
> 
> 
> 
> 
> 
> 
> 
> Didn't understand !


High density libraries, lower power consumption and reduced area. So moar free space for moar GPU/CPU cores!

Btw next stop is going to be 20nm or 22nm?


----------



## sumitlian

Quote:


> Originally Posted by *roofrider*
> 
> High density libraries, lower power consumption and reduced area. So moar free space for moar GPU/CPU cores!


Oh Thanks !

Then they will probably hide a 7870 within it


----------



## EniGma1987

Quote:


> Originally Posted by *Zyro71*
> 
> Well, I understand it may not hurt performance with the CPU cores, I meant, they have to stay within the TDP correct? Wouldn't this reduce the CPU performance for the bigger GPU die?


That is correct in a way. The CPU speed usually stays a bit lower during heavy GPU usage because the graphics cores are drawing so much power. When the GPU is not doing work, the CPU cores generally all run at turbo speeds because of the additional headroom. With our custom-built systems we can go into the BIOS and raise the TDP threshold so that the CPU cores do not need to be throttled down during heavy GPU usage.

Quote:


> Originally Posted by *roofrider*
> 
> High density libraries, lower power consumption and reduced area. So moar free space for moar GPU/CPU cores!


Don't forget about reduced CPU clock speeds too, as a side effect of high-density libraries.


----------



## DaveLT

Quote:


> Originally Posted by *Zyro71*
> 
> Well, I understand it may not hurt performance with the CPU cores, I meant, they have to stay within the TDP correct? Wouldn't this reduce the CPU performance for the bigger GPU die?


What the bigger-processor crowd wants is a SMALLER GPU. It can be a really big die if AMD doesn't want to create several SKUs, but if they sliced off a few SIMDs it's no problem.
Quote:


> Originally Posted by *roofrider*
> 
> High density libraries, lower power consumption and reduced area. So moar free space for moar GPU/CPU cores!
> 
> Btw next stop is going to be 20nm or 22nm?


20nm. They're not Intel. BTW, I bet you they're gonna disconnect from TSMC and use GF Fab 8 for 20nm (I can't bet on GPUs though).


----------



## NaroonGTX

Quote:


> Well, I understand it may not hurt performance with the CPU cores, I meant, they have to stay within the TDP correct? Wouldn't this reduce the CPU performance for the bigger GPU die?


Steamroller will be more efficient overall, with its transition to 28nm bulk and architectural improvements, as well as the usage of RCM (resonant clock mesh, which was enabled with Richland and explains how they were able to reach higher clocks while staying within the previous power/thermal envelopes). Rumor has it that Steamroller might even have the high-density libraries, but I don't know if that's true or not. HDL on 28nm would increase the efficiency even more.

Reports say that AMD is targeting a 65W TDP for the top-end Carrizo part, but that they are "scoping out" 45W, which is insane if true. Those figures sound like something that HDL would enable, so maybe HDL won't be here until Excavator in late 2014/early 2015.

They may go to 20nm next with Excavator as well, which would introduce HDL and DDR4 support. CPU clock speeds would drop by an unknown amount, but this would be offset by a large increase in IPC, which is one of the main goals of Excavator. Jim Keller's CPU expertise (he was one of the main people responsible for Athlon 64, and he did work on K7 as well) and Raja Koduri's GPU expertise (I wouldn't be surprised if Mantle was his idea!) will most definitely shine through in Excavator.

As for Mantle, I don't think I'm alone when I say that it was the most interesting thing announced at GPU '14. Kaveri will be the first APU to have a GCN-based graphics core. The Mantle API is the product of AMD's console wins. It is a low-level API similar to coding on a console. Now that the consoles share an architecture that is for the most part the same as what's in our mainstream PCs, it will be even easier to do cross-platform development. All gamers, whether on console or PC, will benefit from this. Games will have shorter development cycles, fewer development issues (at least in terms of porting), and it'll be a smoother transition overall.

Now that Mantle is optimized for GCN (which means anyone with an HD 7000 series card, Kaveri, or an R7/R9-200 series GPU), AMD is at a great advantage over the competition. No more CPU bottlenecks due to lazy coding. I guess this is also what those devs meant when they said their engines would be "optimized for APUs as well". Things are getting pretty interesting. "Gaming Evolved" and "AMD Optimized" are no longer such vague terms anymore, I think.


----------



## DaveLT

Yeah, K8 was uber-fantastic for its day. Having Jim Keller back on the team is a win. Raja, on the other hand, is also another win, but something... strikes me. Both of them were at some point (or probably both at THE SAME TIME) at Apple.


----------



## NaroonGTX

Yeah, AMD has pulled a lot of top-notch talent into their ranks. Roy Taylor worked at Nvidia before joining AMD recently. In the cases of both Jim Keller and Raja Koduri, both previously worked at AMD and left the company for whatever reasons they had at the time, and now both have re-joined. Steve Jobs wanted the best of the best, so he had people like Raja, Jim, and Bob Drebin working on various Apple products, most notably in the mobile sector. Raja was only behind GPU hardware development when he was originally at AMD, but now he's behind both that and software development. This might explain AMD's commitment to improving their drivers, including the Linux ones now.

The company certainly has a lot of great personalities on board now. Mark Papermaster, Jim Keller, Raja Koduri, Roy Taylor, Lisa Su, Rory Read, etc.


----------



## Phantom123

This would be great, really. But it always turns out that AMD is late to the party by 1 or 2 years. If they can stay on schedule and release this kind of product on a continual basis, then Intel and the others would start facing real competition.


----------



## azanimefan

I still can't believe these Kaveri numbers... if true, this APU will be the game changer. At $200 for a quad core it probably will be worth it, as you'll be getting a ~$130 CPU and a ~$130 GPU in one package. Dual graphics will be worth it as well, as the 7790/7850 are strong GPUs in their own right, and piggybacking one of those onto this little beast will give you some elite game performance for peanuts. $500 gaming PCs which could run with a current-gen $800 machine?

yeah... this will really shake up the landscape if true.

of course i remember all the bulldozer buzz and all the original phenom buzz... and all the haswell buzz too. so i'm going to keep my enthusiasm in check here.


----------



## MrJava

A small detail from the Sisoft benchmarks:

832 SPU Kaveri is listed as "1 device / *2 threads*"
http://www.sisoftware.eu/rank2011d/show_run.php?q=c2ffcdf4d2b3d2efdee7d0e4dcedcbb984b492f792af9fb9caf7cf&l=en

512 SPU Kaveri is listed as "1 device / *1 thread*"
http://www.sisoftware.eu/rank2011d/show_run.php?q=c2ffcbfed8b9d8e5d4e0d6efc9bb86b690f590ad9dbbc8f5cd&l=en

A hint that this might be the result of drivers allowing OpenCL enabled programs to view/use Kaveri's iGPU + dGPU as if it were one device.


----------



## 8800GT

Quote:


> Originally Posted by *MrJava*
> 
> You have to take the given "benchmark numbers" with loads of salt for the below reasons:
> 
> 13CU and 832SPU would make for a huge die - we are expecting 8CU and 512SPU
> One possible conclusion is that this is a 7CU (448 SPU) Kaveri attached to a 6CU (384 SPU) Hainan discrete GPU
> Could be spoofed
> Point 2 raises some interesting questions. Can hardware along with AMD driver support allow for applications to view Kaveri iGPU + discrete GPU as one device for OpenCL/DirectCompute?
> 
> Edit: Also SiSoft could be doing something stupid as well.


SiSoft always does stupid stuff. It always puts my 7870 at 42x the compute of a 7970. As if.


----------



## DaveLT

Quote:


> Originally Posted by *8800GT*
> 
> Sisoft always does stupid stuff. Always puts my 7870 as 42x higher compute than a 7970, As if.


Lol. Seriously?!


----------



## 8800GT

Quote:


> Originally Posted by *DaveLT*
> 
> Lol. Seriously?!


Yea, I was in the top 25 forever. Might still be, look for "JOSH-PC". At one time I was number 1 with a crazy otherworldly high compute score but other people had the glitch as well.

Point is that SiSoft is a very unreliable source. It can spit out 2 completely different numbers on a back to back run. Usually it is pretty good, but as I said -- definitely not the de facto benching program around.


----------



## DaveLT

Quote:


> Originally Posted by *8800GT*
> 
> Yea, I was in the top 25 forever. Might still be, look for "JOSH-PC". At one time I was number 1 with a crazy otherworldly high compute score but other people had the glitch as well.
> 
> Point is that SiSoft is a very unreliable source. It can spit out 2 completely different numbers on a back to back run. Usually it is pretty good, but as I said -- definitely not the de facto benching program around.


SiSoft ...


----------



## Thunderclap

Quote:


> Originally Posted by *MrJava*
> 
> A small detail from the Sisoft benchmarks:
> 
> 832 SPU Kaveri is listed as "1 device / *2 threads*"
> http://www.sisoftware.eu/rank2011d/show_run.php?q=c2ffcdf4d2b3d2efdee7d0e4dcedcbb984b492f792af9fb9caf7cf&l=en
> 
> 512 SPU Kaveri is listed as "1 device / *1 thread*"
> http://www.sisoftware.eu/rank2011d/show_run.php?q=c2ffcbfed8b9d8e5d4e0d6efc9bb86b690f590ad9dbbc8f5cd&l=en
> 
> A hint that this might be the result of drivers allowing OpenCL enabled programs to view/use Kaveri's iGPU + dGPU as if it were one device.


Very interesting...


----------



## Fabriz89

If this is true, it should be able to crossfire with a 7770, right? Or do you think it's more likely it will work only with the latest GPUs (R7 2x0)?


----------



## Durquavian

Quote:


> Originally Posted by *Fabriz89*
> 
> If this is true it should be able to crossfire with a 7770, right? Or do you think it is more feasible it will work only with the latest gpus (R7 2x0)?


Since the new GPUs are just refreshes at the lower tiers, it should.


----------



## NaroonGTX

Most of Volcanic Islands are slightly-improved refreshes, so they're all the same GCN architecture basically. Kaveri should xfire with the 7770.

Does anyone know if only the new Hawaii GPU's will be GCN 2.0, or will some of the newer refreshes be GCN 2.0-capable as well?


----------



## OwnedINC

Quote:


> Originally Posted by *NaroonGTX*
> 
> Most of Volcanic Islands are slightly-improved refreshes, so they're all the same GCN architecture basically. Kaveri should xfire with the 7770.
> 
> Does anyone know if only the new Hawaii GPU's will be GCN 2.0, or will some of the newer refreshes be GCN 2.0-capable as well?


Maybe the 260X since it has the new audio features.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> Most of Volcanic Islands are slightly-improved refreshes, so they're all the same GCN architecture basically. Kaveri should xfire with the 7770.
> 
> Does anyone know if only the new Hawaii GPU's will be GCN 2.0, or will some of the newer refreshes be GCN 2.0-capable as well?


We can't confirm it yet, because AMD didn't announce anything about Hawaii being GCN 2.0 at their Hawaii launch event/conference.
As we're all guessing, the non-rebranded GPUs will obviously be different from the current high-end GCN parts.

TPU believes the R7 260X is an improved version of the 7790, and they also believe the card is based on the Curacao silicon, though the name 'Bonaire XTX' was also mentioned. So what would Anandtech call it this time....... GCN 1.2?








http://www.tomshardware.com/news/amd-radeon-r7-260x-curacao-bonaire,24370.html


----------



## NaroonGTX

Yeah, GPU '14 didn't do much to curb the confusion! Have no idea which codenames are correct right now or what all the improvements are.

If the 260x is an improved 7790, I wouldn't mind picking one up. I don't need much power for the games I play, so it's the right segment for me.


----------



## maarten12100

Quote:


> Originally Posted by *NaroonGTX*
> 
> Yeah, GPU '14 didn't do much to curb the confusion! Have no idea which codenames are correct right now or what all the improvements are.
> 
> If the 260x is an improved 7790, I wouldn't mind picking one up. I don't need much power for the games I play, so it's the right segment for me.


The 7790 was already so damn good. I wonder if they made an improved version of the 7870, just like they did with the 7790; that would be the best price/performance and performance-per-watt card.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> Yeah, GPU '14 didn't do much to curb the confusion! Have no idea which codenames are correct right now or what all the improvements are.
> 
> If the 260x is an improved 7790, I wouldn't mind picking one up. I don't need much power for the games I play, so it's the right segment for me.


It's also officially 2 GB of GDDR5 this time


----------



## roofrider

I'm unable to find any info regarding GCN 2.0.
Someone earlier said Kaveri sports GCN 2.0 cores, but I've not seen anything solid to back that up.


----------



## DaveLT

Quote:


> Originally Posted by *roofrider*
> 
> I'm unable to find any info regarding GCN 2.0.
> Someone earlier said Kaveri sports GCN 2.0 cores but i've not seen anything solid to back that up.


Yeah, until AMD lifts the NDA it's all noise at this point... I'm just thinking it's GCN 1/1.1. And remember guys, endlessly increasing shader processors won't help much... the upside of increasing CUs is the added TMUs, and that's the important one, I reckon.

Or at least that's how I think it is


----------



## Mech0z

Anyone know if A88X is high-end, and whether it's a stupid idea to buy an A88X motherboard for a NAS/HTPC? I'd just like to be more future-proof, but it needs to consume very little power.

I am currently eyeing the ASRock A88X ITX


----------



## Dynamo11

Judging from the specs of the boards released A88X seems high end


----------



## NaroonGTX

http://www.planet3dnow.de/cms/wp-content/gallery/cache/647__x_kv-spectre-8cu-crypto.png

This is Kaveri's actual iGPU. The 256-bit thing is pretty interesting, however, though it could just be SiSoft screwing up as usual.


----------



## MrJava

Kaveri having 2 64-bit DRAM controllers is pretty much confirmed from various communications between AMD and open-source-software maintainers.
Quote:


> Originally Posted by *NaroonGTX*
> 
> http://www.planet3dnow.de/cms/wp-content/gallery/cache/647__x_kv-spectre-8cu-crypto.png
> 
> This is Kaveri's actual iGPU. The 256-bit thing is pretty interesting, however, though it could just be SiSoft screwing up as usual.


----------



## nagle3092

Quote:


> Originally Posted by *NaroonGTX*
> 
> http://www.planet3dnow.de/cms/wp-content/gallery/cache/647__x_kv-spectre-8cu-crypto.png
> 
> This is Kaveri's actual iGPU. The 256-bit thing is pretty interesting, however, though it could just be SiSoft screwing up as usual.


Is that saying 8 cores?


----------



## Gungnir

Quote:


> Originally Posted by *nagle3092*
> 
> Is that saying 8 cores?


8 CUs in the IGP, I believe.


----------



## nagle3092

Quote:


> Originally Posted by *Gungnir*
> 
> 8 CUs in the IGP, I believe.


Ah ok, that would make sense.


----------



## NaroonGTX

Yeah, 8 CU's for the iGPU. Kaveri, initially anyway, will have up to 4 x86 cores.


----------



## nagle3092

Any news on a release date yet for these? Looking to replace my current htpc/home server with one of these.


----------



## NaroonGTX

Either Q4 2013 or early Q1 2014 for desktop.


----------



## sumitlian

Quote:


> Originally Posted by *MrJava*
> 
> Kaveri having 2 64-bit DRAM controllers is pretty much confirmed from various communications between AMD and open-source-software maintainers.


Please give us a source.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> http://www.planet3dnow.de/cms/wp-content/gallery/cache/647__x_kv-spectre-8cu-crypto.png
> 
> This is Kaveri's actual iGPU. The 256-bit thing is pretty interesting, however, though it could just be SiSoft screwing up as usual.


This is awesome









Look at what SiSoft shows for the 7790. If SiSoft is not screwing up with Kaveri, then nothing stops Kaveri's iGPU memory from running at ( 2 x 1.87 x 256 ) / 8 = 119.68 GB/s


But as we know, the 7790 is discrete and Kaveri's iGPU is integrated.
Therefore I looked into what SiSoft shows for Intel integrated GPUs.


And I got confused again.








Sandra shows 64 bit on the Intel motherboard only, while ASRock, Asus and Gigabyte boards show 128 bit









So for Kaveri, after looking at that screenshot, one of these must be true:

( 4 channels x 1.87 GHz x 64 bit ) / 8 = 59.84 GB/s
or ( 4 channels x 1.87 GHz x 128 bit ) / 8 = 119.68 GB/s
or ( 2 channels x 1.87 GHz x 128 bit ) / 8 = 59.84 GB/s
or ( 2 channels x 1.87 GHz x 256 bit ) / 8 = 119.68 GB/s

Then I looked at the Wiki, and it showed a "Dual Channel Memory Controller".

So now we can rule out the four-channel options:

( 4 channels x 1.87 GHz x 64 bit ) / 8 = 59.84 GB/s
( 4 channels x 1.87 GHz x 128 bit ) / 8 = 119.68 GB/s

Therefore it should be either ( 2 channels x 1.87 GHz x 128 bit ) / 8 = 59.84 GB/s
or ( 2 channels x 1.87 GHz x 256 bit ) / 8 = 119.68 GB/s

Note: All these speculations are based on the screenshot of Sandra showing Kaveri iGPU specs.
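The candidate figures above are just one arithmetic rule applied four ways; here's a quick Python sketch of it (the function name is mine, and the 1.87 GT/s effective rate for DDR3-1866 follows the post's own numbers):

```python
def peak_bandwidth_gbs(channels, transfer_rate_gts, bus_width_bits):
    # Peak theoretical bandwidth = channels x transfers per second
    # x bytes moved per transfer (bus width / 8).
    return channels * transfer_rate_gts * bus_width_bits / 8

# The four candidates from the post, with DDR3-1866 (~1.87 GT/s effective):
for ch, width in [(4, 64), (4, 128), (2, 128), (2, 256)]:
    print(f"{ch} ch x {width}-bit: {peak_bandwidth_gbs(ch, 1.87, width):.2f} GB/s")
```

Note that the two 59.84 GB/s candidates and the two 119.68 GB/s candidates come out identical, which is why the Sandra screenshot alone can't distinguish the channel-count/bus-width combinations.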


----------



## akromatic

Quote:


> Originally Posted by *Gungnir*
> 
> 8 CUs in the IGP, I believe.


Which puts it back down to the performance level of a 7750 with slowish DDR3 memory, rather than the 13-CU version's performance between a 7770 and a 7790.


----------



## anubis44

Quote:


> Originally Posted by *iamhollywood5*
> 
> If that's true, good for AMD
> 
> 
> 
> 
> 
> 
> 
> 
> 
> However I'm still not on board with the direction HUMA and HSA is taking the industry. I don't want my choice of CPU to be locked to my choice of GPU. If my CPU goes bad I don't want to have to replace my totally fine GPU. I don't want my CPU and GPU to share the same thermal capacity. And most of all, there's a reason DDR3 is used for system memory and GDDR5 is used for graphics memory. They're specialized for different things. It would make sense if serial processors and parallel processors worked best with the same kind of memory, but they dont.


Think of the GPU portion of the chip as being like the old x87 FPU in 486 and Pentium/Pentium II/Pentium III chips. You can still have a discrete graphics subsystem (1, 2 or more graphics cards for video display), but the built-in GPU will be available in a unified address space along with the x86 cores to perform insane FPU calculations. There's no downside. Nobody complained they didn't want an x87 FPU in their CPU because it might 'break down' and force them to replace the whole CPU.

This is just all upside for AMD and their customers.


----------



## sumitlian

Quote:


> Originally Posted by *akromatic*
> 
> which puts it back down to the performance level of a 7750 with slowish DDR3 memory rather then the 13CU of between 7770 and 7790 performance


A 7790 with 14 CUs (896 shader processors) running at a 500 MHz core and 2000 MHz effective memory consumes about 30 watts maximum at full load. I can prove it if you want.

So what will you say about Kaveri's 8 CUs (512 shader processors) at a 600 MHz core and 3732 MHz effective (1866 MHz DDR3)? Power consumption should not exceed 20 watts max under any condition, and it can still get you a more than playable 1080p experience at custom graphics settings. Assuming CPU power to be ~50 watts, you'll get a complete chip within ~70 watts max. I know they are quoting a 100W TDP, but the history of 100W APUs shows they never consumed 100W at stock clocks, so for an APU the figure is more about heat dissipation.
This is still a substantial improvement over the A10-6800. What more do you want?


----------



## DaveLT

Early rumors said it would be about 7750 performance already ... anyway.


----------



## NaroonGTX

7750 isn't much faster than a 6670. This is one reason why the 7750 could do DG with the A10-6800k. Since Richland's iGPU pretty much already put up 6670-ish performance, I find it hard to believe Kaveri's iGPU wouldn't at least be a decent leap above that.


----------



## Yeroon

Quote:


> Originally Posted by *sumitlian*
> 
> 7790 with 14 CUs (896 shader processors) running at 500 MHz core and 2000MHz effective memory consumes about 30 watts maximum at full load. I can prove you if you want.
> 
> What will you say about Kaveri's 8 CUs (512 shader processors) at 600 MHz core 3732MHz effective (1866 MHz DDR3). Power consumption should not exceed 20 watts max at any condition, and it can still get you more than playable 1080p experience at custom graphics settings. Assuming CPU power to be ~50 watts and you'll get a complete Chip within ~70 watts max. I know they are showing 100w TDP. But as history of 100w APUs proving that they never consumed 100w at stock clock. Therefore its more concern to Heat Dissipation in case of APU at least.
> This is still a substantial improvement over A10 6800. What do you want more ?


They definitely consume 100W at stock when both CPU and GPU are loaded. I know; I've had both generations doing science on both. Overclocked, you can double that.
We all know what an improvement the move from Richland's 384 VLIW4 cores to 512 GCN cores will bring. What we don't yet know is how Kaveri will deal with the bandwidth limitations that the IMCs in the APUs so far have shown to be the bottleneck for the GPUs.


----------



## DaveLT

Quote:


> Originally Posted by *NaroonGTX*
> 
> 7750 isn't much faster than a 6670. This is one reason why the 7750 could do DG with the A10-6800k. Since Richland's iGPU pretty much already put up 6670-ish performance, I find it hard to believe Kaveri's iGPU wouldn't at least be a decent leap above that.


Um, no. 7750 is a massive leap from 6670. 7770 = 6870 so what do you think a 7750 is equal to?


----------



## bencher

Quote:


> Originally Posted by *DaveLT*
> 
> Um, no. 7750 is a massive leap from 6670. 7770 = 6870 so what do you think a 7750 is equal to?


6850?


----------



## sumitlian

I'm still confused by that 256-bit memory bus width.

If we take it as dual 128-bit channels, how would they get them to interact properly with our DDR3 DIMMs, which only support 64 bits per module? Those entry-level discrete GPUs could support 128-bit DDR3 per channel because they used embedded/special-purpose memory chips, but we have to use ordinary DIMMs, don't we?
Are they going to treat each pair of DDR3 slots on the motherboard as 2 x 64 bit, combined into one 128-bit link to a 128-bit IMC, so that with four DIMMs we'd have 256 bits of throughput across the dual 128-bit IMCs?


----------



## sumitlian

Quote:


> Originally Posted by *DaveLT*
> 
> Um, no. 7750 is a massive leap from 6670. 7770 = 6870 so what do you think a 7750 is equal to?


Quote:


> Originally Posted by *bencher*
> 
> 6850?


5770.


----------



## Usario

Quote:


> Originally Posted by *sumitlian*
> 
> 7790 with 14 CUs (896 shader processors) running at 500 MHz core and 2000MHz effective memory consumes about 30 watts maximum at full load. I can prove you if you want.
> 
> What will you say about Kaveri's 8 CUs (512 shader processors) at 600 MHz core 3732MHz effective (1866 MHz DDR3). Power consumption should not exceed 20 watts max at any condition, and it can still get you more than playable 1080p experience at custom graphics settings. Assuming CPU power to be ~50 watts and you'll get a complete Chip within ~70 watts max. I know they are showing 100w TDP. But as history of 100w APUs proving that they never consumed 100w at stock clock. Therefore its more concern to Heat Dissipation in case of APU at least.
> This is still a substantial improvement over A10 6800. What do you want more ?


1866MHz DDR3 is effective 1866MHz; the actual clock is 933MHz.

3732MHz effective DDR3 requires very expensive ICs and lots of liquid nitrogen...

Though 2GHz effective DDR3 should be superior to 2GHz effective GDDR5 because of latency.
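The effective-vs-actual distinction is just DDR's double data rate; a minimal sketch (the helper name is mine):

```python
def ddr_effective_rate_mts(io_clock_mhz):
    # DDR transfers data on both the rising and falling clock edges,
    # so the effective transfer rate is twice the actual I/O clock.
    return 2 * io_clock_mhz

print(ddr_effective_rate_mts(933))  # DDR3-1866: 933 MHz I/O clock -> 1866 MT/s
```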


----------



## bencher

Quote:


> Originally Posted by *sumitlian*
> 
> 5770.


Really?

So is the 7770 < 6870?


----------



## Usario

Quote:


> Originally Posted by *bencher*
> 
> Really?
> 
> So is the 7770 < 6870?


Yes, actually...

5770 ~ 7750 < 7770 < 6850 < 7790 ~ 6870

all of this is not considering overclocking though; the 7000 series cards start to look better when you take that into account.


----------



## bencher

Quote:


> Originally Posted by *Usario*
> 
> Yes, actually...
> 
> 5770 ~ 7750 < 7770 < 6850 < 7790 ~ 6870
> 
> all of this is not considering overclocking though; the 7000 series cards start to look better when you take that into account.


Thanks for the explanation.


----------



## sumitlian

Quote:


> Originally Posted by *Usario*
> 
> 1866MHz DDR3 is effective 1866MHz; the actual clock is 933MHz.
> 
> 3732MHz effective DDR3 requires very expensive ICs and lots of liquid nitrogen...
> 
> Though 2GHz effective DDR3 should be superior to 2GHz effective GDDR5 because of latency.


Yes, I understand that.

I meant the dual-channel throughput at 1866 MHz.


----------



## akromatic

Quote:


> Originally Posted by *sumitlian*
> 
> 7790 with 14 CUs (896 shader processors) running at 500 MHz core and 2000MHz effective memory consumes about 30 watts maximum at full load. I can prove you if you want.
> 
> What will you say about Kaveri's 8 CUs (512 shader processors) at 600 MHz core 3732MHz effective (1866 MHz DDR3). Power consumption should not exceed 20 watts max at any condition, and it can still get you more than playable 1080p experience at custom graphics settings. Assuming CPU power to be ~50 watts and you'll get a complete Chip within ~70 watts max. I know they are showing 100w TDP. But as history of 100w APUs proving that they never consumed 100w at stock clock. Therefore its more concern to Heat Dissipation in case of APU at least.
> This is still a substantial improvement over A10 6800. What do you want more ?


Power consumption was never part of my agenda; I'm all for more performance in the chip. 8 CUs puts it at a 7750 level of performance without GDDR5, and to be honest that is rather weak for gaming without having to reduce image quality.

As for being a substantial improvement over Richland, I doubt it with 8 CUs. It'd be an improvement for sure, but still fighting within the same performance bracket. Currently Richland would crossfire with a 7750.

I was hoping for at least a 7770 level of performance so I could keep a very minimal and compact ITX box without needing dedicated graphics, but I guess that's no longer possible if they decided to cut the CUs and keep the horrible DDR3 memory limitation.

As for a 100W APU never consuming 100W at stock, you are absolutely right, in that they consume more than that. My A10-5800K, undervolted and underclocked to 5700 specs, still consumes over 100W during a typical game load.

Also, the 7750 has a 50W TDP, not 20W, though I'm totally fine with a 130W TDP chip. AMD has released chips of a similar package size rated at higher TDPs, so heat dissipation isn't that much of an issue.

Still, a 7790 clocked at 500 MHz to emulate a Kaveri chip would probably indicate how poor the performance would be compared to a full 7790 clocked at its usual 1 GHz.


----------



## iceman595

i'm just hoping for a decent itx board form asus at this point


----------



## DaveLT

Quote:


> Originally Posted by *Usario*
> 
> Yes, actually...
> 
> 5770 ~ 7750 < 7770 < 6850 < 7790 ~ 6870
> 
> all of this is not considering overclocking though; the 7000 series cards start to look better when you take that into account.


False. 7790 > 7770 ~ 6870 > 6850 > 7750 > 5770
Quote:


> Originally Posted by *akromatic*
> 
> power consumption was never part of my agenda, i'm all for more performance in the chip. 8 CU puts it at 7750 level of performance without GDDR5 and to be honestly it is rather weak for gaming without having to reduce image quality
> 
> as for being a substantial improvement over richland, i doubt it with 8CU. it be an improvement for sure but still fighting within the same performance bracket. currently richland would crossfire with a 7750
> 
> I was hoping for atlest a 7770 level of performance so i could keep a very minimal and compact Itx box without needing a dedicated graphics but i guess its no longer possible if they decided to cut the CU and keep the horrible DDR3 memory limitation
> 
> as for 100w APU never consuming 100W at stock , you are absolutely right as they consume more then that. my A10 5800k that is undervolted and underclocked to 5700 specs still consumes over 100w during typical game load
> 
> also the 7750 has a 50W TDP not a 20W, though I'm totally fine with a 130W TDP chip. AMD has release chips of similar package size that is rated at higher TDP so heat dissipation isnt that much of an issue
> 
> still a 7790 clocked at 500mhz to emulate a kevari chip would probly indicate how poor the performance would be compared to a full 7790 clocked at its usual 1ghz


You're reading off the wall, right? An A10-6800K is a bit under 100W at stock.


----------



## sumitlian

Quote:


> Originally Posted by *akromatic*
> 
> power consumption was never part of my agenda, i'm all for more performance in the chip. 8 CU puts it at 7750 level of performance without GDDR5 and to be honestly it is rather weak for gaming without having to reduce image quality
> 
> as for being a substantial improvement over richland, i doubt it with 8CU. it be an improvement for sure but still fighting within the same performance bracket. currently richland would crossfire with a 7750
> 
> I was hoping for atlest a 7770 level of performance so i could keep a very minimal and compact Itx box without needing a dedicated graphics but i guess its no longer possible if they decided to cut the CU and keep the horrible DDR3 memory limitation
> 
> as for 100w APU never consuming 100W at stock , you are absolutely right as they consume more then that. my A10 5800k that is undervolted and underclocked to 5700 specs still consumes over 100w during typical game load
> 
> also the 7750 has a 50W TDP not a 20W, though I'm totally fine with a 130W TDP chip. AMD has release chips of similar package size that is rated at higher TDP so heat dissipation isnt that much of an issue
> 
> still a 7790 clocked at 500mhz to emulate a kevari chip would probly indicate how poor the performance would be compared to a full 7790 clocked at its usual 1ghz


At least for now, your expectations are too high for an APU to achieve; even AMD can't do anything about that.
Do you have a PSU that is only 60% efficient?
Since when has an ITX box become a full-time gaming PC without having to reduce image quality?
And if power consumption is not part of your agenda, then why not go with a regular box?
You really have no idea what a 7790 at even a lowered 450MHz core/memory can do at low-to-medium graphics settings.


----------



## akromatic

Quote:


> Originally Posted by *DaveLT*
> 
> False. 7790 > 7770 ~ 6870 > 6850 > 7750 > 5770
> You're reading off the wall right? A A10-6800k is a bit under 100W at stock


Off the wall, yes, but remember it's pulling over 100W even after underclocking and undervolting to emulate a 65W chip, with bare-minimum system components and an SSD. At stock clocks it would pull over 160W at such loads.

Don't tell me that after factoring in efficiency, two sticks of RAM, an SSD, and a slow-moving 120mm fan would draw over 35W.
Quote:


> Originally Posted by *sumitlian*
> 
> At least for now, your expectations are too high to be achieved by an APU, even AMD can't do anything.
> Do you have a PSU which is only 60% efficient ?
> Since when has an ITX box become a full time gaming PC without having to reduce image quality ?
> And if power consumption is not part of your agenda then why not to go to regular box ?
> You really have no idea what a 7790 at even lower 450MHz both core/memory can do for low-medium graphics settings.


Pico PSUs are over 80% efficient.

ITX boxes have always been capable of full-time gaming without reducing image quality once you give them a mainstream graphics card; no idea what rock you live under, though. I have an ITX box with an i7 and a GTX 670 which gives up no performance to a full-sized desktop with the same specs, unless you're telling me the GTX 670 is a rubbish card that isn't capable of full-time gaming.

Power is not my agenda, size is. I want the performance of an HD 7770 inside an ISK110 that I can fit inside my man bag, rather than bringing my other box, which IMO is too big.

Do show me what a 7790 at 450MHz would do, and do post that with 3DMark results too.

Either way, I wouldn't call my expectations too high; rather, they refuse to release it. If they released a 13-CU version instead of an 8-CU version, overclockable to 1GHz (Trinity can), it would meet the expectations of many others including myself, but no, they'd rather release one with 8 CUs instead.

Much like Intel could release GT3 (HD 5000) on their chips but sticks with GT2 (HD 4400) for the majority of them. The only system I know of with a GT3 is the MacBook Air, and the Surface Pro 2 is stuck with GT2.

Much like Thunderbolt, which could be on every system like USB, but is currently Apple-exclusive and dying.


----------



## DaveLT

The reason Intel releases only GT2 for most of their chips is that they want to milk more money. Simple as that.


----------



## akromatic

Quote:


> Originally Posted by *DaveLT*
> 
> Reason why intel wants to release only GT2 for most of their chips is they want to milk more money. simple as that


Exactly, and it's not my expectations that are at fault, which is why I'm annoyed.

I'd be more than happy to pay the $100 or so extra for GT3e, except they lock it up in labs as a teaser of what they can do rather than offering it to you, just so they can milk more cash out of an inferior product.

AMD could do well, but those damn greedy execs... won't give me my cake and let me eat it.


----------



## Imglidinhere

So AMD actually made a NEW GPU this time instead of recycling the old ones four times?

It's a start... (I'm actually rather impressed by the performance listed here.)


----------



## DaveLT

Quote:


> Originally Posted by *Imglidinhere*
> 
> So AMD actually made a NEW GPU this time instead of recycling the old ones four times?
> 
> It's a start... (I'm actually rather impressed by the performance listed here.)


Wut? Trinity and Richland aren't recycled. If they were, the whole processor would be made on *40nm*.


----------



## Nil Einne

Quote:


> Originally Posted by *akromatic*
> 
> off the wall yes, but remember its pulling over 100w even after underclocking and undervolting to emulate a 65w chip with bare minimal system components and SSD. at stock clocks it would pull over 160w at such loads.
> 
> dont tell me after factoring efficiency that 2 sticks of ram, an SSD and a slow moving 120mm fan would draw over 35w
> pico PSU are over 80+ efficiency


If your pico PSU is 82% efficient (random example) and you're drawing 160W at the wall, this suggests only ~130W is even going into the PC components. If we assume at least 20W for the other listed components (I'll add the CPU fan, which I assume you have), this leaves only ~110W for the CPU + mobo. And as for mobo efficiency, from my experience many manufacturers don't seem to do a good job, particularly with AMD frequently being an afterthought nowadays, and many reviewers do an even worse job of testing it.

Anyway, more importantly, you can't emulate a 65W TDP chip simply by underclocking; you have to consider thermal binning. Precisely how big an effect this has for AMD K vs. non-K parts I've never seen good reviews on, but it likely has some effect, and in any case with only one sample you can't rule out being unlucky. More importantly, I'm unsure how you even did this underclocking. Perhaps your mobo is better than mine (a UP4), or you're using software underclocking, but if you want to properly emulate a 5700 you can't just limit the stock and turbo frequencies (and other frequencies); you also need to limit the TDP, which I'm not sure is even possible. In fact, with my mobo, changing the stock and turbo clocks can have some weird effects, and changing the GPU clock is even worse (it basically sets throttling, although this shouldn't matter while testing a GPU+CPU load), though I assume these problems can be overcome by a better mobo or software overclocking. The TDP limits how often the CPU goes to turbo, and I'm guessing it may also affect how often it drops below stock (from my experience this is quite common in certain benchmarks), yet I'm not sure you can even limit the CPU this way, and it must be essential for proper 5700 emulation. I guess fancy software, if it exists, could do the TDP limiting itself, but it's unclear to me whether you did that.
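The wall-power arithmetic above can be sketched in a few lines (all figures are the illustrative assumptions from this post, not measurements):

```python
# Back-of-the-envelope power budget from a wall-meter reading.
# Wall draw, PSU efficiency, and the "other components" figure are
# assumptions taken from the discussion, not measured values.

def dc_power(wall_watts, psu_efficiency):
    """DC power delivered to components, given wall draw and PSU efficiency."""
    return wall_watts * psu_efficiency

wall = 160.0             # measured at the wall (W)
efficiency = 0.82        # assumed pico-PSU efficiency (random example)
other_components = 20.0  # RAM, SSD, fans: a rough guess

dc = dc_power(wall, efficiency)        # ~131 W actually entering the system
cpu_and_board = dc - other_components  # ~111 W left for CPU + motherboard

print(f"DC power: {dc:.0f} W, CPU+mobo budget: {cpu_and_board:.0f} W")
```

The point being that a 160W wall reading does not mean the CPU itself is dissipating 160W.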


----------



## akromatic

Quote:


> Originally Posted by *Nil Einne*
> 
> If you're pico PSU is 82% efficient (random example) and you're getting 160W at the wall this suggests only ~130W is even going in to the PC components. If we assume at least 20W for the other listed components (I'll add the CPU fan which I assume you have) this means only 110W for the CPU+mobo. And I don't know about mobo efficiency, from my experience many manufacturers don't seem to do a good job particularly with AMD frequently being and afterthought nowadays and many reviewers do an even worse job of testing it.
> 
> Anyway more importantly, you can't emulate a 65w TDP chip simply by underclocking. You have to consider thermal binning. Precisely how big an effect this has for AMD K vs non K I've never seen good reviews but it likely has some effect and in any case with only one sample you can't rule out being unlucky. More importantly, I'm unsure how you even did this underclocking. Perhaps your mobo is better than mine (which is a UP4) or you're using soft underclocking but if you want to properly emulate a 5700 you can't just limit the stock frequency and turbo frequency (and other frequencies), you also need to limit the TDP which I'm not sure is even possible. In fact with my mobo changing the stock and turbo can have some weird effects and changing the GPU is even worse (it basically sets throttling although this shouldn't have any effect during testing of a GPU+CPU load) although I assume these problems can be overcome by a better mobo or soft overclocking. The TDP will limit how often the CPU goes to turbo and I'm guessing may also have an effect on how often it goes below stock (from my experience on certain benchmarks this is quite common) yet I'm not sure if you even can limit the CPU in this way however it must be an essential to achieve proper 5700 emulation. I guess fancy software if it exists could do the TDP limiting itself but it's unclear to me if you did that.


My build has no CPU fan; it's cooled entirely by a single slim 120mm fan. And I'm not just underclocking, as I undervolt it significantly as well, with turbo disabled. Either way, there is a massive power-consumption drop compared to stock, and it seemed to be on par with other 5700 owners.

Expected gaming load is around 90-110W for a 5700; my 5800K with its undervolt would cap out around 110W. In stock form it would pull around 160W.


----------



## Mygaffer

Quote:


> Originally Posted by *iamhollywood5*
> 
> And waste half of the APU? Waste of silicon, waste of money, when the whole die could have been used for CPU cores instead, and then the whole thing would be utilized. Combining an APU with a discrete GPU is not good value.


We have enough space now with smaller transistors that it does not matter. The APU makes sense for way more people than are better served by a straight CPU and GPU.


----------



## NaroonGTX

Quote:


> Originally Posted by *DaveLT*
> Um, no. 7750 is a massive leap from 6670. 7770 = 6870 so what do you think a 7750 is equal to?


The 7770 was equal to a 6850, matching it in some cases and slightly falling behind in others. The 7770 was more power efficient, but the GCN cards pretty much pull ahead once overclocking is taken into consideration. I'm also talking about the DDR3 version of the 7750 vs the DDR3 6670, since these desktop APU's will obviously not be using GDDR5.
Quote:


> Originally Posted by *Imglidinhere*
> So AMD actually made a NEW GPU this time instead of recycling the old ones four times?
> 
> It's a start... (I'm actually rather impressed by the performance listed here.)


What? Trinity and Richland didn't recycle anything at all. Llano was VLIW5-based and Trinity/Richland were VLIW4-based. Kaveri will be GCN-based. There wasn't any recycling done.


----------



## mtcn77

Quote:


> Originally Posted by *NaroonGTX*
> 
> The 7770 was equal to a 6850, matching it in some cases and slightly falling behind in others. The 7770 was more power efficient, but the GCN cards pretty much pull ahead once overclocking is taken into consideration. I'm also talking about the DDR3 version of the 7750 vs the DDR3 6670, since these desktop APU's will obviously not be using GDDR5.
> What? Trinity and Richland didn't recycle anything at all. Llano was VLIW5-based and Trinity/Richland were VLIW4-based. Kaveri will be GCN-based. There wasn't any recycling done.


There was one hilarious review, on a website I don't recall right now, that benchmarked a 7770 at 1350 to 1400 MHz against older-generation cards. The card was quick in tessellation tests and benchmarks, but its limited bandwidth undermined its case against the 6870 in current games. Oh, and it consumed more power in the process. You have to wonder; there has to be a sweet spot of performance in silicon chips.
I did find these 3DMark 11 and 3DMark Firestrike results, though; pretty spectacular for a quiet little silicon chip.

I sometimes wish I'd been smart enough to pick the next-generation, more efficient card, because my 6870 is not as good as a 5870.

Mad at AMD marketing promising more than they delivered.


----------



## Usario

Quote:


> Originally Posted by *DaveLT*
> 
> False. 7790 > 7770 ~ 6870 > 6850 > 7750 > 5770


----------



## Verdant

*Some fresh news.*

Kaveri will indeed have a 256-bit interface, because it will work in dual graphics mode with the *R7 260X!*

The R7 260X now also has a 256-bit bus interface for only $139. Check here for proof regarding the new bus interface: http://wccftech.com/sapphire-radeon-r9-radeon-r7-graphic-card-lineup-leaked-includes-radeon-r9-280x-toxic-vaporx-models/

The old 7790 came with 128-bit.

Also, the R7 260X is one of the few new cards in AMD's lineup that has TrueAudio technology. (The 270X, 280X, etc. don't have it.)

The top-end model will come with at least 13 CUs, as rumored, which combined with the R7 260X will reach somewhere between a 7950 and a 7970.

Best value ever.


----------



## karamel

Quote:


> Originally Posted by *Verdant*
> 
> *Some fresh news.*
> 
> The Kaveri will have indeed 256 bit interface because it will work in dual graphics mode with the *R7 260X !*
> 
> The R7 260X now also has 256-bit Bus Interface for only 139$. Check here for proof regarding the new bus interface : http://wccftech.com/sapphire-radeon-r9-radeon-r7-graphic-card-lineup-leaked-includes-radeon-r9-280x-toxic-vaporx-models/
> 
> The old 7790 came with 128 bit.
> 
> Also the R7 260X is one of the few new cards in AMD lineup that has True Audio technology. (270X & 280X, etc. don't have this)
> 
> The top end model will come with at least 13 CU like it was rumored that combined with the R7 260X will reach somewhere between 7950 and 7970.
> 
> Best value ever.


Is it your prediction or a rumor? What is your source?


----------



## Gungnir

Quote:


> Originally Posted by *Verdant*
> 
> *Some fresh news.*
> 
> The Kaveri will have indeed 256 bit interface because it will work in dual graphics mode with the *R7 260X !*
> 
> The R7 260X now also has 256-bit Bus Interface for only 139$. Check here for proof regarding the new bus interface : http://wccftech.com/sapphire-radeon-r9-radeon-r7-graphic-card-lineup-leaked-includes-radeon-r9-280x-toxic-vaporx-models/
> 
> The old 7790 came with 128 bit.
> 
> Also the R7 260X is one of the few new cards in AMD lineup that has True Audio technology. (270X & 280X, etc. don't have this)
> 
> The top end model will come with at least 13 CU like it was rumored that combined with the R7 260X will reach somewhere between 7950 and 7970.
> 
> Best value ever.


No, according to that article (and the Sapphire spec sheet within), the 260X has a 128-bit interface. Besides, what relevance does the 260X's memory interface have to Kaveri's? Dual graphics works across different memory interfaces; the 6450, 6570, and 6670 (DDR3 and GDDR5) have different interfaces, yet all are officially supported with Trinity and Richland.

In fact, I can't see anywhere in that article that says the 260X will be supported for dual graphics with Kaveri. And wouldn't a 256-bit DDR3 interface require quad-channel memory, anyway?

13 CUs on Kaveri is very unlikely. We might see 13 on Carrizo, or on a special APU paired with GDDR5, but I sincerely doubt that a standard Kaveri APU will have more than 8.


----------



## karamel

^^ This.

The 260X has a 128-bit interface, not 256-bit.

The real question is how AMD will solve the iGPU's bandwidth problem with DDR3. We know from the motherboards launched so far that FM2+ only supports dual channel. It doesn't matter whether Kaveri comes with 8, 13, or 14 CUs until AMD solves the bandwidth problem with something unexpected.
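The dual-channel DDR3 bottleneck being discussed can be put in numbers with a quick peak-bandwidth sketch (the transfer rates below are common illustrative figures, not official Kaveri specs):

```python
# Peak memory bandwidth: (bus width in bytes) * (transfers per second).
# DDR3-1866 dual channel vs. a 7790-class 128-bit GDDR5 setup, to show
# the gap an APU's iGPU has to live with. Rates are illustrative.

def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak bandwidth in GB/s for a bus of the given width and MT/s rate."""
    return bus_width_bits / 8 * transfer_rate_mts * 1e6 / 1e9

# Dual-channel DDR3-1866: two 64-bit channels, 1866 MT/s
ddr3 = peak_bandwidth_gbs(128, 1866)   # ~29.9 GB/s

# 128-bit GDDR5 at 6000 MT/s effective (7790-class card)
gddr5 = peak_bandwidth_gbs(128, 6000)  # ~96.0 GB/s

print(f"DDR3-1866 dual channel: {ddr3:.1f} GB/s")
print(f"128-bit GDDR5 @ 6 GT/s: {gddr5:.1f} GB/s")
```

Roughly a 3x gap, which is why the CU count matters less than the memory subsystem here.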


----------



## Gungnir

Quote:


> Originally Posted by *karamel*
> 
> Real question is how will AMD solve bandwidth problem of iGPU with DDR3? We know that FM2+ only supports dual channel from motherboards launched. It doesn't matter if Kaveri comes with 8, 13 or 14 CU, until AMD solve bandwidth problem with something unexpected.


IMC improvements? I remember hearing that Intel's current IMCs have way more bandwidth than AMD's at the same clocks, timings, and channel count, so perhaps AMD can gain performance by improving that.


----------



## NaroonGTX

There isn't any magical solution for the bandwidth issue. Still, there are rumblings floating around that the memory controller on Kaveri is much improved over Trinity/Richland's. I saw it somewhere; they were showing 2400 MHz-level performance using 1600 MHz RAM on Kaveri. That could grant a nice boost if it's true. But we won't get any massive gains until DDR4 hits and becomes supported by a Kaveri refresh or Carrizo.


----------



## Assimilator87

Quote:


> Originally Posted by *sumitlian*
> 
> yes I understand that
> 
> 
> 
> 
> 
> 
> 
> I meant by Dual Channel throughput of 1866 MHz.


I would use bit rate instead because it's not always specified how many bits a channel is. For example, most memory channels on x86 systems are 64 bits, while most ARM implementations are 32 bits. Either way, both the APUs and the 7790 have a 128 bit memory interface.
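The channel-width ambiguity being described can be illustrated with a trivial sketch (the widths are the common cases mentioned above, used as assumptions):

```python
# Why quoting total bus width is less ambiguous than quoting channel count:
# "dual channel" means different total widths on different platforms.
# Channel widths below are the typical cases discussed above.

def bus_width_bits(channels, channel_width_bits):
    """Total memory interface width: channels times per-channel width."""
    return channels * channel_width_bits

x86_dual = bus_width_bits(2, 64)  # typical x86: 2 x 64-bit = 128-bit
arm_dual = bus_width_bits(2, 32)  # many ARM SoCs: 2 x 32-bit = 64-bit

print(f"x86 dual channel: {x86_dual}-bit")
print(f"ARM dual channel: {arm_dual}-bit")
```

So "dual channel" alone is half the story; the 128-bit figure is the one that lines up with the 7790 comparison.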


----------



## MrJava

I thought one of the improvements in Kaveri was official support for DDR3-2400 and DDR3-2500, vs. DDR3-2133 on Trinity/Richland.
Quote:


> Originally Posted by *NaroonGTX*
> 
> There isn't any magical solution for the bandwidth issue. Yet there are rumblings floating around that the memory controller on Kaveri is much improved over Trinity/Richland. I saw it somewhere, but they were showing 2400mhz-level performance using 1600mhz RAM on Kaveri. So that could grant a nice boost if it's true. Though we won't get any massive gains until DDR4 hits and becomes supported by a Kaveri fresh or Carrizo.


Those ARM chips would have been using LPDDR2. With DDR3, each channel must be 64-bit.
Quote:


> Originally Posted by *Assimilator87*
> 
> I would use bit rate instead because it's not always specified how many bits a channel is. For example, most memory channels on x86 systems are 64 bits, while most ARM implementations are 32 bits. Either way, both the APUs and the 7790 have a 128 bit memory interface.


----------



## Verdant

Correct, it's a 128-bit bus.
If the top Kaveri model comes with 13 CUs, it can run dual graphics with the R7 260X with no problem.
Smaller models will work with the Radeon R7 250, Radeon R7 240, etc.

In the past it was the same; the big difference is that Kaveri comes with a much bigger GPU, and it can use that GPU to accelerate software applications.
Also, rumor has it that the top Kaveri model with Steamroller cores is around the same as a Sandy Bridge i5-2500K.


----------



## Gnomepatrol

Quote:


> Originally Posted by *Imglidinhere*
> 
> So AMD actually made a NEW GPU this time instead of recycling the old ones four times?
> 
> It's a start... (I'm actually rather impressed by the performance listed here.)


This post is funny.

These are going to be really nice for media center builds and platform-style gaming Steam boxes! I want one.


----------



## sumitlian

Quote:


> Originally Posted by *Verdant*
> 
> *Some fresh news.*
> 
> The Kaveri will have indeed 256 bit interface because it will work in dual graphics mode with the *R7 260X !*
> 
> The R7 260X now also has 256-bit Bus Interface for only 139$. Check here for proof regarding the new bus interface : http://wccftech.com/sapphire-radeon-r9-radeon-r7-graphic-card-lineup-leaked-includes-radeon-r9-280x-toxic-vaporx-models/
> 
> The old 7790 came with 128 bit.
> 
> Also the R7 260X is one of the few new cards in AMD lineup that has True Audio technology. (270X & 280X, etc. don't have this)
> 
> The top end model will come with at least 13 CU like it was rumored that combined with the R7 260X will reach somewhere between 7950 and 7970.
> 
> Best value ever.


No, the R7 260X (Bonaire XTX) is an upgraded model of the 7790 (Bonaire XT); I don't know if it's improved or just rebranded.
Same transistor count,
same die size,
same shader/TMU/ROP counts,
same memory bus width (128-bit),
higher core clock (1100 MHz),
higher memory clock (1625 MHz),
this time with 2 GB, official launch at about $140.

http://en.wikipedia.org/wiki/AMD_Radeon_Rx_200_Series


----------



## Gungnir

Quote:


> Originally Posted by *sumitlian*
> 
> No, R7 260x (Bonaire XTX) is upgraded model of 7790 (Bonaire XT), don't know if its improved or rebranded.
> same transistor count
> same die size
> same shader / tmu / rop
> same memory bus width (128 bit),
> higher core clock (1100 MHz)
> higher memory clock (1625 MHz)
> this time with 2 GB official launch at about $140
> 
> http://en.wikipedia.org/wiki/AMD_Radeon_Rx_200_Series


Also TrueAudio, though I'm not sure if that's on-die, a separate chip, or just software in this case.


----------



## Verdant

TrueAudio is hardware based.

Reviews are already out for R series : http://wccftech.com/amd-radeon-r9-radeon-r7-review-roundup-officially-retail-stores-globe/

Some seem to have GCN 1 and others GCN 1.1


----------



## Clocknut

I'm interested in whether it can triple-crossfire with 2x 7790.


----------



## Verdant

Looks like the R7 260X is way behind the GeForce 650 Ti Boost, which got a price cut.


----------



## hollowtek

hopefully not another amd hype-machine... if this is real, i'll give up pimping.


----------



## Verdant

Kaveri will be like an i5-2500 in CPU performance.

If you put a top desktop Kaveri + R7 260X in dual graphics mode, you pay around $300 for the power of an i5-2500 + 7950 (if you use the default 30% auto-overclock that the ASUS motherboard applies to Kaveri's integrated GPU).

That's it.

What other option do you have at $300 for the performance of an i5-2500 + 7950?


----------



## inedenimadam

Quote:


> Originally Posted by *Verdant*
> 
> Kaveri will be like a i5 2500 in CPU performance.
> 
> Ifo you put a top desktop Kaveri + R7 260X in dual graphics mode you pay around 300$ for the power of an i5 2500 + 7950 GPU ( if you use the default auto overclock of 30% the ASUS motherboard comes with for the integrated GPU on Kaveri. )
> 
> That's it.
> 
> What other options you have for 300$ for the performance of an i5 2500 + 7950 ?


So I am assuming the 260x will be crossfire compatible?


----------



## Verdant

But if you don't want to waste your money on rebranded crap and you need a computer NOW, then buy a strong CPU from Intel and wait for the NEXT generation of GPUs from AMD and Nvidia on 20nm.


----------



## Verdant

Top Desktop Kaveri is 100% compatible in dual graphics mode with R7 260X.


----------



## inedenimadam

Quote:


> Originally Posted by *Verdant*
> 
> Top Desktop Kaveri is 100% compatible in dual graphics mode with R7 260X.


Schweet! Very good news. Give me more, and cheaper!


----------



## nitrubbb

Quote:


> Originally Posted by *Verdant*
> 
> Top Desktop Kaveri is 100% compatible in dual graphics mode with R7 260X.


that was my theory too. makes a lot of sense


----------



## NaroonGTX

Where are you getting that Kaveri will be able to xfire with the R7-260x? We know that the R7-260x is basically a 7790 with 2GB of GDDR5 and slightly higher clocks, but I haven't seen anything official on this yet.


----------



## RKTGX95

Quote:


> Originally Posted by *NaroonGTX*
> 
> Where are you getting that Kaveri will be able to xfire with the R7-260x? We know that the R7-260x is basically a 7790 with 2GB of GDDR5 and slightly higher clocks, but I haven't seen anything official on this yet.


While it wasn't announced, it makes a lot of sense (speculation-wise): the last APUs supported the entry-level 6xxx cards (or more precisely, the 6xxx rebrands under the 7xxx series), so it's logical for the new APUs to support entry-level GCN (i.e. the 7xxx cards and the R7 2xx rebrands), and it would fit well with TrueAudio being supported on the 260X as additional marketing. This is my speculation, but it's likely nevertheless.


----------



## inedenimadam

We have to wait all the way til Q1 2014 for this goodness?


----------



## NaroonGTX

I know it makes sense speculation-wise, but he announced it as if it were confirmed or something. But since Kaveri's GPU is GCN-based, it should work with the lower-to-mid tier GCN cards. But Richland could xfire with the 7750, and wasn't that card GCN-based?
Quote:


> We have to wait all the way til Q1 2014 for this goodness?


Several articles have said Q1 2014, while more recent ones have claimed it's releasing to retail in Q4 2013 again. We will find out for sure at the APU conference in November.


----------



## RKTGX95

Quote:


> Originally Posted by *NaroonGTX*
> 
> I know it makes sense speculation-wise, but he announced it as if it were confirmed or something. But since Kaveri's GPU is GCN-based, it should work with the lower-to-mid tier GCN cards. But Richland could xfire with the 7750, and wasn't that card GCN-based?


AFAIK Richland has a tweaked 6000-series iGPU with features from the GCN architecture to support DX11, improve HD playback, and add an H.264 encoder (according to Guru3D, at least). In theory it does pair with a 7750, but not as well as it should; at the very least, Kaveri will make this hybrid work better.


----------



## nitrubbb

my money is so ready for AMD


----------



## Assimilator87

Quote:


> Originally Posted by *MrJava*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Assimilator87*
> 
> I would use bit rate instead because it's not always specified how many bits a channel is. For example, most memory channels on x86 systems are 64 bits, while most ARM implementations are 32 bits. Either way, both the APUs and the 7790 have a 128 bit memory interface.
> 
> 
> 
> Those ARM chips would have been using LPDDR2. With DDR3, each channel must be 64-bit.
Click to expand...

This is just one quote showing 32-bit memory channels on ARM SoCs using DDR3. It's the same for Snapdragon.
Quote:


> Originally Posted by *Anand*
> With Tegra 4, complaints about memory bandwidth can finally be thrown out the window. The Tegra 4 SoC features two 32-bit LPDDR3 memory interfaces, bringing it up to par with the competition. The current max data rate supported by Tegra 4's memory interfaces is 1866MHz, but that may go up in the future.


----------



## Verdant

Kaveri is coming in November 2013 just before the holiday season.

Inside Kaveri is an underclocked R7 260X (or 7790), maybe with one less CU.

The top Asus motherboard for FM2+ comes with...
*Audio:* Realtek® ALC1150 8-Channel High Definition Audio CODEC (a fantastic audio chip that works perfectly fine alongside TrueAudio on the GPU)

And this is a more common feature even on less expensive motherboards :
*GPU Boost*
Unlock integrated graphics performance for up to 30% boost

"Exclusive GPU Boost is able to unlock any APU for overclocking integrated graphics. Even non-Black Edition APUs can be boosted - for an integrated graphics performance increase of up to 30%! Gaining this unique power is as simple as flipping an onboard switch or using ASUS AI Suite 3 utility. It easily delivers stable system-wide upgrades for every use."


----------



## iceman595

Quote:


> Originally Posted by *Verdant*
> 
> Kaveri is coming in November 2013 just before the holiday season.
> 
> Inside the Kaveri is an underclocked R7 260X ( or 7790 ) maybe with 1 less CU.
> 
> The top Asus motherboard for FM2+ comes with...
> *Audio :* Realtek® ALC1150 8-Channel High Definition Audio CODEC ( fantastic audio chip that works perfectly fine with the true-audio on the GPU )
> 
> And this is a more common feature even on less expensive motherboards :
> *GPU Boost*
> Unlock integrated graphics performance for up to 30% boost
> 
> Exclusive GPU Boost is able to unlock any APU for overclocking integrated graphics. Even non-Black Edition APUs can be boosted - for an integrated graphics performance increase of up to 30%! Gaining this unique power is as simple as flipping an onboard switch or using ASUS AI Suite 3 utility. It easily delivers stable system-wide upgrades for every use.


Too bad Asus doesn't have any plans for an ITX mobo.


----------



## maarten12100

Quote:


> Originally Posted by *Verdant*
> 
> But if you want to not waste your money on re-branded crap and you need a computer NOW, then buy a strong CPU from Intel and wait for NEXT Generation of GPU from AMD and Nvidia on 20nm.


20nm is still almost a year away.
As for the "buy Intel" statement, I'd rather buy an AMD processor and spoof it as an Intel (if I could).


----------



## nitrubbb

Oh wow, if only Kaveri released in November, that would be so sick!!


----------



## NaroonGTX

Quote:


> Kaveri is coming in November 2013 just before the holiday season.
> 
> Inside the Kaveri is an underclocked R7 260X ( or 7790 ) maybe with 1 less CU.


Once again, where did you get this from? You're reminding me of Seronx and his crystal ball posts.


----------



## adridu59

Quote:


> Originally Posted by *Verdant*
> 
> Top Desktop Kaveri is 100% compatible in dual graphics mode with R7 260X.


I'd advise staying skeptical about that for now; Dual Graphics with current APUs has been shown to cause slowdowns in a review. Now, if it's a GDDR5 APU, that will probably change the game.
Quote:


> Originally Posted by *Verdant*
> 
> The top Asus motherboard for FM2+ comes with...
> *Audio :* Realtek® ALC1150 8-Channel High Definition Audio CODEC ( fantastic audio chip that works perfectly fine with the true-audio on the GPU )


Oh that's just an ALC898 variant.


----------



## Kuivamaa

Quote:


> Originally Posted by *adridu59*
> 
> I'd advise to stay sceptic about that for now, Dual Graphics with current APUs has been showed as slow-down in a review, now if it's a GDDR5 APU it will probably change the game.


The major issue with DG isn't slowdowns that much but the horrid microstutter. Your point is valid, of course.


----------



## Durquavian

Quote:


> Originally Posted by *adridu59*
> 
> I'd advise to stay sceptic about that for now, Dual Graphics with current APUs has been showed as slow-down in a review, now if it's a GDDR5 APU it will probably change the game.
> Oh that's just an ALC898 variant.


If you're speaking of the hybrid CF article that makes CF out to be a huge issue, disregard it. A very poor article. APUs are very RAM-speed dependent; going from 1866 to 2133 alone showed a 20% improvement in iGPU performance. That article used 1866 CL13, which would create issues not inherent in most setups.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *Clocknut*
> 
> I am interested if it can triple crossfire with 2x7790.


R 200 series (7000 series was so much better! *shakes old man cane*) no longer uses crossfire bridges so the only limitation should be the PCIe bus (no problem) and drivers (possibly problematic). If they released dual-GPUs for DGM with the APUs, that would be great too.


----------



## Gungnir

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> R 200 series (7000 series was so much better! *shakes old man cane*) no longer uses crossfire bridges so the only limitation should be the PCIe bus (no problem) and drivers (possibly problematic). If they released dual-GPUs for DGM with the APUs, that would be great too.


I think that only Hawaii (R9 290X) does that; the R9 280X, R9 270X, and R7 260X all have Crossfire connectors.


----------



## Clocknut

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> R 200 series (7000 series was so much better! *shakes old man cane*) no longer uses crossfire bridges so the only limitation should be the PCIe bus (no problem) and drivers (possibly problematic). If they released dual-GPUs for DGM with the APUs, that would be great too.


I'm not too worried about the damn connectors, I've got enough, lol. With each Radeon you purchase, you get one.

If Kaveri...

1. Supports TrueAudio = I can forget about buying a 260X to crossfire with my current 7790. I'll just get another 7790 to crossfire!
2. Can triple-crossfire with 2x 7790, hell, I'll buy used DDR3-3000 for cheap 2 years later.
3. It would be much better if I could configure which Radeon to use for gaming, e.g. the APU's Radeon for 2D/indie games, one 7790 for 2006-2008 3D games, two 7790s for BF3, or maybe even triple crossfire with the APU!


----------



## Verdant

Quote:


> Originally Posted by *iceman595*
> 
> too bad asus doesnt have any plans for a itx mobo


Plenty of Mini-ITX boards are coming with the ALC1150:

http://wccftech.com/asrock-a88x-chipset-based-fm2a88x-itx-miniitx-motherboard-detailed/

The ALC1150 is the successor to the ALC898, and it's a better audio chip.

Here you can find the ALC898 specs:
http://www.hardwaresecrets.com/printpage/Audio-Codec-Comparison-Table/520

Input SNR: 104 dB
Output SNR: 110 dB
http://www.realtek.com.tw/downloads/downloadsView.aspx?Langid=1&PFid=28&Level=5&Conn=4&ProdID=328&DownTypeID=1&GetDown=false&Downloads=true

ALC1150
Input SNR: 110 dB
Output SNR: 115 dB
ftp://207.232.93.28/pc/audio/ALC1150-CG_DataSheet_1.0.pdf

Anyway, it all comes down to the individual motherboard manufacturer's implementation.
Correct me if I am wrong.
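
For rough intuition, SNR figures like these can be translated into an "effective number of bits" via the textbook ideal-converter rule SNR(dB) ≈ 6.02·N + 1.76. This is only a back-of-envelope sketch of what those datasheet numbers imply; as noted above, real codec performance depends on the board implementation:

```python
# Rough effective-number-of-bits (ENOB) estimate from a quoted SNR figure,
# using the ideal-converter relationship SNR(dB) ~= 6.02 * N + 1.76.
def enob(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

# Datasheet output-SNR figures quoted above:
print(f"ALC898  (110 dB): {enob(110):.1f} effective bits")
print(f"ALC1150 (115 dB): {enob(115):.1f} effective bits")
```

So the 5 dB spec bump is worth a bit under one effective bit on paper.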


----------



## nitrubbb

the Kaveri wait is killing me

someone should make up some rumors


----------



## Usario

Quote:


> Originally Posted by *nitrubbb*
> 
> the Kaveri wait is killing me
> 
> someone should make up some rumors


Kaveri A10-7800K 7zip: 16284
Cinebench R11.5: 6.02
3DMark FireStrike: 3972


----------



## Darklyric

Quote:


> Originally Posted by *Usario*
> 
> Kaveri A10-7800K 7zip: 16284
> Cinebench R11.5: 6.02
> 3DMark FireStrike: 3972


reposted on several seedy sites as fact and leaked benches... tyvm.


----------



## Lommi

Will there be a laptop version of the Kaveri chip with those GPU capabilities?


----------



## Kuivamaa

Of course there will be a laptop Kaveri; mobile success will make or break it, not desktop units.


----------



## Usario

Quote:


> Originally Posted by *Darklyric*
> 
> reposted on several seedy sites as fact and leaked benches... tyvm.


Make a graph in Excel with the AMD logo in the corner if anyone gets skeptical


----------



## Asterox

Quote:


> Originally Posted by *Usario*
> 
> Kaveri A10-7800K 7zip: 16284
> *Cinebench R11.5: 6.02*
> 3DMark FireStrike: 3972


A small correction is definitely needed. And by the way, archive this post of mine, because it will be useful in the near future.









*Cinebench R11.5 CPU Multithread*

Phenom II X4 980 - 4.35

Trinity APU A10-6800K - 3.60

Kaveri APU A10-7800K - 4.80


----------



## Gungnir

Quote:


> Originally Posted by *Asterox*
> 
> A small correction is definitely needed. And by the way, archive this post of mine, because it will be useful in the near future.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Cinebench R11.5 CPU Multithread*
> 
> Phenom II X4 980 - 4.35
> 
> Trinity APU A10-6800K - 3.60
> 
> Kaveri APU A10-7800K - 4.80


The 6800K is Richland, not Trinity.


----------



## Seronx

I would put Kaveri at ~600 cb @ 1.8 GHz.


----------



## ChrisB17

<3 APU's


----------



## bencher

Quote:


> Originally Posted by *ChrisB17*
> 
> <3 APU's


I am planning to open a store soon. I will be selling APU computers mostly.


----------



## mtcn77

This isn't Kaveri, but the Kabini-based Lenovo laptop is cool.
Lenovo E145


----------



## unimatrixzero

Will they fit on an FM2 A85 board?


----------



## NaroonGTX

Kaveri will require a Socket FM2+ motherboard. It won't fit into FM2 boards due to it having two extra pins that the FM2 slots don't have.


----------



## Darklyric

Quote:


> Originally Posted by *NaroonGTX*
> 
> Kaveri will require a Socket FM2+ motherboard. It won't fit into FM2 boards due to it having two extra pins that the FM2 slots don't have.


FM2 will fit in FM2+ though, right?


----------



## nitrubbb

Quote:


> Originally Posted by *Darklyric*
> 
> FM2 will fit in FM2+ though, right?


man, I was excited to see new post in this thread, but this is just....ARGRHGGHGH


----------



## AtomTM

Quote:


> Originally Posted by *EliteReplay*
> 
> is there any cpu benchmarks leak? i mean is good to have a good iGpu but i would like to get a FM2+ CPU as least able to fight a 2500k which is sitll rocking.


I agree with @EliteReplay . Any leaks?


----------



## tjwolf88

Seronx, you were right about nearly everything you've said before. At this point I assume you work at AMD.







If that Cinebench score is right, Kaveri might be almost equal to an Ivy i5 clock for clock. I certainly hope so, anyway.


----------



## NaroonGTX

Yes, all FM2 CPUs and APUs will fit into FM2+ mobos.

And there are no CPU performance leaks anywhere, and there most likely won't be until AMD themselves demonstrate the product at their conference in November.


----------



## tjwolf88

What day of November?


----------



## nitrubbb

Quote:


> Originally Posted by *tjwolf88*
> 
> What day of November?


11-14


----------



## Oriander

1) What are some reasons to pay more for the top FM2+ ASUS motherboard?
2) Does it maybe make sense to wait a year for Carrizo and DDR4?
Or will that be overpriced at first?


----------



## Seronx

Quote:


> Originally Posted by *tjwolf88*
> 
> Seronx, you were right about nearly everything you've said before.


I wouldn't say right; more like somewhere between close and far off.
Quote:


> Originally Posted by *tjwolf88*
> 
> At this point I assume you work at AMD


Nope, just paying more attention to AMD discussions.
Quote:


> Originally Posted by *tjwolf88*
> 
> If that Cinebench score is right, Kaveri might be almost equal to an Ivy i5 clock for clock. I certainly hope so, anyway.


It will be in that general vicinity, probably a little higher.


----------



## geoxile

Somehow I highly doubt Kaveri is going to match Ivy in IPC


----------



## Derp

If Kaveri manages to pull Ivy-level IPC out of its butt, then the prices will probably be much higher than the current FM2 lineup's. I doubt it will happen or even come close, but I would like it to.


----------



## tjwolf88

I'm new here, and you were miraculously right about the 290X having more than 2560 shaders and a few other points on that graphics card, which was pretty cool. I'm also just really hopeful... I want to believe AMD's Steamroller can catch up.


----------



## Seronx

Quote:


> Originally Posted by *tjwolf88*
> 
> I'm new here, and you were miraculously right about the 290X having more than 2560 shaders and a few other points on that graphics card, which was pretty cool. I'm also just really hopeful... I want to believe AMD's Steamroller can catch up.


If you actually follow me on the internet, my original projection was way, waaayyy off.

3840 ALUs
240 TMUs
64 ROPs
^-- on 20-nm.
---
Steamroller is, more or less, about understanding the switch in technology. It also helps that the die shot was shown much earlier than the release.

2-way FGMT to 8-way SMT.


----------



## NaroonGTX

Everyone would like Kaveri to magically be on par with Ivy or Haswell, but realistically it will be close to SB (which isn't bad at all, if a bit late).


----------



## CCast88

Where's the FM2+ Mini ITX boards?


----------



## Seronx

Quote:


> Originally Posted by *CCast88*
> 
> Where's the FM2+ Mini ITX boards?


http://www.asrock.com/mb/AMD/FM2A88X-ITX+/
http://www.gigabyte.com/products/product-page.aspx?pid=4745#ov
Quote:


> Originally Posted by *NaroonGTX*
> 
> Everyone would like Kaveri to magically be on par with Ivy or Haswell, but realistically it will be close to SB (which isn't really bad at all, if not a bit late.)


Kaveri has a severe clock disadvantage, so it's up to the heart of the transistors.


----------



## NaroonGTX

Clock disadvantage? Pretty sure the clocks haven't been revealed yet.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> Clock disadvantage? Pretty sure the clocks haven't been revealed yet.


Quote:


> Originally Posted by *Seronx*
> 
> http://www.asrock.com/mb/AMD/FM2A88X-ITX+/
> http://www.gigabyte.com/products/product-page.aspx?pid=4745#ov
> Kaveri has a severe clock disadvantage, so it's up to the heart of the transistors.


I don't care what the clock is.
It should be about instructions per clock, shouldn't it?


----------



## Seronx

Quote:


> Originally Posted by *NaroonGTX*
> 
> Clock disadvantage? Pretty sure the clocks haven't been revealed yet.


35W Kaveri is 1.8 GHz, which is a massive 700 MHz disadvantage in comparison to Richland.

35W Richland 2.5 GHz -> 35W Kaveri 1.8 GHz
100W Richland 4.1 GHz -> 95W Kaveri 2.6 GHz.
Quote:


> Originally Posted by *sumitlian*
> 
> It should be about instructions per clock. isn't it ?


Clock rate determines the ops available per second.


----------



## Usario

Quote:


> Originally Posted by *Seronx*
> 
> 35W Kaveri is 1.8 GHz which has a massive 700 MHz disadvantage in comparison to Richland.
> 
> 35W Richland 2.5 GHz -> 35W Kaveri 1.8 GHz
> 100W Richland 4.1 GHz -> 95W Kaveri 2.6 GHz.
> Clock rate determines the ops available per second.


How/where did you get these numbers?


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> Clock rate determines the ops available per second.


Operations per second alone don't matter, because they don't show how many CPU cycles are being consumed. Hence I always prefer IPC improvements over IPS/OPS.
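
For what it's worth, both points reduce to the same identity: single-thread throughput is roughly IPC × clock, so neither number matters in isolation. A toy sketch with entirely hypothetical IPC and clock figures (not real Richland/Kaveri numbers):

```python
# Toy model: single-thread throughput ~= IPC * clock.
# All IPC/clock values below are hypothetical, for illustration only.
def throughput_gips(ipc: float, clock_ghz: float) -> float:
    """Billions of instructions retired per second."""
    return ipc * clock_ghz

old_chip = throughput_gips(ipc=1.0, clock_ghz=2.5)    # 2.5 GIPS
new_chip = throughput_gips(ipc=1.2, clock_ghz=2.5)    # 3.0 GIPS: +20% IPC wins at equal clock
new_lowclk = throughput_gips(ipc=1.2, clock_ghz=1.8)  # ~2.16 GIPS: a big clock deficit still loses
```

So a higher-IPC design can still end up slower overall if it gives up too much frequency.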


----------



## Seronx

Quote:


> Originally Posted by *Usario*
> 
> How/where did you get these numbers?


----------



## DaveLT

Quote:


> Originally Posted by *sumitlian*
> 
> Operations per second don't matter because it does not show how much CPU cycles are being consumed. Hence I always prefer IPC improvements over IPS/OPS.


Kill yourself. Seriously.
We're in goddamn 2013 already
Quote:


> Originally Posted by *Seronx*
> 
> 35W Kaveri is 1.8 GHz which has a massive 700 MHz disadvantage in comparison to Richland.
> 
> 35W Richland 2.5 GHz -> 35W Kaveri 1.8 GHz
> 100W Richland 4.1 GHz -> 95W Kaveri 2.6 GHz.
> Clock rate determines the ops available per second.


Eh? How do you figure a 1.5 GHz difference? *smells suspicion*
There is no way Kaveri is going to be clocked that low either. Besides, if Kaveri dips down to 2.6 GHz then it's not power efficient anymore. It is meant to be power efficient while having only slightly lower clocks than Richland, not the other way round.


----------



## sumitlian

Quote:


> Originally Posted by *DaveLT*
> 
> Kill yourself. Seriously.
> We're in goddamn 2013 already


Let's consider Kaveri becoming faster at 4.0 GHz, or even 4.5 GHz, than a 3.5 GHz i7-4770K in single thread. *I challenge you never to then claim that Kaveri only won because its core frequency was 0.5 to 1 GHz higher, because the day you say so, you'll automatically prove that you were wrong and I was right.*


----------



## Kuivamaa

Quote:


> Originally Posted by *Seronx*
> 
> 35W Kaveri is 1.8 GHz which has a massive 700 MHz disadvantage in comparison to Richland.
> 
> 35W Richland 2.5 GHz -> 35W Kaveri 1.8 GHz
> 100W Richland 4.1 GHz -> 95W Kaveri 2.6 GHz.
> Clock rate determines the ops available per second.


Where's the source for TDP on kaveri?


----------



## azanimefan

Quote:


> Originally Posted by *Kuivamaa*
> 
> Where's the source for TDP on kaveri?


Lots of rumor. And engineering samples are ALWAYS underclocked. Remember the Haswell samples? They were at like 2.4 GHz or something like that too.


----------



## delboy67

My dream Kaveri spec is Ivy IPC, 4 cores with hyperthreading, and an iGPU on par with the 7790, all at the same core speeds and pricing structure as Richland/Trinity. Am I being realistic?









So I can actually buy an FM2+ mobo now and slide my A8-5600K into it?


----------



## NaroonGTX

http://wccftech.com/amd-steamroller-apu-engineering-sample-bionic-research-database/

This ES (which was 'leaked' months ago) doesn't mean anything in relation to whatever the release Kaveri will do. I remember an Ivy Bridge ES which was clocked at less than 2GHz and the performance in the benchmark didn't get close to Sandy Bridge, yet we know IB was 5~10% faster than SB when it released. So as usual, take these projections with massive doses of salt grains.

Also the Kaveri TDP came from several MOBO manufacturers who all list FM2+ MOBO's as supporting "95W FM2+ / 100W FM2 processors"

http://www.asrock.com/mb/AMD/FM2A88X%20Extreme6+/index.asp
Quote:


> Originally Posted by *ASRock Product Page*
> Supports Socket FM2+ 95W / FM2 100W processors


So it's safe to say that the top-end Steamroller APU's and Athlon's will be 95W rather than 100W now.


----------



## sumitlian

Someone please say "we will see a quad module Kaveri as well".


----------



## Timeofdoom

Quote:


> Originally Posted by *sumitlian*
> 
> Someone please say "we will see a quad module Kaveri as well".


Not going to happen









Though I would like a dual-module Kaveri built on the Steamroller arch with "near-Ivy" IPC and a beefy-arse GPU at least equal to an HD 7850.
One man can dream.









Oh well, my rig's gonna last me some time; when I finally hafta start upgrading again, hopefully more attractive APUs will be available.


----------



## NaroonGTX

I doubt we will ever see a quad-module Kaveri, sadly. I wouldn't mind seeing a quad-module Athlon, though. As it stands, the die size would have to increase pretty massively to accommodate four modules and maintain a decent GPU at the same time... not to mention the two additional pools of L2 cache for the extra modules. It's safe to say these hypothetical chips wouldn't have L3 cache due to the GPU taking up space, but it wouldn't be a big deal.

Seeing as how Carrizo is rumored to have 65W parts max, with AMD aiming for 45W max (an obvious foreshadowing of the implementation of HDL and a possible die-shrink down to 20nm) we probably won't see any APU's with more than two modules anytime soon.


----------



## Kuivamaa

Quote:


> Originally Posted by *delboy67*
> 
> My dream Kaveri spec is Ivy IPC, 4 cores with hyperthreading, and an iGPU on par with the 7790, all at the same core speeds and pricing structure as Richland/Trinity. Am I being realistic?


You just asked for a chip that mirrors an i7-3770K CPU-wise and, instead of HD 4000, has 7790-level iGPU performance, for less than half the price of said i7-3770K. If AMD ever produces this thing, it will have the price of an i7-4770K and a higher TDP.
Quote:


> Originally Posted by *NaroonGTX*
> 
> I doubt we will ever see a quad-module Kaveri, sadly. I wouldn't mind seeing a quad-module Athlon though. As it stands, the die size would have to increase pretty massively to accomodate four modules and maintain a decent GPU at the same time...Not to mention the additional two pools of L2 cache for both of the additional modules. It's safe to say these hypothetical chips wouldn't have L3 cache due to the GPU taking up space, but it wouldn't be a big deal.
> 
> Seeing as how Carrizo is rumored to have 65W parts max, with AMD aiming for 45W max (an obvious foreshadowing of the implementation of HDL and a possible die-shrink down to 20nm) we probably won't see any APU's with more than two modules anytime soon.


Making a quad-module APU with that type of iGPU is really easy to design and produce, and the die size at 28nm wouldn't be that huge anyway: bigger than an FX-8xx0, but within reason. The problem is there is no market for such a chip, at least not yet. It is too big/hot for laptops, I am not sure it makes sense on the server side, and desktop enthusiasts and professionals would need a discrete GPU anyway.

But unless AMD decides to totally abandon even the mid/high-end desktop CPU segment, they will make a 3+ module chip for SR and/or EX at some point. That leaked die shot is a monster; it reveals too much ambition to be limited to just i5 competitors.


----------



## NaroonGTX

I know it would be feasible to do a chip like that, but my point was that it's not feasible to expect it to fall within AMD's TDP targets of 95W and below...not unless they drastically reduce the stock clocks or something.


----------



## Kuivamaa

Laptops are the key here, since laptops are the bulk of PCs sold. AMD offers single- and dual-module APUs, while Intel offers dual cores, dual cores with HT, and quads with HT. In desktop terms, that means top-end AMD laptops feature downclocked (and L3-less) versions of the FX-4xx0, while Intel offers cut-down i3s (in the form of mobile i3s), i7s downclocked and chopped in half (mobile i5s), and fully downclocked desktop i7s (in the form of quad mobile i7s). No surprise that AMD can only compete vs. the i3/i5 on the mobile side and gets stomped by the i7. Beefing up their modular design (as they seem to be doing from that die shot) means their dual-module chip will finally clearly surpass the i3/i5 and command a higher price.


----------



## Darklyric

Yeah, that's why I want them so badly to do a 4-module AM3+ SR! Hell, there's a few 220W TDP boards, lol.


----------



## sumitlian

Is there any chance of a 4-module Steamroller/Excavator FX in 2014-2015?
Wikipedia shows AMD doesn't plan a future FX release.


----------



## Darklyric

Maybe if we all go buy one of those... FX-9xxx chips... they will see AM3+ is here to stay!


----------



## NaroonGTX

Another problem, and arguably the biggest problem for mobile, is getting the damned OEM's to actually put AMD chips into their products. Even for pre-built desktop systems, I hardly see that many powered by AMD anymore. A vast majority of them will have i3's in them, some will have i5's and the jokily-overpriced ones have i7's in them, lol. Most pre-builts I've seen that have AMD chips in them are powered by APU's, however. I don't think I've seen any FX's in pre-builts except on a couple websites.

edit: Well wiki can't be trusted all the time. A more accurate listing would say that it's TBA. But don't forget that our very own os2wiz spoke with AMD employees in recent times and was told that AM3+ would receive one more offering (we don't know what) and that AM3+ would be supported for years to come. As usual, we just have to wait on the upcoming roadmaps before we say for certain what is coming.


----------



## Kuivamaa

On the laptop side, HP and Lenovo (top vendors) seem active at offering AMD solutions. On desktop I am not sure, since I do not follow the prebuilt environment, from a quick look at local lenovo they don't seem to offer AMD at all ,indeed.


----------



## tjwolf88

The FX series might use Steamroller, although given AMD's desire to focus on APUs, I doubt there would be a Carrizo FX chip. Most likely we'll eventually see 8-core APUs, but that'll have to wait until 20nm or smaller for it to fit FM-series TDPs while still having an iGPU.


----------



## Usario

Quote:


> Originally Posted by *Seronx*


Very interesting, thank you, but I see nothing about clocks or TDP.....


----------



## MrJava

Looks like the INT score is about 30% higher than Piledriver's (per thread, per clock). The FP score is about 15% lower than Piledriver's (per thread, per clock).
But it all depends on how you interpret that CPUID; I've assumed it's running at 1.8 GHz.
Quote:


> Originally Posted by *Seronx*


----------



## DaveLT

Quote:


> Originally Posted by *NaroonGTX*
> 
> I know it would be feasible to do a chip like that, but my point was that it's not feasible to expect it to fall within AMD's TDP targets of 95W and below...not unless they drastically reduce the stock clocks or something.


It's simple. The move from 32nm to 28nm actually makes an 8-core SR possible within 95W. Don't forget that SR drastically reduces power consumption as well. Whether they will carve out another die or not is the question.
They can put in something weaker than a 7750, or even nothing for all we care, since anyone who needs an 8-core uses an R9 280X. Fair enough?


----------



## NaroonGTX

I've been saying they could do parts with more than two modules for ages, whether they use the Athlon or FX branding. I doubt even the move to 28nm would reduce the consumption for a chip that has four modules, L2 caches for each, and a GPU on there. They couldn't get away with doing an octocore part with a weak GPU because they would get slammed hard for that. An octocore part is obviously aimed at "enthusiasts", who -- buying an APU like that, would demand top-tier GPU performance. There's no way it would be merely 95W. Even the current "125W" Vishera octocores have been shown to actually have higher TDP's than that from stock.

They could do FX parts if they wanted on FM2+, yeah. But there's nothing pointing to them doing so. For now, people who want moar coarz will have to make do with AM3+.


----------



## Papadope

They're not going to sacrifice any GPU portion of the die to add more modules. Kaveri will be the first big step in laying the foundation for HSA. Right now they need to build a market share of HSA-enabled computers so developers will start updating their software to support HSA. The quickest way to get market share is to release a product for the masses; in this case the casual consumer, primarily notebooks. There is no need for more than 2 modules in that market. It would do more harm than good because of reduced battery life and more heat.

3-4 modules will come, but it's just not the right time for it. Personally, I would be more than happy with a 2-module APU with Sandy IPC. I would be ecstatic.


----------



## NaroonGTX

Agreed. I don't need more than two for what I do, not even gaming. Even the games that are finally able to support up to 8 threads won't suddenly run like crap on four cores. There's really no point in more than four cores in any mobile product, and pushing HSA is more important than appeasing enthusiasts.


----------



## DaveLT

Hold on a sec... why are we still referring to SR's clusters as "modules"?
Quote:


> Originally Posted by *NaroonGTX*
> 
> Agreed. I don't need more than two for what I do, not even gaming. Even the games that are finally able to support up to 8 threads won't suddenly run like crap on four cores. There's really no point in more than four cores in any mobile product, and pushing HSA is more important than appeasing enthusiasts.


I'll quote you in the future. It's not going to be quad-only anymore.


----------



## Usario

Quote:


> Originally Posted by *Papadope*
> 
> They're not going to sacrifice any GPU portion of the die to add more modules. Kaveri will be the first big step in laying the foundation for HSA. Right now they need to build a market share of HSA-enabled computers so developers will start updating their software to support HSA. The quickest way to get market share is to release a product for the masses; in this case the casual consumer, primarily notebooks. There is no need for more than 2 modules in that market. It would do more harm than good because of reduced battery life and more heat.
> 
> 3-4 modules will come, but it's just not the right time for it. Personally, I would be more than happy with a 2-module APU with Sandy IPC. I would be ecstatic.


Richland is a small chip, around 246mm^2. For comparison, a real high-end chip like SB-E is around 430mm^2, and Knights Corner is nearly 700mm^2. Piledriver is 315mm^2. AMD could make a 4M APU and keep it well under 400mm^2 if they really wanted to, but I'm not sure if they have the time or R&D resources at the moment for a presumably relatively low volume product like that.


----------



## NaroonGTX

@DaveLT

We still refer to SR's clusters as modules because that's precisely what they are. It's not a complete rework. Each core has its own decoder now, but the fetch unit is still partially shared, as is the L2 cache. They are still modules, which isn't a bad thing.

And...of course. I never said it would remain as only supporting quad-cores or anything. Multi-threading will be scaling up to eight threads, but that doesn't mean people with quad-cores will have to upgrade or get left in the dust.


----------



## nitrubbb

Quote:


> Originally Posted by *DaveLT*
> 
> It's simple. The move from 32nm to 28nm actually makes an 8-core SR possible within 95W. Don't forget that SR drastically reduces power consumption as well. Whether they will carve out another die or not is the question.
> They can put in something weaker than a 7750, or even nothing for all we care, since anyone who needs an 8-core uses an R9 280X. Fair enough?


omg that would be, amazing


----------



## Seronx

Quote:


> Originally Posted by *Usario*
> 
> Very interesting, thank you, but I see nothing about clocks or TDP.....


1304 is the GPU ID; it indicates that it is a 35-watt part.

23 is the single-module boost clock.
18 is the dual-module boost clock.
12/14 is the stock CPU clock.
05 is the stock GPU clock.

Steamroller & Jaguar products have the same OPN design.

Jaguar: 2M201079J4461_00/20/08/06_9830
Steamroller: 2M186092H4467_23/18/12/05_1304
Steamroller: 1M186092H4468_23/18/14/05_1304
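
Taking the field layout above at face value (which is this thread's speculation, not a documented AMD OPN encoding; the field names below are guesses based on the post), the string splits mechanically like so:

```python
# Splits an OPN string into the fields described above.
# The field meanings (boost clocks in 100 MHz units, GPU/TDP ID) are this
# thread's speculation, not a documented AMD encoding.
def parse_opn(opn: str) -> dict:
    base, clocks, gpu_id = opn.split("_")
    c1, c2, cpu, gpu = clocks.split("/")
    return {
        "base_code": base,
        "boost_1module_ghz": int(c1) / 10,  # e.g. "23" -> 2.3 GHz
        "boost_2module_ghz": int(c2) / 10,
        "stock_cpu_ghz": int(cpu) / 10,
        "stock_gpu_code": gpu,
        "gpu_tdp_id": gpu_id,               # "1304" read here as a 35 W part
    }

fields = parse_opn("2M186092H4467_23/18/12/05_1304")
```

Under this reading, the first Steamroller OPN above would decode to a 2.3 GHz single-module boost and a 1.2 GHz stock CPU clock.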


----------



## Kuivamaa

Quote:


> Originally Posted by *Usario*
> 
> Very interesting, thank you, but I see nothing about clocks or TDP.....


I just remembered where I've seen that graph before: from Dresdenboy.

http://citavia.blog.de/2013/07/02/amd-kaveri-engineering-sample-sighted-in-the-wild-16196102/

SemiAccurate had positioned the 1304 graphics ID as a high-performance mobile part (Spectre). Of course a 1.2 GHz base clock is out of the question, and 1.8 is too low as well, even for that beastly leaked module. The i5-4300M (Haswell dual-core with HT, 2.6 GHz/3.3 turbo) has a 37W TDP in comparison. A 2.3 GHz boosted SR core vs. a 3.3 GHz Haswell? No way. It could be a normal Kaveri running at "ULV" clocks, though.


----------



## sepiashimmer

What is the estimated price?


----------



## Seronx

Quote:


> Originally Posted by *sepiashimmer*
> 
> What is the estimated price?


The maximum price Kaveri can go up to is $300.

Normal bin: up to $199.
ULV bin: up to $299.


----------



## IvantheDugtrio

It's too bad that the Athlons now are just APUs with their GPUs disabled.

Imagine if they made a quad-module FM2+ chip with L3 cache and no iGPU and called it the Athlon II FX.


----------



## MrJava

With 4 modules you'd have 8MB of L2 cache, which seems like plenty.
Quote:


> Originally Posted by *IvantheDugtrio*
> 
> It's too bad that the Athlons now are just APUs with their GPU's disabled.
> 
> Imagine if they made a quad-module FM2+ with L3 cache and no IGP and called it the Athlon II FX.


----------



## NaroonGTX

I doubt the top-end Kaveri will cost more than $150. An APU that costs more than their octocores? Lol right.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *NaroonGTX*
> 
> I doubt the top-end Kaveri will cost more than $150. An APU that costs more than their octocores? Lol right.


I think they could somewhat safely approach i5 prices. Both would be quad-cores, AMD's are unlocked for no premium (I think?), and AMD has a better iGPU at the expense of a bit of CPU power, but if they overclock like Piledriver, then the difference can easily be made up. Remember, if you want a CPU you'll get the thing clearly labeled as a CPU, but if you want an APU then you'll look in a different part of the market.

Also, sort of off-topic: what architecture do the A6s and A8s use? Are they basically Phenom IIs with an integrated 6670? Or are they FX-4100s? I know the A10s are FX-4300s+6670s on the same chip, but that's about it.


----------



## Derp

Quote:


> Originally Posted by *NaroonGTX*
> 
> I doubt the top-end Kaveri will cost more than $150. An APU that costs more than their octocores? Lol right.


If it performs well then I don't see the problem with a fast quad with decent iGPU being $200 or more. Vishera being slow and cheap shouldn't matter.


----------



## NaroonGTX

An APU with 7750 DDR3-level iGPU and a CPU that is (possibly) somewhat close to SB, retailing for the same price as a Haswell i5...Yeah no.
Quote:


> Also, sort of off-topic: what architecture do the A6s and A8s use? Are they basically Phenom IIs with an integrated 6670? Or are they FX-4100s? I know the A10s are FX-4300s+6670s on the same chip, but that's about it.


All of the A-series APUs from Trinity and Richland use Piledriver cores. There is no Phenom II or FX-anything in the APUs. The FX chips have L3 cache and are basically just Opteron dies with several HT links laser-cut and, depending on the model, several modules disabled. The only APUs that had the same architecture as Phenom II were the Llano series, which was K10.5-based (sometimes referred to as K12, since it was enhanced Stars).

The only difference between the various Trinity/Richland APU's is how much L2 cache they have and whether or not they have one or two modules, as well as the clock speeds/turbo clocks.


----------



## sepiashimmer

Quote:


> Originally Posted by *NaroonGTX*
> 
> I doubt the top-end Kaveri will cost more than $150. An APU that costs more than their octocores? Lol right.


Maybe a faster quad core is costlier than a slower octocore?


----------



## NaroonGTX

The only correlation speed has with price is what the company decides to price it at. In other words, the prices are simply arbitrary. When Zambezi launched, it was more expensive than the Phenom II series as well as lots of Sandy Bridge chips, despite it being slower than all of them.

Look at Intel's Extreme series chips. One chip costs around $550, and then you have the exact same chip, which has a clock speed that is literally about 200 Mhz higher, with maybe 3MB of extra L3 cache, and suddenly this raises the price up to $1000? Get real. The prices are what they are because that's what the company "decides" they should be. AMD did the exact same thing when they were "competitive" with Intel with their Athlon 64.



Even back then people scoffed at those ridiculous prices, but some people bought the chips anyway. This isn't even showing the original FX series, which was targeted specifically at gamers.


----------



## Papadope

It depends how well the A10-6800K is selling at $150, as that was a price hike from the A10-5800K. On top of that, it depends how much of a performance increase Kaveri will bring. I don't think there's anything forcing the price to stay at $150. If it delivers i5 performance, I think we will see a price between the i3 and i5. They need to gain market share with HSA, but they also need to make some money. If Richland is selling and Kaveri's performance is around Sandy, then I'm thinking ~$180. But I have no idea what HSA will do for it at launch. If they have some big apps and games out that highlight HSA, then it could go higher.

Llano - Low Performance - Flagship Chip Price at introduction $135 -> Llano Poor Sales, Large leftover Inventory
Trinity - Better Performance - Flagship Chip Price at introduction $122 -> Trinity Large percentage of AMD's desktop sales.
Richland - Slightly Better Performance - Flagship Price at introduction $142 -> Richland Sales TBD

Factors for Pricing
1. Sales
2. Performance
3. Competitions price at a given performance

So the number one factor over performance is sales. If sales of the previous chip do well and performance of the new chip has increased, then the new product should cost more. However, it cannot exceed the cost of similar performance offered by the competition. If Kaveri has i5 performance, the price tag has room to grow. Slightly better performance from Richland warranted a $20 price increase. If AMD felt the price hurt sales they would have lowered it by now, but it has been stable. If Kaveri has a large performance increase, I think it's safe to say we will see a price increase.


----------



## NaroonGTX

The thing with Richland is that it wasn't really worth a $20 price hike. It was the same chip, just with the ability to OC the CPU portion better (Trinity usually topped out at 4.3 or 4.4 GHz; Richland could go to 5.0 GHz on air). If Kaveri doesn't replace Richland in the $150 slot, they will keep all the current chips at the same price and use the 6800K's $150 price tag as an excuse to price Kaveri higher, like $170 for the 7800K. I have no doubt Kaveri will surpass Trinity/Richland by a wide margin in both CPU and GPU performance, but I don't see them going any higher than $170 if they don't lower the current chips' prices.


----------



## DaveLT

Quote:


> Originally Posted by *NaroonGTX*
> 
> The thing with Richland is that it wasn't really worth a $20 price hike. It was the same chip, just with the ability to OC the CPU portion better (Trinity usually topped out at 4.3 or 4.4 GHz; Richland could go to 5.0 GHz on air). If Kaveri doesn't replace Richland in the $150 slot, they will keep all the current chips at the same price and use the 6800K's $150 price tag as an excuse to price Kaveri higher, like $170 for the 7800K. I have no doubt Kaveri will surpass Trinity/Richland by a wide margin in both CPU and GPU performance, but I don't see them going any higher than $170 if they don't lower the current chips' prices.


But ... the 6800K had a GPU that OC'd easily (I can confirm): I gave mine a 30% OC on stock voltage and it was fine. Plus the FPS was actually scaling.


----------



## Eliaskil

Node-shrinks don't really guarantee that, e.g. Llano: it used the Stars architecture (K10, i.e. Phenom II and such) for the CPU cores and was on 32nm, yet the max clocks anyone could get out of it were around the 3.6 GHz mark. Deneb and Thuban in comparison would usually top out at around 4.0~4.2 GHz. Llano was around 6~7% faster clock-for-clock than Phenom II even without L3 cache, yet it couldn't clock higher. Some theorize this was probably due to the yield issues they (GloFo) had with the transition to 32nm.


----------



## DaveLT

Quote:


> Originally Posted by *Eliaskil*
> 
> Node-shrinks don't really guarantee that, e.g. Llano: it used the Stars architecture (K10, i.e. Phenom II and such) for the CPU cores and was on 32nm, yet the max clocks anyone could get out of it were around the 3.6 GHz mark. Deneb and Thuban in comparison would usually top out at around 4.0~4.2 GHz. Llano was around 6~7% faster clock-for-clock than Phenom II even without L3 cache, yet it couldn't clock higher. Some theorize this was probably due to the yield issues they (GloFo) had with the transition to 32nm.


That 32nm transition, yes, initially, but also because of the TIM under the IHS.


----------



## Artikbot

Quote:


> Originally Posted by *DaveLT*
> 
> But ... 6800k had a GPU that OC'd easily (I can confirm) I gave them a 30% OC on stock voltage and it was fine. Plus the FPS was actually scaling


And its IMC could clock higher than Trinity's on air.


----------



## DaveLT

Quote:


> Originally Posted by *Artikbot*
> 
> And its IMC could clock higher than Trinity's on air.


----------



## Yeroon

Quote:


> Originally Posted by *DaveLT*
> 
> That 32nm transition yes initially but also because of TIM under the IHS


Quote:


> Originally Posted by *Eliaskil*
> 
> Node-shrinks don't really guarantee that, e.g. Llano: it used the Stars architecture (K10, i.e. Phenom II and such) for the CPU cores and was on 32nm, yet the max clocks anyone could get out of it were around the 3.6 GHz mark. Deneb and Thuban in comparison would usually top out at around 4.0~4.2 GHz. Llano was around 6~7% faster clock-for-clock than Phenom II even without L3 cache, yet it couldn't clock higher. Some theorize this was probably due to the yield issues they (GloFo) had with the transition to 32nm.


It was probably that Stars was tweaked for mobile, so higher clocks weren't really expected, especially since it launched without a BE chip.
I thought Llano was still soldered, and Trinity was the first mainstream APU to get TIM? Llano was a cool-running chip with easy undervolting (mine does 2.9 @ 1.2v).


----------



## CynicalUnicorn

Quote:


> Originally Posted by *NaroonGTX*
> 
> All of the A-series APU's from Trinity and Richland use Piledriver cores. There are no Phenom II or FX-anything in the APU's. The FX chips have L3 cache and are basically just Opteron dies with ~~several HT links laser-cut disabled depending on the model~~ several modules disabled. The only APU's that had the same architecture as Phenom II was the Llano series, which was K10.5 (sometimes referred to as K12 since it was enhanced-Stars) based.
> 
> The only difference between the various Trinity/Richland APU's is how much L2 cache they have and whether or not they have one or two modules, as well as the clock speeds/turbo clocks.


Wait... Does that mean I can get an Opteron for AM3+? That's similar to Intel's Extreme series, right? Essentially a stripped down, high clocked Xeon with a larger L3 cache. My comparison was more based on the architecture though. Obviously there are some differences, but the CPU part of Trinity/Richland is Piledriver, albeit stripped down a bit, which gives it performance extremely similar to the FX-4300s.

As for pricing, Intel has awful iGPUs compared to even a 6670. An i5 compared to an A10 has Intel's APU win in most CPU tasks while AMD's APU wins in GPU tasks. If Steamroller gets within 25% performance of Sandy Bridge, then yes, I think that $170 would be a safe launch price. I doubt that they'll price it at $200 though.


----------



## Kuivamaa

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> If Steamroller gets within *25%* performance of Sandy Bridge, then yes, I think that $170 would be a safe launch price. I doubt that they'll price it at $200 though.


What do you mean?


----------



## CynicalUnicorn

If Steamroller has 75% the IPC of Sandy Bridge, then AMD can safely price Kaveri higher. Piledriver has about two-thirds of SB's IPC, and its potential for overclocking makes up for a big chunk of the lost single-threaded performance (though the 5GHz 2500Ks would like to have a word with me...), and if Steamroller has similar potential, then AMD should be in good shape, especially in conjunction with HSA.


----------



## Kuivamaa

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> If Steamroller has 75% the IPC of Sandy Bridge, then AMD can safely price Kaveri higher. Piledriver has about two-thirds of SB's IPC, and its potential for overclocking makes up for a big chunk of the lost single-threaded performance (though the 5GHz 2500Ks would like to have a word with me...), and if Steamroller has similar potential, then AMD should be in good shape, especially in conjunction with HSA.


Where do you get these numbers?


----------



## Usario

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Wait... Does that mean I can get an Opteron for AM3+? That's similar to Intel's Extreme series, right? Essentially a stripped down, high clocked Xeon with a larger L3 cache. My comparison was more based on the architecture though. Obviously there are some differences, but the CPU part of Trinity/Richland is Piledriver, albeit stripped down a bit, which gives it performance extremely similar to the FX-4300s.
> 
> As for pricing, Intel has awful iGPUs compared to even a 6670. An i5 compared to an A10 has Intel's APU win in most CPU tasks while AMD's APU wins in GPU tasks. If Steamroller gets within 25% performance of Sandy Bridge, then yes, I think that $170 would be a safe launch price. I doubt that they'll price it at $200 though.


There are AM3+ Opterons, but they're identical to FX chips beyond bin, clock, and TDP. C32 and G34 Optys on the other hand have extra HT links enabled for 2P and 4P systems respectively.


----------



## traktor

Quote:


> Originally Posted by *Kuivamaa*
> 
> Where do you get these numbers?


Perhaps this graph; it is already at ~72%.


----------



## sugarhell

Lol cinebench


----------



## Artikbot

Quote:


> Originally Posted by *sugarhell*
> 
> Lol cinebench


R10


----------



## DaveLT

Where a 3930k can lose to a 2500k by quite a big margin


----------



## Kuivamaa

Nuff said.


----------



## sugarhell

Quote:


> Originally Posted by *DaveLT*
> 
> Where a 3930k can lose to a 2500k by quite a big margin


3930K = 2500K, confirmed by Cinebench.


----------



## NaroonGTX

Llano couldn't clock that high because really, the Stars arch had reached its wall. AMD knew that, which was why there was no "Phenom III" or anything, and why they tried to rush Bulldozer out so quickly. They could only do so many derivatives of the K7 before they had to move on. There just wasn't much room for improvement.

Meanwhile, the Bulldozer arch got off to a rough start, yeah, but it has TONS of room for improvement. It's a completely fresh uarch as opposed to being based off previous groundwork.


----------



## Artikbot

Quote:


> Originally Posted by *NaroonGTX*
> 
> Llano couldn't clock that high because really, the Stars arch had reached its wall. AMD knew that, which was why there was no "Phenom III" or anything, and why they tried to rush Bulldozer out so quickly. They could only do so many derivatives of the K7 before they had to move on. There just wasn't much room for improvement.
> 
> Meanwhile, the Bulldozer arch got off to a rough start, yeah, but it has TONS of room for improvement. It's a completely fresh uarch as opposed to being based off previous groundwork.


Which is what Phenom III mongers failed to realize.

But it has no redemption possible. It was just slow, hot, and not worth the money.


----------



## sepiashimmer

When will it be released and what will the price be with MB and RAM?


----------



## NaroonGTX

Kaveri will hit retail desktop in Jan. 2014. The cost of the MOBO and RAM will depend on which MOBO and RAM you buy...


----------



## yraith

I know what I want for Xmas now =)


----------



## seraph84

I wonder if a low-power version of Kaveri could make it into an NUC-sized platform... I would be all over that.


----------



## nitrubbb

Quote:


> Originally Posted by *NaroonGTX*
> 
> Kaveri will hit retail desktop in Jan. 2014. The cost of the MOBO and RAM will depend on which MOBO and RAM you buy...


sooner than that I hope


----------



## Kuivamaa

There are already Kaveri mobos for sale. Some high-end ones:

http://www.newegg.com/Product/Product.aspx?Item=N82E16813128653&Tpk=fm2%2b%20sniper
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128655
http://www.newegg.com/Product/Product.aspx?Item=N82E16813157457

Very affordable; after all, you can't charge much when the APUs themselves aren't all that expensive either.


----------



## Zyro71

I dunno why, but I can foresee AMD trying what they did with the 9590 and making a 5GHz APU.
Would be funny, but hard to cool I'm sure.
Dunno why, but my 5800K was not easy to cool on air; I blame my crap ASRock Extreme6 for that.
The FM2A85X from ASRock... I don't recommend it.
I just have a 5800K sitting at home now because of it.


----------



## DaveLT

Quote:


> Originally Posted by *Zyro71*
> 
> I dunno why but I can for see AMD trying what they did with the 9590 and making a 5GHz APU.
> Would be funny, but hard to cool im sure.
> Dunno why but my 5800K was not easy to cool on air but i blame my crap asrock extreme 6 for that..
> FM2A85x from Asrock...I dont recommend..
> I just have a 5800k sitting at home now because of it.


6800k does 5GHz on AIR. Easily, lol.


----------



## NaroonGTX

Richland uses "enhanced Piledriver" cores, which enabled AMD to increase clocks while staying in the same thermal envelope as the previous gen. Richland can hit 5 GHz on air without problems with no disabled cores.

Kaveri will maintain Piledriver's high clocks while retaining the performance increases from the new uarch.


----------



## boot318

Quote:


> Originally Posted by *NaroonGTX*
> 
> Kaveri will maintain Piledriver's high clocks while retaining the performance increases from the new uarch.


I doubt ~~Piledriver~~ Steamroller will clock high. You're asking GloFo to get it right and have 'bulk' become a hero. Not happening.


----------



## NaroonGTX

You're basing this off what, exactly? AMD won't release their new flagship APU with low clockrates. This has been in the works for ages now, it's silly to assume Kaveri will have a major clock regression. This isn't some complete re-design, this is a further evolution of the Bulldozer design. There's not really anything architecturally that points to SR not clocking high.


----------



## Bassefrom

Quote:


> Originally Posted by *boot318*
> 
> I doubt Piledriver will clock high. You're asking GloFo to get it right and have 'bulk' become a hero. Not happening.


You surely must mean Kaveri? Because Piledriver already clocks insanely high on air.


----------



## NaroonGTX

If Kaveri retains the enhancements that the engineers made to enhanced-PD with Richland, along with the improved performance that Kaveri already brings, the top-end chip will be a monster.


----------



## boot318

Quote:


> Originally Posted by *Bassefrom*
> 
> You surely must mean Kaveri? Because Piledriver already clocks insanely high on air.


Sorry about that. Yeah, I meant Kaveri, or Steamroller to be precise. I can't believe I put PD instead of SR....


----------



## nitrubbb

not too long anymore!


----------



## MrJava

We've already extrapolated a minimum clock rate for the top end Kaveri being about 4GHz based on the 1050 GFLOPs figure given and an assumed GPU clock rate of 900MHz. This is AMD we're talking about though, so you never know.

Also a simplistic calculation would say that with 30% IPC improvement (on average), a 3GHz Kaveri would perform similarly to a 3.9GHz Richland on average but with lowered power consumption. One could argue that this is a big win for laptops. For desktop/enthusiasts, not so much.
Quote:


> Originally Posted by *NaroonGTX*
> 
> If Kaveri retains the enhancements that the engineers made to enhanced-PD with Richland, along with the improved performance that Kaveri already brings, the top-end chip will be a monster.
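The estimate above is just clock multiplied by the rumored IPC uplift; a trivial sketch (the 30% figure is speculation from the thread, not a confirmed spec):

```python
def equivalent_clock(base_ghz, ipc_gain):
    """Clock the old core would need to match a new core with
    `ipc_gain` fractional IPC improvement at `base_ghz`."""
    return base_ghz * (1.0 + ipc_gain)

# A 3.0 GHz Kaveri with +30% IPC performs like a 3.9 GHz Richland.
print(equivalent_clock(3.0, 0.30))
```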


----------



## Seronx

Richland 35W:
4 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 2.5 GHz * 2 Modules => 80 GFlops(upper max)
4 (64-bit units) * 2.5 GHz * 2 Modules => 20 GFlops(lower max)
384 (32-bit units) * 2 (FMA) * 0.72 GHz => 552.96 GFlops(upper max)
384 (32-bit units) * 0.6 GHz => 230.4 GFlops(lower max)

Total Agg Upper: 632.96 GFlops
Total Agg Lower: 250.4 GFlops

Kaveri 35W:
8 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 1.8 GHz * 2 Modules => 115.2 GFlops(upper max)
8 (64-bit units) * 1.8 GHz * 2 Modules => 28.8 GFlops(lower max)
512 (32-bit units) * 2 (FMA) * 0.5 GHz => 512 GFlops(upper max)
512 (32-bit units) * 0.5 GHz => 256 GFlops(lower max)

Total Agg Upper: 627.2 GFlops
Total Agg Lower: 284.8 GFlops

What IPC increase?
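The upper/lower bounds above are straightforward peak-FLOPS arithmetic and can be reproduced directly; a quick sketch (the unit counts and clocks are the rumored figures from the post, not confirmed specs):

```python
def peak_gflops(units, ghz, vec_width=1, fma=False, modules=1):
    """Theoretical peak GFLOPS: units x vector lanes x (2 if FMA) x GHz x modules."""
    return units * vec_width * (2 if fma else 1) * ghz * modules

# Richland 35W, upper bounds (rumored figures from the post)
richland = peak_gflops(4, 2.5, vec_width=2, fma=True, modules=2) \
         + peak_gflops(384, 0.72, fma=True)          # 80 + 552.96 = 632.96

# Kaveri 35W, upper bounds
kaveri = peak_gflops(8, 1.8, vec_width=2, fma=True, modules=2) \
       + peak_gflops(512, 0.5, fma=True)             # 115.2 + 512 = 627.2

print(round(richland, 2), round(kaveri, 2))
```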


----------



## Gungnir

Quote:


> Originally Posted by *Seronx*
> 
> Richland 35W:
> 4 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 2.5 GHz * 2 Modules => 80 GFlops(upper max)
> 4 (64-bit units) * 2.5 GHz * 2 Modules => 20 GFlops(lower max)
> 384 (32-bit units) * 2 (FMA) * 0.72 GHz => 552.96 GFlops(upper max)
> 384 (32-bit units) * 0.6 GHz => 230.4 GFlops(lower max)
> 
> Total Agg Upper: 632.96 GFlops
> Total Agg Lower: 250.4 GFlops
> 
> Kaveri 35W:
> 8 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 1.8 GHz * 2 Modules => 115.2 GFlops(upper max)
> 8 (64-bit units) * 1.8 GHz * 2 Modules => 28.8 GFlops(lower max)
> 512 (32-bit units) * 2 (FMA) * 0.5 GHz => 512 GFlops(upper max)
> 512 (32-bit units) * 0.5 GHz => 256 GFlops(lower max)
> 
> Total Agg Upper: 627.2 GFlops
> Total Agg Lower: 284.8 GFlops
> 
> What IPC increase?


The one that allows Kaveri to have equal or better performance than Richland with far lower clocks, according to what you just posted?


----------



## tjwolf88

It takes more than GFLOPS to determine performance, yet I like the potential here if you are correct. If it continues AMD's substantial cost-to-performance ratio, it could easily become a very big deal for budget gamers like me.


----------



## MrJava

It's even lower than what you posted, due to Steamroller still having only two 128-bit FMACs, but yeah. I think people often overstate the importance of the FPU. I'd estimate that most games are 90% about INT performance.
Quote:


> Originally Posted by *Seronx*
> 
> Richland 35W:
> 4 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 2.5 GHz * 2 Modules => 80 GFlops(upper max)
> 4 (64-bit units) * 2.5 GHz * 2 Modules => 20 GFlops(lower max)
> 384 (32-bit units) * 2 (FMA) * 0.72 GHz => 552.96 GFlops(upper max)
> 384 (32-bit units) * 0.6 GHz => 230.4 GFlops(lower max)
> 
> Total Agg Upper: 632.96 GFlops
> Total Agg Lower: 250.4 GFlops
> 
> Kaveri 35W:
> 8 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 1.8 GHz * 2 Modules => 115.2 GFlops(upper max)
> 8 (64-bit units) * 1.8 GHz * 2 Modules => 28.8 GFlops(lower max)
> 512 (32-bit units) * 2 (FMA) * 0.5 GHz => 512 GFlops(upper max)
> 512 (32-bit units) * 0.5 GHz => 256 GFlops(lower max)
> 
> Total Agg Upper: 627.2 GFlops
> Total Agg Lower: 284.8 GFlops
> 
> What IPC increase?


----------



## DaveLT

Here's hoping Kaveri desktop parts will clock like hell.
If Mantle really pans out, we could see the R5 M200 getting a HUGE and MASSIVE increase over Iris Pro. Totally forgot about that.
Don't forget that Intel doesn't particularly make good drivers... (at best, not good)


----------



## Zyro71

That's also assuming AMD will keep the 35-watt TDP design.

I kinda hope AMD makes a 4GHz laptop part just to troll more with the high clocks.

But more so, a 45-watt or enthusiast 55-watt APU would be nice (for those that have the high-end GX60/70 from MSI).

But in all honesty I feel like Kaveri is going to be a step up from Richland, but who knows how it's going to be.

I do like to dream though.

..

35-watt quad core clocked at 2.8 with a 3.9GHz turbo
2133MHz RAM
640 Radeon cores clocked at 600MHz with an 800MHz turbo

Well, maybe that's a bit much for 35 watts.

One thing that would make me come back to a desktop APU is if they did like the Phenom and introduced to the world a 6-core processor.
Actually, if they had the ability to put two or three of these chips in a server board... hello CrossFireX


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> Richland 35W:
> 4 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 2.5 GHz * 2 Modules => 80 GFlops(upper max)
> 4 (64-bit units) * 2.5 GHz * 2 Modules => 20 GFlops(lower max)
> 384 (32-bit units) * 2 (FMA) * 0.72 GHz => 552.96 GFlops(upper max)
> 384 (32-bit units) * 0.6 GHz => 230.4 GFlops(lower max)
> 
> Total Agg Upper: 632.96 GFlops
> Total Agg Lower: 250.4 GFlops
> 
> Kaveri 35W:
> 8 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 1.8 GHz * 2 Modules => 115.2 GFlops(upper max)
> 8 (64-bit units) * 1.8 GHz * 2 Modules => 28.8 GFlops(lower max)
> 512 (32-bit units) * 2 (FMA) * 0.5 GHz => 512 GFlops(upper max)
> 512 (32-bit units) * 0.5 GHz => 256 GFlops(lower max)
> 
> Total Agg Upper: 627.2 GFlops
> Total Agg Lower: 284.8 GFlops
> 
> What IPC increase?


This is all theoretical and definitely correct.
But there are overheads... always. And the most important thing is to reduce overhead. (AMD really needs it lol)
Less overhead = more efficient,
Even less overhead = closer to the max theoretical capability.
And so on...

If Kaveri scores 5%, 10%, 20%, 30% or whatever higher than Richland at the same clock, then it will mean Kaveri has less overhead than Richland. And then we say its IPC is higher.
For example, a 4.0 GHz Thuban core will kill a 4.0 GHz FX core in single thread because FX has much higher overheads (which is bad), but if you calculate for FX, it will always win in theory, because it has more cores and more frequency.

What I've found using a 1055T and an 8350 is that memory writes (which are essential for every bit of processing) are extremely horrible clock for clock (memory clock) compared to any Intel part. IMO this is the biggest overhead in all AMD CPUs. idk why, but I feel if FX had about a 3.2 GHz CPU-NB (at least I can dream) then it would have performed much closer to the 32nm i7s. I've been watching Intel's memory writes in various CPU memory benchmarks at default settings, and you know what I find: every AMD CPU is at least 30-60% slower in memory write speed. And if AMD continues to retain this overhead, they can't ever win in single thread, even with Excavator.

Sorry, it was to AMD directly.
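For anyone wanting to sanity-check memory-write claims like the above on their own box, here's a crude sketch (this is not the benchmark used above; dedicated tools like AIDA64 measure this far more carefully):

```python
import time

def write_bandwidth_mb_s(size_mb=64, repeats=5):
    """Crudely estimate sustained memory-write throughput (MB/s)
    by timing bulk writes into a large bytearray."""
    n = size_mb * 1024 * 1024
    buf = bytearray(n)
    chunk = b"\xff" * (1024 * 1024)  # write in 1 MiB slices
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for off in range(0, n, len(chunk)):
            buf[off:off + len(chunk)] = chunk
        best = min(best, time.perf_counter() - start)
    return size_mb / best

if __name__ == "__main__":
    print(f"~{write_bandwidth_mb_s():.0f} MB/s sustained writes")
```

Note this mostly exercises one core's write path through the cache hierarchy, so treat the number as a rough relative indicator rather than the DRAM channel's rated bandwidth.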


----------



## sumitlian

Quote:


> Originally Posted by *greatmember*
> 
> Chunky_Chimp is a fxxx up admin!
> 
> Chunky_Chimp ban me for no reason and send me pm saying he will ban me again and again. He just hates me and don't like what I discuss. I suggest just leave here which is what I do now and find a better place like tom hardware forum.
> 
> When Chunky_Chimp doesn't like you, he will ban you again and again too, then no point to share anything over here at all.
> 
> Hey Chunky_Chimp, I will regularly come back to this forum to make post like this. You cannot stop me at all because you are too rude. Enjoy it while it is always too late when you ban lol You are such a rude loser lol.


I believe everyone is responsible for his/her own destiny. (Random consequences excluded.)


----------



## Artikbot

Quote:


> Originally Posted by *DaveLT*
> 
> Here's hoping Kaveri desktop parts will clock like hell
> If mantle really pans out very well we could see r5 m200 getting a HUGE and MASSIVE increase over Iris Pro. Totally forgot about that
> Don't forget that Intel doesn't particularly make good drivers ... (At best, not good)


Since Kaveri is so different from Trinity/Richland, I honestly don't know what to expect.

So for now I'll be conservative like Phenom II and hope these parts reach Trinity speeds. According to Seronx's math, they should still beat the crap out of a Richland @5GHz right?

Right now as long as they are a solid 20% faster than Trinity, and the IGP gets a similarly high performance boost, I'll be upgrading to it


----------



## NaroonGTX

Posting this again just to do it: http://juanrga.com/en/AMD-kaveri-benchmark.html


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> Posting this again just to do it: http://juanrga.com/en/AMD-kaveri-benchmark.html


Thanks for this.

It looks like the FPUs are working as expected (dual-module Kaveri vs. quad-core i5).


----------



## DaveLT

Quote:


> Originally Posted by *Artikbot*
> 
> Since Kaveri is so different from Trinity/Richland, I honestly don't know what to expect.
> 
> So for now I'll be conservative like Phenom II and hope these parts reach Trinity speeds. According to Seronx's math, they should still beat the crap out of a Richland @5GHz right?
> 
> Right now as long as they are a solid 20% faster than Trinity, and the IGP gets a similarly high performance boost, I'll be upgrading to it


Even if it's max 4.8 I am still happy. I doubt AMD will make it a full Giggle below Richland though.

Those wanting max OCs on FM2+ should just stick to Richland though. It's going to be the last of its kind.


----------



## NaroonGTX

I will gladly kick Richland to the curb for Kaveri, lol. I need more performance for my PCSX2 and Dolphin emus.


----------



## Artikbot

Quote:


> Originally Posted by *DaveLT*
> 
> Even if it's max 4.8 I am still happy. I doubt AMD will make it a full Giggle below Richland though.
> 
> Those wanting max OCs on FM2+ should just stick to Richland though. It's going to be the last of its kind.


For the rest of us who want our things quiet and fast, it's gonna be the bomb though


----------



## Clocknut

What I want is to use a discrete GCN GPU for graphics while the Kaveri iGPU handles the compute processing the game uses. If this can be done it will be awesome, since my discrete GPU will then only focus on rendering graphics.


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> I will gladly kick Richland to curb for Kaveri, lol. I need more performance for my PCSX2 and Dolphin emu's.


You will get it.

I get around 3% higher fps with AVX than SSE2 in DX11 software-mode PCSX2 emulation (regardless of whether the MT Vector Unit is enabled or disabled). If the 20% FPU penalty has really been fixed with Kaveri, then with dual-core processing you'll be getting about 25-35% higher fps than an 8350 clock for clock.

Hoping newer versions of PCSX2 are not ICC compiled


----------



## NaroonGTX

You use PCSX2 as well? Awesome.

Yeah, the independent decoders in Kaveri will grant a huge boost alone; then the boost to single-thread and the reduction of i-cache misses will make Kaveri a very popular option for emu lovers.

And yeah it's a shame PCSX2 was optimized for Intel. Sadly I don't think they will ever change it since the dev team is so small now, and it would require a massive amount of time to rewrite it. However several members have expressed interest in HSA if it turns out it could grant a boost in perf.


----------



## Seronx

Richland @ 5 GHz:
4 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 5 GHz * 2 Modules => 160 GFlops(upper max)
4 (64-bit units) * 5 GHz * 2 Modules => 40 GFlops(lower max)

Kaveri @ 2.5 GHz
8 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 2.5 GHz * 2 Modules => 160 GFlops(upper max)
8 (64-bit units) * 2.5 GHz * 2 Modules => 40 GFlops(lower max)

The highest clock rate Kaveri could have is 2.9 GHz. For Richland, to compete with a Kaveri at 2.9 GHz it would need to be clocked more than 5.8 GHz.

Just so you could see how it would match up:
8 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 2.9 GHz * 2 Modules => 185.6 GFlops(upper max)
512 (32-bit units) * 2 (cuz FMA) * 0.844 GHz => 864.256 GFlops

Which gives a total agg of ~1050 GFlops. For the GPU side, Kaveri boost clocks ≈ Richland base clocks for that particular segment.

Kaveri 35W = 500 MHz base/600 MHz boost (from FP2)
Kaveri 95W = 7xx MHz base/844 MHz boost (from FM2+)

Based on the Computex info, Kaveri will only appear as PGA on FM2+ and BGA on FP2.
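The ~1050 GFLOPS aggregate above follows the same peak-FLOPS formula; here it is as a sketch using the post's assumed 2.9 GHz CPU and 844 MHz GPU clocks (speculative numbers, not confirmed specs):

```python
def peak_gflops(units, ghz, vec_width=1, fma=False, modules=1):
    # units x vector lanes x (2 if FMA) x GHz x modules
    return units * vec_width * (2 if fma else 1) * ghz * modules

cpu = peak_gflops(8, 2.9, vec_width=2, fma=True, modules=2)   # 185.6
gpu = peak_gflops(512, 0.844, fma=True)                       # 864.256
total = cpu + gpu                                             # ~1050 GFLOPS aggregate

# Clock a Richland FPU (4 units/module, same vectors/FMA) would need
# to match Kaveri's CPU peak at 2.9 GHz:
richland_ghz = cpu / peak_gflops(4, 1.0, vec_width=2, fma=True, modules=2)  # 5.8
```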


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> Richland @ 5 GHz:
> 4 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 5 GHz * 2 Modules => 160 GFlops(upper max)
> 4 (64-bit units) * 5 GHz * 2 Modules => 40 GFlops(lower max)
> 
> Kaveri @ 2.5 GHz
> 8 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 2.5 GHz * 2 Modules => 160 GFlops(upper max)
> 8 (64-bit units) * 2.5 GHz * 2 Modules => 40 GFlops(lower max)
> 
> The highest clock rate Kaveri could have is 2.9 GHz. For Richland, to compete with a Kaveri at 2.9 GHz it would need to be clocked more than 5.8 GHz.
> 
> Just so you could see how it would match up:
> 8 (64-bit units) * 2 (32-bit vectors) * 2 (cuz FMA) * 2.9 GHz * 2 Modules => 185.6 GFlops(upper max)
> 512 (32-bit units) * 2 (cuz FMA) * 0.844 GHz => 864.256 GFlops
> 
> Which gives a total agg of ~1050 GFlops. For the GPU side, Kaveri boost clocks ≈ Richland base clocks for that particular segment.
> 
> Kaveri 35W = 500 MHz base/600 MHz boost (from FP2)
> Kaveri 95W = 7xx MHz base/844 MHz boost (from FM2+)
> 
> Based on the Computex info, Kaveri will only appear as PGA on FM2+ and BGA on FP2.


Every time I see 8 x 64-bit units per module, I feel like... thanks to AMD.
Hey Seronx, I am confused about something. If a single core is busy with a 128-bit operation, will it still be allowed to do 2 x 64-bit integer operations concurrently for other loads? Am I right?


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> You use PCSX2 as well? Awesome.
> 
> Yeah the independent decoders in Kaveri will grant a huge boost alone, then the boost to single-thread, reduction of i-cache misses will make Kaveri a very popular option for emu lovers.
> 
> And yeah it's a shame PCSX2 was optimized for Intel. Sadly I don't think they will ever change it since the dev team is so small now, and it would require a massive amount of time to rewrite it. However several members have expressed interest in HSA if it turns out it could grant a boost in perf.


I am a huge fan of ePSXe, PCSX2 and Project64. I have heard a lot about GameCube emulation but never tried it yet, and I'm dying to complete Resident Evil in Dolphin.


----------



## DaveLT

Quote:


> Originally Posted by *Clocknut*
> 
> What I want is to have me using discrete GCN GPU for graphics while kaveri iGPU will handle the compute processing the game used. If this can be done it will be awesome since my discrete GPU will only be focus on render graphics.


Isn't that what HSA is for?


----------



## Seronx

Quote:


> Originally Posted by *sumitlian*
> 
> Hey Seronx, I am confused about something. If a single core is busy with a 128-bit operation, will it still be allowed to do 2 x 64-bit integer operations concurrently for other loads? Am I right?


The math accelerator is the one that is busy in that case, since the execution of math is coarse-grained in the FPU. If it is stalling on that 128-bit operation, it will move on to another instruction to execute while it waits for the 128-bit op to get a memory operand.


----------



## Thunderclap

Quote:


> Originally Posted by *NaroonGTX*
> 
> You use PCSX2 as well? Awesome.
> 
> Yeah the independent decoders in Kaveri will grant a huge boost alone, then the boost to single-thread, reduction of i-cache misses will make Kaveri a very popular option for emu lovers.
> 
> And yeah it's a shame PCSX2 was optimized for Intel. Sadly I don't think they will ever change it since the dev team is so small now, and it would require a massive amount of time to rewrite it. However several members have expressed interest in HSA if it turns out it could grant a boost in perf.


I can foresee a Kaveri based PS emulator mini-itx PC in my future...


----------



## ChrisB17

My 6800K is nice; I can't wait for Kaveri if it's going to be even nicer. Excited a little, or a lot.


----------



## sepiashimmer

Is there an 8-core Kaveri like in the PS4?


----------



## geoxile

Quote:


> Originally Posted by *sepiashimmer*
> 
> Is there a 8-core Kaveri like in PS4s?


Jaguar


----------



## NaroonGTX

No, for now Kaveri will only ship in SKUs featuring 2-4 Steamroller cores. The custom APUs in the PS4 and X1 use low-power Jaguar cores.


----------



## sepiashimmer

Quote:


> Originally Posted by *NaroonGTX*
> 
> No, for now Kaveri will only ship in SKU's featuring 2-4 Steamroller cores. The custom APU's in the PS4 and X1 use low-power Jaguar cores.


Is an 8 core with low clock speed better than 2-4 core with higher clock speed?


----------



## LDV617

I'm extremely excited for Kaveri and will probably build a Kaveri rig for my little sister when released, but these benchmarks don't show ****.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *sepiashimmer*
> 
> Is an 8 core with low clock speed better than 2-4 core with higher clock speed?


Depends on the application. GPUs are several hundred to a few thousand cores and typically run below 1GHz. They aren't as efficient as CPU cores for complex, linear calculations that can't be split into many threads, which is why, even though they give several times the FLOPS as a CPU, they aren't used as a primary processor.

In single-threaded applications, the higher clockspeed (assuming the same architecture, cache size, etc.) will win since only one core is used. It doesn't matter how many cores you have, since the other seven just aren't seen.

In heavily multi-threaded applications, more cores will often be better. That's why Opterons and Xeons, even though they're under 3 or even 2GHz, are used in servers and workstations. They have 8, 12, or even 16 cores and multiple CPUs can be used together, letting dozens of cores work together. i# and FX-x000 CPUs can't do that; they're uniprocessors.

For a normal consumer or gamer, quadcore and hexacore CPUs are more than enough, but we're getting to a point where more cores will be better. Remember, dual-core CPUs became mainstream only within the last decade and programmers are finally catching up. With XB1 and PS4 using octacore APUs, games should become better multi-threaded (look at Battlefield especially), but most consumer programs are still single-threaded. And even when that does happen, WoW, Skyrim, and Starcraft will continue to be cited as reasons to use Intel. Because this is the Internet and anonymity means you can be a terrible person and not be punched in the face.
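The trade-off described above, a few fast cores versus many slow ones, can be made concrete with Amdahl's law; a minimal sketch, with the clocks and parallel fractions below chosen purely for illustration:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup over one core when `parallel_fraction`
    of the work scales perfectly across `cores`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def effective_perf(clock_ghz, parallel_fraction, cores):
    """Crude throughput proxy: single-core speed times Amdahl speedup."""
    return clock_ghz * amdahl_speedup(parallel_fraction, cores)

# Mostly single-threaded workload: four fast cores beat eight slow ones.
print(effective_perf(4.0, 0.30, 4) > effective_perf(1.6, 0.30, 8))  # True
# Heavily threaded workload: eight cores give a ~5.9x speedup over one.
print(round(amdahl_speedup(0.95, 8), 1))
```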


----------



## NaroonGTX

What Unicorn said.

People must remember that the PS4 and X1 are systems built from the ground up. They literally are PCs, just custom, restricted ones. Their OSes and all the software that runs on them (everyday applications, games, etc.) will be optimized for the up-to-eight threads that those APUs can provide. This will push game devs to start making their engines truly multi-threaded (some have already done this, like DICE and Crytek), which will trickle over to the PC and benefit all gamers, regardless of platform. This means the low-power 1.6GHz octa-core won't be the bottleneck people expected.


----------



## iamwardicus

I'm enjoying the looks of Kaveri based on the speculation... I'm hoping Blizzard makes use of HSA (and/or Mantle), as I usually play SC or WoW nowadays...

I'm praying that if Kaveri takes off they make a 6-core variant of it on a die shrink... Steamroller may or may not work for my needs (I enjoy running very fast Simcrafts compared to my old Phenom II since I can use 8 threads now), but it seems Kaveri will likely smoke my 8350 if the software I use is coded to take full advantage of the new APU.

Also







for the emulation users!


----------



## LDV617

8350 is a very underrated chip from what I've heard. If the Kaveri APUs are focusing on the budget market like the last release of APUs, I seriously doubt it will smoke your 8350. That being said, I can see comparable performance for about 1/2 - 3/4 the price. But I don't think AMD wants to send Vishera to the graveyard quite yet.


----------



## Truedeal

Kaveri might just be my next build with these results!
However, I want to know what gpus Kaveri will "dual graphics" with.


----------



## nitrubbb

Quote:


> Originally Posted by *Truedeal*
> 
> Kaveri might just be my next build with these results!
> However, I want to know what gpus Kaveri will "dual graphics" with.


The R7 260x is my most likely guess


----------



## MrJava

If you look closely, a Jaguar core is half a Bulldozer, though it has greater capabilities in certain areas and its IPC is generally higher.
Quote:


> Originally Posted by *geoxile*
> 
> Jaguar


----------



## geoxile

So exactly when is Kaveri going to come out?


----------



## NaroonGTX

Jan. 2014.


----------



## sepiashimmer

Quote:


> Originally Posted by *NaroonGTX*
> 
> Jan. 2014.


Where did you read this? I'm thinking of buying new computer parts now; I would have nearly wasted my money on Richland.


----------



## NaroonGTX

AMD themselves said earlier this year that Kaveri will ship in late 2013 and be in consumer's hands (on the desktop) in very early Q1 2014, with mobile variants coming later.


----------



## SandGlass

I've been thinking of making a more portable (and cheaper) mATX build in a thin case for some time now, and the GPU side of Kaveri is definitely very powerful if these benches are to be believed. BUT why won't they release it already? And I saw a slide mentioning a DSP; does that mean it'll support TrueAudio?


----------



## NaroonGTX

I don't know if Kaveri's iGPU will support TrueAudio but it's possible it will. I know the R7-260x supports it, despite being a rebranded 7790. It turns out that the original Bonaire GPU actually already had TrueAudio tech on it, it was just disabled. No doubt to give people extra incentive to shell out for the "new" GPU, lol.

I know that Kaveri will have an ARM co-processor as well, but what function it serves I don't remember (or if it was even explained what it will do yet).

>>BUT why won't they release it already?

Well, Kaveri was delayed a while back (before all the recent delay rumors, which weren't really true) for as-of-now unknown reasons. Some speculate yield issues with GloFo, manufacturing issues, or what have you, but these are unsubstantiated. Leaked documents and die shots point to AMD having scrapped the initial Steamroller and overhauled it to become bdver3b, or Steamroller 2.0. We will find out more about Kaveri in a few weeks' time.

If it were ready for release at this time, I don't believe that Richland would have been released. Richland was a stop-gap to give people something to "upgrade" to while they wait for Kaveri.


----------



## DaveLT

The ARM processor will be there for security purposes, I think... It is said to be the first step AMD took toward finally making use of the ARM license they bought








IIRC of course.

Thing is, Richland was already announced way back, and it was on the roadmaps, wasn't it?


----------



## tjwolf88

This is AMD; roadmaps give you a broad idea of their plans, and they aren't always followed too closely. I believe AMD did indeed see that Steamroller could be better, so they spent a few months doing last-minute refinements to the arch to make the final product significantly better. Admittedly, that isn't difficult, as the architecture has many flaws that can be solved. Most of all, I want to finally have a decent memory controller. Seriously, the difference between AMD and Intel memory access times is a problem that needs to be fixed, and the PCI-E interface could use refinements too.


----------



## DaveLT

Quote:


> Originally Posted by *tjwolf88*
> 
> This is AMD; roadmaps give you a broad idea of their plans, and they aren't always followed too closely. I believe AMD did indeed see that Steamroller could be better, so they spent a few months doing last-minute refinements to the arch to make the final product significantly better. Admittedly, that isn't difficult, as the architecture has many flaws that can be solved. Most of all, I want to finally have a decent memory controller. Seriously, the difference between AMD and Intel memory access times is a problem that needs to be fixed, and the PCI-E interface could use refinements too.


Enter PCI-E 3 on Kaveri


----------



## Clocknut

Quote:


> Originally Posted by *DaveLT*
> 
> Isn't that what HSA is for?


Sounds good on paper, but how well software makers take advantage of it is another thing. Since both consoles are not based on Kaveri and don't use HSA, I am going to be quite skeptical about the HSA adoption speed.


----------



## 161029

Quote:


> Originally Posted by *sumitlian*
> 
> I am a huge fan of ePSXe, PCSX2 and Project64. I have heard a lot about Gamecube but never tried it yet and dying to complete Resident Evil in Dolphin


I need to get myself a copy of Melee to use in Dolphin. My Trinity rig isn't doing me justice in Dolphin. Dying for Kaveri (and hopefully at least a decent chunk of performance increases).

Wish we could dump Wii U games and play them on Dolphin. I want SSBU so bad. Play it with a Dualshock 4.








Quote:


> Originally Posted by *NaroonGTX*
> 
> I don't know if Kaveri's iGPU will support TrueAudio but it's possible it will. I know the R7-260x supports it, despite being a rebranded 7790. It turns out that the original Bonaire GPU actually already had TrueAudio tech on it, it was just disabled. No doubt to give people extra incentive to shell out for the "new" GPU, lol.
> 
> I know that Kaveri will have an ARM co-processor as well, but what function it serves I don't remember (or if it was even explained what it will do yet).
> 
> >>BUT why won't they release it already?
> 
> Well, Kaveri was delayed a while back (before all the recent delay rumors, which weren't really true) for as-of-now unknown reasons. Some speculate yield issues with GloFo, manufacturing issues, or what have you, but these are unsubstantiated. Leaked documents and die shots point to AMD having scrapped the initial Steamroller and overhauled it to become bdver3b, or Steamroller 2.0. We will find out more about Kaveri in a few weeks' time.
> 
> If it were ready for release at this time, I don't believe that Richland would have been released. Richland was a stop-gap to give people something to "upgrade" to while they wait for Kaveri.


That made me just realize how we will probably be able to crossfire the R7-260x with Kaveri.


----------



## NaroonGTX

PS4 will support HSA, Sony even joined the HSA foundation in early 2013. Don't know about whether or not the X1 will support it.


----------



## Gungnir

Quote:


> Originally Posted by *LDV617*
> 
> 8350 is a very underrated chip from what I've heard. If the Kaveri APUs are focusing on the budget market like the last release of APUs, I seriously doubt it will smoke your 8350. That being said, I can see comparable performance for about 1/2 - 3/4 the price. But I don't think AMD wants to send Vishera to the graveyard quite yet.


In multithreaded (8+ threads), the 8350 will almost undoubtedly win. In single threaded, the Kaveri will almost certainly win. In HSA, Kaveri will almost certainly destroy i7-EEs; the FXs aren't even a relevant comparison there.

Quote:


> Originally Posted by *NaroonGTX*
> 
> PS4 will support HSA, Sony even joined the HSA foundation in early 2013. Don't know about whether or not the X1 will support it.


SCE is even giving a keynote at APU13.

As for the X1; I really hope so, and it would be rather strange if it didn't in some way. I think M$ may try to substitute cloud processing for much of what the PS4 is going to be using HSA for, though who knows if the devs will actually go along with that, at least in the first year or two.


----------



## NaroonGTX

Indeed, SCE will be giving a keynote at the event; I forgot about that. X1 should support it, but for some reason MS has been rather tight-lipped about it, AFAIK. Hopefully it does, so more devs can come up with creative ways to apply it to games, and PC gamers could benefit from that as well as Mantle.

Also just noticed that 2500k OC in your sig, really impressive.


----------



## Gungnir

Yeah, I'm not sure why MS is being so secretive about the X1. Maybe they're trying to avoid being compared to the PS4 more?

Thanks!







That definitely wasn't stable (and I wouldn't want to run that voltage for long even if it was), but it isn't too shabby for a suicide run.


----------



## Vesku

Quote:


> Originally Posted by *Gungnir*
> 
> Yeah, I'm not sure why MS is being so secretive about the X1. Maybe they're trying to avoid being compared to the PS4 more?
> 
> Thanks!
> 
> 
> 
> 
> 
> 
> 
> That definitely wasn't stable (and I wouldn't want to run that voltage for long even if it was), but it isn't too shabby for a suicide run.


Yes, I think Microsoft's console division is still a bit sore from the beating Sony gave them at E3. Best to give Sony some time to trip up, such as their revealing that most PS3 headsets won't work with the PS4 at launch: http://www.ign.com/articles/2013/10/10/existing-headsets-wont-work-with-ps4-at-launch


----------



## LDV617

Quote:


> Originally Posted by *Gungnir*
> 
> In multithreaded (8+ threads), the 8350 will almost undoubtedly win. In single threaded, the Kaveri will almost certainly win. In HSA, Kaveri will almost certainly destroy i7-EEs; the FXs aren't even a relevant comparison there..


I know they aren't a relevant comparison at the price point... but surely people playing modern games won't replace an FX-8350 with a Kaveri chip unless the dual-graphics bonus is monstrous. Most modern games don't utilize more than 4 cores from what I've seen. But if games are making the shift towards utilizing more cores, replacing your FX with a Kaveri would surely be a step in the wrong direction, right?

But don't get me wrong: I plan on building a Kaveri budget gaming rig to test the dual graphics with, say, a 7850


----------



## NaroonGTX

Games will begin to scale up to more threads, but honestly not every game will even need 8 threads, let alone using them all fully. Even BF3 was shown to scale up to eight threads, though the CPU doesn't become drastically important until you're in servers with 40+ players. It showed most of the work being done on 3 or 4 threads, so people with quad-cores will be just fine in a game like BF4 or such. BF4 doesn't bring much new to the table, so I can't see the CPU requirements being heavier than BF3. Other games in comparison don't have battles with 64 players, usually topping out at 8 or 16 depending on the genre. BF is really just a special case of high player count + huge maps + lots of stuff to track and keep in sync, etc.

This is why in a lot of games, a Kaveri APU would actually surpass even an 8350 simply because of the uarch being more efficient.


----------



## DaveLT

Quote:


> Originally Posted by *LDV617*
> 
> I know they aren't a relevant comparison at the price point.. but surely people who are playing modern games won't replace a fx8350 for a Kaveri chip unless the dual graphics bonus is monstrous. Most modern games don't utilize more than 4 cores from what I've seen. But if games are making the shift towards utilizing more cores, replacing your FX with a Kaveri will surely put you a step in the wrong direction right??
> 
> But don't get me wrong, I plan on making a Kaveri budget gaming rig to test the dual graphics on say a 7850


Heh. Modern games will use 6 cores no problem. Want proof? My rig. Some games don't make use of HT, which leaves 6 threads unused, yes, but the other 6 real cores are all used


----------



## 8mm

Definitely going to upgrade the living room PC with this. Should be much quieter than using a graphics card.


----------



## nitrubbb

please Kaveri be out in nov or early dec!


----------



## DaveLT

Quote:


> Originally Posted by *8mm*
> 
> Definitely going to upgrade the living room PC with this. Should be much quieter than using a graphics card.


Or buying a low-end Haswell and still ending up having to buy an add-on GPU (LOL)


----------



## sumitlian

Quote:


> Originally Posted by *NaroonGTX*
> 
> Games will begin to scale up to more threads, but honestly not every game will even need 8 threads, let alone using them all fully. Even BF3 was shown to scale up to eight threads, though the CPU doesn't become drastically important until you're in servers with 40+ players. It showed most of the work being done on 3 or 4 threads, so people with quad-cores will be just fine in a game like BF4 or such. BF4 doesn't bring much new to the table, so I can't see the CPU requirements being heavier than BF3. Other games in comparison don't have battles with 64 players, usually topping out at 8 or 16 depending on the genre. BF is really just a special case of high player count + huge maps + lots of stuff to track and keep in sync, etc.
> 
> **This is why in a lot of games, a Kaveri APU would actually surpass even an 8350 simply because of the uarch being more efficient.**


This !
Single-, dual-, triple-, or quad-core performance would be better than the 8350's (at the same clock); Kaveri won't even have the HyperTransport and PCIe 2.0 overheads. This is why I am sure minimum FPS will always be much better with Kaveri.
I am just afraid of Kaveri not having L3 cache. I really don't understand this L3 cache much, but I want to know: might its absence affect or hinder the actual core performance of those Steamroller cores?


----------



## LDV617

Quote:


> Originally Posted by *DaveLT*
> 
> Heh. Modern games will use 6 cores no problem, want proof? My rig. Only games which don't make use of HT which leaves 6 threads unused yes but the other 6 real cores are used


I believe you, I'm sure plenty of games DO utilize even up to 8, but definitely not to their full potential. Also certain games I like, *cough* Planetside 2 *cough* hardly utilize 4 cores and are insanely CPU dependent.


----------



## sumitlian

Quote:


> Originally Posted by *8mm*
> 
> Definitely going to upgrade the living room PC with this. Should be much quieter than using a graphics card.


Oh brother! Your profile pic just brought back those great old memories of Porsche Unleashed. That career mode......








and dat soundtracks too








I still have the original PlayStation One copy. I think it's time to do it again with ePSXe









Sorry, it's off-topic; I just couldn't stop myself from replying to this.


----------



## DaveLT

Quote:


> Originally Posted by *LDV617*
> 
> I believe you, I'm sure plenty of games DO utilize even up to 8, but definitely not to their full potential. Also certain games I like, *cough* Planetside 2 *cough* hardly utilize 4 cores and are insanely CPU dependent.


Badly optimized games. The Planetside 2 devs sure aren't doing enough work to deserve being paid


----------



## azanimefan

Remember, prior to the PS4 and XB1 there was no need to program games to be multithreaded.

They NEED to program them fully threaded now, so 8-core CPUs will begin to see much better utilization going forward.


----------



## LDV617

I totally agree with that. And as far as I'm concerned, Planetside 2 is the most CPU-intensive game, for multiple reasons. I wouldn't be surprised if Arma 3 was just as tough on the CPU, but I'm sure it runs better due to fewer players on the map.

I definitely noticed a BF3 increase after overclocking my old i5, but nothing compared to upgrading to a GPU with external power.


----------



## sugarhell

I will leave it here

http://semiaccurate.com/2013/10/21/amd-makes-gpu-comute-reality-hq/


----------



## NaroonGTX

>>I am just afraid of not having L3 cache with Kaveri, I really don't understand much this L3 cache, But I wanna know, might it affect or hinder the actual core performance of those Steamroller ?

L3 cache seemed to make a big difference back with K10, as I remember Phenom II's routinely pulling ahead of Athlon II's at the same clocks, despite basically being the same uarch. With Piledriver APU vs FX, the differences seem to be really negligible, to the point where I think the only performance difference is due to different NB speeds (also since FM1 and FM2 integrated the NB onto the die itself rather than relying on HyperTransport). Like in the PCSX2 CPU benchmark, which as we both know is heavily CPU-bound, an A8-5600k would get pretty much the same fps as an FX-4300 at the same clocks. So it mostly depends on the application, I think, but the lack of L3 cache shouldn't hinder performance too much.

L1d/i caches seem to be improved in SR, and this might also directly lower the latencies of the L2 cache as well.

>>I believe you, I'm sure plenty of games DO utilize even up to 8, but definitely not to their full potential. Also certain games I like, *cough* Planetside 2 *cough* hardly utilize 4 cores and are insanely CPU dependent.

Planetside 2 offloads a vast majority of the work onto a single or two dominant thread(s). The devs admitted this when discussing the PS4 version, by saying they were forced to re-do much of the code to support proper multi-threading. This optimization will make its way to the PC version later.


----------



## Fonne

Can't wait







.... My wife is using an A10-6800K now, and I would love to build a Kaveri system for myself:

GA-F2A88XN-WIFI + Kaveri + SSD + 2400 Mhz = is about 150 Watt max totally wrong? - Would make a killer small light gaming rig.


----------



## DaveLT

Quote:


> Originally Posted by *Fonne*
> 
> Cant wait
> 
> 
> 
> 
> 
> 
> 
> .... My wife is using a A10-6800k now, and would love to make a Kaveri system to myself:
> 
> GA-F2A88XN-WIFI + Kaveri + SSD + 2400 Mhz = Is about 150 Watt max totally wrong ? - Would make a killer small light gaming rig.


Absolutely not. If you're OC'ing, 200W is more accurate, I guess.


----------



## Vesku

Shame DDR4 is roughly a year out for low-end consumer computers; we just might see a decent gaming notebook in the ~$500 range then.


----------



## Fonne

Quote:


> Originally Posted by *DaveLT*
> 
> Absolutely not. If you're OC'ing 200W is more accurate i guess.


Will not OC, so I can't see how it will reach 200 Watt? .... An A10-6800K is around 100 Watt, and I don't think Kaveri will use more?


----------



## DaveLT

Quote:


> Originally Posted by *Fonne*
> 
> Will not OC, so cant see how it will reach 200 Watt ? .... A A10-6800K is around 100 Watt and dont think the Kaveri will use more ?


100W (Kaveri is 95W; figure 10W for the rest of the board, I think) is the very max it will pull from a PSU then; most will pull less
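A back-of-envelope version of that power budget; every wattage here is an assumption for illustration, not a measured figure:

```python
# Rough max-draw estimate for a Kaveri mini-ITX build.
# All wattages are illustrative guesses, not measurements.
components_w = {
    "APU (95W TDP, worst case)": 95,
    "motherboard + VRM losses": 10,
    "2x DDR3 DIMMs": 6,
    "SSD": 3,
    "fans": 4,
}
total = sum(components_w.values())
print(f"estimated max draw: {total} W")
```

With numbers like these, a 150W supply leaves comfortable headroom at stock clocks.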








Do remember NOT to use the stock heatsink. It's a heap of crap


----------



## NaroonGTX

Kaveri will have lower power consumption and generally be more efficient overall. Top-end Kaveri will be 95W TDP.


----------



## DaveLT

And we still know today that AMD TDP =/= Intel TDP.
In short? AMD's actual power draw is always going to be lower than the stated TDP, while Intel's is right up at what is stated on the web


----------



## Redwoodz

Quote:


> Originally Posted by *DaveLT*
> 
> Absolutely not. If you're OC'ing 200W is more accurate i guess.


No.

http://www.kitguru.net/components/motherboard/henry-butt/gigabyte-g1-sniper-a88x-motherboard-review/20/


That's an A10 6800K overclocked to 4.6GHz @1.8v=169w


That's an increase of 11w over stock. Kaveri will be even better.


----------



## DaveLT

Wow. I was just roughly guessing. How can a rough guess be accurate?


----------



## sumitlian

Quote:


> Originally Posted by *Redwoodz*
> 
> No.
> 
> http://www.kitguru.net/components/motherboard/henry-butt/gigabyte-g1-sniper-a88x-motherboard-review/20/
> 
> 
> That's an A10 6800K overclocked to 4.6GHz @1.8v=169w
> 
> 
> That's an increase of 11w over stock.Kaveri will be even better.


LOOK AT DAT POWER EFFICIENCY









This is exponentially strengthening my love for "Kaveri".


----------



## DaveLT

All this on 32nm
Yeah Intel can go and hide in a cave now


----------



## NaroonGTX

I wonder why the 6800k was set to push 1.8v for just 4.6ghz when it can hit 5ghz on air at like 1.5v.


----------



## Fonne

Looks great







... Will start to look for a 150-200 Watt small power supply (not SFX, but the kind with a brick; can't remember the name) ....


----------



## sepiashimmer

Quote:


> Originally Posted by *NaroonGTX*
> 
> Kaveri will have lower power consumption and generally be more efficient overall. Top-end Kaveri will be 95W TDP.


Will the top-end Kaveri be as powerful or more powerful than PS4?


----------



## DaveLT

Quote:


> Originally Posted by *sepiashimmer*
> 
> Will the top-end Kaveri be as powerful or more powerful than PS4?


More is my bet.


----------



## sepiashimmer

Quote:


> Originally Posted by *DaveLT*
> 
> More is my bet.


Wow, so it'll be more powerful but at a lower price than the PS4.


----------



## Redwoodz

Quote:


> Originally Posted by *Fonne*
> 
> Cant wait
> 
> 
> 
> 
> 
> 
> 
> .... My wife is using a A10-6800k now, and would love to make a Kaveri system to myself:
> 
> GA-F2A88XN-WIFI + Kaveri + SSD + 2400 Mhz = Is about 150 Watt max totally wrong ? - Would make a killer small light gaming rig.


Quote:


> Originally Posted by *DaveLT*
> 
> Absolutely not. If you're OC'ing 200W is more accurate i guess.


Quote:


> Originally Posted by *DaveLT*
> 
> Wow. I was just roughly guessing. How can a rough guess be accurate?


Except you were correcting a rough guess that was in fact more accurate.


----------



## NaroonGTX

Kaveri will be very power efficient. The performance it will be able to put out while using as little power as it will use will just be astounding, quite frankly.

>>Will the top-end Kaveri be as powerful or more powerful than PS4?

The PS4's GPU side is around the strength of the HD 7850. Kaveri's iGPU will be held back by the bandwidth limitations of conventional consumer DDR3, but it will also have a greatly improved integrated memory controller. We will probably see Kaveri ship with native support for DDR3-2400. Since Kaveri will have 8 CUs on the GPU side, I don't think it'll be as strong as a 7850; perhaps somewhere around 7750~7770 levels of perf, which would still be pretty kickass for an APU.
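The DDR3 bandwidth ceiling is easy to put rough numbers on. A sketch of the peak-bandwidth arithmetic (bus widths and transfer rates are the commonly cited figures, used here purely for illustration):

```python
# Peak memory bandwidth = transfer rate x bus width x channels.
def peak_bandwidth_gbs(mt_per_s, bus_bytes, channels=1):
    """Theoretical peak bandwidth in GB/s."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

# Dual-channel DDR3-2400: two 64-bit (8-byte) channels.
ddr3_2400 = peak_bandwidth_gbs(2400, 8, channels=2)
# HD 7850-class GDDR5: 256-bit (32-byte) bus at 4.8 GT/s effective.
hd7850 = peak_bandwidth_gbs(4800, 32)

print(f"DDR3-2400 dual channel: {ddr3_2400:.1f} GB/s")
print(f"HD 7850 GDDR5:          {hd7850:.1f} GB/s")
```

Roughly a 4x gap, and the iGPU also shares that DDR3 bandwidth with the CPU cores, which is why an 8-CU Kaveri can't realistically match a 7850.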

On the CPU side, it will take the PS4's lunch money, eat it, poop it out, then hand it back to the PS4.


----------



## Ultracarpet

Quote:


> Originally Posted by *sepiashimmer*
> 
> Wow, it'll be more but at low price than PS4.


Apples to oranges... you can't really compare the cost of a single processor to an entire gaming system...


----------



## CCast88

Question: I want to build a Mini-ITX HTPC/gaming rig using this arch... I will use the onboard GPU until maybe the next-gen (20nm) GPUs, then install a 7950-equivalent GPU (I mean whichever card of the 20nm generation replaces the R9 290) when it comes out. The 2 questions I have are:
1. Will this CPU bottleneck a card of that level?

2. Will this CPU be more powerful than my current gaming PC's i7 930, to the point where I can call it an upgrade, swap it in as my main gaming PC, and maybe pass this one down to a relative?


----------



## NaroonGTX

>>1. Will this CPU bottleneck a card of that level?

No.

>>2. Will this CPU be more powerful than my current gaming PC, I7 930 to the point where I can call it an upgrade and swap it out for my main gaming PC and maybe pass this one down to a relative?

It is very likely that Kaveri will be stronger than Nehalem CPU-wise. Higher overall INT performance as well as being able to reach higher clocks.


----------



## Ultracarpet

Quote:


> Originally Posted by *NaroonGTX*
> 
> >>1. Will this CPU bottleneck a card of that level?
> 
> No.
> 
> >>2. Will this CPU be more powerful than my current gaming PC, I7 930 to the point where I can call it an upgrade and swap it out for my main gaming PC and maybe pass this one down to a relative?
> 
> It is very likely that Kaveri will be stronger than Nehalem CPU-wise. Higher overall INT performance as well as being able to reach higher clocks.


Wouldn't Carrizo be dropping near or shortly after the 20nm cards? Also, it should be on FM2+ as well, so you would have an upgrade path, no?


----------



## sepiashimmer

I'm surprised Intel fans didn't ruin this thread.


----------



## DaveLT

Quote:


> Originally Posted by *sepiashimmer*
> 
> I'm surprised Intel fans didn't ruin this thread.


Good, good.


----------



## NaroonGTX

>>Wouldn't carrizo be dropping near or shortly after the 20nm cards? Also it should be on Fm2+ as well so you would have an upgrade path no?

Yes. Carrizo will hit retail for desktop around Q1 2015, roughly a year after Kaveri's launch. It will be a drop-in for FM2+ with the expected BIOS update. Leaks have been saying that AMD is targeting 45W and 65W TDPs at most. No doubt this is proof of HDL being utilized, as well as a potential process drop to 20nm on the APU itself. Carrizo will use a Pirate Islands-derived GPU.


----------



## Ultracarpet

Quote:


> Originally Posted by *NaroonGTX*
> 
> >>Wouldn't carrizo be dropping near or shortly after the 20nm cards? Also it should be on Fm2+ as well so you would have an upgrade path no?
> 
> Yes. Carrizo will be hitting retail for desktop around Q1 2015, roughly a year after Kaveri's launch. It will be a drop in to FM2+ with the expected BIOS update. Leaks have been saying that AMD is targeting 45W and 65W TDP tops. No doubt this is proof of HDL being utilized, as well as a potential process lithography drop to 20nm on the APU itself as well. Carrizo will use a Pirate Islands-derived GPU.


I'm starting to wonder if I should give my sister's boyfriend my rig, as he is doing a lot of 3D modeling and drafting... (I feel like he would utilize the 8 cores a lot more often than I would, and his computer is not good at all) and get him to pitch in on buying me a Kaveri platform when it drops... FM2+ is getting me excited


----------



## CynicalUnicorn

Quote:


> Originally Posted by *sepiashimmer*
> 
> I'm surprised Intel fan*boy*s didn't ruin this thread.


Fixed, and please don't give them ideas. I still want news on Steamroller for AM3+, but Kaveri is looking better than ever. I'm considering investing a bit in AMD, actually, and this might be a good time (or after the R9 series launches and I predict doesn't do too well). An APU that powerful and HSA's potential might make Intel sweat if not panic.


----------



## iamwardicus

Quote:


> Originally Posted by *NaroonGTX*
> 
> Kaveri's iGPU will be held back by the bandwidth limitations of conventional consumer DDR3, but it will also have a greatly improved integrated memory controller as well. We will probably see Kaveri ship with native support for DDR3-2400. Since Kaveri will have 8CU's (GPU), I don't think it'll be as strong as a 7850, perhaps somewhere around 7750~7770 levels of perf, which would still be pretty kickass for an APU.
> .


I'm more interested in how the integrated GPU will affect software coded for HSA. If there are any games out there that can take advantage of it as an FPU, shouldn't in-game performance be greatly increased? I use a discrete graphics card as it is, so actual graphics performance is a non-issue, but if there are any other calculations that can be "turbo-charged," so to speak, I would think they could and should be taken advantage of.


----------



## Simplynicko

I want to see a hats-off, screw-power-efficiency version of Kaveri that goes balls-to-the-wall with performance.


----------



## Truedeal

This entire thread is such a tease.


----------



## nitrubbb

Quote:


> Originally Posted by *Truedeal*
> 
> This entire thread is such a tease.


not too long now


----------



## Antivoid

http://mag.udn.com/mag/digital/storypage.jsp?f_MAIN_ID=320&f_SUB_ID=2942&f_ART_ID=482269

This site says AMD has just confirmed Kaveri is coming out before 2014. It doesn't say exactly when.


----------



## Dynamo11

Quote:


> Originally Posted by *Antivoid*
> 
> http://mag.udn.com/mag/digital/storypage.jsp?f_MAIN_ID=320&f_SUB_ID=2942&f_ART_ID=482269
> 
> This cite is saying AMD has just confirmed Kaveri is coming out before 2014. It doesn't say when exactly.


I think the idea is that AMD will release Steamroller cores for servers and mobile before the desktop ones. So I think our FM2+ Steamrollers still won't be here until next February.


----------



## NaroonGTX

AMD already confirmed that the desktop Kaveri will be here before mobile. If anything, the mobile will hit in Feb. or later.


----------



## sepiashimmer

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Fixed, and please don't give them ideas. I still want news on Steamroller for AM3+, but Kaveri is looking better than ever. I'm considering investing a bit in AMD, actually, and this might be a good time (or after the R9 series launches and I predict doesn't do too well). An APU that powerful and HSA's potential might make Intel sweat if not panic.


Do you mean investing in AMD, the company? I actually typed "fanboys," but then I thought there could be some females, so I removed "boy."


----------



## CynicalUnicorn

Yes, because $4 a share in a volatile stock is not a bad idea, and even women can be fanboys. They just have to blindly preach that one whatever is better than all the other alternatives/competitors and those same alternatives/competitors are evil. Intel fans think Intel is a great company with great products, though they may have some shortcomings, while the fanboys think they're perfect and think AMD is a failure that only stays around because monopolies are illegal.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Yes, because $4 a share in a volatile stock is not a bad idea, and even women can be fanboys. They just have to blindly preach that one whatever is better than all the other alternatives/competitors and those same alternatives/competitors are evil. Intel fans think Intel is a great company with great products, though they may have some shortcomings, while the fanboys think they're perfect and think AMD is a failure that only stays around because monopolies are illegal.


LOL srsly

AMD is still here because they are king at SoCs; no one does it better than they do. And AFAIK Intel isn't doing too well on the SoC side ...
AMD chips are powering the A380 because they can


----------



## Simplynicko

I'd say these companies are a bad investment. The push is towards services (IBM and Microsoft going exclusively software & solutions). AMD, Nvidia, and Intel all face the same fate as Dell and HP.


----------



## sepiashimmer

Quote:


> Originally Posted by *Simplynicko*
> 
> i'd say these companies are a bad investment. the push is towards services (IBM, Microsoft going exclusively software & solutions). AMD, NVIDIA, INTEL all face the same fates as DELL and HP


Dell and HP are different because they don't manufacture hardware. I don't think any of the three companies will go the way of Dell and HP; I think at least one of them will go bust.


----------



## DaveLT

Quote:


> Originally Posted by *sepiashimmer*
> 
> DELL and HP are different because they don't manufacture hardware. I don't think any of the three companies will go the way of DELL and HP. I think at least 1 of them will go bust.


And it definitely isn't going to be AMD .... they still make x86 designs because they have a market to fulfill and it isn't the PC users or us








Unfortunately for Intel ... All of their shares are from PC users and us ... More servers are stuffed with AMD Opterons now


----------



## Simplynicko

Quote:


> Originally Posted by *DaveLT*
> 
> And it definitely isn't going to be AMD .... they still make x86 designs because they have a market to fulfill and it isn't the PC users or us
> 
> 
> 
> 
> 
> 
> 
> 
> Unfortunately for Intel ... All of their shares are from PC users and us ... More servers are stuffed with AMD Opterons now


dell and hp don't manufacture? wut? i have a dell screen and an HP laptop at work...


----------



## sepiashimmer

Quote:


> Originally Posted by *Simplynicko*
> 
> dell and hp dont manufacture? wut? i have a dell screen with a HP laptop at work...


You misread my post. I said they don't manufacture; they source the hardware from other companies and stamp their label on it.


----------



## Simplynicko

Quote:


> Originally Posted by *sepiashimmer*
> 
> You misquoted post. I said they don't manufacture, they source the hardware from other companies and stamp their label.


and so does Apple, to Foxconn, but they are considered a manufacturer of their products; their SALES and PROFITS are dependent on those products. Do you see Microsoft or IBM branded hardware anymore?

i am NOT saying that AMD, Intel and NVIDIA are going bankrupt or anything, but i wouldn't buy their STOCKS, as any tech/services company will likely have a steeper curve up. that means you get a higher/better return on investment, since they are less dependent on the swings of manufacturing in general.


----------



## Hattifnatten

Quote:


> Originally Posted by *Simplynicko*
> 
> and so does Apple, to Foxconn, but they are considered a manufacturer of their products; their SALES and PROFITS are dependent on those products. Do you see Microsoft or IBM branded hardware anymore?
> 
> i am NOT saying that AMD, Intel and NVIDIA are going bankrupt or anything, but i wouldn't buy their STOCKS, as any tech/services company will likely have a steeper curve up. that means you get a higher/better return on investment, since they are less dependent on the swings of manufacturing in general.


IBM-hardware? Yep.
Microsoft? They've always been a software company.


----------



## Thunderclap

AMD "Kaveri" Desktop APU Launch Date Revealed

Ahhh, darn it, I was expecting Kaveri a bit earlier than that, but oh well, what can ya do... I at least hope the performance is worth the wait.


----------



## LDV617

Is there any confirmation that the 78xx cards will work in Dual Graphics mode?

I know the 77xx were supposed to work with their last microprocessor. Would love to pop a 7850 into a Kaveri rig and see it beat my 1st gen i5 in every game


----------



## CynicalUnicorn

Quote:


> Originally Posted by *LDV617*
> 
> Is there any confirmation that the 78xx cards will work in Dual Graphics mode?
> 
> I know the 77xx were supposed to work with their last microprocessor. Would love to pop a 7850 into a Kaveri rig and see it beat my 1st gen i5 in every game


GPU-wise, Richland destroys Intel anything. CPU-wise, I doubt it'll be that good. I also doubt it supports anything GCN 1.0 too well in DGM, so 7790s or R7 260X are it.


----------



## nitrubbb

r7 260x in dual graphics would be incredible


----------



## LDV617

Haven't seen benchmarks on the R7 260X, but that is quite disappointing. If you could get that +50% performance boost out of a 7850 2GB, it would crush the 660, which costs slightly more. Oh well, I can dream.


----------



## nitrubbb

Quote:


> Originally Posted by *LDV617*
> 
> Haven't seen benchmarks on the r7 260x but that is quite disappointing, if you could get that +50% performance boost out of a 7850 2gb it would crush the 660 which is slightly more money. Oh well, I can dream.


what?

r7 260x in dual graphics would destroy 660 also


----------



## LDV617

I haven't seen any benchmarks for that card; honestly, I couldn't even tell you its price point or competition. I will look more into it; if the price is right, that could be a great combo.


----------



## polyzp

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> GPU-wise, Richland destroys Intel anything. CPU-wise, I doubt it'll be that good. I also doubt it supports anything GCN 1.0 too well in DGM, so 7790s or R7 260X are it.


Yes, normally I support AMD, but this statement is wrong. Intel's Iris 5200 does in fact trounce AMD's most powerful Richland GPU, even overclocked. Look at the AnandTech review: Iris is closer to a GT 640 than the GT 630 AMD typically compared Richland to. In fact, AMD only pit the 8670D against the HD 4600, not Iris, which has double the cores! Kaveri's jump will be enough to trounce Iris, however!


----------



## NaroonGTX

R7-260x = Bonaire GPU = HD 7790.

I think the reason AMD never compared their chips to Iris is because they are in totally different price brackets. You have to pay a lot more for a system that has Iris in it (since it's BGA only) IIRC.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *polyzp*
> 
> Yes, normally I support AMD, but this statement is wrong. Intel's Iris 5200 does in fact trounce AMD's most powerful Richland GPU, even overclocked. Look at the AnandTech review: Iris is closer to a GT 640 than the GT 630 AMD typically compared Richland to. In fact, AMD only pit the 8670D against the HD 4600, not Iris, which has double the cores! Kaveri's jump will be enough to trounce Iris, however!


Really? I disagree. Okay, so AMD wins practically always in games but it's hit or miss in some synthetic benchmarks and computational tasks.

EDIT: I can't read, and I'm leaving this so you can ridicule my illiteracy. Their logic for not including Iris was that comparing a $150 iGPU to a $300+ iGPU isn't exactly fair. Do you compare an 8350 to a 4770K? They're both their respective company's top mainline-enthusiast chips, so that's fair, right? Okay, yes, people do, but the twice-as-high price tag on the i7 means you should expect a biased test.


----------



## polyzp

I'm just saying, it's wrong to state that AMD's Richland iGPU is more powerful than any Intel iGPU.


----------



## Kuivamaa

Quote:


> Originally Posted by *polyzp*
> 
> Yes, normally I support AMD, but this statement is wrong. Intel's Iris 5200 does in fact trounce AMD's most powerful Richland GPU, even overclocked. Look at the AnandTech review: Iris is closer to a GT 640 than the GT 630 AMD typically compared Richland to. In fact, AMD only pit the 8670D against the HD 4600, not Iris, which has double the cores! Kaveri's jump will be enough to trounce Iris, however!


If you are quoting Anand's benchmarks, they aren't directly comparable with Richland: Iris 5200 runs on a Haswell i7, Richland on dual Piledriver modules. TechReport did a more interesting test:

http://techreport.com/review/24879/intel-core-i7-4770k-and-4950hq-haswell-processors-reviewed/6



Iris Pro 5200 is 10% better than a desktop DDR3 6570 in Metro: Last Light. I know it is a different review, but take a look at this:



An A10-6800K iGPU (8670D), when paired with 2133 RAM, almost matches a desktop 6670 DDR3, which is essentially a higher-clocked 6570. In other words, both the 8670D and Iris Pro 5200 roughly offer discrete 6670 levels of performance in Metro: Last Light; they are very near each other. Iris Pro 5200's power is grossly overstated.


----------



## azanimefan

remember, most of the iris pro benches were run against the MOBILE apu, not the desktop version. The mobile apu is decidedly weaker than the desktop one. Furthermore, Iris Pro was an engineering trick, not a marketable part. The only devices with it are macbook pros, because apple can get away with charging stupidly high prices thanks to their isheep consumer-base. It's not really a viable competitor to either the desktop or mobile apus... the desktop is in a different form factor, while the mobile apu is a fraction of the price and sold in a much cheaper market.

as for kaveri, it will destroy an iris pro igpu... on the desktop, anyway... we'll need to see the benches to see how it stacks up on mobile.


----------



## Kuivamaa

Quote:


> Originally Posted by *azanimefan*
> 
> remember, most of the iris pro benches were run against the MOBILE apu, not the desktop version. The mobile apu is decidedly weaker than the desktop one. Furthermore, Iris Pro was an engineering trick, not a marketable part. The only devices with it are macbook pros, because apple can get away with charging stupidly high prices thanks to their isheep consumer-base. It's not really a viable competitor to either the desktop or mobile apus... the desktop is in a different form factor, while the mobile apu is a fraction of the price and sold in a much cheaper market.
> 
> as for kaveri, it will destroy an iris pro igpu... on the desktop, anyway... we'll need to see the benches to see how it stacks up on mobile.


Lots of tests pitted the i7-4950HQ with Iris Pro 5200 against desktop APU solutions (TechReport vs Trinity, Tom's vs Richland). TDP alone would give Intel a huge advantage, but at least in Anand's case the test wasn't particularly useful, since he used the mobile i7 as a desktop part, inside a desktop case and all; he even used a desktop cooler, sufficient for a 47W part, whatever that means. There's no guarantee that Iris Pro would produce the same numbers in a thermally limited environment like the inside of a laptop. What I am saying is that even from a pure performance standpoint (without taking prices and availability into consideration), Iris Pro 5200 is rather overstated graphically.


----------



## DaveLT

Quote:


> Originally Posted by *Kuivamaa*
> 
> Lots of tests pitted the i7-4950HQ with iris pro 5200 vs desktop APU solutions (techreport vs trinity, Tom's vs Richland). TDP alone would give intel a huge advantage but at least in anand's case, the test wasn't particularly useful since he used the mobile i7 as a desktop part inside a desktop case and all, he even used a desktop cooler sufficient for a 47 part, whatever that means. No guarantee that iris pro would produce the same numbers in a thermally limited environment like the inside of a laptop. What I am saying is that even from a pure performance standpoint, (without taking into consideration prices and availability) iris pro 5200 is rather overstated graphically.


Drastically overstated. In this age where many (YES, MANY!) computer builders are buying 2133 RAM, 2133 is the right way to pit Richland against the HD 4600. And the only HD 4600 chip you can get on the desktop costs 3 TIMES! as much as the best Richland chip, which beats it left and right.

Using 1600MHz RAM (citing my own rig) as an argument is shallow and pedantic and a bad excuse


----------



## MrJava

Dual channel DDR4-4266 would come close to alleviating the memory bottleneck for Kaveri's GPU ... close.
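For context, the peak-bandwidth arithmetic behind that claim is simple: each DDR channel is 64 bits (8 bytes) wide, so peak GB/s is transfers per second times 8 bytes times the channel count. A quick sketch (the HD 7790 GDDR5 figure is included only as a rough comparison point for what a discrete card's memory delivers):

```python
def peak_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Theoretical peak memory bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

# Dual-channel DDR3-2133, a common APU pairing today:
print(peak_bandwidth_gbs(2133))                            # 34.128
# Dual-channel DDR4-4266 doubles that, yet still trails GDDR5:
print(peak_bandwidth_gbs(4266))                            # 68.256
# Discrete HD 7790 GDDR5 (6000 MT/s effective on a 128-bit bus):
print(peak_bandwidth_gbs(6000, channels=1, bus_bytes=16))  # 96.0
```

These are theoretical peaks; real sustained bandwidth is lower, which is why even DDR4-4266 only "comes close".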


----------



## NaroonGTX

Are there any estimates when DDR4 memory at those speeds will become commercially available? I probably won't upgrade from Kaveri until a new DDR4-compatible chipset/mobo launches.


----------



## MrJava

We'll probably only be getting DDR4-2133 in 2014. Carrizo should be able to support DDR4-3200 out of the box, and maybe the one-DIMM-per-channel topology will mean quad-channel as well.
Quote:


> Originally Posted by *NaroonGTX*
> 
> Are there any estimates when DDR4 memory at those speeds will become commercially available? I probably won't upgrade from Kaveri until a new DDR4-compatible chipset/mobo launches.


----------



## bardacuda

Prerelease AMD Kaveri APU Closes In on Intel's Haswell Heavyweights

I think the title of this seems a bit embellished, lol, but they are saying a +35% integer performance increase over Piledriver, with a 15% FP performance hit.


----------



## NaroonGTX

Yeah these results have been popping up everywhere now. I really wish they would stop with the sensationalist titles, they're just setting people up to think it will annihilate Haswell when we have no idea about the true performance of Kaveri.

The good thing is all of this will be cast aside in a couple weeks when Kaveri is officially unveiled and detailed at APU '13.


----------



## Redwoodz

Don't know about Iris, but remember that Intel's iGPUs have had terrible image quality, as well as terrible driver support. Don't see that changing any time soon. Benchmarks do not tell the whole story.


----------



## Durquavian

Quote:


> Originally Posted by *NaroonGTX*
> 
> Yeah these results have been popping up everywhere now. I really wish they would stop with the sensationalist titles, they're just setting people up to think it will annihilate Haswell when we have no idea about the true performance of Kaveri.
> 
> The good thing is all of this will be cast aside in a couple weeks when Kaveri is officially unveiled and detailed at APU '13.


Unfortunately, most of it rests on the promise of HSA/hUMA. Of course, they kinda fail to state that up front. But I think AMD is barking up the right tree with this. I stated before: if AMD can't close the hardware gap with Intel, i.e. 28nm vs 14nm, then use drivers and software to close it. That buys you some temporary performance, and some capital for increased research to close that hardware gap quicker and more efficiently.


----------



## Liranan

Quote:


> Originally Posted by *Durquavian*
> 
> Unfortunately most is with the promise of HSA/HUMA. Of course they kinda fail to state that up front. But I think AMD is barking up the right tree with this. I stated before, if AMD cant close that hardware gap with Intel ie: 28nm-14nm, then use drivers and software to close the gap. Buys you some temporary performance for some capitol gains for increased research to close that hardware gap quicker and more efficiently.


hUMA and HSA aren't just software; they are also hardware solutions, and the point is that AMD won't need to catch up with Intel on the CPU front, because if they can release APUs that reach several TFLOPS of performance, Intel will need to release massive-core-count CPUs to catch up. What makes HSA and hUMA interesting is that there are many manufacturers on board, not just AMD, and even nVidia will benefit from this in the mobile space with their Tegra line. Imagine your phone's SoC achieving 200 GFLOPS; that's not only more than any consumer CPU but also double the GFLOPS of my laptop's entry-level 4570.

Very exciting times ahead, as HSA and hUMA will finally make mobile SoCs viable for desktops running Linux, regardless of whether it's Ubuntu, Android or any other distro. In effect, in Android, selecting the option of forcing 2D drawing by the GPU (GPU rendering) and disabling HW overlays is a sort of HSA, though very weak and not very effective.
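For what it's worth, those peak-FLOPS figures are just shaders times clock times 2 (one fused multiply-add per shader per cycle, the usual convention for GPU marketing numbers). A quick sketch using the Kaveri figures floating around this thread (512 shaders at a 600 MHz stock clock; both are rumors, not confirmed specs):

```python
def gpu_gflops(shaders, clock_ghz, flops_per_cycle=2):
    """Peak single-precision throughput: shaders * clock (GHz) * FLOPs per cycle."""
    return shaders * clock_ghz * flops_per_cycle

# Rumored Kaveri iGPU: 512 GCN shaders at 600 MHz stock.
# That's well on the way toward the "several TFLOPS" APUs discussed above.
print(gpu_gflops(512, 0.6))  # 614.4
```

Peak numbers like these assume every shader retires an FMA every cycle, so real workloads land well below them.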


----------



## DaveLT

Quote:


> Originally Posted by *Redwoodz*
> 
> Don't know about Iris but remember Intel's iGPU's have had terrible image quality,as well as terrible driver support.Don't see that changing any time soon.Benchmarks do not tell the whole story.


Very horrid image quality and very horrid driver support. Nothing new to see here, good old Intel iGPU


----------



## inedenimadam

Quote:


> Originally Posted by *DaveLT*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Redwoodz*
> 
> Don't know about Iris but remember Intel's iGPU's have had terrible image quality,as well as terrible driver support.Don't see that changing any time soon.Benchmarks do not tell the whole story.
> 
> 
> 
> Very horrid image quality and very horrid driver support. Nothing new to see here, good old Intel iGPU

I am most interested in the x-fire with discrete for Kaveri... I tried the Lucid crap with horrid results for 6670-3570k and 7850-3570k. I know that current offerings for that type of tech from AMD are still not up to snuff, as there is some debate as to the performance gains, but it is a FAR CRY better than being cornholed into using the god-awful Lucid garbage. Hopefully AMD will continue this trend of fixing issues with drivers on their APU side as much as they are on the discrete side... I mean, I have had a new driver every day this week for my 7900 series cards!


----------



## knightsilver

Will we see an 8-core, or at least more-than-4-core, APU for FM2?


----------



## Gungnir

Quote:


> Originally Posted by *knightsilver*
> 
> Will we see an 8core, or more than 4core APU for FM2?


Hexcore is possible with Kaveri or Carrizo on FM2+ (going off rumors, at least); octocore, while also possible, doesn't seem as likely. Note, though: AMD seems to be focused more on efficiency than core counts right now, with Kaveri being max 95w and Carrizo rumored to be max 65w. With the exception of a possible Piledriver refresh due to new PD-based Optys coming out soon, I don't really expect a new AMD octocore until Excavator.

EDIT: Just to clarify, this is all based on rumors and speculation, though, and thus shouldn't be taken as fact.


----------



## Darklyric

Have there been any leaks on possible release dates for these APUs? I know the motherboards are coming into stock...


----------



## nitrubbb

Quote:


> Originally Posted by *Darklyric*
> 
> Has there been any leaks on possible release dates for these apus? I know the motherboards are coming into to stock...


formally 2013


----------



## inedenimadam

Quote:


> Originally Posted by *nitrubbb*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Darklyric*
> 
> Has there been any leaks on possible release dates for these apus? I know the motherboards are coming into to stock...
> 
> 
> 
> formally 2013

so desktop sku 2014?


----------



## nitrubbb

Quote:


> Originally Posted by *inedenimadam*
> 
> so desktop sku 2014?


no one knows for sure yet


----------



## DaveLT

Quote:


> Originally Posted by *inedenimadam*
> 
> so desktop sku 2014?


AMD did say Desktop comes first


----------



## inedenimadam

Quote:


> Originally Posted by *DaveLT*
> 
> Quote:
> 
> 
> 
> Originally Posted by *inedenimadam*
> 
> so desktop sku 2014?
> 
> 
> 
> AMD did say Desktop comes first

so...when can i buy one?


----------



## DaaQ

Feb of 2014


----------



## nitrubbb

Quote:


> Originally Posted by *DaaQ*
> 
> Feb of 2014


quoted for rumour


----------



## inedenimadam

Quote:


> Originally Posted by *DaaQ*
> 
> Feb of 2014


arg! that so far away.


----------



## NaroonGTX

All current dates are rumors besides it shipping to partners/OEM's in December and being available sometime in "very-early Q1 2014".

We'll probably get dates at APU '13 which begins on Nov. 11th and ends Nov. 13th.


----------



## sepiashimmer

Quote:


> Originally Posted by *knightsilver*
> 
> Will we see an 8core, or more than 4core APU for FM2?


Unlikely, AMD is already onto FM2+.


----------



## Zyro71

I hope Foxconn or Sapphire implements a low-voltage quad-core variant of this in one of those nettop PCs for HTPCs.
Then it would be interesting for me to go out and buy instantly .-.


----------



## Malcom28

Which card will I be able to crossfire with the new Kaveri A10 APU?
I want to buy the card now and the rest of the system (APU + mobo) later, when it comes out.
HD 7750 2GB DDR3? Or R7 240/250 2GB DDR3? Or what..?


----------



## nitrubbb

hopefully r7 260x


----------



## Artikbot

Quote:


> Originally Posted by *nitrubbb*
> 
> hopefully r7 260x


The slates already said Kaveri crossfires with the 260 (non-X?)


----------



## NaroonGTX

It's currently unknown what Kaveri will crossfire with. We'll find out (maybe) in a couple weeks.


----------



## Cyro999

Quote:


> Originally Posted by *NaroonGTX*
> 
> All current dates are rumors besides it shipping to partners/OEM's in December and being available sometime in "very-early Q1 2014".
> 
> We'll probably get dates at APU '13 which begins on Nov. 11th and ends Nov. 13th.


Oh good, will they bless us with two DAYS of true audio this time instead of two hours?


----------



## LDV617

I've heard rumors that Kaveri will crossfire with 7750 / 7770 / 7850 / 79xx but no REAL confirmation. Ever since the announcement of Kaveri I've heard that Richland couldn't handle dual graphics with anything > 7750, and that Kaveri would change that. Maybe I'm wrong and over optimistic, but I think even if Kaveri can boost a 7850, it will be a game changer for sure, even in the mid to low end power users.


----------



## Fonne

Is there anyone other than Gigabyte that has made / will make a mITX board for Kaveri?

Gigabyte F2A88XN-WIFI
http://www.gigabyte.com/products/product-page.aspx?pid=4745#ov


----------



## Artikbot

Quote:


> Originally Posted by *Fonne*
> 
> Is there anyone else than Gigabyte that has/will make a mITX board to Kaveri ?
> 
> Gigabyte F2A88XN-WIFI
> http://www.gigabyte.com/products/product-page.aspx?pid=4745#ov


ASRock makes the FM2A88X-ITX+.

http://www.asrock.com/mb/AMD/FM2A88X-ITX+/


----------



## DaveLT

Quote:


> Originally Posted by *Artikbot*
> 
> ASRock makes the FM2A88X-ITX+.
> 
> http://www.asrock.com/mb/AMD/FM2A88X-ITX+/


Considering what i've seen from ASRock FM2 boards, i would do well to stay 5 miles away from any of their FM2+ boards as well


----------



## Fonne

Quote:


> Originally Posted by *Artikbot*
> 
> ASRock makes the FM2A88X-ITX+.
> 
> http://www.asrock.com/mb/AMD/FM2A88X-ITX+/


Thanks







- Until now I still like the Gigabyte best, but I really hope that Asus will release a mITX board before the Kaveri launch ....


----------



## azanimefan

Quote:


> Originally Posted by *DaveLT*
> 
> Considering what i've seen from ASRock FM2 boards i would do well to stay 5 miles away from any FM2+ boards as well


+1 QFT

asrock has really released nothing but junk for FM2; I'd be really hesitant to suggest anyone touch one of their boards for FM2+ till we get some dependable feedback on quality.


----------



## azanimefan

Quote:


> Originally Posted by *Artikbot*
> 
> The slates already said Kaveri crossfires with the 260 (non-X?)


that's about what i expected from the specs of the igpu going into kaveri... the r7-260x being a 7790, which would make the r7-260 a 7770.

If it will dual graphics with a 7770, you'll probably be able to make it work with a 7750... possibly a 7790... we know some a10-6800k systems could dual graphics with 7750s even though it was a totally different core design... which bodes well for the possibilities of kaveri in dual graphics with a 7790/260x.

if kaveri matches well and scales well with the 7770/260 in dual graphics, we'd be talking about something with 7950/760 performance, combined with a cpu in the i5-2500k range of performance... we could be looking at a monstrously powerful and cheap 1080p gaming machine.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *raghu78*
> 
> yeah by 2015 I expect AMD to match Intel's big core


If I had a dollar every time someone said something like that for that past 25 years, I could have retired 20 years ago.


----------



## SandGlass

Anybody know what the APU will be called? A12-7800K? Any name would be helpful.
I have a script (in JavaScript) that can search CanardPC results, and it could potentially be adapted for cosmology benchmarks, but I need to nail the name exactly, and the shorter it is, the faster the script runs (CanardPC also blocks you after too many requests).


----------



## Gungnir

Quote:


> Originally Posted by *azanimefan*
> 
> that's about what i expected from the specs of the igpu going into kaveri... the r7-260x being a 7790, which would make the r7-260 a 7770.
> 
> If it will dual graphics with a 7770, you'll probably be able to make it work with a 7750... possible a 7790... we know some a10-6800k could dual graphics with 7750s even though it was a totally different core design... which bodes well for the possibilities of kaveri in dual graphics with a 7790/260x.
> 
> if kaveri matches well and scales well with the 7770/260 in dual graphics we'd be talking about something with 7950/760 performance, combined with a cpu in the i5-2500k range of performance... we could be looking at a monstrously powerful and cheap 1080p gaming machine.


I suspect that Kaveri will Hybrid Crossfire with Oland, Cape Verde, and Bonaire GPUs (HD77xx, R7-2xx). It might also work with Curaçao (R9-270(X)), but I doubt that will be officially supported if it does work. I don't know for sure, obviously, but that's my guess.

Also, going off Wikipedia (though I don't know if their information on the card is accurate), the R7-260 is Cape Verde PRX, not XT like the 7770. The difference is (supposedly) the PRX version has 128 more SPs and 8 more TMUs.


----------



## MrJava

It's probably going to be called the A10-7800K. There might also be some rebranding à la the R9 290X, which this APU definitely deserves, considering how many improvements there are.
Quote:


> Originally Posted by *SandGlass*
> 
> Anybody what the APU will be called? A12-7800k? Any name would be helpful.
> I have a script(in javascript) that can search canardpc results, and could be potentially be adapted for cosmology benchmarks, but I need to nail the name exactly, and the shorter it is the faster the script is(canardpc also blocks you after too many requests).


----------



## CynicalUnicorn

Quote:


> Originally Posted by *SandGlass*
> 
> Anybody what the APU will be called? A12-7800k? Any name would be helpful.
> I have a script(in javascript) that can search canardpc results, and could be potentially be adapted for cosmology benchmarks, but I need to nail the name exactly, and the shorter it is the faster the script is(canardpc also blocks you after too many requests).


Easy enough. Let's build it from the ground up:

A# = relative performance within a generation (A10 > A8 > A6 etc.); bigger numbers have more GPU cores and oftentimes more CPU cores
A#-G000 = generation (6 is newest, then 5, then 4, etc.)
A#-Gx00 = specific model (higher numbers is usually just more jiggahertz within a given # for A#)
A#-Gx00k = the presence or lack of the trailing letter k, as far as I can tell, indicates whether or not it is unlocked

It's a lot like Intel's naming scheme for their Core i# CPUs. A10-7800K is probably the best guess.
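Purely as an illustration, the breakdown above can be captured in a few lines; the regex and field names here are my own sketch of the scheme as described in this post, not anything official from AMD:

```python
import re

def parse_apu_name(name):
    """Split an AMD APU model name per the naming scheme sketched above."""
    m = re.fullmatch(r"A(\d+)-(\d)(\d{3})([kK]?)", name)
    if m is None:
        return None  # not an A-series name in this format
    tier, gen, model, k = m.groups()
    return {
        "tier": int(tier),        # A10 > A8 > A6: more GPU (often CPU) cores
        "generation": int(gen),   # higher = newer (7 = Kaveri, 6 = Richland)
        "model": int(model),      # higher usually just means more jiggahertz
        "unlocked": k != "",      # trailing K = unlocked multiplier
    }

print(parse_apu_name("A10-7850K"))
# {'tier': 10, 'generation': 7, 'model': 850, 'unlocked': True}
```

Something like this would also shortcut the name-guessing problem for scripted benchmark searches: generate candidate names from the fields instead of typing them out.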

So, if Richland could function in DGM (essentially CrossfireX) with 7750, then I would not be surprised if Kaveri could pair up with a 78x0. Whether or not frame problems arise remains to be seen. I would be extremely happy if AMD releases a dual-7790 for ridiculous mITX rigs using Kaveri, but I don't think it will happen.


----------



## Kuivamaa

Unless they fix their drivers, Dual Graphics isn't gonna be worthwhile. Both Llano and Trinity systems were plagued with microstuttering.


----------



## azanimefan

Quote:


> Originally Posted by *Kuivamaa*
> 
> Unless they fix their drivers ,Dual graphics aren't gonna be worthwhile. Both Llano and trinity systems were plagued with Microstuttering.


they've earned some grace with the tremendous effort they've made on xfire microstuttering... i think i'd be willing to buy, confident the issue will get ironed out, especially since it's looking like, when it's working, xfire is better than SLI... which i didn't expect to be claiming last summer. give AMD a little time; i'm sure it will be mostly licked come springtime.


----------



## azanimefan

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I would be extremely happy if AMD releases a dual-7790 for ridiculous mITX rigs using Kaveri, but I don't think it will happen.


omg that is the greatest idea ever... a dual 7790... not sure what they'd call it... an r*9*-260x maybe? Something that could dual graphics with a kaveri a10... for tri-xfire? we'd be talking r9-290 performance range at that point... coming in 100W lower.

of course AMD would have to figure out how to make dual graphics work with a dual-gpu card... but we can dream, can't we?


----------



## MrJava

I'm pretty sure that if you're crossfiring AMD cards and you want to use an AMD CPU, then Kaveri-based Athlons will be your best choice due to improved singlethreaded performance and on-die PCIe 3.0.
Quote:


> Originally Posted by *azanimefan*
> 
> omg that is the greatest idea ever... a dual 7790... not sure what they'd call it... an r*9*-260x maybe? Something that could dual graphics with a kaveri a10... for tri-xfire? we'd be talking r9-290 performance range a that point... coming in 100W lower.
> 
> of course AMD would have to figure out how to make dual graphics work with a double gpu... but we can dream can't we?


----------



## DaveLT

Quote:


> Originally Posted by *MrJava*
> 
> I'm pretty sure that if you're crossfiring AMD cards and you want to use an AMD CPU, then Kaveri-based Athlons will be your best choice due to improved singlethreaded performance and on-die PCIe 3.0.


I'm pretty sure that standard Kaveri APUs will be labelled "A10" as well. He did not say anything about PD A10


----------



## NaroonGTX

Kaveri will retain the A10-style name and the top part will be the A10-7800k. The GPU will use the new-style Rx-2xx branding.


----------



## SandGlass




----------



## NaroonGTX

Yeah, that's somewhat old; it's the Berlin variant, but it does confirm 512 SPs (some people seem to be confused on whether it's 384 or 512).


----------



## DaveLT

Quote:


> Originally Posted by *SandGlass*


Scratches head ... QUAD-CHANNEL?!


----------



## MrJava

Capacity, not bandwidth. Also Kyoto is single channel.
Quote:


> Originally Posted by *DaveLT*
> 
> Scratches head ... QUAD-CHANNEL?!


----------



## Snowmen

Quote:


> Originally Posted by *zalbard*
> 
> I don't even see 400% anywhere.


Indeed, only upwards of 700%.


----------



## Thunderclap

Can't wait to get an A10-7800K and overclock the cr*p out of it! If it clocks anywhere near as well as the previous 6800Ks and 5800Ks (anywhere from 4.5 to 5GHz), then with the performance boost from the Steamroller cores, count me in as more than happy.







C'mon AMD, you're killing me with this waiting game...


----------



## NaroonGTX

The 7800k will be selling like hotcakes. Can't wait to see benchmarks later on.


----------



## sepiashimmer

Quote:


> Originally Posted by *NaroonGTX*
> 
> The 7800k will be selling like hotcakes. Can't wait to see benchmarks later on.


I seriously hope not.


----------



## nitrubbb

Quote:


> Originally Posted by *NaroonGTX*
> 
> The 7800k will be selling like hotcakes. Can't wait to see benchmarks later on.


ima preorder asap


----------



## Thunderclap

Quote:


> Originally Posted by *sepiashimmer*
> 
> I seriously hope not.


Huh?


----------



## NaroonGTX

Quote:


> I seriously hope not.




Seriously, what do you mean?


----------



## nitrubbb

Quote:


> Originally Posted by *NaroonGTX*
> 
> 
> 
> Seriously, what do you mean?


that bug was the best


----------



## Hattifnatten

Which is why it's my avatar







Anyone else heard that Kaveri will feature TrueAudio? That should mean it has an XDMA engine for hybrid graphics. Maybe the stuttering will finally disappear on HG


----------



## NaroonGTX

That bug was indeed awesome, just like the infamous "falling through the floor" ones, lol! Good times.

Yeah, it seems that Kaveri will have TrueAudio:


----------



## DaveLT

Looks like they skipped GCN 1.0 entirely and went for GCN 2.0
Good move, AMD. Looks like we'll be seeing more graphical horsepower than we expected








Oh and, SR 2.0


----------



## inedenimadam

Quote:


> Originally Posted by *DaveLT*
> 
> Looks like they skipped GCN 1.0 entirely and went for GCN 2.0
> Good move, AMD. Looks like we'll be seeing more graphical horsepower than we expected
> 
> 
> 
> 
> 
> 
> 
> 
> Oh and, SR 2.0


I am not surprised; it was announced that some of the last-gen GPUs had TrueAudio cooked in, but disabled. It would be silly to expect anything less than it being cooked into next-gen stuff.


----------



## NaroonGTX

Only the Bonaire GPU released earlier this year had it (HD 7790) but it was disabled and then enabled when it got re-released as the R7-260x.


----------



## inedenimadam

Quote:


> Originally Posted by *NaroonGTX*
> 
> Only the Bonaire GPU released earlier this year had it (HD 7790) but it was disabled and then enabled when it got re-released as the R7-260x.


Yeah, that's the one, I couldn't remember which one it was. True Audio appears to have been in the works for a while.


----------



## NaroonGTX

Luckily we'll get to hear AMD spend another two hours talking about it in a few days.


----------



## sepiashimmer

Quote:


> Originally Posted by *Thunderclap*
> 
> Huh?


Quote:


> Originally Posted by *NaroonGTX*
> 
> 
> 
> Seriously, what do you mean?


If it sells like hot cakes in other countries, there would be a delay in my country because of lack of stock. There is usually a 2-3 week delay here when a new product is released.


----------



## sepiashimmer

Is TrueAudio similar to motherboard's onboard audio? Is it like a separate soundcard?


----------



## NaroonGTX

I honestly don't know much about TrueAudio at the moment... I know it's on the PS4 and probably the X1, supposedly it offers really high-quality positional sound and all that jazz.


----------



## Durquavian

Quote:


> Originally Posted by *sepiashimmer*
> 
> Is TrueAudio similar to motherboard's onboard audio? Is it like a separate soundcard?


Not like a card, you would still need one of those. Rather a way for devs to code for positional sound and for how sound travels. Like hearing muffled talking through a thin wall. Or echoes.


----------



## Thunderclap

Quote:


> Originally Posted by *sepiashimmer*
> 
> Is TrueAudio similar to motherboard's onboard audio? Is it like a separate soundcard?


Nope, it's not like having a separate sound card. It's more like having a separate sound "chip" that does all the "sound work duties" in games, taking that part of the workload off the CPU so it can be more efficient at its other tasks while gaming.


----------



## Artikbot

Quote:


> Originally Posted by *DaveLT*
> 
> Considering what i've seen from ASRock FM2 boards i would do well to stay 5 miles away from any FM2+ boards as well


You tell me. I was one of the first users on OCN to experience the self-combusting FM2A75M-ITX.


----------



## <({D34TH})>

Quote:


> Originally Posted by *Artikbot*
> 
> You tell me. I was one of the first users on OCN to experience the self-combusting FM2A75M-ITX.


My Pro4 has been serving me well (though that's probably because it has heatsinks)


----------



## DaveLT

Quote:


> Originally Posted by *Artikbot*
> 
> You tell me. I was one of the first users on OCN to experience the self-combusting FM2A75M-ITX.


Yeah .... Mine went up in flames as well








http://www.overclock.net/t/1347709/amd-richland-a10-6800k-apu-thread/240#post_20051215
We don't want a repeat of that, do we?








I actually bought an A85-ITX... it ended up in flames as well when I OC'd.


----------



## LDV617

Lesson learned, don't OC your budget ITX board XD


----------



## DaveLT

Quote:


> Originally Posted by *LDV617*
> 
> Lesson learned, don't OC your budget ITX board XD


Considering it's ONLY $5 less than a Gigabyte A85X ITX... it's not exactly budget, is it?


----------



## LDV617

Well then there's a whole other lesson to be learned right there. lol.


----------



## Artikbot

Quote:


> Originally Posted by *DaveLT*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Yeah .... Mine went up in flames as well
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.overclock.net/t/1347709/amd-richland-a10-6800k-apu-thread/240#post_20051215
> We don't want a repeat of that, do we?
> 
> 
> 
> 
> 
> 
> 
> 
> I actually bought an A85-ITX... it ended up in flames as well when I OC'd.


It's sad, because ASRock Europe's customer support is truly amazing... Sadly, this round I will go with Gigabyte again. Cheap-ass low-end 60€ board, solid as all freaking heck, and it runs like a fridge. Their A88X mini-ITX board ought to be a technical marvel.


----------



## DaveLT

Quote:


> Originally Posted by *Artikbot*
> 
> It's sad, because ASRock Europe's customer support is truly amazing... Sadly, this round I will go with Gigabyte again. Cheap-ass low-end 60€ board, solid as all freaking heck, and it runs like a fridge. Their A88X mini-ITX board ought to be a technical marvel.


I'd rather buy a solid board that doesn't break than have to deal with any kind of customer support








If you bought an ASRock again and it's this problematic...
Like Mashiro says... baka.

Or Kirino style: KONO BAKA!!!


----------



## LDV617

It's all about Gigabyte boards









Merging two decent manufacturers doesn't mean you get a stellar one


----------



## Artikbot

Quote:


> Originally Posted by *DaveLT*
> 
> I'd rather buy a solid board that doesn't break than have to deal with any kind of customer support
> 
> 
> 
> 
> 
> 
> 
> 
> If you bought a ASRock again and it's this problematic
> Like as Mashiro says ... Baka.
> 
> Or Kirino style : KONO BAKA!!!


Hahaha sure









That's why my money is going the Gigabyte way the next round, too


----------



## ChrisB17

Quote:


> Originally Posted by *Artikbot*
> 
> You tell me. I was one of the first users on OCN to experience the self-combusting FM2A75M-ITX.


Every single ASRock board I have had was garbage: thin, cheap PCBs, cheap VRMs, and bad tech support *US*. I've had no issues lately with Asus, Gigabyte, MSI, and Biostar. Oh, and never had a motherboard catch fire.


----------



## Artikbot

Quote:


> Originally Posted by *ChrisB17*
> 
> Every single ASRock board I have had was garbage: thin, cheap PCBs, cheap VRMs, and bad tech support *US*. I've had no issues lately with Asus, Gigabyte, MSI, and Biostar. Oh, and never had a motherboard catch fire.


I find it quite ironic that one of the most battle-tank boards I've ever used is a 4COREDUAL-SATA2 that wasn't even mid-range XD


----------



## ChrisB17

Sometimes I think older boards were built better. I have an Asus M2N32-SLI Premium board at work and I swear you could run it over and it would still work. It's heavy, has plenty of copper, and always works. I have a newer Asus 990FX board sitting right next to me and it's not even as beefy as this old thing.


----------



## Thunderclap

Quote:


> Originally Posted by *Artikbot*
> 
> I find it quite ironic that one of the most battletank boards I've ever used is a 4COREDUAL-SATA2 that wasn't even middle range XD


My ASRock X58 SuperComputer is pretty darn good, too... =(


----------



## 113802

Quote:


> Originally Posted by *ChrisB17*
> 
> Sometimes I think older boards were built better. I have an Asus M2N32-SLI Premium board at work and I swear you could run it over and it would still work. It's heavy, has plenty of copper, and always works. I have a newer Asus 990FX board sitting right next to me and it's not even as beefy as this old thing.


DFI always used impressive VRMs, but reviewers had to bash their insane amount of BIOS options, and they ended up killing the LANParty lineup.

Sent from my TouchPad using Tapatalk 4


----------



## ChrisB17

Quote:


> Originally Posted by *WannaBeOCer*
> 
> DFI used to always use impressive vrms but reviewers had to bash their insane amount of bios options and they ended up killing the Lanparty lineup.
> 
> Sent from my TouchPad using Tapatalk 4


Ugh, I miss DFI. The colors were awesome and the BIOS they used is what made them fun.


----------



## DaveLT

Quote:


> Originally Posted by *ChrisB17*
> 
> Every single ASRock board I have had was garbage: thin, cheap PCBs, cheap VRMs, and bad tech support *US*. I've had no issues lately with Asus, Gigabyte, MSI, and Biostar. Oh, and never had a motherboard catch fire.


But when you do have an issue with ASUS boards now... forget about getting it RMA'd, lol.
Quote:


> Originally Posted by *WannaBeOCer*
> 
> DFI used to always use impressive vrms but reviewers had to bash their insane amount of bios options and they ended up killing the Lanparty lineup.
> 
> Sent from my TouchPad using Tapatalk 4


Quote:


> Originally Posted by *ChrisB17*
> 
> Ugh, I miss DFI. The colors were awesome and the BIOS they used is what made them fun.


Agreed. BIOS options still made overclocking fun, and these days... we don't have many of those. Although if you went back to an EX58 or an X58A board, the options are still plentiful.


----------



## Scvhero

Amd has got something good going for them ^_^


----------



## sepiashimmer

Quote:


> Originally Posted by *ChrisB17*
> 
> Sometimes I think older boards were built better. I have an Asus M2N32-SLI Premium board at work and I swear you could run it over and it would still work. It's heavy, has plenty of copper, and always works. I have a newer Asus 990FX board sitting right next to me and it's not even as beefy as this old thing.


SLI boards are usually made for hardcore gamers, so they probably used very durable materials to make them last. That is probably why it is so good, and why they also cost a lot more than normal ones.


----------



## LDV617

Older CPUs pulled more power; cheap Haswell boards are going to be way less durable than, say, an SLI Phenom board from 6 years ago, especially since that older board was an SLI / high-end board to begin with. But compared to the high-end stuff of today, I doubt there is much of a difference (besides maybe max power draw).


----------



## azanimefan

Quote:


> Originally Posted by *ChrisB17*
> 
> Sometimes I think older boards were built better. I have an Asus M2N32-SLI Premium board at work and I swear you could run it over and it would still work. It's heavy, has plenty of copper, and always works. I have a newer Asus 990FX board sitting right next to me and it's not even as beefy as this old thing.


I don't know... I just replaced my M4A78T-E with an M5A99X EVO and I'll tell you, the difference in quality is like night and day. This EVO is a tight board... it just pushed my Phenom II, which previously couldn't really break 4.0 GHz and stay stable, up to 4.2. The heat is a bit much for day-to-day use, so I think I'll end up backing the clock down to 4.0 and calling it a day. Not bad: replace the board and get a better overclock. I knew I would get a better clock out of it when I was able to stabilize and run Prime95 at stock voltage at 3.8 GHz (my previous day-to-day); before, it was only Prime-stable at 3.6 GHz at stock voltage.

That proves I was getting junk power through my old board.

There's too much in the BIOS for me to play with everything tonight. I'm still patting myself on the back for successfully replacing the motherboard in my system without needing a Win7 reinstall.


----------



## DaveLT

Quote:


> Originally Posted by *azanimefan*
> 
> I don't know... i just replaced my m4a78t-e with a m5a99 evo and i'll tell you the difference in quality is like night and day. this evo is a tight board... just pushed my phii which previously couldn't really break 4.0 and be stable, up to 4.2... the heat is a bit much for a day to day but i think i'll end up backing the clock down to 4.0... and calling it a day. not bad. replace the board and get a better overclock. Knew i would get a better clock out of it when i was able to stabilize and run prime95 at stock voltage at 3.8ghz (my previous day to day)... that's an improvement; before it was only prime stable at 3.6ghz and stock voltage.
> 
> proves i was getting junk power through my old board.
> 
> too much in the bios for me to play with everything tonight. i'm still patting myself on the back for successfully replacing the motherboard in my system and not needing a win7 re-install.


We're not talking about the VRM power delivery or such; we're talking about how close it is to being built like a tank.


----------



## azanimefan

Quote:


> Originally Posted by *DaveLT*
> 
> We're not talking about VRM power line or such, we're speaking about how close it is to being built like a tank


Well... the M5A99X EVO board is a little heavier and has less flex than the old M4A78... it feels like I could break someone's face with it if I used it as a weapon... so I guess it passes that test?


----------



## DaveLT

Quote:


> Originally Posted by *azanimefan*
> 
> well... the m5a99x evo board is a little heavier and has less flex then the old m4a78... it feels like i could break someones face with it if i used it as a weapon... so i guess it passes that test?


When it comes to ASUS boards, I can probably say the only ones built well in the past were the Maximus and Rampage boards...


----------



## Kuivamaa

I was pleasantly surprised by the M5A97 EVO R2.0; it seems well built.


----------



## monstercameron

Wow, I am surprised no one updated this thread... here we go:
http://www.pcper.com/reviews/Processors/AMD-Spills-more-Kaveri-Beans-AMD-APU13
Quote:


> Kaveri will be made up of 4 "Steamroller" cores, which are enhanced versions of the previous Bulldozer/Trinity/Vishera families of products. Nearly everything in the processor is doubled. It now has dual decode, more cache, larger TLBs, and a host of other smaller features that all add up to greater single thread performance and better multi-threaded handling and performance. Integer performance will be improved, and the FPU/MMX/SSE unit now features 2 x 128 bit FMAC units which can "fuse" and support AVX 256.


Quote:


> In addition, this chip will feature the TrueAudio functionality introduced with the latest AMD standalone GPUs (Hawaii and Bonaire). This DSP technology will accelerate audio functions when combined with the necessary middleware. From all indications, adding this functionality entails a very small die size hit.


Quote:


> Kaveri at the top end will feature around 856 GFlops of processing power. This is well up from the 779 GFlops of the A10-6800K.


So: improved CPU perf and TrueAudio support, but less GPU perf than previously speculated (was ~1000 GFLOPS).


----------



## CptDanko

Well, Kaveri is Steamroller-based, so I think we can guesstimate what a real high-end Steamroller will do.


----------



## NaroonGTX

Just about every other Steamroller/Kaveri-based thread was updated except this one, lol.

There's no guarantee there will be any SR-based FX processors. Don't hold your breath. Wait for the roadmaps.


----------



## DaveLT

Still 2x as powerful as a GT 630. And there we have it: the rumored HD 7750-level performance is true.

Really still very powerful for an IGP; Intel can now eat their hearts out.


----------



## MoGTy

Quote:


> Originally Posted by *NaroonGTX*
> 
> Just about every other Steamroller/Kaveri-based thread was updated except this one, lol.
> 
> There's no guarantee there will be any SR-based FX processors. Don't hold your breath. Wait for the roadmaps.


Probably because the source is a steaming pile of ...


----------



## azanimefan

Quote:


> Originally Posted by *DaveLT*
> 
> Still 2x more powerful over a GT630. And there we have it the rumored = HD7750 performance is true.
> 
> Really still very powerful for a IGP, Intel can now eat their hearts out


The HD 7750 is about the minimum needed for gaming at 1080p... for an iGPU to produce a 1080p-capable display is certainly something. This also means it will do Dual Graphics pretty conveniently with any of the 77xx GPUs... though the 7790 might be a stretch... whether it will do Dual Graphics with the 7790/260X probably depends on how fast you can get the RAM clocked and how much of an overclock you can get on the iGPU.


----------



## Kuivamaa

A couple of years ago I saw a slide that described how Dual Graphics worked: the discrete GPU was rendering 2 frames for each frame rendered by the iGPU. This seemed to agree with the GPU usage I saw on my own Llano laptop; Afterburner could only monitor discrete usage, but it kept returning 60-70%. I toyed with DG, but the only way to truly get rid of microstuttering was to introduce a frame limiter, which defeated the purpose of using Dual Graphics (higher framerate). I couldn't get higher graphics settings either. The setup was a 7520G/7670M; slightly different arch, and the discrete card was much stronger, but I suspect the main culprit was my 1333 MHz DDR3 system RAM. Either way, unless AMD makes a proper driver for DG, it won't fly, and it is a pity. A 150-euro Kaveri quad with 512 GCN cores paired with a 70-80 euro GDDR5 7750 in DG would be amazing under Mantle.


----------



## TheLAWNOOB

Kuivamaa, did you try disabling the iGPU?

If the dedicated GPU does 66% of the work and is only seeing 60% usage, turning off the iGPU would make the dedicated GPU run at 100%, which will really help smoothness. It would also eliminate the immense DDR3-1333 bottleneck.


----------



## Kuivamaa

Quote:


> Originally Posted by *TheLAWNOOB*
> 
> Kuivamaa, did you try disabling the iGPU?
> 
> If the dedicated GPU does 66% of the work and only seeing 60% usage, turning off the iGPU would make the dedicated GPU run at 100%, which will really help smoothness. It would also eliminate the immense DDR1333 bottleneck.


When I game on that laptop I am only using the discrete one; what I report above are my findings after trying DG in numerous DX11 titles (DG doesn't work in DX9).


----------



## sugarhell

So with Mantle an 8350 = 4770K, according to PCPer and this guy:

https://twitter.com/cavemanjim

Read his comment. First demo of Mantle with the Nitrous engine.


----------



## inedenimadam

Quote:


> Originally Posted by *sugarhell*
> 
> So with mantle a 8350=4770k by pcper and this guy
> 
> https://twitter.com/cavemanjim
> 
> Read his comment. First demo of mantle with nitrous engine


The way I understood it... I might be way off... but it is not that the 4770K = 8350; it is that the 290 doesn't care which one it runs on, because Mantle lifts the load off of the CPU and binds it to the GPU...

Interpreting tweets of an AMD PR rep as actual performance might be a bad idea...

I want to see numbers.


----------



## MrJava

Makes sense. There is so little CPU overhead that the game becomes GPU-bound, at which point there is no difference between the FX and the i7.
Quote:


> Originally Posted by *inedenimadam*
> 
> The way I understood it...I might be way off...but it is not that 4770=8350, it is that the 290 doesn't care which one it runs on because mantle levies the load off of the CPU and binds it to the GPU...
> 
> interpreting tweets of a AMD PR Rep as actual performance might be a bad idea...
> 
> I want to see numbers


----------



## Clocknut

Quote:


> Originally Posted by *NaroonGTX*
> 
> Only the Bonaire GPU released earlier this year had it (HD 7790) but it was disabled and then enabled when it got re-released as the R7-260x.


I kinda hate it... it is on my CHIP Radeon 7790, and... I couldn't use it.


----------



## ZealotKi11er

I personally don't care about GPU performance as much as the CPU holding back GPUs.


----------



## azanimefan

I haven't seen this posted yet; here are Kaveri's flagship specs, courtesy of AMD:

A10-7850k

cpu-
Steamroller
2m/4c
3.7ghz

igpu-
GCN 1.1
512 Radeon Cores
870mhz core frequency

here is the source picture


Spoiler: Warning: Spoiler!







We saw Kaveri chewing up BF4 on medium settings at APU13. This was informative for a few reasons, the first being that it was pulling between 24-40 fps, which is about in line with a 7750 on medium in BF4 paired with an i7. Considering Kaveri is using GCN 1.1 cores, and judging by past GPU performance of GCN 1.1 parts, we would expect a GPU with 512 GCN 1.1 cores to perform a little closer to a 7770 than to show up on the low side of a GDDR5 7750. This pretty much confirms, as we all expected, that the iGPU performance is 100% RAM-bottlenecked. Which again puts a premium on blazing-fast system RAM, giving bigger returns than overclocking the iGPU.

I'd like to know what RAM AMD was using in their demonstration rig...


----------



## NaroonGTX

PCPer's Kaveri BF4 video said the rig was using 2133 MHz DDR3.


----------



## sumitlian

Quote:


> Originally Posted by *azanimefan*
> 
> I haven't seen this posted yet; here are Kaveri's flagship specs, courtesy of AMD:
> 
> A10-7850k
> 
> cpu-
> Steamroller
> 2m/4c
> 3.7ghz
> 
> igpu-
> GCN 1.1
> 512 Radeon Cores
> 870mhz core frequency
> 
> 
> Spoiler: Warning: Spoiler!










Nearly 1 teraflop in an APU (890 GFLOPS iGPU + 118 GFLOPS CPU).

2400 MHz dual channel should give ~28 GB/s of real-world read/write bandwidth (assuming at least 75% memory bandwidth efficiency). This is the only thing that is going to be a bottleneck for 512 SPs.

I still hope AMD has planned something to compensate for this.
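For reference, the arithmetic behind those figures is simple peak-throughput math; a quick sketch, using the thread's assumed 870 MHz iGPU clock, 3.7 GHz CPU clock, and 75% DDR efficiency (peak numbers, not measured results):

```python
def gcn_gflops(shaders, clock_mhz):
    # Each GCN stream processor executes one FMA (2 FLOPs) per clock.
    return shaders * 2 * clock_mhz / 1000

def ddr_bandwidth_gbs(mt_per_s, channels, bus_bits=64, efficiency=1.0):
    # Peak = transfers/s * bytes per transfer per 64-bit channel * channel count,
    # scaled by an assumed real-world efficiency factor.
    return mt_per_s * (bus_bits / 8) * channels * efficiency / 1000

igpu = gcn_gflops(512, 870)    # ~891 GFLOPS: 512 SPs at 870 MHz
cpu = 4 * 3.7 * 8              # ~118 GFLOPS: 4 cores x 3.7 GHz x 8 FLOPs/clock
bw = ddr_bandwidth_gbs(2400, 2, efficiency=0.75)  # ~28.8 GB/s, DDR3-2400 dual channel
```

The ~891 + ~118 GFLOPS split matches the figures quoted above, and the last line reproduces the ~28 GB/s estimate.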


----------



## MoGTy

Quote:


> Originally Posted by *sumitlian*
> 
> 
> 
> 
> 
> 
> 
> 
> Nearly 1 Terraflops in an APU (890 GFLOPS iGPU + 118 GFLOPS CPU).
> 
> 2400 MHz Dual Channel should give ~28 GB/s of real world read/write bandwidth ( assuming at least 75% memory bandwidth efficiency )
> This is the only thing that is going to be a bottleneck for 512 SP.
> 
> I still hope AMD must have planned something to compensate this.


Give us octa-channel DDR4 Support









Bye Bye discrete GPU


----------



## mtcn77

Quote:


> Originally Posted by *sumitlian*
> 
> 
> 
> 
> 
> 
> 
> 
> Nearly 1 Terraflops in an APU (890 GFLOPS iGPU + 118 GFLOPS CPU).
> 
> 2400 MHz Dual Channel should give ~28 GB/s of real world read/write bandwidth ( assuming at least 75% memory bandwidth efficiency )
> This is the only thing that is going to be a bottleneck for 512 SP.
> 
> I still hope AMD must have planned something to compensate this.


What I have read states iGPU compute as high as 737 GFLOP/s, just like the quote you referred to.


----------



## sumitlian

Quote:


> Originally Posted by *MoGTy*
> 
> Give us octa-channel DDR4 Support
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Bye Bye discrete GPU










Actually, this is possible: DDR4 removes the multiple-DIMMs-per-channel system, which means 1 DIMM = 1 channel.
They might support 8 channels (8 DIMMs) in their server segment: DDR4 4000 MHz x 64-bit x 8 channels = ~250 GB/s.









They would just need 8 separate IMCs, one for each DIMM.
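For what it's worth, the peak-bandwidth arithmetic behind those channel counts is just transfers/s times 8 bytes per 64-bit channel times the channel count; a back-of-envelope sketch in decimal GB/s (hypothetical configurations, not announced specs):

```python
def ddr_peak_gbs(mt_per_s, channels, bus_bits=64):
    # Theoretical peak bandwidth: transfers/s * bytes per transfer * channels.
    return mt_per_s * (bus_bits / 8) * channels / 1000

print(ddr_peak_gbs(4000, 8))   # 256.0 GB/s for 8-channel DDR4-4000 (rounded to 250 above)
print(ddr_peak_gbs(4000, 4))   # 128.0 GB/s for a quad-channel part
print(ddr_peak_gbs(2133, 2))   # ~34.1 GB/s, today's dual-channel DDR3-2133 for scale
```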


----------



## Zyro71

Wait, hold up. With HSA and the ability to address the ENTIRE RAM space, wouldn't it be a 256-bit bus for DDR3?
I mean, it is dual channel at the moment with a 128-bit bus, but I remember reading somewhere that with the right programs and the right software a game can take advantage of the CPU and GPU memory space, making the overall bandwidth double; at, say, 2133 MHz that's 34.1 GB/s going to 68.2 GB/s. (I would take this with a grain of salt, but it looks similar to what's in the Xbox One, only that console has a better GPU with lower-clocked cores.)
Go ahead and shoot me, it's just my observation.
Quote:


> Originally Posted by *sumitlian*
> 
> 
> 
> 
> 
> 
> 
> 
> Actually this is possible, DDR4 removes the multi-DIMMs per channel system. This means 1 DIMM = 1 Channel.
> They might support 8 channel ( 8 DIMMs ) in their server segment. DDR4 4000 MHz 64 bit 8 channel = 250 GB/s
> 
> 
> 
> 
> 
> 
> 
> 
> 
> They will just need 8 separate IMCs for each DIMM


Actually, on this: if AMD makes a powerful enough graphics portion of the APU, we would not really need to buy high-end gaming rigs... unless graphics cards become stupidly cheap and DDR4 is stupidly high in price. I figure that with DDR4, if games seriously take advantage of the bandwidth, it would have a serious impact on the gaming market. It kind of makes me want to buy an FM2+ motherboard, but I'd rather wait for DDR4.
Also, if DDR4 came out now, we would see the GPU portion of the APU bottlenecked by having too much RAM bandwidth and not enough GPU power.


----------



## MoGTy

Quote:


> Originally Posted by *sumitlian*
> 
> 
> 
> 
> 
> 
> 
> 
> Actually this is possible, DDR4 removes the multi-DIMMs per channel system. This means 1 DIMM = 1 Channel.
> They might support 8 channel ( 8 DIMMs ) in their server segment. DDR4 4000 MHz 64 bit 8 channel = 250 GB/s
> 
> 
> 
> 
> 
> 
> 
> 
> 
> They will just need 8 separate IMCs for each DIMM


Sweet baby Jesus. That would be ~7950 memory performance on an APU.









Anyway, I'm dreaming way too much...


----------



## sumitlian

Quote:


> Originally Posted by *mtcn77*
> 
> What I have read states iGPU compute as high as 737 GFLOP/s, just like the quote you referred to.


No serious problem in that.

The previous calculation was actually based on an 870 MHz core clock (as azanimefan stated); just take it as an overclocking result.


----------



## AlphaC

Quote:


> Originally Posted by *Zyro71*
> 
> Wait hold up. with HSA and the ability to address the ENTIRE RAM space, wouldn't it be 256bit bus for DDR3?
> I mean, it is dual channel at the moment with 128bit bus. But i remember reading somewhere that with the right programs and right software a game can take advantage of the CPU and GPU memory space making the overall bandwidth double lets say that 2133MHz area that's like 34.1 gb/s to 68.2 gb/s (i would take this with a grain of salt, but, it looks similar to whats in the Xbox 1, only that console has a better GPU with but with lower operating cores.)
> Go ahead and shoot me its just my observation.
> Actually, on this, If AMD makes a powerful enough graphics portion of the APU, we would not need to really buy high end gaming rigs..unless graphics cards become stupidly cheap and DDR4 is stupidly high in price. I figure with DDR4, unless..games seriously take advantage of the bandwidth..it would be a serious impact on the gaming market. it kinda makes me want to buy an FM2+ motherboard but I rather rather for DDR4.
> Also, If DDR4 came out now, we would see the GPU portion of the APU bottlenecked here due to too much ram and not enough GPU power


It's simple. If the GPU and CPU have direct access to memory on an equal level, just replace the word CPU with GPU.


each channel is 64-bit:
single channel = 64-bit
dual channel = 128-bit (lga 1150/1155/1156 , 775, FM2+/FM2, AM3+/AM3, AM2+)
triple channel = 192-bit (x58)
quad channel = 256-bit (x79)

http://www.hardwaresecrets.com/article/133

edit: this is partly why AMD is bottlenecked on their AM3+ platform, which still uses a northbridge instead of an IMC (integrated memory controller). It's also why AMD motherboards matter more than Intel mobos for performance.

Southbridge is now integrated into PCH for Intel

ARM's goals are similar to AMD's APUs: system on a chip (SoC) in the mobile space.

edit 2: to put it in perspective, each PCIe 3.0 lane provides 985 MB/s (about 1 GB/s, so PCIe 3.0 x16 = ~16 GB/s, x8 = ~8 GB/s, x4 = ~4 GB/s; PCIe 2.0 x16 = ~8 GB/s, x8 = ~4 GB/s)
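The widths and lane rates above reduce to two one-line formulas; a small sketch (985 MB/s is PCIe 3.0's effective per-lane rate after 128b/130b encoding, and ~500 MB/s is PCIe 2.0's after 8b/10b):

```python
CHANNEL_BITS = 64  # every DDR channel is 64 bits wide

def memory_bus_bits(channels):
    # Aggregate memory bus width simply scales with channel count.
    return channels * CHANNEL_BITS

def pcie_gbs(lanes, per_lane_mbs):
    # Effective link bandwidth in GB/s from the per-lane rate in MB/s.
    return lanes * per_lane_mbs / 1000

print(memory_bus_bits(2))   # 128 (dual channel: FM2+, LGA 1150, ...)
print(memory_bus_bits(3))   # 192 (triple channel: X58)
print(memory_bus_bits(4))   # 256 (quad channel: X79)
print(pcie_gbs(16, 985))    # ~15.8 GB/s for a PCIe 3.0 x16 slot
```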


----------



## mtcn77

And the best of all: there is software that benefits from all these gigaflops. I wonder how fast password crackers and nested-loop queries are going to get...


----------



## sumitlian

Quote:


> Originally Posted by *MoGTy*
> 
> Sweet baby Jesus. That would be ~7950 memory performance on an APU.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Anyway, I'm dreaming way too much...


Yeah, we take 8 channels with a grain of salt (even in a dream).

Wiki


http://techreport.com/news/25338/new-amd-embedded-roadmap-shows-64-bit-arm-cortex-a57-chip

AMD, with their embedded platform code-named Hierofalcon, will support dual-channel 64-bit DDR4 in 2H 2014.

Let's assume a server platform with quad-channel DDR4 arrives, and there you go!
You would actually be able to get ~128 GB/s with 4000 MHz DDR4.


----------



## Demonkev666

Quote:


> Originally Posted by *AlphaC*
> 
> It's simple. If GPU/CPU have direct access to memory and on an equal level: just replace word CPU with GPU.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> each channel is 64-bit:
> single channel = 64-bit
> dual channel = 128-bit (lga 1150/1155/1156 , 775, FM2+/FM2, AM3+/AM3, AM2+)
> triple channel = 192-bit (x58)
> quad channel = 256-bit (x79)
> 
> http://www.hardwaresecrets.com/article/133
> 
> edit: this is partly why AMD is bottlenecked on their AM3+ platform which still uses a Northbridge instead of an IMC (integrated memory controller). It's also why AMD motherboards count more than Intel mobos for performance.
> 
> Southbridge is now integrated into PCH for Intel
> 
> ARM's goals are similar to AMD's APUs: system on a chip (SoC) in the mobile space.


The AMD northbridge on the CPU is a memory controller.

You're talking about northbridges on boards, which are linked via HyperTransport.

PCI Express on the CPU isn't even that great, while the 990FX has 40 PCIe lanes anyway.


----------



## sumitlian

Quote:


> Originally Posted by *Zyro71*
> 
> Wait hold up. with HSA and the ability to address the ENTIRE RAM space, wouldn't it be 256bit bus for DDR3?
> I mean, it is dual channel at the moment with 128bit bus. But i remember reading somewhere that with the right programs and right software a game can take advantage of the CPU and GPU memory space making the overall bandwidth double lets say that 2133MHz area that's like 34.1 gb/s to 68.2 gb/s (i would take this with a grain of salt, but, it looks similar to whats in the Xbox 1, only that console has a better GPU with but with lower operating cores.)
> Go ahead and shoot me its just my observation.
> Also, If DDR4 came out now, we would see the GPU portion of the APU bottlenecked here due to too much ram and not enough GPU power


Memory bus width and the number of channels will not allow you to reach 68 GB/s with dual-channel 64-bit 2133 MHz.

See this. It shows CPU cache to physical memory and CPU cache to GPU cache. We know that the bus width from CPU cache to physical memory is 2 x 64-bit. But what the hell is the bus width of the CPU-to-GPU interconnect?











Officially, this is still unknown, but I've read somewhere on AnandTech or Guru3D (sorry, I really don't remember where, but I am searching for it) a claim that it is 2 x 128-bit or 2 x 256-bit wide.

Let's call the CPU-to-GPU interconnect NB-2 (NB or NB-1 for CPU to main memory, as usual). NB-2 running at 2000 MHz on a 2 x 128-bit bus would give 62.5 GB/s of bandwidth (hardware coherency). This is still higher than the read/write speed of the L3 cache in AMD FX CPUs (per AIDA64).
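As a sanity check on that figure (the 2000 MHz clock and 2 x 128-bit width are the post's assumptions, not a confirmed spec): clock times bus width in bytes gives 64 decimal GB/s, and 62.5 falls out if one /1000 step is done as /1024 instead:

```python
def link_gbs(clock_mhz, bus_bits):
    # Link bandwidth: clock (Hz) * bus width in bytes, reported in decimal GB/s.
    return clock_mhz * 1e6 * (bus_bits / 8) / 1e9

bw = link_gbs(2000, 2 * 128)    # 64.0 decimal GB/s
mixed_units = bw * 1000 / 1024  # ~62.5 "GB/s" when MB -> GB is divided by 1024
```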


----------



## Seronx

CPU <-> Memory = 2 * 64-bit
GPU <-> Memory = 8 * 32-bit
CPU <-> GPU = 16 * 3.2 GHz * 2


----------



## AlphaC

Quote:


> Originally Posted by *Demonkev666*
> 
> AMD northbridge on cpu is a Memory controller
> 
> your talking about north bridges on boards which are linked with Hyper transport
> 
> Pci-express on cpu isn't even that great while 990FX has 40 pci-express lanes anyways.


Thanks for fixing that









Been a long time since my Phenom II :S

I looked at the Vishera block diagram, and AMD's FX series has a northbridge for PCIe lanes rather than a northbridge for memory.


----------



## mtcn77

Quote:


> Originally Posted by *sumitlian*
> 
> No serious problem in that
> 
> 
> 
> 
> 
> 
> 
> 
> Previous calculation was actually according to 870 MHz core clock. (As azanimefan has stated )
> Just take it as overclocking result


I wouldn't call those overclock results easy as pie; discrete HD 7750s don't always overclock past 830 MHz.
Quote:


> AMD HD 7750 (Ref.)
> 
> Core: 833MHz
> Memory: 4884MHz (QDR)


Source


----------



## sumitlian

Quote:


> Originally Posted by *Seronx*
> 
> CPU <-> Memory = 2 * 64-bit
> GPU <-> Memory = 8 * 32-bit
> CPU <-> GPU = 16 * 3.2 GHz * 2


Thanks for this !
and +1. I was really missing you in this topic









Could you please describe "16 * 3.2 GHz * 2" in detail?
Is it 2 lanes of 16 bits running at 3.2 GHz for CPU <-> GPU?

Also, is it 8 lanes of 32 bits for GPU <-> Memory?


----------



## sumitlian

Quote:


> Originally Posted by *mtcn77*
> 
> I wouldn't speak of those overclock results as easy as pie, discrete HD7750's don't always overclock over 830 mhz.
> Source


7750 was GCN 1.0.
Kaveri's iGPU is GCN 1.1.


----------



## Kuivamaa

Quote:


> Originally Posted by *mtcn77*
> 
> I wouldn't speak of those overclock results as easy as pie, discrete HD7750's don't always overclock over 830 mhz.
> Source


This Canucks review is barely relevant to Kaveri OC potential. You see, AMD won't physically put a 7750 in it; it will just have the same number of GCN units. The die itself will be different, a merge of CPU and GPU. Not to mention that 7750 SKUs may include Cape Verde chips of lesser quality (the good ones becoming 7770s), or the simple fact that 7750 cards draw their power straight from the PCIe slot (no connector), so there is a strict limit on the amount of power you can feed it and, of course, how high you can clock it.


----------



## MrJava

Two busses:
Onion (Coherent):
2 x 256 bit
Aggregate BW: 128GB/s

Garlic (Radeon Memory Bus non-coherent):
2 x 256 bit per memory channel
Aggregate BW: 256GB/s

Source: http://www.realworldtech.com/fusion-llano/2/
The Onion bus width is doubled in each direction for Kaveri.
Quote:


> Originally Posted by *sumitlian*
> 
> Lets assume CPU to GPU interconnection as NB-2 (NB or NB-1 for CPU to Main memory as usual). So NB-2 running at 2000 MHz at 2 x 128 bit wide Bus will give 62.5 GB/s of bandwidth (Hardware coherency). This is still higher than read/write speed of L3 cache in AMD FX CPUs (By AIDA64).


----------



## mtcn77

Quote:


> Originally Posted by *Kuivamaa*
> 
> This Canucks review is barely relevant to Kaveri o/c potential. You see, AMD won't physically put a 7750 in it, It will just have the same amount of GCN units, the die itself will be different ,a merge of CPU and GPU. Not to mention that 7750 SKUs may include Cape Verde chips of lesser quality (good ones being 7770) or the simple fact 7750 cards get their power straight from PCIe slot (no connector) so there is a strict limit in the amount of power you can feed it and of course, clock it.


I think there is a discrepancy in your conclusion. Not that I wouldn't want it to come true, just for the lols.


----------



## sumitlian

Quote:


> Originally Posted by *MrJava*
> 
> Two busses:
> Onion (Coherent):
> 2 x 256 bit
> Aggregate BW: 128GB/s
> 
> Garlic (Radeon Memory Bus non-coherent):
> 2 x 256 bit per memory channel
> Aggregate BW: 256GB/s
> 
> Source: http://www.realworldtech.com/fusion-llano/2/
> The Onion bus width is doubled in each direction for Kaveri.


OH it was "Onion"







I remember !
MrJava, Many thanks for the info and +1

it is actually 2 x 256 bit for Kaveri. Thanks for confirming.








So if we assume Llano's 650 MHz clock for Kaveri as well:
((650 MHz x 256 bit) / 8) x 2 channels = 41,600 MB/s, or about 40 GB/s, for CPU to GPU (twice the speed of Llano/Trinity).

Many thanks to AMD for doubling the bus in Kaveri. But the actual clock of the Onion bus in Kaveri is still unknown.
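The same arithmetic, spelled out; 650 MHz is Llano's clock, assumed for Kaveri here since the real Onion clock is unknown:

```python
# Onion-bus estimate from the post: 2 x 256-bit at an assumed 650 MHz.

mhz, width_bits, directions = 650, 256, 2
mb_per_s = mhz * width_bits // 8 * directions
print(mb_per_s)  # 41600, i.e. the post's ~40 GB/s
```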


----------



## MrJava

Kaveri ES CPUID String: *2M186092H4467_23/18/12/05_1304* (35W)
Turbo/Base/Northbridge/Graphics

Seems to indicate a northbridge clock of at least 1.2GHz.
Quote:


> Originally Posted by *sumitlian*
> 
> OH it was "Onion"
> 
> 
> 
> 
> 
> 
> 
> I remember !
> MrJava, Many thanks for the info and +1
> 
> it is actually 2 x 256 bit for Kaveri. Thanks for confirming.
> 
> 
> 
> 
> 
> 
> 
> 
> So if we take the speed of 650 MHz (Llano's) for Kaveri also.
> Its about 40 GB/s (( 650 x 256 bit ) / 8 ) x 2 channel = 41600 MB/s or 40 GB/s (Twice the speed of Llano/Trinity) for CPU to GPU.
> 
> Many thanks to AMD they doubled the bus in Kaveri. But actual speed of Onion bus in Kaveri is still unknown.


----------



## Kuivamaa

Quote:


> Originally Posted by *mtcn77*
> 
> I think there lies a discrepancy to your conclusion. Not that I would not want it to come true, just for lols.


Care to elaborate?


----------



## mtcn77

Quote:


> Originally Posted by *sumitlian*
> 
> 7750 was GCN 1.0.
> Kaveri's iGPU is GCN 1.1.


Could you point out what the differences are between those architectures?


----------



## AlphaC

http://www.anandtech.com/show/7457/the-radeon-r9-290x-review/2

GCN 1.1 has more power states, more instructions, and TrueAudio.
Quote:


> The biggest change here is support for flat (generic) addressing support, which will be critical to enabling effective use of pointers within a heterogeneous compute context. Coupled with that is a subtle change to how the ACEs (compute queues) work, allowing GPUs to have more ACEs and more queues in each ACE, versus the hard limit of 2 we've seen in Southern Islands. The number of ACEs is not fixed - Hawaii has 8 while Bonaire only has 2 - but it means it can be scaled up for higher-end GPUs, console APUs, etc. Finally GCN 1.1 also introduces some new instructions, including a Masked Quad Sum of Absolute Differences (MQSAD) and some FP64 floor/ceiling/truncation vector functions.
> 
> Along with these architectural changes, there are a couple of other hardware features that at this time we feel are best lumped under the GCN 1.1 banner when talking about PC GPUs, as GCN 1.1 parts were the first parts to introduce this features and every GCN 1.1 part (at least thus) far has that feature. AMD's TrueAudio would be a prime example of this, as both Hawaii and Bonaire have integrated TrueAudio hardware, with AMD setting clear expectations that we should also see TrueAudio on future GPUs and future APUs.
> 
> AMD's Crossfire XDMA engine is another feature that is best lumped under the GCN 1.1 banner. We'll get to the full details of its operation in a bit, but the important part is that it's a hardware level change (specifically an addition to their display controller functionality) that's once again present in Hawaii and Bonaire, although only Hawaii is making full use of it at this time.
> 
> *Finally we'd also roll AMD's power management changes into the general GCN 1.1 family, again for the basic reasons listed above. AMD's new Serial VID interface (SIV2), necessary for the large number of power states Hawaii and Bonaire support and the fast switching between them, is something that only shows up starting with GCN 1.1. AMD has implemented power management a bit differently in each product from an end user perspective - Bonaire parts have the states but lack the fine grained throttling controls that Hawaii introduces - but the underlying hardware is identical.
> 
> With that in mind, that's a short but essential summary of what's new with GCN 1.1. As we noted way back when Bonaire launched as the 7790, the underlying architecture isn't going through any massive changes, and as such the differences are of primarily of interest to programmers more than end users. But they are distinct differences that will play an important role as AMD gears up to launch HSA next year. Consequently what limited fracturing there is between GCN 1.0 and GCN 1.1 is primarily due to the ancillary features, which unlike the core architectural changes are going to be of importance to end users. The addition of XDMA, TrueAudio, and improved power management (SIV2) are all small features on their own, but they are features that make GCN 1.1 a more capable, more reliable, and more feature-filled design than GCN 1.0.*


Quote:


> AMD's Bonaire graphics processor has been kicking around inside the Radeon HD 7790 since March, and all the while, it's been harboring some secret features. Behind closed doors at the GPU14 event, we learned that Bonaire is based on the same "IP pool" as Hawaii, the next-gen GPU scheduled to premiere inside the R9 290X later this year.
> 
> In short, Bonaire has many of the same architectural perks as Hawaii: improved shaders (which also appeared in the Kabini APU), embedded TrueAudio DSP cores, and greater flexibility when it comes to connecting multiple monitors. Bonaire also has the same power management mojo as Hawaii, but unlike the other features, AMD made that functionality public at the 7790's launch.
> 
> ...
> Like Hawaii, Bonaire has shaders that support flat memory addressing and MQSAD (or masked quad sum of absolute difference) operations. With flat addressing, the idea seems to be to combine system and GPU memory into a single address space. This, among other things, should help facilitate the development of GPU computing applications.
> 
> Bonaire also supports AMD's new TrueAudio technology. Inside the GPU silicon are Tensilica HiFi EP Audio DSP cores, a streaming DMA engine, 384KB of shared internal memory, and a low-latency bus interface that ties the DSP cores to the GPU's frame buffer and main system memory. AMD doesn't say how many DSP cores there are, but it tells us they run at 800MHz, and it claims Bonaire and Hawaii have the same DSP config. That means the two chips should have the same audio processing capabilities, despite their being aimed at wildly different price points.
> 
> Thanks to TrueAudio, game developers will be able to implement advanced spatialization and reverb effects based on in-game geometry. At the GPU14 event, AMD demoed elevation and depth perception simulations on a 7.1-speaker setup. A 3D sound stage was also emulated using two speakers. The TrueAudio pipeline is programmable, so developers should have some freedom to tweak those effects and perhaps to use the DSPs for other things.
> 
> As we understand it, using specialized DSP cores is better than simply processing advanced audio effects in software, which can tax low-end CPUs and yield inconsistent performance. Crucially for audio, the specialized DSP approach also incurs lower latency than processing sound in GPU shaders via DirectCompute or OpenCL.


http://techreport.com/review/25473/
Quote:


> ...
> Bonaire is rigged to offer higher floating-point math performance, more texturing capability, and better tessellation performance than Cape Verde. Also, as you'll see on the next page, AMD equips Bonaire with substantially faster GDDR5 RAM, which gives it a bandwidth advantage despite its identical memory controller setup.
> 
> In addition to the different unit mix, Bonaire has learned a trick from Trinity and Richland, AMD's mainstream APUs. That trick takes the form of a new Dynamic Power Management (DPM) microcontroller, which enables Bonaire to switch between voltage levels much quicker than Cape Verde or other members of the Southern Islands family.
> ...
> eight discrete DPM states, each with a different clock speed and voltage. Bonaire can switch between those states as quickly as every 10 milliseconds, which removes the need for the "inferred" states seen in Tahiti-that is, clock speed reductions without corresponding voltage cuts. This means the GPU can very quickly select the optimal clock speed and voltage combination to offer the best performance at the predefined power envelope.


http://techreport.com/review/24539/amd-radeon-hd-7790-graphics-card-reviewed

The HD 7790 and beyond have more power states ("P-states"), and for dynamic loads such as gaming that is a power saving. For GPGPU, where the GPU is loaded 100% of the time, it matters much less.

With Kaveri also having TrueAudio, it would be a top-to-bottom TrueAudio stack once they release 384-bit/256-bit-memory Radeons at ~$200-300 with the feature. TrueAudio ought to be named TrueDSP, though.
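As a purely illustrative sketch of the discrete-p-state idea discussed above (the states and the power model below are made up, not Bonaire's real tables): a DPM-style governor just picks the fastest (clock, voltage) pair whose estimated power fits the budget.

```python
# Toy illustration of discrete DPM p-states: pick the fastest
# (clock, voltage) state whose estimated power fits the budget.
# States and the power model are invented for illustration only.

states = [(300, 0.85), (450, 0.90), (600, 0.95), (750, 1.00),
          (850, 1.05), (950, 1.10), (1000, 1.15), (1075, 1.20)]  # (MHz, V)

def est_power(mhz, volts, k=0.05):
    # Dynamic power scales roughly with f * V^2; k is an arbitrary constant.
    return k * mhz * volts ** 2

def pick_state(budget_w):
    fitting = [s for s in states if est_power(*s) <= budget_w]
    return max(fitting) if fitting else states[0]

print(pick_state(45.0))  # (750, 1.0)
```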


----------



## MrJava

- TrueAudio
- GCN ISA additions for unified addressing
- XDMA

According to Anandtech.
Quote:


> Originally Posted by *mtcn77*
> 
> Could you point what the outliers are between those architectures?


----------



## DaveLT

Quote:


> Originally Posted by *azanimefan*
> 
> I haven't seen this posted yet; here is Kavari's flagship specs courtesy of AMD
> 
> A10-7850k
> 
> cpu-
> Steamroller
> 2m/4c
> 3.7ghz
> 
> igpu-
> GCN 1.1
> 512 Radeon Cores
> 870mhz core frequency
> 
> here is the source picture
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> We saw at APU13 Kaveri chewing up BF4 on medium settings. This was informative for a few reasons. the first being it was pulling between 24-40fps, which is about in line with a 7750 on medium in bf4 with a i7 paired to it. Considering Kaveri is using GCN 1.1 cores, and judging by past gpu performance on GCN 1.1 cores we would expect a gpu with 512 GCN 1.1 cores to perform a little closer to a 7770 then showing up on the low side of a gddr5 7750. This pretty much confirms like we all expected the igpu performance is 100% ram bottle-necked. Which again will put the premium on blazing fast system ram giving bigger returns then overclocking the igpu
> 
> I'd like to know what ram AMD was using in their demonstration rig...


3.7GHz stock, wow. I still say it's a true 4C (as if it hasn't always been a true 4C proc by now), though. I'm dropping the module way of thinking for SR.
Remember the discussion earlier that Kaveri would only be 3GHz? That guy must be freaking out by now.


----------



## NaroonGTX

Every Bulldozer-family processor has had 'true' cores in them from the start.


----------



## DaveLT

Quote:


> Originally Posted by *NaroonGTX*
> 
> Every Bulldozer-family processor has had 'true' cores in them from the start.


I agree.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *NaroonGTX*
> 
> Every Bulldozer-family processor has had 'true' cores in them from the start.


Why do they get called fake quad/hexa/octa-cores then? It isn't a Hyper-Threading situation where there are X physical / 2X logical cores; it's however many modules are active, times two.

So are the R9 290s GCN 2.0 or 1.1? I've only been skimming the stuff on that series since I don't have enough money to care.


----------



## mtcn77

Quote:


> Originally Posted by *Kuivamaa*
> 
> Care to elaborate?


The HD 7750 is already specified for 55 watts of TDP. GDDR5 memory makes up a huge portion of that; besides, PCI Express slots have been shown not to be strictly limited to 75 watts of board power in real use.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Why do they get called fake quad/hexa/octa-cores then? It isn't a hyperthread-situation where there are X physical/2X logical cores, it's however many modules are active times two.
> 
> So are the R9 290s GCN 2.0 or 1.1? I've only been skimming the stuff on that series since I don't have enough money to care.


MODULES. One module has TWO integer cores. Seriously, dude. The people calling them fake quads/hexas/octas are just Intel fanboys, or AMD fanboys living in a cave.
GCN 1.1. GCN 2 will be built on 20nm. The R9 290 is cheap enough for me to care, though: $550 locally. Keep in mind that a GTX 780, pre price cut, was a full grand.


----------



## CynicalUnicorn

Fanboys are why we can't have anything nice. I'm not sure why AMD fanboys would call one module a core. Shouldn't they be blindly supporting AMD? Oh, right, logic is optional on the Internet.

$550 is my SSD, CPU, motherboard, and GPU together. I am glad that I don't live in Canada though. Meese scare me and I hear they're everywhere.


----------



## Kuivamaa

Quote:


> Originally Posted by *mtcn77*
> 
> HD 7750 is already specified for 85 watts of tdp. GDDR5 memory makes up a huge portion of it, besides PCI Express lanes have been proven not to be limited to 75 watts of board juice in real use.


And this is relevant how?


----------



## Artikbot

Quote:


> Originally Posted by *Kuivamaa*
> 
> And this is relevant how?


Further information.


----------



## Nintendo Maniac 64

And this is why it's better to just say 2 module/4 thread - because it's factually accurate and at the same time avoids the whole "what defines a core?" debate completely.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Fanboys are why we can't have anything nice. I'm not sure why AMD fanboys would call one module a core. Shouldn't they be blindly supporting AMD? Oh, right, logic is optional on the Internet.
> 
> $550 is my SSD, CPU, motherboard, and GPU together. I am glad that I don't live in Canada though. Meese scare me and I hear they're everywhere.


Agreed, though. I just have to be very factual and accurate. And a bit harsh.


----------



## ericore

Quote:


> Originally Posted by *polyzp*
> 
> Showing up to 800% the performance of the little old Richland 8670D!!
> 
> "
> 
> 
> This Speaks for itself!
> 
> Intel HD 5200 Iris with eDRAM is now pathetic compared to a DDR3 1600 Mhz Kaveri system. Watch HuMA and HSA at work with that memory bandwidth as well as General Purpose Cryptography up to near 500% the performance (+400%) of Iris 5200 and up to ~840% the performance (~+740%)! Both systems are running with 1600 Mhz DDR3.
> 
> Kaveri's GPU is essentially an underclocked 7790 with DDR3 instead of GDDR5 with only 1 CU disabled. This means Kaveri's GPU will sport 832 Stream Processors and overshoots the original estimates and rumours of 512 stream processors by alot! Stock clock will be 600 Mhz but there is no word if there will be a turbo clock implemented but my guess is YES!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And Now that Kaveri motherboards have entered the enthusiast field, I'd love to see 2400-2600 Mhz DDR3 testing! This is a GREAT time for 2400 Mhz purchasing aswell, as the price is only marginally higher than 1866 or 2133 Mhz options. Asus claims a +30% increase in performance on their Kaveri FM2+ page here going from 1333-2133 Mhz! So compare the above results at 1600 Mhz to a theoretical system with 2600 Mhz DDR3!"
> 
> Source (AMDFX)
> Source (Kaveri benchmarks)


*Impossible; 832 stream processors alone would suck up all the power and leave none for the CPU. This is FM2+ with a 95W TDP; irrefutably fake.
From 384 SPs (Richland) to 832 with no die size change: your insanity is awesome.*


----------



## Nintendo Maniac 64

Did anyone think that maybe the 1 teraflop estimate was back when there was going to be a 3-module Kaveri?


----------



## DaveLT

Quote:


> Originally Posted by *ericore*
> 
> *Impossible; 832 stream processors alone would suck up all the power and leave none for the CPU. This is FM2+ with a 95W TDP; irrefutably fake.
> From 384 SPs (Richland) to 832 with no die size change: your insanity is awesome.*


Wut. Look at the HD 7790 before making statements like that.

Besides, Richland/Trinity IS VLIW4; GCN is built to be far more efficient per core, with lower power consumption per core, and then there's GCN 1.1, which is in the 7790, R7 260X (same GPU as the 7790), R9 290, and 290X.
Right now they downsized the CPU cores' power consumption and upsized the GPU's power budget.
No matter how many SPs it has, TMUs and ROPs also matter, and GCN 1.1 has more of them and improved per-core performance. Look at the HD 7790: it's only an 85W TDP and it manages to stay within about 10% of the 7850. Talk about impressive.
Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> Did anyone think that maybe the 1 teraflop estimate was back when there was going to be a 3-module Kaveri?


Lol. An overclocked (max OC) E5-2660 (an 8-core, no less) pulls about 200 GFLOPS, which is to say it's indeed faster clock for clock than Sandy.
A 4GHz i5-2500K pulls 107 GFLOPS, BTW, and a 4.2GHz 3570K pulls 114 GFLOPS. We have a winner here: 118 GFLOPS @ 3.7GHz. If this is true, Haswell can go into the graveyard.


----------



## Cyro999

Quote:


> A 4GHz i5-2500k pulls 107GFlops BTW and a 4.2GHz 3570k pulls 114GFlops. We have a winner here! 118GFlops @ 3.7GHz. If this is true Haswell can go into the graveyard.


What? Haswell can break 200gflops by 3.8ghz or so. Avx2 is gud man. Really old screenshot:



^One single run high priority, was doing other stuff with the first two runs so lower gflops

FLOPS is not the greatest measure of a CPU's performance, sadly - otherwise Haswell would be twice as fast as Ivy. As it stands, it's marginally faster in many tasks, the FP power and avx2 only helps a bit in stuff like x264, which people mostly ignore because it's apparently useless for most other loads (though 20% ipc gain over sandy bridge for encoding is nice)

As shown though, you don't need an 8-core xeon to hit 200gflops. If you set realtime prio you can actually bench 200gflops with as low as ~1-1.05vcore on my chip i'd imagine. RAM plays a part, of course, but i could probably hit 210-215 at 4ghz with RAM that's fast but nothing special - it's trivial to hit 200 in observable (not theoretical) numbers


----------



## monstercameron

In case you missed it: Kaveri running BF4 on Ultra at 720p at ~30fps [indoor scene]
http://www.reddit.com/r/APUsilicon/comments/1qsxjc/more_kaveri_bf4_gameplay_tldr_720p_ultra_playable/


----------



## Liranan

Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> And this is why it's better to just say 2 module/4 thread - because it's factually accurate and at the same time avoids the whole "what defines a core?" debate completely.


They are cores; they just share the FP unit, which can split into 2 x 128-bit or remain as one 256-bit unit.


----------



## Nintendo Maniac 64

Quote:


> Originally Posted by *Liranan*
> 
> They are cores, they just share the FP, which can split into 2x128 or remain as one 256bit unit.










I did not say that they were or were not cores, I just said that measuring in modules/threads is safer because discussing the amount of cores in AMD's module CPU architectures is a volatile subject.

Or perhaps you actually enjoy participating in heated discussions? I know that I personally do not.


----------



## yawa

Truth is, if we know anything more now, it is likely that 832 SPs are possible but wouldn't be worth it due to DDR3 bandwidth limitations, hence why the first Kaveri model stops at 512 SPs.

The only thing disappointing to me (well, other than the lack of information on Excavator and DDR3 socket FM2+ boards) is that AMD didn't make a grab for a true enthusiast six-core at 28nm. It seems obvious to me that Kaveri should come in two flavors: one with the highest SP count possible for their standard and future APU applications, and one, for enthusiasts and people who are guaranteed to use discrete high-powered cards in their setups, that offers 3 Steamroller modules at the sacrifice of SP count, for a pure single-core IPC boost across the board.

Now maybe three modules just aren't possible at 28nm, but if they are, and AMD wants everyone off AM3+ and onto their new chipset standard, offering an enthusiast 3-module, lower-SP chip for power users with discrete graphics cards alongside their 2-module Kaveri chips would go a long way towards accomplishing that goal.


----------



## DaveLT

Quote:


> Originally Posted by *Cyro999*
> 
> What? Haswell can break 200gflops by 3.8ghz or so. Avx2 is gud man. Really old screenshot:
> 
> 
> 
> ^One single run high priority, was doing other stuff with the first two runs so lower gflops
> 
> FLOPS is not the greatest measure of a CPU's performance, sadly - otherwise Haswell would be twice as fast as Ivy. As it stands, it's marginally faster in many tasks, the FP power and avx2 only helps a bit in stuff like x264, which people mostly ignore because it's apparently useless for most other loads (though 20% ipc gain over sandy bridge for encoding is nice)
> 
> As shown though, you don't need an 8-core xeon to hit 200gflops. If you set realtime prio you can actually bench 200gflops with as low as ~1-1.05vcore on my chip i'd imagine. RAM plays a part, of course, but i could probably hit 210-215 at 4ghz with RAM that's fast but nothing special - it's trivial to hit 200 in observable (not theoretical) numbers


Using AVX2... -_- That's an unfair comparison, considering that ZERO apps actually use AVX2.


----------



## Cyro999

Quote:


> Originally Posted by *DaveLT*
> 
> Using AVX2 -_- That's a unfair comparison considering that ZERO apps actually use AVX2


x264 uses AVX2. It's also the most common high CPU load that I apply, and the same can be said for many of my friends. It's one of the best available video encoders, and the encoder of choice for all the high-tier livestreaming programs, too: XSplit, OBS.

My point was: high GFLOPS are not really relevant for performance in very many areas. Haswell is capable of twice as many FLOPS as Ivy Bridge, yet the limited usefulness of that means that, among other architectural changes, the more significant IPC gains are only in the league of 12 percent, not over +100%.

I am no expert on this matter, but I'm somewhat educated on the basics and own a Haswell CPU, as many others do.
Quote:


> We have a winner here! 118GFlops @ 3.7GHz. If this is true Haswell can go into the graveyard.


No offense meant, but you are mistaken if you think this means anything: as I've shown, Haswell is capable of twice as many FLOPS in a theoretical or benchmarkable sense, and it's quite irrelevant for actual performance. Not here to defend anything, just to point out that the numbers suggested were both wrong for Haswell and irrelevant in either case.


----------



## Nintendo Maniac 64

Quote:


> Originally Posted by *Cyro999*
> 
> x264 uses avx2.
> 
> My point was; high gflops are not really relevant for performance in very many areas. Haswell is capable of twice as many FLOPS as Ivy Bridge, yet the limited usefulness of that means that among other architectural changes, the more significant IPC gains are only in the leagues of 12 percent - not over +100%.


Emulators seemed to benefit quite a bit from Haswell.


----------



## ericore

Quote:


> Originally Posted by *DaveLT*
> 
> Wut. Look at HD7790 before pulling statements like that.
> 
> Besides, Richland/Trinity IS VLIW4, GCN is built to be far more efficient per core and that it has less power consumption PER core and then there's GCN1.1 which exists in the 7790, R7 260x (same GPU as 7790), R9 290 and 290X
> Right now they downsized the power consumption of the CPU core and upsized the GPU power consumption
> No matter how many SPs it has, TMUs and ROPs also matter. Which GCN1.1 has more of them and it did improve performance per core, look at HD7790, it's only 85W TDP and it manages to keep up by about 10% within the 7850, talk about impressive
> Lol. A overclocked (max OC) of a E5-2660 (A 8-core no less) pulls about 200Gflops so which is to say that it's indeed faster clock for clock than a sandy.
> A 4GHz i5-2500k pulls 107GFlops BTW and a 4.2GHz 3570k pulls 114GFlops. We have a winner here! 118GFlops @ 3.7GHz. If this is true Haswell can go into the graveyard.


You fail to understand three insurmountable problems. First, there has already been a demo of Battlefield 4, presented by AMD with the flagship (top-of-the-line) Kaveri, at 1080p on medium settings, running at a minimum frame rate of 28 FPS (indoors) without Mantle. FYI, I saw a benchmark with an Iris Pro 5200 getting a 19 FPS minimum on high settings, outdoors, at 1080p. If what you said had any truth to it, Kaveri would have performed significantly better in the demo. It didn't do badly, but it didn't blow minds either. Secondly, AMD cannot release a standard, non-premium APU with a TDP higher than 95 watts. You just said the R7 260 uses 85 watts; yes, because AMD has released so many desktop chips at 10W (sarcasm). I mean, the best AMD was able to do was 65 watts on an APU with modest clocks. Keep in mind AMD is going from 32nm to 28nm; don't go spewing unfounded rumors. Third, so let me get this straight: you say AMD, going from 32nm to 28nm and without a die size increase (using the same socket, after all), more than doubled the SP count of the predecessor; you are nothing but a lost cause. You think AMD would release an APU with graphics better than the start of its graphics card line, aka R7? You really are a dimbo.


----------



## Usario

Quote:


> Originally Posted by *ericore*
> 
> *Impossible, the 832 Stream Processors alone would suck up all the power and leave none for the CPU, this is FM2+ with 95 TDP; irrefutably fake.
> From 384SPs(Richland) to 832 with no die size changes, your insanity is awsome.*


How would it be impossible? The 7790 with 896 cores has an 85W TDP at 1GHz, and that's including 1GB of 6GHz GDDR5 and all the other stuff that's on the PCB. At a more reasonable clock speed, 832 cores in a 95W 2M/4C APU seems fairly doable.


----------



## Deadboy90

Quote:


> Originally Posted by *monstercameron*
> 
> in case you missed it, kaveri running BF4 ULTRA 720p at ~30fps [indoor scene]
> http://www.reddit.com/r/APUsilicon/comments/1qsxjc/more_kaveri_bf4_gameplay_tldr_720p_ultra_playable/


Very exciting! We finally have integrated graphics that can handle any game at 1080p! (On low settings, of course, but 1080p nonetheless.)


----------



## mtcn77

Quote:


> Originally Posted by *Kuivamaa*
> 
> And this is relevant how?


Sorry, I mistyped my post. It should be HD 7750 = 55W TDP.


----------



## Kuivamaa

Quote:


> Originally Posted by *mtcn77*
> 
> Sorry, I mistyped my post. It should be HD 7750 = 55 tdp.


Let's take it from the start. From your posts I assumed you questioned Kaveri's graphics overclocking ability based on the fact that a 7750 (same number of GCN units) couldn't reach high clocks compared to a 7770. As an answer I mentioned a few reasons why Kaveri will not be limited by what a 7750 can or can't do (being physically different, etc.). As for the 7750's TDP: AMD launched the 7750 without a listed board power but included one for the 7770. So I can make an educated guess and say that a 7750 near 7770 clocks would require a somewhat similar (albeit a bit lower) amount of power. Not having the ability to draw extra power from the PSU probably limits its ability to reach said (7770) clocks.


----------



## sumitlian

Quote:


> Originally Posted by *DaveLT*
> 
> Lol. A overclocked (max OC) of a E5-2660 (A 8-core no less) pulls about 200Gflops so which is to say that it's indeed faster clock for clock than a sandy.
> A 4GHz i5-2500k pulls 107GFlops BTW and a 4.2GHz 3570k pulls 114GFlops. We have a winner here! 118GFlops @ 3.7GHz. If this is true Haswell can go into the graveyard.


AVX is 256 bit / cycle instruction set
It does 8 flops / cycle ( 8 x 32 bit = 256 bit )

i5 2500k 4.0 GHz.
AVX performance (Theoretical Max or IPC = 1.0), 4.0 GHz x 8 flops/cycle x 4 cores = 128 GFLOPS
AVX performance (Real World in Linx) = 107 GFLOPS
Efficiency: 83.59 % (Higher is better)
*Real world IPC = 0.8359*

i5 3570k 4.2 GHz.
AVX Performance (Theoretical Max or IPC = 1.0), 4.2 GHz x 8 flops/cycle x 4 cores = 134.4 GFLOPS
AVX Performance (Real world in Linx) = 114 GFLOPS
Efficiency: 84.82 % (Higher is better)
*Real world IPC = 0.8482*

AVX 2 is 512 bit / Cycle Instruction set
So here its 16 flops / cycle (16 x 32 bit = 512 bit )

i7 4770k 4.0 GHz (Cyro999's result)
AVX 2 Performance (Theoretical Max or IPC =1.0), 4.0 GHz x 16 flops/cycle x 4 cores = 256 GFLOPS
AVX 2 Performance (Real world in Linx) = 208 GFLOPS
Efficiency: 81.25 %
*Real world IPC = 0.8125*

Kaveri A10-7850K
AVX Performance (Theoretical Max or IPC = 1.0), 3.7 GHz x 8 flops/cycle x 4 cores = 118.4 GFLOPS
AVX Performance (Real world in Linx) = Not supported because Intel doesn't allow AMD CPU to use AVX through their compilers.
Efficiency: Unknown
*Real world IPC = Unknown*

_Note: This is only AVX IPC performance we were talking about, any 32 bit, 64 bit or 128 bit IPC performance has got nothing to do with AVX IPC result, though in real world scenario IPC can never be equal to 1.0 (because nothing in the world is 100% efficient)._

We will not be able to know the real world AVX performance of Kaveri's CPU portion until Intel decides to remove unfair dispatcher from its compiler. They didn't wanna let AMD use AVX in ICC in professional applications, its okay. I called it unfair because they should have let us (AMD users) just calculate AVX performance by Linx or IBT. At least this way we could know more about AMD CPU's IPC performance and efficiency.

Maybe we will see some HSA benchmarking apps after the Kaveri launch that can measure the APU's AVX speed.


----------



## infamouskid

ok so bottom line is it worth going FM2 for the 1080p gamer??


----------



## nitrubbb

Quote:


> Originally Posted by *infamouskid*
> 
> ok so bottom line is it worth going FM2 for the 1080p gamer??


sure, FM2+ that is


----------



## NaroonGTX

Quote:


> ok so bottom line is it worth going FM2 for the 1080p gamer??


Yeah. FM2+ in particular.


----------



## Kuivamaa

If you mean 1080p gaming on integrated graphics, probably yes: this is the best iGPU out there by far, and if it can push BF4 at reasonable settings and decent framerates it will run anything at the moment. With 2400 MHz memory and a nice overclock on the iGPU it will fly, though you won't want to play FPS games competitively on it. If you mean 1080p gaming with a discrete card, then yeah, a quad-core Kaveri will have plenty of CPU power to drive even high-end cards.


----------



## sepiashimmer

Will it run games at reasonable resolution with normal-high graphics settings @ 30+ FPS for the next 8-10 years? Or for the lifetime of PS4 or Xbox One?


----------



## NaroonGTX

I'll need to find my crystal ball to answer that one...

It's impossible to answer that. But I don't know why anyone would buy one and then not upgrade it within that 8-10 year span when undoubtedly there would be much more powerful APU's released, lol.


----------



## sepiashimmer

Quote:


> Originally Posted by *NaroonGTX*
> 
> I'll need to find my crystal ball to answer that one...
> 
> It's impossible to answer that. But I don't know why anyone would buy one and then not upgrade it within that 8-10 year span when undoubtedly there would be much more powerful APU's released, lol.


I don't know why others would, in my case it would be budget.


----------



## sioisleowq

You will actually be able to get 125 GB/s with 4000 MHz DDR4.


----------



## mtcn77

Quote:


> Originally Posted by *Kuivamaa*
> 
> Let's take it from the start. From your posts I assumed you questioned Kaveri graphics overclocking ability based on the fact that a 7750 (same amount of GCN units) couldn't get high clocks compared to a 7770. As an answer I mentioned a few reasons why kaveri will not be limited by what a 7750 can or can't do (being physically different etc). As for 7750 tdp, AMD launched 7750 without a power output but included one in 7770. So I can make an educated guess and say that a 7750 near 7770 clocks ,would require somewhat similar (albeit a bit lower) power amount. Not having the ability to draw extra from the PSU probaby limits its ability to reach said (7770) clocks.


You seem to disregard my comment on PCI Express lanes' power flow. If that 7750 were underpowered, it wouldn't pass the boot check, or else it wouldn't overclock as much as it already does. GDDR5 also requires a lot of current next to the GPU, especially given the elevated error rate that follows an overclock.
They successfully overclocked both the GPU and the GDDR5; do you reckon they overclocked the GPU a bit and then decided to increase memory frequency? I don't think so. Like every overclocker on this planet, they pushed the GPU clocks as high as the card would hold, then resorted to memory frequency gains once the GPU had consumed all the available power headroom. The card gave everything it had; that's all there is to it.


----------



## Kuivamaa

Quote:


> Originally Posted by *mtcn77*
> 
> You seem to disregard my comment on Pci-express lanes' power flow. If that 7750 were underpowered, it wouldn't pass the boot check, or else, it wouldn't overclock quite as much as it already does. GDDR5 as well require a lot of amperes next to the gpu, especially as a result of an elevated rate of errors following an overclock.
> They have successfully overclocked both the gpu and gddr5; do you recon they overclocked the gpu a bit and then decided to increase memory frequency? I don't think so, they - like every overclocker on this planet does - increased the gpu clocks as high as the card held on and then resorted to memory frequency gains once the gpu consumed all reserve power stored in the capacitors. The card gave everything it had, that's just all there is to it.


The two cards in question are the 7750 and 7770. Same tech generation; the basic differences are 512 vs 640 shaders, core frequency and memory frequency. One card doesn't need to be connected to the PSU, the other does. HC actually said what stopped them from going higher was the weak heatsink their 7750 had (they didn't have time to tweak voltage). What I said was that there are more reasons why a 7750 most likely won't hit 7770 clocks:

- In the early days of the process, good Cape Verde chips probably became 7770s and the poorer-quality ones 7750s, due to yields etc.
- There is a reason a 7770 needs to be connected to the PSU and a 7750 doesn't: power draw. Even if you have a 7750 SKU that can clock as high as a 7770 SKU, odds are you won't be able to feed it through the motherboard alone; its power draw would be similar to a 7770's (a tad lower), and those need to be connected to the PSU.

Both of the aforementioned reasons are irrelevant to the Kaveri iGPU. The only thing that iGPU has in common with the Cape Verde 7750 is the shader count, 512 in both cases. AMD isn't pulling 7750 chips to unite them with SR cores; we are talking about totally different dies. The Kaveri iGPU might or might not be a good overclocker, but suspecting it won't be just because the 7750 can't reach high overclocks is nonsensical.


----------



## rpsgc

Quote:


> Originally Posted by *sepiashimmer*
> 
> Will it run games at reasonable resolution with normal-high graphics settings @ 30+ FPS for the next 8-10 years? Or for the lifetime of PS4 or Xbox One?


Considering that Kaveri will support Mantle, there's a good possibility it will run 1080p at 30fps Medium/Low depending on the game.


----------



## sepiashimmer

Quote:


> Originally Posted by *rpsgc*
> 
> Considering that Kaveri will support Mantle, there's a good possibility it will run 1080p at 30fps Medium/Low depending on the game.


Presently I don't have a 1080p monitor, and I don't think I'll be buying one, so it's only 1280x720 or 1440x900; I want to run games at normal or high. I'm skeptical of the hype about needing new hardware to play the latest games. You can see my rig in my signature: it played games fine at 720p at half the cost of a PS3, the exceptions being Splinter Cell: Conviction and Crysis 2. I bought that rig in 2006-07, and its GPU only died this year.


----------



## mtcn77

Quote:


> Originally Posted by *Kuivamaa*
> 
> - In the early days of the process, good CV chips probably became 7770s and the poorer quality ones 7750 due to yields etc.
> - There is a reason a 7770 needs to be connected to a PSU and a 7750 doesn't need to, power draw. Even If you have a 7750 SKU that can be clocked as high as a 7770 SKU odds are you won't be able to feed it through the motherboard only, power draw would have been similar to a 7770 (tad lower though) and those need to be connected to a PSU...


Again, I agree with the first clause and disagree with the second.


----------



## MrJava

Welcome! Are you on the intel engineering team or marketing team?
Quote:


> Originally Posted by *ericore*
> 
> You fail to understand 3 insurmountable problems First there has already been a demo of battlefield 4 presented by AMD with the flagship (top of the line Kaveri) @ 1080p on medium settings running at a minimum frame rate of 28 FPS (indoors) without mantle. FYI, I saw a benchmark with an Iris Pro 5200 get 19 min FPS on high settings outdoors 1080p. If what you said had any truth, Kaveri would have performed significantly better at the demo. It didn't do bad, but it didn't blow minds either. Secondly, AMD cannot release a standard non premium APU with a tdp higher than 95 watts. You just said, the R7 260 uses 85 watts; yes because AMD has released many desktop chips at 10W (sarcasm). I mean the best amd was able to do was 65 watts on an APU with modest clocks. Keep in mind, amd is going from 32nm to 28nm; don't go spewing unfounded rumors. Third, so let me get this straight you dimbo, you say AMD from 32nm to 28nm and without a die increase (using same socket after all) more than doubled the SP units of the predecessor; you are nothing but a lost cause. You think AMD would release an APU with graphics better than its graphics started line aka R7; you really are a dimbo.


----------



## MrJava

No, its completely bandwidth limited. If AMD stays dual-channel then you'll need 4 more CU's and DDR4-4266 to get to Xbox One levels of performance. Even with quad-channel DDR4-4266, you're not up to the PS4's bandwidth numbers.
Quote:


> Originally Posted by *sepiashimmer*
> 
> Will it run games at reasonable resolution with normal-high graphics settings @ 30+ FPS for the next 8-10 years? Or for the lifetime of PS4 or Xbox One?
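
The bandwidth argument is easy to sanity-check. A sketch, assuming a DDR channel is 64 bits wide and using the commonly cited console figures (~68 GB/s for Xbox One's 256-bit DDR3-2133 main RAM, 176 GB/s for the PS4's 256-bit GDDR5):

```python
# Memory bandwidth = effective transfer rate (MT/s) x bus width in bytes.
def bandwidth_gbps(mts, bus_bits):
    return mts * (bus_bits / 8) / 1000  # GB/s

dual_ddr4 = bandwidth_gbps(4266, 2 * 64)  # dual-channel DDR4-4266   -> ~68.3 GB/s
quad_ddr4 = bandwidth_gbps(4266, 4 * 64)  # quad-channel DDR4-4266   -> ~136.5 GB/s
xbox_one  = bandwidth_gbps(2133, 256)     # 256-bit DDR3-2133        -> ~68.3 GB/s
ps4       = bandwidth_gbps(5500, 256)     # 256-bit GDDR5 @ 5.5 GT/s -> 176.0 GB/s

# Dual-channel DDR4-4266 only matches the Xbox One's main-RAM bandwidth,
# and even quad channel falls well short of the PS4.
print(dual_ddr4, quad_ddr4, xbox_one, ps4)
```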


----------



## Usario

Quote:


> Originally Posted by *sepiashimmer*
> 
> Will it run games at reasonable resolution with normal-high graphics settings @ 30+ FPS for the next 8-10 years? Or for the lifetime of PS4 or Xbox One?


Let me put it this way.

If you can play games with an E4500 and 8600GT in 2013, you *probably* will be able to play games with a 7850K in 2021. But no one really knows for sure.


----------



## DaveLT

Quote:


> Originally Posted by *MrJava*
> 
> Welcome! Are you on the intel engineering team or marketing team?


----------



## NaroonGTX

http://www.youtube.com/watch?v=axyHkKn_e80


----------



## aweir

Quote:


> Originally Posted by *NaroonGTX*
> 
> http://www.youtube.com/watch?v=axyHkKn_e80


"Wahaahaa! Woohaahaahaaa wahaahaaawaa!"


----------



## Papadope

Quote:


> Originally Posted by *NaroonGTX*
> 
> http://www.youtube.com/watch?v=axyHkKn_e80


3:37 Oltra reary nice!


----------



## Milestailsprowe

I want one


----------



## Roaches

That's impressive if that's without Mantle... Though it would've been more interesting to see combat footage, where the processing gets serious and graphically intense.


----------



## tjwolf88

Quote:


> Originally Posted by *NaroonGTX*
> 
> http://www.youtube.com/watch?v=axyHkKn_e80


720p on Medium with 60 FPS. Welp, no reason to buy an xbone unless you really want that Kinect based OS.


----------



## CynicalUnicorn

I'll be happy if dual-graphics gets fixed and works with multiple dGPUs. I'm thinking compact mATX case, two 7750s, and a 7850k for an awesome LAN rig. As crappy as a 7750 is as a dGPU, seeing it integrated into the CPU with zero/minimal performance loss is awesome. All of that is only 100W and one chip.
Quote:


> Originally Posted by *MrJava*
> 
> Welcome! Are you on the intel engineering team or marketing team?


The ad hominem attacks imply he'd be with the AMD marketing team, and he's trying to sound smart, so probably an engineer.


----------



## sepiashimmer

Quote:


> Originally Posted by *Usario*
> 
> Let me put it this way.
> 
> If you can play games with an E4500 and 8600GT in 2013, you *probably* will be able to play games with a 7850K in 2021. But no one really knows for sure.


But they were playable with more than 30 FPS.


----------



## sepiashimmer

Quote:


> Originally Posted by *NaroonGTX*
> 
> http://www.youtube.com/watch?v=axyHkKn_e80


Where was this shot?


----------



## Milestailsprowe

Would be nice if this could CrossFire with any AMD card in the 7000 series or the current R-series.


----------



## MrJava

We'll have to see how dual graphics performs, but I think Kaveri Athlon ($90-100) + R7 260X ($150) would be an optimal budget config. The full Kaveri A10's will definitely be nice toys to see how far you can push it with fast RAM.
Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I'll be happy if dual-graphics gets fixed and works with multiple dGPUs. I'm thinking compact mATX case, two 7750s, and a 7850k for an awesome LAN rig. As crappy as a 7750 is as a dGPU, seeing it integrated into the CPU with zero/minimal performance loss is awesome. All of that is only 100W and one chip.
> The ad hominen attacks imply he'd be with the AMD marketing team, and he's trying to sound smart, so probably an engineer.


----------



## CynicalUnicorn

Yeah, a basically-7750 and a 290X. There's only a difference of 500MHz and 2500 cores. Now, I wouldn't be surprised if it worked with the 7700 and R7 GPUs, but that's it.


----------



## MrJava

All the 832-SP results we saw came from Kaveri tested with an R7 240. This configuration might work well, but then I also see no reason the R7 250 or the DDR3 7750 wouldn't.

AMD should really specify recommended configurations when Kaveri comes out and possibly have some APU/GPU/Game bundles available.
Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Yeah, a basically-7750 and a 290X. There's only a difference of 500MHz and 2500 cores. Now, I wouldn't be suprised if it worked with the 7700 and R7 GPUs, but that's it.


----------



## ThePath

I'm really surprised that this is not in the rumor section


----------



## sumitlian

Quote:


> Originally Posted by *ThePath*
> 
> I'm really surprised that this is not in the rumor section


THIS IS NOT RUMOR, THUS THIS IS NOT IN THE RUMOR SECTION !!!


----------



## TheLAWNOOB

Quote:


> Originally Posted by *sumitlian*
> 
> THIS IS NOT RUMOR, THUS THIS IS NOT IN THE RUMOR SECTION !!!


But the source does not appear to be very authoritative now does it?


----------



## NaroonGTX

Quote:


> "Wahaahaa! Woohaahaahaaa wahaahaaawaa!"


Yeah hahaha, my favorite part is when seemingly the entire room jizzed their pants simultaneously when the guy put it on High and it was still getting good frames.


----------



## MrJava

His numbers are from SiSoft Sandra - this is the performance you get with Kaveri + R7 240 + dual-channel DDR3-1600.
Quote:


> Originally Posted by *TheLAWNOOB*
> 
> But the source does not appear to be very authoritative now does it?


----------



## infamouskid

wow. 60fps on medium.
that's not bad.


----------



## MoGTy

Quote:


> Originally Posted by *TheLAWNOOB*
> 
> But the source does not appear to be very authoritative now does it?


It has been said multiple times, yes, but people don't seem to care and continue the discussion, possibly feeding the source website.
No one can be forbidden from discussing out-of-context info in crappy sensationalist writing from a questionable source, but I'd like to think ignoring it is an option.

Anyway, long story short. That guy just got a whole lot of clicks.

Also, look at his other "ground breaking" stories. It's hilarious.


----------



## Zyro71

Well I look at the bright side when it comes to gaming on these hardware.

Hopefully devs target the Xbox One first, so that it's the most compatible with PCs, then work their way over to the PS4.
But mainly I couldn't care less about consoles, unless I can get my hands on a processor similar to what the consoles have.


----------



## CynicalUnicorn

I think PS4 would wind up more similar to a PC than XB1. Sure, there's a different type of system and graphics memory, but neither have 32MB of ESRAM and the only CPUs that come close to that are Xeons and some derivatives with their massive 20MB or more L3 caches, assuming I understand its function correctly.


----------



## NaroonGTX

http://translate.google.it/translate?sl=it&tl=en&js=n&prev=_t&hl=it&ie=UTF-8&u=http%3A%2F%2Fwww.bitsandchips.it%2F9-hardware%2F3641-specifiche-della-apu-a8-76x0k-con-moltiplicatore-sbloccato


----------



## mtcn77

Italians nailed it
Wow, what remarks!
Quote:


> ...Intel knows that the AMD solution could be devastating if it were to be viable (and the number of partners that is attracting AMD seems to confirm this), so try to do something about entering a GPGPU solution at a good price for Workstation and Server midrange...
> 
> Apparently the three-fight between AMD, Intel and Nvidia is about to catch fire. Who will win? The army led by AMD, or the two lone heroes named Intel and nVidia?


----------



## MrJava

The 32MB ESRAM in the Xbox One is programmable and accessible like regular memory by the developer. Regular CPU caches like L3 or Iris Pro's eDRAM are used implicitly because of temporal and spatial locality of data access.
Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I think PS4 would wind up more similar to a PC than XB1. Sure, there's a different type of system and graphics memory, but neither have 32MB of ESRAM and the only CPUs that come close to that are Xeons and some derivatives with their massive 20MB or more L3 caches, assuming I understand its function correctly.


----------



## CynicalUnicorn

And will be a pain in the arse to utilize effectively, so my point still stands. So programs don't utilize the cache but the CPU's firmware does? Seems like MS would have been better off with larger caches if only for simplicity, or, even better, MOAR (GPU) COARS. Nobody will be able to actually use it well until a few years into the cycle, and even then it won't give a big enough advantage to make it more powerful than a PS4.


----------



## sumitlian

Quote:


> Originally Posted by *ThePath*
> 
> I'm really surprised that this is not in the rumor section


Quote:


> Originally Posted by *TheLAWNOOB*
> 
> But the source does not appear to be very authoritative now does it?


Are you skeptical of the legitimacy of sisoftware.eu, planet3dnow.de, pctuning.tyden.cz, anandtech.com, notebookcheck.net, hwbot.org, pcper and the many other sources which you can easily find on amdfx.blogspot.in?
Quote:


> Originally Posted by *MoGTy*
> 
> It has been said multiple times yes, but people don't seem to care and continue this discussion, possibly feeding the source website.
> No-one can be forbidden to discus out-of-context info in crappy sensationalist writing from a questionable source. But I'd like to think it's an option to ignore it.
> 
> Anyway, long story short. That guy just got a whole lot of clicks.
> 
> Also, look at his other "ground breaking" stories. It's hilarious.


I beg your pardon! I am still learning English, and I'm having a little trouble deciphering what you've just written.
Are you trolling me or hating on me?
If yes, then please advise me how to take it correctly.
Or did I just misunderstand it all?


----------



## MrJava

@sumitlian

I don't know whether he's debating the accuracy of the posted numbers. However, the thread starter's article is completely inaccurate, and his site as a whole is fairly ridiculous as well.


----------



## sumitlian

Quote:


> Originally Posted by *MrJava*
> 
> @sumitlian
> 
> I don't whether he's debating the accuracy of the posted numbers. However, thread starter's article is completely inaccurate and his site as a whole is fairly ridiculous as well.


Thanks MrJava, I believe you.
Then it sounds like it's me who mistakenly exaggerated the accuracy of the thread starter's article and info.
I apologize to ThePath, TheLAWNOOB and MoGTy.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> And will be a pain in the arse to utilize effectively, so my point still stands. So programs don't utilize the cache but the CPU's firmware does? Seems like MS would have been better off with larger caches if only for simplicity, or, even better, MOAR (GPU) COARS. Nobody will be able to actually use it well until a few years into the cycle, and even then it won't give a big enough advantage to make it more powerful than a PS4.


MOAR COARS strategy always pans out, just ask the old AMD


----------



## CynicalUnicorn

Duh. I mean, the 9590 is the fastest CPU on the market. Do you not see all those jiggahertz? 5 jiggahertz times 8 cores equals 40 COAR-JIGGAHERTZ! Ya don't see Intel doing that.

Are there any predictions for this thing's power compared to a normal 7750? I assume it's better since the RAM is shared, even at the expense of some bandwidth, and textures don't need to be copied from system to video memory, but by how much?


----------



## AlphaC

Quote:


> Originally Posted by *MrJava*
> 
> We'll have to see how dual graphics performs, but I think Kaveri Athlon ($90-100) + R7 260X ($150) would be an optimal budget config. The full Kaveri A10's will definitely be nice toys to see how far you can push it with fast RAM.


I'm not so sure about R7-260X, Kaveri has TrueAudio already.

An older-gen HD 7870 (aka R9-270X), R9-270 or HD 7850 for cheap may actually be better. The R9-270 is ~$180 at launch, so a price drop or two would bring it down.
Quote:


> Originally Posted by *mtcn77*
> 
> Italians nailed it
> Wow, what remarks!


Inflammatory commentary; that's not an unbiased reviewer, for sure.


----------



## azanimefan

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Duh. I mean, the 9590 is the fastest CPU on the market. Do you not see all those jiggahertz? 5 jiggahertz times 8 cores equals 40 COAR-JIGGAHERTZ! Ya don't see Intel doing that.
> 
> Are there any predictions for this thing's power compared to a normal 7750? I assume it's better since the RAM is shared, even at the expense of some bandwidth, and textures don't need to be copied from system to video memory, but by how much?


40 COAR-JIGGAHERTZ???!!!!!! well golly! I need MOAR CORZ!

seriously though, it's a combined 95W TDP chip... I think it's probably safe to assume the TDP of the CPU is somewhere around 65W, which would mean the TDP of the GPU is 30W. Seems low to me, but not out of line when you consider all the rendering power they're getting out of GCN 1.1 at lower TDPs. I mean, look at the 7790: that thing is a 90W part, yet if it had been made with GCN 1.0 cores it would have been closer to the 7850's 125W. Or look at the R7 250: they've got a part that's basically identical to the 7730, only with GCN 1.1 cores instead of GCN 1.0, and the result is almost identical to the 7750 in performance while using LESS power than the 7750 with almost 20% fewer cores.

so i guess it's possible that the igpu is a 30W or 35W part.


----------



## LuckyStarV

Quote:


> Originally Posted by *azanimefan*
> 
> 40 COAR-JIGGAHERTZ???!!!!!! well golly! I need MOAR CORZ!
> 
> seriously though, it's a combined 95W tdp chip... i think it's probably safe to assume the TDP of the cpu is somewhere around 65W, which would mean the TDP of the gpu is 30W. Seems low to me, but not out of line. When you consider all the rendering power they're getting out of GCN 1.1 at lower TDPs... i mean look at the 7790. That thing is a 90W part, yet if it was made with GCN 1.0 parts it would have been closer to the 7850's 125W. Or look at the r7-250... they've got a part that's basically identical to the 7730, only instead of GCN1.0 cores it's got GCN 1.1 cores... the result is it's almost identical to the 7750 uses LESS power then the 7750, and has almost 20% less cores.
> 
> so i guess it's possible that the igpu is a 30W or 35W part.


Seems like a full-on 7750 draws about 43W max, so with no VRAM, slightly lower clocks and a few more tweaks, hitting 30-ish watts would not be impossible.


----------



## Usario

Quote:


> Originally Posted by *azanimefan*
> 
> 40 COAR-JIGGAHERTZ???!!!!!! well golly! I need MOAR CORZ!
> 
> seriously though, it's a combined 95W tdp chip... i think it's probably safe to assume the TDP of the cpu is somewhere around 65W, which would mean the TDP of the gpu is 30W. Seems low to me, but not out of line. When you consider all the rendering power they're getting out of GCN 1.1 at lower TDPs... i mean look at the 7790. That thing is a 90W part, yet if it was made with GCN 1.0 parts it would have been closer to the 7850's 125W. Or look at the r7-250... they've got a part that's basically identical to the 7730, only instead of GCN1.0 cores it's got GCN 1.1 cores... the result is it's almost identical to the 7750 uses LESS power then the 7750, and has almost 20% less cores.
> 
> so i guess it's possible that the igpu is a 30W or 35W part.


In reality I highly doubt the CPU portion of Kaveri will actually generate considerably more than 45W of heat. You're right about the GPU, will probably come in at around 30W. Generally chips don't ever actually reach their TDP, though it depends on ASIC quality and the particular design (look at Bulldozer).


----------



## DaveLT

Quote:


> Originally Posted by *Usario*
> 
> In reality I highly doubt the CPU portion of Kaveri will actually generate considerably more than 45W of heat. You're right about the GPU, will probably come in at around 30W. Generally chips don't ever actually reach their TDP, though it depends on ASIC quality and the particular design (look at Bulldozer).


With Bulldozer it definitely always did, same as with Intel chips. Here's hoping SR is a real revamp.


----------



## sumitlian

Quote:


> Originally Posted by *LuckyStarV*
> 
> Seems like a full on 7750 draws about 43W max which with no vram, slightly lower clocks and more tweaks, would not be impossible to hit 30ish watts
> 
> 
> Spoiler: Warning: Spoiler!


Quote:


> Originally Posted by *azanimefan*
> 
> 40 COAR-JIGGAHERTZ???!!!!!! well golly! I need MOAR CORZ!
> 
> seriously though, it's a combined 95W tdp chip... i think it's probably safe to assume the TDP of the cpu is somewhere around 65W, which would mean the TDP of the gpu is 30W. Seems low to me, but not out of line. When you consider all the rendering power they're getting out of GCN 1.1 at lower TDPs... i mean look at the 7790. That thing is a 90W part, yet if it was made with GCN 1.0 parts it would have been closer to the 7850's 125W. Or look at the r7-250... they've got a part that's basically identical to the 7730, only instead of GCN1.0 cores it's got GCN 1.1 cores... the result is it's almost identical to the 7750 uses LESS power then the 7750, and has almost 20% less cores.
> 
> so i guess it's possible that the igpu is a 30W or 35W part.


I agree with both of you, the iGPU being around 30W max!

I set my 7790's core and memory frequencies to their minimums with Afterburner:
Core at 535 MHz = 958.7 GFLOPS (30% more than Kaveri's iGPU)
Memory at 800 MHz = 50 GB/s (33.3% more memory bandwidth than Kaveri with 2400 MHz dual-channel DDR3)

Average power consumption in real-world gaming is around 25 watts max.

Average power consumption in Kombustor is around 37 watts max.

_GPU-Z was launched after the game and benchmark had started, so the average power figure is accurate: the calculation began with core and memory already at max clocks._

If AMD used a better HDL implementation in Kaveri, its iGPU should draw even less power. My 7790's ASIC quality is 71.7%.
I am guessing ~20 watts max in gaming and ~30 watts max in Kombustor, which I believe should easily make it the most power-efficient desktop-class GPU/iGPU for 1080p gaming.
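
Those percentages can be reproduced from the raw specs. A sketch, assuming the 7790's 896 shaders, GDDR5's quad data rate on a 128-bit bus, and a 720 MHz Kaveri iGPU clock (the clock that makes the quoted 30% work out):

```python
# Single-precision GPU throughput: shaders x 2 ops (multiply-add) x clock in GHz.
def sp_gflops(shaders, ghz):
    return shaders * 2 * ghz

# Memory bandwidth: effective transfer rate (MT/s) x bus width in bytes.
def bandwidth_gbps(mts, bus_bits):
    return mts * (bus_bits / 8) / 1000

hd7790_down = sp_gflops(896, 0.535)   # ~958.7 GFLOPS at the downclocked 535 MHz
kaveri_igpu = sp_gflops(512, 0.720)   # ~737.3 GFLOPS (assumed 512 SPs @ 720 MHz)

gddr5 = bandwidth_gbps(800 * 4, 128)  # 800 MHz GDDR5 -> 3200 MT/s, 128-bit -> 51.2 GB/s
ddr3  = bandwidth_gbps(2400, 2 * 64)  # dual-channel DDR3-2400 -> 38.4 GB/s

print(f"compute advantage:   {hd7790_down / kaveri_igpu - 1:.0%}")  # 30%
print(f"bandwidth advantage: {gddr5 / ddr3 - 1:.1%}")               # 33.3%
```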


----------



## sumitlian

Quote:


> Originally Posted by *mtcn77*
> 
> Could you point what the outliers are between those architectures?


AlphaC and MrJava have already covered almost all the differences between GCN 1.0 and 1.1.

There is still one major difference between the 7750 and the 7790: primitive rate.
The 7750 and 7770 (Cape Verde) can do one primitive per clock, while all the 78xx, 79xx and the 7790 do two primitives per clock.
Kaveri's iGPU should also have the dual geometry and tessellation engines, because the 7790 already supports them (the 7750 and 7770 don't).


----------



## nitrubbb

is there any official word on which GPUs Kaveri will run dual graphics with?


----------



## NaroonGTX

Not yet.


----------



## azanimefan

Quote:


> Originally Posted by *nitrubbb*
> 
> is there any official word what gpu will kaveri dual-graphics with?


no official word but some pretty good guesses.

looks like the r7-250, 7750, and 7770 should dual graphics with kaveri pretty well. the main question is if the r7-260x/7790 will.


----------



## CynicalUnicorn

Both are GCN 1.1, so I would assume so, though the 260X has more RAM, faster clocks, and TrueAudio and other goodies disabled in the 7790, which makes it a little bit more appealing. I mean, the 512 SP 7750 worked, albeit not too well, with the 384 SP Richland iGP, and I think that was partially due to performance differences but also thanks to driver issues. What are the 240s, 250s, and 260s (not X) rebrands of?


----------



## MrJava

The benchmarks that are the subject of this thread show Kaveri in a dual-graphics configuration with the 320-SP R7 240. I think it's reasonable that any of AMD's DDR3-based GCN cards would work well.
Quote:


> Originally Posted by *nitrubbb*
> 
> is there any official word what gpu will kaveri dual-graphics with?


----------



## MrJava

Ayrton Senna lost the world championship in 1989. Couldn't you have called yourself senna88 or senna90?
Quote:


> Originally Posted by *senna89*
> 
> 500% ? ahahahah SO FAKE BY AMD


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Both are GCN 1.1, so I would assume so, though the 260X has more RAM, faster clocks, and TrueAudio and other goodies disabled in the 7790, which makes it a little bit more appealing. I mean, the 512 SP 7750 worked, albeit not too well, with the 384 SP Richland iGP, and I think that was partially due to performance differences but also thanks to driver issues. What are the 240s, 250s, and 260s (not X) rebrands of?


That came down to the CPU in Richland. This time around we've got a kicker, so I'd say any comparable dGPU would CrossFire fine.

Don't forget PCIe 3.0, guys. There's a reason AMD put it there; maybe AMD switched to XDMA for dual graphics now.

Richland also had a PCIe 3.0 bus, but it was disabled for some strange reason.


----------



## azanimefan

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> What are the 240s, 250s, and 260s (not X) rebrands of?


the r7-250 isn't a rebrand, it's a sort-of-new card. Remember the 7730? Well, they took that card, changed the cores to GCN 1.1, and clocked it up to 1 GHz. Turns out it's pretty much identical to a 7750 in performance, only it uses even less power.

no idea about the r7-240 or the basic r7-260... though i expect the 260 will be a 7770 rebrand.


----------



## CynicalUnicorn

Why did they call it a 7730 then if it's better? I thought it was a revamped 6670 for DGM with Richland/Trinity. Okay... Still, that's practically my 7850 if you use DGM with one, though probably a bit worse since the iGPU has to deal with DDR3 and data needs to be copied to the GDDR5 in the dGPU. It's the worst of both worlds!

(I wish they called the 7870 XT a 7930 though. That would make a lot more sense.)


----------



## Nintendo Maniac 64

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Why did they call it a 7730 then if it's better?


He's saying that the 250 is an improved 7730. It's the 250 that is about equal to the 7750 but with lower power consumption.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> He's saying that the 250 is an improved 7730. It's the 250 that is about equal to the 7750 but with lower power consumption.



Yes he is, and I am bad at reading. I was, uh, multitasking? Yeah, that's a good defense... It's got 384 shaders, so it's a bit underpowered for DGM with Kaveri. I'll take a 7770, please.


----------



## sumitlian

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> What are the 240s, 250s, and 260s (not X) rebrands of?


Quote:


> Originally Posted by *azanimefan*
> 
> the r7-250 isn't a re-brand. it's a sorta new card. Remember the 7730? Well they took that card, changed the cores to GCN 1.1 cores, and clocked it up to 1ghz. Turns out its pretty much identical to a 7750 in performance only it uses even less power.
> 
> no idea about the r7-240 or the basic r7-260... though i expect the 260 will be a 7770 rebrand.


R7-240 and R7-250 seem to be rebrand of HD 8570 (OEM) and HD 8670 (OEM) respectively.
http://en.wikipedia.org/wiki/Radeon_HD_8000_Series
http://en.wikipedia.org/wiki/AMD_Radeon_Rx_200_Series

Same codename (Oland; Oland Pro for the 240 and Oland XT for the 250),
same die size (90 mm²),
same transistor count (1,040 million),
same SP:DP ratio (1:16).
The R7-240 just has 64 fewer shaders and 8 fewer TMUs than the HD 8570, but the 240 seems to be significantly more power efficient than the 8570.

HD 7730 is GCN 1.0. It is based on Cape Verde (the specific code name is "Cape Verde LE"), a cut-down version of Cape Verde Pro, i.e. the HD 7750.
123 mm² and 1,500 million transistors for the HD 7730, HD 7750 and HD 7770.

Radeon R7 260 is Cape Verde XT, seems to be a rebrand of the 7770.


----------



## CynicalUnicorn

I think they're both closer to a 250, which is exactly the same other than clockspeeds and PCIe interface.


----------



## skeklsaad

We'll have to see how dual graphics performs, but I think Kaveri Athlon ($90-100) + R7 260X ($150) would be an optimal budget config.


----------



## sumitlian

Hey!
Does anyone have any idea of the overclockability of the CPU portion of the A10-7850K?
I mean, should it be able to do 4.9-5.0 GHz?

lol, lots of OCN members seem to be coming to this thread because of the new title


----------



## CynicalUnicorn

If they use the same voodoo magic as they did in Richland, then yeah, that's easily achievable. I didn't even notice the title change; I just check my subscriptions.


----------



## NaroonGTX

No way to gauge how the CPU or GPU portions will overclock as of now. There has been a small clock regression compared to Richland (the 7850K is clocked only 100MHz lower than the 5800K, so not that huge), but since Kaveri is apparently on bulk rather than FD-SOI, supposedly it won't OC hugely. We'll have to wait and see for some reviews/testing.


----------



## Nintendo Maniac 64

I don't think BULK alone means low overclocking, supposedly Intel uses BULK as well.


----------



## sumitlian

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> If they use the same voodoo magic as they did in Richland, then yeah, that's easily achievable. I didn't even notice the title change; I just check my subscriptions.


If the voodoo magic comes true then I could replace my 8350 with A10-7850K at 4.8-5.0 GHz


It should consume at least 100w less power than 8350 at 4.6 GHz


As my system has been running an average of 16.5 hrs per day for 3 years now, I might see a significant reduction in power usage per month.
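The back-of-the-envelope numbers there are easy to check. A minimal sketch (the ~100 W delta and 16.5 h/day are the poster's figures; the $0.12/kWh electricity rate is my own assumed example):

```python
# Rough monthly savings from a CPU drawing ~100 W less under load.
# Assumptions (mine, not from the thread): the delta holds for the full
# 16.5 h/day uptime, and electricity costs $0.12/kWh.
watts_saved = 100
hours_per_day = 16.5
days_per_month = 30
price_per_kwh = 0.12  # assumed example rate

kwh_saved_per_month = watts_saved * hours_per_day * days_per_month / 1000
dollars_saved = kwh_saved_per_month * price_per_kwh

print(f"{kwh_saved_per_month:.1f} kWh/month, ~${dollars_saved:.2f}/month saved")
```

Roughly 50 kWh/month, so the savings are real but modest at typical residential rates.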


----------



## MrJava

It might be tougher to overclock, but I don't think the process itself would hold back overclocking potential as much as the design of the CPU. They could've sacrificed some frequency for IPC, but I think AMD's stock clocks for the 7850K are a little on the conservative side (target CPU Base Frequency was 4GHz AFAIK).
Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> I don't think BULK alone means low overclocking, supposedly Intel uses BULK as well.


----------



## iamwardicus

Here's a thought for folks. What do you think the chances are of a multi-processor motherboard coming out with APUs on it in the future? I for one would be all over a dual-socket Kaveri. It would also make a bit of sense, since their new R9 GPUs don't need a physical CrossfireX connector any more, so the integrated GPU power could also theoretically be increased just off the processors themselves. Disable the integrated Crossfire should a 3rd GPU come along that isn't compatible with it, and you'd still have the HSA benefits there as well.


----------



## DaveLT

Quote:


> Originally Posted by *iamwardicus*
> 
> Here's a thought for folks. What do you think the chances are of a multi-processor motherboard coming out with APUs on it in the future? I for one would be all over a dual socket Kaveri. It would also make a bit of sense since their new R9 GPUs dont need a physical CrossfireX connector any more so the integratred GPU power could also theoretically be increased just off the processors themselves. Disable the integrated Crossfire should a 3rd GPU come along that isn't compatible with it and you'd still have the HSA benefits there as well.


I want so bad.


----------



## NaroonGTX

Don't think there will be any dual-socket boards in the future. They don't even have the APU's on the server platforms with those. Not only that, but they've done multi-socket boards before (I think it was called Quad-FX or something) and they were pretty ridiculously expensive IIRC. Doesn't seem likely at all for the foreseeable future, not even for EX.


----------



## iamwardicus

I personally agree and think it's dependent on Kaveri going into the server market. If the APU does, then eventually someone will develop a consumer niche product. Personally I feel that if AMD won't release an FX (AM3+) version of Steamroller, then a dual-socket consumer variant of Kaveri (if it were even possible; I doubt they engineered in the capability to do so) would appease the enthusiast market if the costs were reasonable.


----------



## NaroonGTX

I've no idea what AMD's plans are for the enthusiasts. Admittedly it's a very niche consumer base, but it's still there. Worst-case scenario, they just hope people stick with Vishera and take advantage of Mantle.

They do have several APU's in the server market, though none of them are multi-socket right now. There were two low-power codename Kyoto APU's (Jaguar x86 + GCN GPU) released earlier this year, which apparently will receive successors not long from now. Then there's the upcoming Berlin APU, which is basically identical to Kaveri, sans TrueAudio. It doesn't seem to have any die logic to support a multi-socket setup.


----------



## yawa

I think L3 cache being ineffective on Steamroller cores is the problem here, especially in regards to a stand alone six core or eight core GPUless variant. AMD would have no reason not to otherwise, unless what I've stated before is true and an FX steamroller simply makes no sense if they want people off of Am3+.

What I don't get is why we aren't getting a second Kaveri with a lower SP count and another module (or 2) tacked on for the enthusiast, discrete high-end GPU crowd, when that would go a very long way towards getting people to jump to FM2+. That's where I don't understand the strategy. Release one processor that maxes out SPs and another that maxes out IPC at the cost of GCN compute, and you'd have people ditching AM3+ left and right.


----------



## Tojara

Quote:


> Originally Posted by *yawa*
> 
> I think L3 cache being ineffective on Steamroller cores is the problem here, especially in regards to a stand alone six core or eight core GPUless variant. AMD would have no reason not to otherwise, unless what I've stated before is true and an FX steamroller simply makes no sense if they want people off of Am3+.
> 
> What I don't get is why we aren't getting a second Kaveri with a lower SP count and another module ( or 2) tacked on for the enthusiast, discreet high end GPU crowd when that would go a very long way towards getting people to jump to FM2+ . That's where I don't understand the strategy. Release one processor that maxes out SP's and another that maxes out IPC at the cost of GCN compute, and you'd have people ditching AM3+ left and right.


L3 cache on AMD's CPUs has never been very effective. If they don't bother fixing it, they will probably not make a processor without integrated graphics.

Because they can't. They are manufacturing maybe one or two different kinds of Kaveri-based chips and then binning them accordingly for mobile devices and different performance brackets, much like Intel does it. Another chip would take a lot of additional work and resources, most likely more than it would pay back, which is why they're not making one. And because the GPU on such a chip would be so weak it would be pointless to have, they might just as well remove it entirely.


----------



## MrJava

Berlin (Kaveri server variant) is single socket. While I think it is possible to have ccNUMA with another Kaveri (since it's possible with a discrete GPU), it would probably be way too slow without HT (HyperTransport).

I believe to have more than 2 modules would require a redesign of the Unified Northbridge. Also, multithreaded scaling might be negatively affected as you add more modules (too much Snoop Traffic). That's where an L3 cache (even the current mostly-exclusive one) could help. These are just educated guesses on my part though.


----------



## Darklyric

Quote:


> Originally Posted by *yawa*
> 
> I think L3 cache being ineffective on Steamroller cores is the problem here, especially in regards to a stand alone six core or eight core GPUless variant. AMD would have no reason not to otherwise, unless what I've stated before is true and an FX steamroller simply makes no sense if they want people off of Am3+.
> 
> What I don't get is why we aren't getting a second Kaveri with a lower SP count and another module ( or 2) tacked on for the enthusiast, discreet high end GPU crowd when that would go a very long way towards getting people to jump to FM2+ . That's where I don't understand the strategy. Release one processor that maxes out SP's and another that maxes out IPC at the cost of GCN compute, and you'd have people ditching AM3+ left and right.


it would be more like this... http://www.youtube.com/watch?v=yUc3wd4It8g


----------



## Phaethon666

Quote:


> Originally Posted by *sumitlian*
> 
> If the voodoo magic comes true then I could replace my 8350 with A10-7850K at 4.8-5.0 GHz
> 
> 
> 
> 
> 
> 
> 
> 
> It should consume at least 100w less power than 8350 at 4.6 GHz
> 
> 
> 
> 
> 
> 
> 
> 
> As my system has been running at an average of 16.5 Hrs per day since 3 years, I might see significant reduction in power usage per month.


I am having the same exact dreams as you are lol!


----------



## yawa

Er, why though? I get that FP performance may be tied to having a lot of SPs on die, but why don't you think a lower-SP, higher-module processor is possible? The cores don't fit, maybe? It just feels like FM2+ is kind of a waste with DDR4 and Excavator right around the corner, and nothing more than a stopgap socket if only one flagship-level Kaveri winds up on it. Especially since Mantle is supposedly going to remove the CPU bottleneck on older Piledrivers, most users don't have a reason to upgrade.
Quote:


> Originally Posted by *Phaethon666*
> 
> I am having the same exact dreams as you are lol!


I definitely feel you on this one. Hey, if 4-core SR is a miracle chip that outperforms nearly everything high-midrange, I'll bite myself. I'm just struggling with the thought that FM2+ has fairly high-end motherboards but doesn't seem to take into account the idea of running a higher core count chip.


----------



## NaroonGTX

It's physically possible, but it doesn't make sense financially. One of Kaveri's strong points is its scalability. It can be anywhere from the desktop to low-power devices while still offering great performance. Why would someone buy a mobile device that has six cores, yet a crappy GPU? For them to make a processor like that would require a lot of planning and R&D budget, things AMD simply can't afford to waste. There were originally plans for a 3-module Kaveri, but those plans died with whatever the original version of Steamroller and Kaveri were.

There's no indication that FM2+ is just a stopgap, either. There's nothing stopping AMD from having another dual-IMC for Carrizo, where the chip could support both FM2+ and FM3. They did the same with the original Phenom chips where they supported both DDR2 and DDR3 from AM2+ and AM3. This means that people who don't care about DDR4 could simply perform a BIOS update and drop Carrizo in a year after Kaveri launches without getting a new MOBO.

Regardless, DIY builders and enthusiasts are still a small niche market -- Kaveri is looking to be very impressive in the mobile sector. I honestly don't see the need for more than 4 cores for most purposes for most users. People who do professional rendering or run lots of VM's can still get by with Vishera, or high-end Intel offerings.


----------



## sepiashimmer

Quote:


> Originally Posted by *NaroonGTX*
> 
> Regardless, DIY builders and enthusiasts are still a small niche market -- Kaveri is looking to be very impressive in the mobile sector. I honestly don't see the need for more than 4 cores for most purposes for most users. People who do professional rendering or run lots of VM's can still get by with Vishera, or high-end Intel offerings.


Since the new consoles have 8 cores, won't we need more cores as programming develops for consoles?


----------



## ejb222

Quote:


> Originally Posted by *sepiashimmer*
> 
> Since new consoles have 8 cores, wouldn't we be needing more cores as programming develops for consoles.


I personally doubt that many games will be written for a full 8 cores... at least not right off the bat, anyway. I think maybe 4 cores will be the norm, as the other four will be used for other tasks like the OS etc. But I'm no expert, so who knows.


----------



## NaroonGTX

One thing you have to consider is the fact that both consoles won't be using all eight cores for gaming. The X1 has either 2 or 3 cores reserved for the bloated OS, and the PS4 has either 1 or 2 reserved for various things, from what I've heard. However, games will indeed start being properly multi-threaded. What type of gains you'll get from more threads will depend on what type of game you're playing. It has been shown time and time again, for example, that BF3 and BF4 are perfectly playable at over 60fps even on quad-cores in 64-player servers. I doubt four cores will become a bottleneck for gaming any time soon, even with the advent of better-threaded engines popping up.


----------



## geoxile

Anyone ever consider that AMD just doesn't make a lot of money on their FX chips? Rory is clearly cutting AMD's low-margin goods. It could be a smart move that will help them more in the future than catering to enthusiasts now. Plus, the entire point of Kaveri is HSA, so cutting out the iGPU to put in more modules wouldn't make sense


----------



## Kuivamaa

BF4 absolutely saturates quads - even the strongest of them all, Haswell i5s, can get 90%+ usage. That game (together with Crysis 3) needs more threads to run optimally, and mesh set on high+ hits even i7s hard, so quads are already bottlenecking - definitely playable, but you see benefits going for more cores/threads; Intel extremes are having a blast there. And with Frostbite/CryEngine taking off like that, it is a matter of time before this pattern becomes dominant - just imagine if UE4 is just as well threaded. Quads will be playable for a long time still (and Mantle will help with that), but the multicore gaming era has already begun.

Other than that, no SR Opteron = no SR FX. Simple as that. If AMD can't make a 3M unit with a decent iGPU at a reasonable TDP for laptops, it simply won't exist on desktop either. Intel is exactly like that too: their laptop and mainstream desktop chips are in one group, their server and extreme desktop chips in another.


----------



## sugarhell

I heard that BF4 will have HSA optimizations for Kaveri ~ I am searching for the video

Oh, found it: http://www.youtube.com/watch?v=l3RY94zGda4


----------



## rpsgc

Quote:


> Originally Posted by *sugarhell*
> 
> I heard that bf4 will have HSA optimizations for kaveri~i am searching for the video


You mean besides Mantle?


----------



## NaroonGTX

Quote:


> Anyone ever consider that AMD just doesn't make a lot of money on their FX chips?


Yep. I remember seeing a statement from AMD where they basically said 70% of their shipments were APU's, FX making up the rest in the consumer space. I think it was when they mentioned FM1 being phased out in mid-2013 with AM3 (not AM3+) being phased out in late 2013.
Quote:


> BF4 absolutely saturates quads-even the strongest of them all, haswell i5s can get 90%+ usage. That game (together with crysis 3) need more threads to run optimally, and mesh set on high+ hits hard even i7s, so quads are already bottlenecking - definitely playable, but you see benefits going for more cores/threads, intel extremes are having a blast there. And with Frostbie/cryengine taking off like that it is a matter of time before this pattern becomes dominant-just imagine if UE4 is just as well threaded. Quads will be playable for a long time still (and mantle will help on that) but the multicore gaming era has already begun.


That's just BF4, though - a game already known for having shoddy optimization even after releasing "fully", lol. This CPU stress only really happens when the server has 40 or more players in it. A vast majority of games won't come close to that type of stress. Planetside 2 was running a crapload of things all on one thread, and it seems to perform much better after they released the title update that added true multi-threaded support.

It'll depend on the game, what the engine's demands are, and how well or terribly it's coded, but I doubt games will suddenly start requiring 12 cores for optimal performance just because they're multi-threaded.


----------



## sepiashimmer

Quote:


> Originally Posted by *sugarhell*
> 
> I heard that bf4 will have HSA optimizations for kaveri~i am searching for the video
> 
> Oh found it: http://www.youtube.com/watch?v=l3RY94zGda4


I think all games made with Frostbite will support it and anything else AMD comes up with. AMD and DICE are like crossed fingers.


----------



## PostalTwinkie

Quote:


> Originally Posted by *NaroonGTX*
> 
> Yep. I remember seeing a statement from AMD where they basically said 70% of their shipments were APU's, FX making up the rest in the consumer space. I think it was when they mentioned FM1 being phased out in mid-2013 with AM3 (not AM3+) being phased out in late 2013.
> That's just BF4 though. A game already-known for having shoddy optimization even after releasing "fully", lol. This CPU stress only really happens when the server has 40 or more players in it. A vast majority of games won't come close to that type of stress. Planetside 2 was running a crapload of things all on one thread, and it seems to perform much better after they released the title update that added in true multi-threaded support.
> 
> It'll depend on the game and what the engine's demands are and how well/terribly-coded it is, but I doubt suddenly games will start requiring 12 cores to get optimal performance on just because they're multi-threaded.


In addition to this, Mantle so far appears to resolve pretty much any bottleneck at the CPU. They were performing demonstrations on Mantle with an 8350 clocked DOWN to 2.0GHz, and it didn't take a performance hit when doing it.

I wouldn't be surprised if a mid-range APU and a mid-range GPU from AMD work perfectly fine with Mantle-based games. Imagine being able to game perfectly fine on ~$250 worth of CPU and GPU in games like BF4.


----------



## Kuivamaa

It isn't only BF4, it is Crysis 3 as well, and those games are forerunners of their engines. BF4 may still be buggy, but it is state of the art, bleeding edge, make no mistake. That's the shape of things to come: more cores will return more frames. Mass Effect, Dragon Age: Inquisition, the new Mirror's Edge, the new Star Wars, Star Citizen, Ryse if it ever finds its way to the PC... It is happening, here, now.


----------



## CynicalUnicorn

My only real issue with this is that, while AAA games will be fine, indie games won't be able to take as much advantage. Prison Architect is in alpha, and they release a new version and dev video every month. Alpha build 13 or 14 was when they added a second thread solely for pathfinding; it took one guy basically the entire month to redo the code to support it. Creeper World, as much fun as it is, gets really laggy on big maps, and it's not even framerate drops: the game itself slows down so that one second of game time takes five in real time. How many threads? One. It's about the same on a 2.1GHz Phenom II/4250 Mobility as it is on a 4.6GHz Piledriver/7850, due to the number of calculations run each second, such as running A* hundreds of times and calculating thermal flow, which is what the enemy is based on.
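For anyone curious, the retrofit the Prison Architect dev did generally follows one simple pattern: the main loop posts search requests onto a queue and a dedicated worker thread answers them, so the game keeps ticking while paths are computed. A minimal Python sketch (hypothetical grid and names, not any of these games' actual code; BFS stands in for A* to keep it short):

```python
import threading
import queue
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected grid; 0 = free, 1 = wall."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:       # walk back to the start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None  # goal unreachable

requests, results = queue.Queue(), queue.Queue()

def pathfinder_worker():
    # Dedicated pathfinding thread: the main loop never blocks on a search.
    while True:
        job = requests.get()
        if job is None:                  # shutdown sentinel
            break
        grid, start, goal = job
        results.put(bfs_path(grid, start, goal))

worker = threading.Thread(target=pathfinder_worker, daemon=True)
worker.start()

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
requests.put((grid, (0, 0), (2, 0)))
path = results.get()                     # a game loop would poll, not block
requests.put(None)
worker.join()
print(path)
```

In a real game loop you'd call `results.get_nowait()` once per tick instead of blocking. Note that CPython threads won't parallelize the CPU work itself because of the GIL, but the queue-and-worker structure is the same in a language that does.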


----------



## NaroonGTX

Funny how most of the games you just listed will have Mantle support, completely eliminating any CPU bottleneck there could be.

I'm not arguing against getting more frames with extra cores in non-Mantle scenarios; I'm just saying people with quad-cores will be just fine pulling 60fps as long as they have a decent GPU and the game itself isn't coded like garbage. I was also saying that BF4 being CPU-intensive wouldn't mean other games would have such strenuous loads even if they're on the same engine. BF4 doesn't really look too much better than BF3 does, and it certainly didn't revolutionize the genre or anything. It's more like BF3.1, if that, lol. Not sure why it has so much heavier a load than BF3 does.


----------



## Kuivamaa

Quote:


> Originally Posted by *NaroonGTX*
> 
> Funny how most of the games you just listed will have Mantle support, completely eliminating any CPU bottleneck there could be.
> 
> I'm not arguing against getting more frames with extra cores in non-Mantle scenarios, I'm just saying people with quad-cores will be just fine pulling 60fps as long as they have a decent GPU and the game itself isn't coded like garbage. I was also saying how BF4 being CPU-intensive wouldn't mean other games would have such strenuous loads even if they're on the same engine. BF4 doesn't really look too much better than BF3 does and it certainly didn't revolutionize the genre or anything. It's more like BF3.1 if that, lol. Not sure why it has such a heavier load than BF3 does.


Oh, it is a revolution in many aspects; weather effects alone are on a totally different level. I had my doubts, and then Paracel Storm happened. BF4 is so much better, so much more sophisticated than 3.

As for the rest, Mantle will help a lot, I bet, but not everyone has a Radeon, you know


----------



## Stoffie

Games are already multithreaded, not just BF4 - take a look at this:

http://www.overclock.net/t/1444040/thread-usage-in-games


----------



## Nintendo Maniac 64

Quote:


> Originally Posted by *PostalTwinkie*
> 
> They were performing demonstrations with an 8350 clocked DOWN to 2.0Ghz on Mantle, and didn't take a performance hit when doing it.




This is the first time I've heard of this, and I followed APU13 very closely.


----------



## IvantheDugtrio

Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> This is the first time I've heard of this, and I followed APU13 very closely.


----------



## Nintendo Maniac 64

What the crap? Why doesn't that have its own news thread on here?!


----------



## NaroonGTX

There was a slide from one of the devs where they mentioned they had the FX-8350 downclocked to 2GHz and the game was totally GPU-bound. I thought I saved that image but apparently I didn't -_-.


----------



## DaaQ

It was a tweet from caveman jim during the Oxide stuff


----------



## MrJava

Too bad for them, lol.
Quote:


> Originally Posted by *Kuivamaa*
> 
> As for the rest,Mantle will help alot I bet, but not everyone is having a radeon you know


AMD should spread Mantle tech to its partners in the HSA Foundation - Imagination, ARM and Qualcomm - for mobile GPUs. It would be a wise move to submit it to Khronos ASAP as well. The longer Nvidia waits to counter, the sooner it becomes a de facto standard.


----------



## DaveLT

Quote:


> Originally Posted by *Tojara*
> 
> L3 cache on AMD's CPUs has never been very effective. If the don't bother fixing it they will probably not make a processor without integrated graphics.
> 
> Because they can't. They are manufacturing maybe one or two different kinds of Kaveri-based chips and then binning them accordingly for mobile devices and different performance brackets, much like Intel does it. Another chip would take a lot of additional work and resources, most likely more than it would pay back since they're not making one. Because the GPU would be so weak it would be pointless to have that as well, they might just as well remove it entirely.


Ask Intel - their iGPUs are still a big joke for such a big company with so much knowledge of making GPUs
Quote:


> Originally Posted by *yawa*
> 
> Er why though? I get that FP performance may be tied to having a lot of SP's on die, but why don't you think a lower SP higher module processor is possible? The cores don't fit maybe? It just feels like FM2+ is kind of a waste with DDR4 and Excavator right around the corner and nothing more than a stopgap socket if only one flagship level Kaveri winds up on it. Especially since Mantle is supposedly going to remove the CPU bottleneck on older Pile drivers, most users don't have a reason to upgrade.
> I definitely feel you on this one. Hey if 4 core SR is a miracle chip that out performs nearly everything high midrange, I'll bite myself. I'm just struggling with thought that FM2+ has fairly high end Motherboards, but doesn't seem to take into account the idea of running a higher core count chip.


Lol, stopgap. It's been on the roadmaps for like forever now; Richland appears to be the real stopgap


----------



## Malcom28

I WANT THE A10 7850K IN CROSSFIRE WITH A 260X !!!


----------



## CynicalUnicorn

January 14th. Mark your calendar and be patient.


----------



## ghdftrdsf

They don't even have the APU's on the server platforms with those.


----------



## DaveLT

Quote:


> Originally Posted by *ghdftrdsf*
> 
> They don't even have the APU's on the server platforms with those.


Went on an acid trip?


----------



## NaroonGTX

Quote:


> They don't even have the APU's on the server platforms with those.


u wot m8

What do you mean?


----------



## Shadychevyowner

Quote:


> Originally Posted by *Malcom28*
> 
> I WANT THE A10 7850K CROSSIFRE WITH 260X !!!


Me too. I hope it will crossfire with something faster, but I also hope the 260X is the fastest.

Sounds dumb, but this is the first system I have ever built with the best of the current gen. My wife will be mad if I have to buy a 290X.


----------



## DaveLT

Quote:


> Originally Posted by *Shadychevyowner*
> 
> Me too. I hope it will crossfire with something faster but also hope the 260X is the fastest.
> 
> Sounds dumb but this is the first system I have ever built with best of current gen. My wife will be mad if i have to buy a 290X.


You can be sure the 260X is the highest recommended GPU for hybrid CF, but of course anything else is possible.

People have hybrid CF'd fine on the 6800K with a 7790/260X... I would imagine a 280X would be fine if Kaveri has really improved.
Ah, don't forget Mantle.

I also completely forgot about PCIe 3.0. That should solve past hybrid CF problems


----------



## NaroonGTX

I don't recall ever seeing someone Hybrid Xfire with the 7790; pretty sure the highest anyone ever xfire'd was with the 7750.


----------



## LuckyStarV

Quote:


> Originally Posted by *DaveLT*
> 
> You can be sure 260x is the highest recommended GPU to hybrid CF but of course anything else is possible
> 
> People have hybrid CF'd fine on the 6800k with a 7790/260x ... I would imagine 280x would be fine given if Kaveri really improved.
> Ah, don't forget Mantle.
> 
> I also completely forgot about PCIe 3.0. That should solve past Hybrid CF problems


A 280X has twice the ROPs, 4 times the shaders, tons more memory bandwidth, and 3-4x the TMU count, and is clocked higher. I don't see that working very well.


----------



## DaveLT

Quote:


> Originally Posted by *LuckyStarV*
> 
> A 280x has twice the ROPs, 4 times the shaders, tons more memory bandwidth, and 3-4x the TMU count and is clocked higher. I don't see that working very well.


Past issues have been with PCIe 2.0 latency and a general lack of bandwidth to hybrid CF properly. The CPU only really matters because of those CPU-intensive games from 2010


----------



## LuckyStarV

Quote:


> Originally Posted by *DaveLT*
> 
> Past issues have been with PCI-E 2.0 latency and general lack of bandwidth to Hybrid CF properly. the CPU matters only really because of those CPU intensive games from 2010


CFX works best with similar graphics cards, though; otherwise you could crossfire a 7750 with an R9 290X


----------



## DaveLT

Quote:


> Originally Posted by *LuckyStarV*
> 
> CFX works best though with similar graphics cards, otherwise you could crossfire an 7750 with an R9 290x


Hybrid CF bro not CFX.


----------



## Alatar

Hybrid CF with the igpu is terrible anyways and isn't really good for anything but nice results in bar graphs. Just like CFX before frame pacing drivers.

You buy a budget system, spend the extra on the discrete GPU and still end up with the same real world gaming experience as with the igpu only. So you essentially wasted your money buying a discrete card which is quite terrible for people on a budget who don't know better.


----------



## DaveLT

Quote:


> Originally Posted by *Alatar*
> 
> Hybrid CF with the igpu is terrible anyways and isn't really good for anything but nice results in bar graphs. Just like CFX before frame pacing drivers.
> 
> You buy a budget system, spend the extra on the discrete GPU and still end up with the same real world gaming experience as with the igpu only. So you essentially wasted your money buying a discrete card which is quite terrible for people on a budget who don't know better.


You clearly either have a hatred for Hybrid CF and CFX, or you haven't got any idea about Hybrid CFX.
You would be a great mod if you backed yourself up with actual use of Hybrid CF, but no, you just go around spouting stuff like that.

Anything AMD either keeps you shut or has you raging like a mad cow claiming it will be crap!


----------



## Alatar

Quote:


> Originally Posted by *DaveLT*
> 
> You clearly either have a hate for Hybrid CF and CFX or you haven't got any idea of Hybrid CFX
> You would be a great mod if you didn't back yourself up with a actual use of Hybrid CF but no, you just go around spouting stuff like that
> 
> Anything AMD either keeps you shut or have you rage like a mad cow claiming it will be crap!


I have used it and fully agree with one of the few articles written about it (if not the only proper one) : http://www.tomshardware.com/reviews/dual-graphics-crossfire-benchmark,3583-10.html

Great for getting a bigger number in Fraps; the real-life experience isn't better.

And my experience is with a 6800K and a 6670 which I borrowed from a friend because I was curious.


----------



## Ultracarpet

Delete... Alatar beat me to it


----------



## specopsFI

I have faith in hybrid CF for Mantle titles. The way Mantle handles multiple GPUs sounds very clever, throwing AFR out of the window and combining all the GPUs on the same queue, each doing what they're capable of. The problem is that Mantle isn't everything or everywhere, but still it will make hybrid CF more interesting.
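That queue-based approach is easy to illustrate with a toy model (my own sketch, not Mantle code; the 1:4 throughput ratio is an assumed example for an iGPU + dGPU pair): AFR paces whole frames off the slower card, while a shared queue lets each GPU pull work in proportion to its throughput.

```python
# Toy comparison: AFR vs a shared work queue for an asymmetric GPU pair.
# Throughputs are in arbitrary "work units per ms"; the 1:4 ratio is an
# assumed example for an iGPU + mid-range dGPU, not measured data.
igpu, dgpu = 1.0, 4.0
frame_work = 8.0  # work units per frame

# AFR: frames alternate between GPUs, so evenly-paced output is gated by
# the slower card -- two frames per slow-GPU frame time.
afr_rate = 2 / (frame_work / igpu)        # frames per ms

# dGPU alone, for reference.
solo_rate = dgpu / frame_work

# Shared queue: each GPU pulls work in proportion to its throughput, so
# the pair behaves like one GPU with the combined rate.
shared_rate = (igpu + dgpu) / frame_work

print(f"AFR:    {afr_rate * 1000:.0f} fps")   # slower than the dGPU alone!
print(f"dGPU:   {solo_rate * 1000:.0f} fps")
print(f"shared: {shared_rate * 1000:.0f} fps")
```

Under these assumed numbers, AFR with a weak iGPU actually lands below the dGPU running alone, which is exactly the hybrid CF complaint in this thread, while the shared queue comes out ahead of either card by itself.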


----------



## Gereti

hmm, I'm waiting for the moment when they mix something like a 7870 GPU and an FX-83XX CPU (on a desktop PC)
but those memory speed supports

My Kingston HyperX 1333MHz/1.5V easily runs at 1866MHz/1.65V with my Athlon II 651K


----------



## Milestailsprowe

Any chance of these CPUs doing Hybrid Crossfire with a 270x or higher?


----------



## maarten12100

Quote:


> Originally Posted by *Milestailsprowe*
> 
> Any chance of these CPUs doing Hybrid Crossfire with a 270x or higher?


Mantle has crossfire load scaling, so hybrid CF will be good there with any AMD GCN GPU, I suppose


----------



## Milestailsprowe

Quote:


> Originally Posted by *maarten12100*
> 
> Mantle has crossfire load scaling so hybrid CF will be good there with any amd gcn gpu I suppose


If so I'm switching to Kaveri Day one.


----------



## nitrubbb

Quote:


> Originally Posted by *Milestailsprowe*
> 
> If so I'm switching to Kaveri Day one.


this!


----------



## Kuivamaa

Dual graphics (hybrid CFX with an iGPU) is still atrocious, and I really hope they sort out the drivers on this one. It's nothing like normal CFX before frame pacing, which you could tweak around and fix in many games; hybrid iGPU CFX is totally broken everywhere.


----------



## maarten12100

Quote:


> Originally Posted by *Kuivamaa*
> 
> Dual graphics (hybrid CFX with an igpu) is still atrocious and I really hope they sort the drivers on this one. Nothing like normal CFX before frame pacing,you could tweak around and fix that one in many games, hybrid igpu CFX is totally broken everywhere.


GCN "1.1" brought a lot of improvements on this front.
I wonder how it'll do with a 260X or a 7790.


----------



## Kuivamaa

Me too, If they sort it out, the potential is endless.


----------



## Fabriz89

Any idea if Kaveri will work with Virtu MVP? I tried it in the past with an i5 and a 5770 in games like WoW, Diablo 3, and Deus Ex, and I had good results.


----------



## IvantheDugtrio

Quote:


> Originally Posted by *Fabriz89*
> 
> Any idea if Kaveri will work with Virtu MVP? I tried it in the past with an I5 and a 5770 with games like WoW, Diablo 3, Deus Ex and I had good results.


Probably not. Only Intel systems have Virtu MVP support. That, and it hasn't exactly been a killer app, so I don't think they'll bother adding AMD support.


----------



## SandGlass

I know I might get flamed for saying this, but Virtu MVP is pure vaporware. It does not work: their "hyper optimization" does nothing, zilch, the frame rates do not actually change, and the stutter is worse.


----------



## Yeroon

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> Probably not. Only Intel systems have Virtu MVP's support. That and it hasn't exactly been a killer app so I don't think they'll bother adding AMD support.


Quote:


> Originally Posted by *SandGlass*
> 
> I know I might get flamed for saying this, but Virtu MVP is pure vaporware, it does not work, their "hyper optimization" does nothing, zilch, the frame rates do not actually change, and the stutter is worse.


Virtu MVP has been available for some FM2 motherboards since Trinity launched. I've never tried it, but it's supposedly available.


----------



## rudyae86

I was about to step away from AMD, but this is making me come back... well, I'm still going with an Intel CPU, but the whole graphics part is going to be ALL AMD. Looks like my build for next year isn't going to be that expensive at all.


----------



## CynicalUnicorn

What's your current rig? Put it in your sig! I agree. Intel will be the CPU king for quite a while, but AMD will have awesome performance for the price. The A10s have so far cost less than the cheapest i5s, and they have those ridiculous iGPUs and fairly good CPUs. Piledriver isn't a particularly good architecture per se, but its overclockability helps alleviate that a bit.


----------



## xPwn

Quote:


> Originally Posted by *NaroonGTX*
> 
> Intel's Extreme-series CPU's don't have iGPU's because they are just cut-down Xeon chips that are multiplier-unlocked. If you could see the die, there is no space whatsoever for an iGPU because of the massive L3 caches and I think the L2 caches are bigger as well. Their "normal" chips have completely different dies and thus they can have the iGPU on there.


Sorry, but I'm going to go with a no on that one... my i7 has no iGPU and it's 1156.


----------



## inedenimadam

Quote:


> Originally Posted by *xPwn*
> 
> Quote:
> 
> 
> 
> Originally Posted by *NaroonGTX*
> 
> Intel's Extreme-series CPU's don't have iGPU's because they are just cut-down Xeon chips that are multiplier-unlocked. If you could see the die, there is no space whatsoever for an iGPU because of the massive L3 caches and I think the L2 caches are bigger as well. Their "normal" chips have completely different dies and thus they can have the iGPU on there.
> 
> 
> 
> Sorry, but I'm going to go with a no on that one... my i7 has no iGPU and its 1156.
Click to expand...

Correct, but for all the wrong reasons. http://www.techpowerup.com/188836/ivy-bridge-e-not-a-cut-down-8-core-20-mb-llc-die.html


----------



## DaveLT

Quote:


> Originally Posted by *xPwn*
> 
> Sorry, but I'm going to go with a no on that one... my i7 has no iGPU and its 1156.


Yours is simply a disabled GPU in your CPU


----------



## PostalTwinkie

I love how people, especially Alatar, are talking trash about what this chip will do in Hybrid CF when it isn't even on the market yet!

Were there issues in the past with hybrid? Sure, but that was in the past; we have no idea what AMD has done to resolve it for this new product, because it isn't out yet! Something tells me that eliminating the need for the Crossfire bridge on the new R series, instead using the PCI-E bus directly, will help with the problem Tom's saw.

While it hasn't been tested yet, *because this new hardware isn't even out*, what Tom's (a real source, for sure....) described is exactly what you'd see running Crossfire WITHOUT the bridge. Most people don't realize it, but you can run Crossfire with the 7000 series, and older, without the bridge; you just get terrible frame-pacing issues and generally lower performance. So how will AMD's new design affect the iGPU + discrete situation? How the cards themselves are seen has been changed....

Though, more important is the fact we have a Mod in here, and others, crapping on a product that isn't even out yet.

I am going with the notion that, between how AMD runs Crossfire directly over the PCI-E bus and how they are addressing resources with hUMA, the issues people are screaming about can easily be resolved.


----------



## maarten12100

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I love how people, especially Alatar, are talking trash about what this chip will do in Hybrid CF; when it isn't even to market yet!
> 
> Were there issues in the past with hybrid? Sure, but it was in the past, we have no idea what AMD has done to resolve it for this new product, because it isn't out yet! Something tells me eliminating the need for the Crossfire bridge on the new R series, instead using the PCI-E bus directly, will help improve the problem seen by Tom's.
> 
> While it hasn't been tested yet, *because this new hardware isn't even out*, what Tom's (A real source for sure....) described is exactly what you seen running Crossfire WITHOUT the bridge. Most people don't realize it, but you can run Crossfire with the 7000 series, and older, without the bridge. You just have terrible framing issues and generally lower performance. So, how will the new design by AMD impact the iGPU + Discrete situation? How the cards themselves are seen has been changed....
> 
> Though, more important is the fact we have a Mod in here, and others, crapping on a product that isn't even out yet.
> 
> I am going with the notion that how AMD works with Crossfire directly over the PCI-E bridge, and how they are addressing resources with hUMA, that the issues people are screaming about can easily be resolved.


Indeed, they'll solve it; besides, all Mantle games have CF resource managing, so everything supposedly works with everything.

Alatar can say what he wants; these chips seem good so far if multi-threaded scaling is improved and single-threaded improves by 20% (30% clock for clock).
AMD is catching up faster than Intel is improving, and whether this is due to Intel shifting focus doesn't matter, as it is good for us.

As for AMD's ultra-mobile parts, rumour has 14nm by late 2014. Also, Beema should bring great power gating, halving the typical usage power.

With heavy competition, the future is either corporate greed or fusion, or both.
And Alatar, we know you're just itching to OC the new APUs to hell and back.


----------



## Pip Boy

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> What's your current rig? Put it in your sig! I agree. Intel will be the CPU king for quite a while, but AMD will have awesome performance for the price. The A10s so far have cost less than the cheapest i5s, and they have those ridiculous iGPUs and fairly good CPUs. Piledriver isn't a particularly good architecture per se, but it's overclockability helps alleviate that a bit.


All I can say is that, even running less-than-optimized drivers (20-30% slower at the moment than Windows) on Linux Mint 16, I bought Metro: Last Light in the Steam sale and it plays fine @ 1080p on my A10-5800K! I was fairly shocked at the level of image quality on display given the GPU power-to-cost ratio; it's only on lowish settings, but it still looks amazing. The whole setup is compact, totally silent, uses 30W at idle from the wall, and cost less than $300.

If you factor in driver improvements (SteamOS should force the issue) and Mantle, things get really good, and that's on Trinity. As things progress with Kaveri and beyond, I think these are going to become the de facto choice for budget and even mid-priced gaming systems and laptops. Further, even high-end gaming rigs might see a benefit, because who wouldn't want 512+ cores (and in future a 1024-core part with 5850-like performance) slapped on the side of their 290X?

I call that a win.


----------



## CynicalUnicorn

Alatar can say whatever he wants. Either party can gloat and say "I told you so!" once January 14th arrives and benchmarks come. But apparently the bridgeless Crossfire issues have been fixed, if the 290s' 80+% scaling is any indication, and GCN 1.1 allows for more distributed load-sharing, so a 270X and a 7850K might just work. I don't know; we need more info from AMD first.

I think we'll see more cores when the die shrinks take off. 512 shaders is not a whole lot, but it's half the power of a 7850 or one quarter of a 7970. I'm not sure how many ROPs or TMUs it has, though, and ROPs seem to be important, since my dual 7850s match a single 7970 in everything except having twice the ROPs, and beat it until bandwidth and/or RAM become issues. But I see 768 shaders minimum with 20nm GCN 2.0, and six-core CPUs, hopefully with 20nm Carrizo.


----------



## Pip Boy

Quote:


> I think we'll see more cores when the die shrinks take off. 512 shaders is not a whole lot, but it's half the power of a 7850 or one quarter a 7970. .


Is the 7970 really that much faster than a 7850 in the real world, though? I doubt it's one quarter, but that's just semantics... I agree with your point. If we're talking a 1024-core iGPU on DDR4 quad-channel RAM @ an 1100MHz core clock with the RAM at 2400MHz per channel, for example, then we would be close to the level of a 5850, and back in the day that card (sold mine five months ago for the APU build; what a great card!) could run Crysis / Crysis 2 at 1080p on medium-high settings at 40-60fps, and BF3, once optimized, on all-high settings at 1080p, 40-60fps. Anything below those extreme titles ran pretty damn nice, and if I had the choice of more CPU with Intel vs more FPS for less with an AMD iGPU Crossfire setup, as games move away from CPU to GPU, I would take the high-end APU.

Also bear in mind that when the 5850 was supported with drivers, they were not a patch on what AMD is offering now (finally). The experience on a 7xxx-series card is smoother than what the 5xxx series was getting; even if the peak FPS was 60, there were always occasional spikes and lags, and many games never had that smooth frame sequencing like now.


----------



## Cyro999

Quote:


> is the 7970 really that much faster than a 7850 though in the real world?


I'd hope so, considering it costs 2.5x as much


----------



## Pip Boy

Quote:


> Originally Posted by *Cyro999*
> 
> I'd hope so, considering it costs 2.5x as much


Without going off-topic too much: in the highly optimized AMD title BF4, a 7970 gets 48fps and a 7850 gets 28fps @ 1440p with everything on ultra (not considering that at this res the 7970 will pull ahead, of course, and it's not the 7850's sweet spot, which would be high/some-ultra @ 1080p). That's not four times as powerful; that's 30 vs 50. So yes, it definitely creams it, but this was just in the context of CynicalUnicorn pointing toward 4x as fast.
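For what it's worth, the quoted numbers can be sanity-checked in a couple of lines (a rough sketch only; the fps figures are the ones quoted in this thread, not fresh benchmarks, and the shader counts are the public specs):

```python
def speedup(fast_fps: float, slow_fps: float) -> float:
    """Measured performance ratio between two cards."""
    return fast_fps / slow_fps

# BF4 @ 1440p ultra, as quoted above: 7970 = 48 fps, 7850 = 28 fps
measured = speedup(48, 28)

# Naive theoretical ratio from shader counts alone (2048 vs 1024 SPUs)
theoretical = 2048 / 1024

print(f"measured: {measured:.2f}x, theoretical: {theoretical:.2f}x")
```

By those fps figures the 7970 comes out at roughly 1.7x the 7850, which is in the same ballpark as the 2x the shader counts suggest, not 4x.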


----------



## Cyro999

He said this
Quote:


> half the power of a 7850 or one quarter a 7970. .


So 2x as fast. It's a bit difficult to make a comparison without clock speeds and looking at other stuff, but I'd expect it to be somewhat close.


----------



## DaveLT

Quote:


> Originally Posted by *phill1978*
> 
> is the 7970 really that much faster than a 7850 though in the real world? i doubt its one quarter but still thats just semantics.. I agree with your point. If were talking a 1024 core iGPU on DDR4 quad channel ram @ 1100mhz core clock and the RAM at 2400mhz per channel for example then we would be close to the level of a 5850 and that was able back in the day (sold mine (
> 
> 
> 
> 
> 
> 
> 
> ) what a great card ! 5 months ago for the APU build) to run Crysis / 2 at 1080p in medium-high settings 40 - 60 and BF3 once optimized at all high settings 1080p 40-60fps. Also anything below those extreme titles was running pretty damn nice and if i had the choice of more CPU with intel VS more FPS for less with AMD on a iGPU crossfire setup as games move away from CPU to GPU then i would choose take high end APU.
> 
> Also bear in mind when the 5850 was supported with drivers they were not a patch on what AMD are offering now (finally) the experience on a 7xxx series card is smoother than what the 5xxx series were getting even if the peak FPS was 60 there were always spikes and lags occasionally and make games never had that smooth frame sequencing like now.


1024 GCN cores > 1600 VLIW5, though. Actually, besides the obvious clockspeed advantage if you OC a 7850, it is as powerful as a 6970 in DX11 titles. Yep. It packs a serious punch.

Considering Kaveri's iGPU has only half the shaders of a 7850 (no idea about ROPs/TMUs), I would say the performance is about 50-60% of a 7850. If these are GCN 1.1 cores, then that goes up.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *Cyro999*
> 
> He said this
> Quote:
> 
> 
> 
> half the power of a 7850 or one quarter a 7970. .
> 
> 
> 
> So 2x as fast - it's a bit difficult to make a comparison without clock speeds and looking at other, stuff, but i'd expect somewhat close
Click to expand...

I'm going by shaders and texture-mapping units. 7850s have 1024 SPUs and 64 TMUs; 7970s have 2048 SPUs and 128 TMUs; both have 32 ROPs. So realistically, dual 7850s should be a bit more powerful, and benchmarks back that up. Even better are 4GB 7870s. Each is exactly half an R9 290: half the bus and bandwidth, half the shaders, half the TMUs, half the ROPs, and depending on the application, dual 7870s can actually win despite Crossfire lowering potential performance. That's >100% scaling, by the way. I'd be happy to see a dual-GPU card come out with two GCN 1.1 7750s on it for hybrid Crossfire. ASUS, get on it!
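A rough way to turn those spec-sheet counts into one number is peak FP32 throughput: shaders x 2 FLOPs/cycle x clock. This is back-of-envelope only; the reference clocks below are my assumption (board-partner cards vary), and it ignores ROPs, bandwidth, and Crossfire overhead entirely:

```python
def peak_gflops(shaders: int, clock_mhz: int) -> float:
    # GCN does 2 FP32 FLOPs per shader per cycle (fused multiply-add)
    return shaders * 2 * clock_mhz / 1000.0

# Shader counts from the discussion above; clocks are assumed reference values
cards = {
    "HD 7850": peak_gflops(1024, 860),
    "HD 7870": peak_gflops(1280, 1000),
    "HD 7970": peak_gflops(2048, 925),
    "R9 290":  peak_gflops(2560, 947),
}

for name, gf in cards.items():
    print(f"{name}: {gf:.0f} GFLOPS")
```

Even by this crude measure, two 7870s combined edge out one R9 290 on paper, which lines up with the dual-7870 observation above.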


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I'm going by shaders and texture-maps. 7850s have 1024 SPUs and 64 TMUs. 7970s have 2048 SPUs and 128 TMUs. Both have 32 ROPs. So realistically, dual 7850s should be a bit more powerful and benchmarks back that up. Even better is 4GB 7870s. Each is exactly half an R9 290 - half the bus and bandwidth, half the shaders, half the TMUs, half the ROPs, and depending on the application, dual 7870s can actually win despite crossfire lowering potential performance. That's >100% scaling, by the way. I'd be happy to see a dual GPU come out with two GCN 1.1 7750s on it for hybrid crossfire. ASUS, get on it!


Actually, it's hard to compare the R9 290 to the R9 270/270X, because the R9 290 is GCN 1.1 and not GCN 1.0 like the 7870s are.


----------



## LDV617

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> by the way. I'd be happy to see a dual GPU come out with two GCN 1.1 7750s on it for hybrid crossfire. ASUS, get on it!


----------



## CynicalUnicorn

Quote:


> Originally Posted by *DaveLT*
> 
> Actually, it's hard to compare R9 290 to R9 270/270X because R9 290 is GCN 1.1 and not GCN1.0 like the 7870s are


Not really. I don't think GCN 1.1 brought anything to the table other than improved power consumption and some features like bridgeless Crossfire and TrueAudio; I don't think there were any significant performance gains. There are probably some discrepancies with different CPUs, though, e.g. a 4100 with a 290 will lose to a 4960X with dual 7870s/270(X)s.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Not really. I don't think GCN 1.1 brought anything to the table other than improved power consumption and some features like bridgeless crossfire and TrueAudio. I don't think there were any significant performance gains. There probably is some discrepancies among different types of CPUs, e.g. a 4100 with a 290 will lose to a 4960X with dual 7870s/270(X)s.


GCN 1.1 did bring per-shader performance increases, as such.


----------



## CynicalUnicorn

By how much? Not a whole lot I'm sure.


----------



## sumitlian

http://techreport.com/review/25473/
^ It says the 7790 will also get TrueAudio support in a driver update.


----------



## CynicalUnicorn

Hooray? That way they won't get backlash from the 7790 early adopters, I suppose. Was there any reason to disable it in the first place?


----------



## sumitlian

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Hooray? That way they won't get backlash from the 7790 early-adopters I suppose. Was there any reason to disable it in the first place?


I think they just wanted to introduce it with the R series of GPUs.


----------



## Nintendo Maniac 64

Maybe the software side for TrueAudio wasn't ready yet?


----------



## AlphaC

Quote:


> Originally Posted by *sumitlian*
> 
> http://techreport.com/review/25473/
> ^it says 7790 will also have been supported TrueAudio in a driver update.


It still hasn't happened; I wouldn't count on it.

If it does happen, then the R7 260X is screwed at the price it asks (the HD 7790 is often $100).


----------



## llodke

I for one would be all over a dual-socket Kaveri. It would also make a bit of sense, since their new R9 GPUs don't need a physical CrossFireX connector any more, so the integrated GPU power could also theoretically be scaled up just off the processors themselves.


----------



## sumitlian

Quote:


> Originally Posted by *AlphaC*
> 
> It still hasn't happened. I wouldn't count on it.
> 
> If it does happen then the R7 260X is screwed over at the price it asks (HD 7790 is $100 often).


Why do you think that?

The 7790's launch price was $150 with 1GB of GDDR5.
The R7 260X comes with 2GB of GDDR5 for $140.

Even if TrueAudio is on both cards, the 260X still has the advantage of more VRAM. But it's just a rebrand, like the 5770 and 6770 were.

As long as the 260X doesn't start to show magic over the 7790 in TrueAudio games, I won't even need to think about having TrueAudio; it's not that big a problem. I was just thinking they could enable it, as the 7790 already has all those DSPs in its chip like the 260X has.
Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> Maybe the software side for TrueAudio wasn't ready yet?


Obviously!


----------



## CynicalUnicorn

Were they really $150? They've dropped a lot in just a little time. I can see 260Xs approaching those prices by the end of next quarter.


----------



## AlphaC

sumitlian, the HD 7790 has 2GB versions too.

When it launched, it got stomped by the GTX 650 Ti Boost. It was a STUPID move to discontinue the HD 7850 for the HD 7790 / R7 260X; both have one power connector. The redeeming factor is the R9 270, which has one power connector but more performance than the HD 7850 or HD 7790, being a power/TDP-limited HD 7870.

http://www.newegg.com/Product/Product.aspx?Item=N82E16814121772
http://www.newegg.com/Product/Product.aspx?Item=N82E16814125454

1GB vs 2GB comparison; at 1080p it doesn't seem to be much different (memory-bandwidth limited):
*http://www.legitreviews.com/sapphire-radeon-hd-7790-2gb-oc-edition-video-card-review_2175/17*


----------



## CynicalUnicorn

Have 7850s really been discontinued? That's not a smart move. Remember that 1GB is standard on the 7790, not 2GB. Sure, the extra RAM won't do much at 1080p (for now), but in the future it will, and on the 260X it comes standard.


----------



## sumitlian

Quote:


> Originally Posted by *AlphaC*
> 
> sumitlian , HD7790 has 2GB versions too.
> 
> When it launched it got stomped by GTX 650 Ti Boost. It was a STUPID move to discontinue HD7850 for the HD7790 / R7 260X , both have 1 power connector. The redeeming factor is the R9 270 which has one power connector but has more performance than HD7850 or HD 7790 , being a power / TDP limited HD7870.
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16814121772
> http://www.newegg.com/Product/Product.aspx?Item=N82E16814125454
> 
> 1GB Vs 2GB comparison , at 1080p it doesn't seem to be much different (memory bandwidth limited)
> *http://www.legitreviews.com/sapphire-radeon-hd-7790-2gb-oc-edition-video-card-review_2175/17*


I got mine just 14 days after launch, and there was no 2GB version of the 7790 within a month, at least at that time.
You're not understanding what I'm trying to say. I'm saying AMD launched the 1GB 7790 as the standard (as CynicalUnicorn said) at $150 and has launched the 2GB 260X at $140.
So even if there is no difference between 1GB and 2GB, people will now buy the 2GB 260X instead of the old 7790. Why? Because it's new.

And for TrueAudio, do you really think it would be some sort of loss for AMD?
Do you really think "AMD thinks" 7790 owners might want to switch to the 260X just for TrueAudio? I don't think so.

I'll tell you a story.
The HD 5770 had only UVD 2.2 support. It supported MPEG-4 hardware acceleration, but it didn't support H.264 hardware acceleration; I had been testing it in VLC and MPC-HC with ffdshow and the LAV video decoder, and H.264 video would not run fine on any of the latest drivers of that time.
When the HD 6xxx series launched, it came with UVD 3.0, which supported H.264 GPU acceleration, and almost every card ran that format fine, even low-end ones like the HD 6450.
After the HD 6xxx launch, more drivers came, and one day I tried again. It was fixed: they neither officially announced it nor did it get reviewed. My 5770 had never run even 360p H.264 properly, and now it was running 4K H.264 video flawlessly at around 65% GPU usage. TrueAudio is not a big deal; every GCN 1.1 card is going to have it, even Kaveri's iGPU, so why would they leave the 7790 behind?


----------



## Nintendo Maniac 64

Quote:


> Originally Posted by *sumitlian*
> 
> And it was fixed, they neither officially announced nor it got reviewed. 5770 never ran 360p h.264 and now it was running 4K H.264 video flawlessly with around 65% GPU usage. TrueAudio was not big deal, every GCN 1.1 card is going to have it even Kaveri's iGPU as well, then why would they leave 7790 behind ?


Heck, my little integrated Radeon HD4200 iGP does h.264 1080p @ 30fps or 720p @ 60fps with CPU utilization at around 20-30% with my Brisbane underclocked to a measly 1GHz! By comparison that's better than my massive-by-comparison 8800GS which doesn't seem to fully support DXVA decoding of h.264 and therefore I need my CPU running at something like the stock 2.5GHz for 1080p video.

So go figure, my iGP is better than my discrete GPU at video playback.


----------



## sumitlian

Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> Heck, my little integrated Radeon HD4200 iGP does h.264 1080p @ 30fps or 720p @ 60fps with CPU utilization at around 20-30% with my Brisbane underclocked to a measly 1GHz! By comparison that's better than my massive-by-comparison 8800GS which doesn't seem to fully support DXVA decoding of h.264 and therefore I need my CPU running at something like the stock 2.5GHz for 1080p video.
> 
> So go figure, my iGP is better than my discrete GPU at video playback.


When did you first see the HD 4200 running H.264 flawlessly?

As far as I remember, my 5770, until at least 7 August 2012, never ran H.264 video without color loss, stuttering, or graphical bugs, on any graphics driver up to that time.
I had also started a thread about it; nobody answered. I had a 1055T at 4.0GHz, a UD5, and the 5770 at that time.
http://www.overclock.net/t/1291092/is-6770-better-in-gpu-accelerated-video-decoding-than-5770/0_20

I don't remember exactly, but it should be Catalyst 12.5 or 12.6 that really fixed it, at least for my 5770.


----------



## Nintendo Maniac 64

I got my mobo on May 9th 2012 but I don't think I had tried DXVA playback until a bit later because I was using XP as my main OS at the time, and XP does not support UVD 2.0.


----------



## sumitlian

Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> I got my mobo on May 9th 2012 but I don't think I had tried DXVA playback until a bit later because I was using XP as my main OS at the time, and XP does not support UVD 2.0.


Then it seems all our HD 4xxx/5xxx cards were updated to support UVD 3 / H.264 by a driver that came around mid-2012 or later.


----------



## LuckyStarV

I was running a 5650m with Catalyst 11.x in early 2010 (got my laptop in March 2011), again only with UVD 2.2, but it always decoded H.264 fine for me. AFAIK UVD 2.2 did H.264, and UVD 3 was for 3D playback support. In fact, my HD 2600 Pro did H.264 decode as well with the original UVD. The HD 2600 Pro would do high-bitrate 1080p H.264 paired with a Pentium D 930, and this is under XP, which has less support. And this was tested in 2011.


----------



## sumitlian

Quote:


> Originally Posted by *LuckyStarV*
> 
> I was running a 5650m with catalyst 10.x in early 2010 (got my laptop in March 2010) again only with UVD 2.2 but it always decoded h.264 fine for me. AFAIK UVD 2.2 did h.264 and UVD 3 was for 3D playback support. In fact my HD 2600 Pro did h264 decode as well with the original UVD.
> 
> The HD2600 Pro would do high bitrate 1080p H264 paired with a Pentium D 930. And this is under XP which has less support.


Then AMD really hid something regarding H.264 support in the HD 5xxx / 5770, because I bought it on 6 September 2010 and it never decoded H.264 without stutter until mid-2012. I had been using Win 7 64-bit SP1.


----------



## LuckyStarV

My 5650m uses Win 7 SP1, and I tried my HD 2600 with an E5200 on Win 7 SP1 and it did H.264 fine as well (summer 2011-ish).

I wonder if it was a driver issue with the 5770, or playback settings.

Going by this, H264 was supported since the original UVD https://en.wikipedia.org/wiki/Unified_Video_Decoder


----------



## sumitlian

Quote:


> Originally Posted by *LuckyStarV*
> 
> My 5650m uses Win 7 SP1 and I tried my HD 2600 with an E5200 on Win 7 SP1 and it did HD264 fine as well (Summer 2011ish)
> 
> I wonder if its an driver issue with the 5770 or playback settings.
> 
> Going by this, H264 was supported since the original UVD https://en.wikipedia.org/wiki/Unified_Video_Decoder


Well, I've been looking at that page on and off for three years.
In the end, it was probably a driver issue with the 5770 at the time that later got fixed.


----------



## ndiils

I have always built with the best of the current gen. My wife will be mad if I have to buy a 290X.


----------



## LuckyStarV

Quote:


> Originally Posted by *ndiils*
> 
> I have ever built with best of current gen. My wife will be mad if i have to buy a 290X.


It's always best to buy more mainstream/performance cards and upgrade more frequently than to buy the top of each generation.


----------



## NaroonGTX

Quote:


> I for one would be all over a dual socket Kaveri. It would also make a bit of sense since their new R9 GPUs dont need a physical CrossfireX connector any more so the integratred GPU power could also theoretically be increased just off the processors themselves.


Won't happen. Kaveri has no HyperTransport or any other interconnect suitable for linking multiple chips. Even Berlin is meant for 1P systems.


----------



## MrJava

AMD has been talking about moving some GPU production to GloFo. I think the easiest way to do this would be to put Kaveri on an AIB and sell it as a GCN 1.1 version of the 7750 DDR3. They could slot this card above the R7 250, or it could replace it altogether.

And voila! You can Crossfire 2 (3?) Kaveris!


----------



## monstercameron

http://www.cpu-world.com/news_2013/2013120701_Pre-order_prices_of_upcoming_AMD_Kaveri_and_Richland_APUs.html

$167-189. A bit high?


----------



## Sand3853

I'd think that unless performance turns out to be much better than anticipated, or close to some of the more outrageous claims, the price seems a bit high. That puts the 7850K just shy of a K-series i5 chip, and it's not like Intel motherboards are that much more than a nice FM2+ board either, so any price/performance hopes would pretty much be negated. Other reports put the Kaveri chips around the $150 mark max, which I'd think is more realistic. Granted, with the reported GPU power of the chips, AMD could be banking on that strength to leverage a higher price. Guess we'll find out soon.


----------



## Milestailsprowe

Quote:


> Originally Posted by *monstercameron*
> 
> http://www.cpu-world.com/news_2013/2013120701_Pre-order_prices_of_upcoming_AMD_Kaveri_and_Richland_APUs.html
> 
> $167-189, bit high?


That puts it into low-end i5 range.

Any chance of these crossfiring with my 270X? If so, I'll buy it with a smile.


----------



## CynicalUnicorn

Apparently GCN 1.1 can split the load a lot better than 1.0. Even though your 270X has more than twice the shaders, there shouldn't necessarily be any issues. I'm not sure if the 270X is GCN 1.1 though, but the 290(X)s, 260Xs, and 7790s all are.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Apparently GCN 1.1 can split the load a lot better than 1.0. Even though your 270X has more than twice the shaders, there shouldn't necessarily be any issues. I'm not sure if the 270X is GCN 1.1 though, but the 290(X)s, 260Xs, and 7790s all are.


270X is just a higher binned 7870.


----------



## MrJava

Think we should blame GloFo for these prices. The yields must be less than stellar.
Quote:


> Originally Posted by *Sand3853*
> 
> I'd think that unless performance turns out to be much better than anticipated (at the least), or close to some of the more outrageous claims then the price would seem a bit high. That puts the 7850k just shy of a k series i5 chip...and it's not like intel motherboards are that much more than a nice fm2+ board either so any price/performance hopes would pretty much be negated....Other reports speculate the kaveri chips around the 150 mark max which I'd think would be more realistic. Granted, with the reported GPU power of the chips AMD could be banking on that strength to leverage a higher price....guess we will find out soon


----------



## LuckyStarV

My opinion is that selling GloFo was a mistake on AMD's part. It got them some short-term cash to use, but they've since spent way more on GloFo between spinning it off and paying them now. It would have been better to keep it themselves so they could control silicon costs. They could also have opened it up to other customers, like Intel has now, in order to keep the foundries busy.


----------



## DaveLT

Quote:


> Originally Posted by *LuckyStarV*
> 
> My opinion was that selling GloFo was a mistake on AMD's part. it got them some short term cash to use but they've spent way more on GloFlo to spin them off and paying them now. It would have been better to keep it themselves so they could silicon costs. They could also have opened it up to other people like Intel has now in order to keep the foundry's busy.


You mean AMD spun off their manufacturing arm to get out of near-bankruptcy and named it GloFo? With investment from a Saudi Arabian fund?


----------



## maarten12100

GloFo does better now that AMD isn't the one behind it; you can't improve nodes without an R&D budget to go with it, so selling it off was the right thing, even though it came with those mandatory wafer purchases.

I just had a crazy idea: what if Kaveri got APU co-processors over PCIe?
Just throw two more Kaveri dies on a PCB: that would give roughly 7950-class performance and six modules, then add in some R9 290s and you have a killer rig.
It's certainly not going to happen that way, but if they put just one Kaveri on a PCIe card to serve the enthusiast market, that would be so cool and awesome (though their PCIe direct interconnect has only been used for the R9 290(X) so far).


----------



## DaveLT

Quote:


> Originally Posted by *maarten12100*
> 
> GloFo does better now that AMD isn't the one behind it; you can't improve nodes without an R&D budget to go with it, so selling it off was the right thing, even though it came with those mandatory wafer purchases.
> 
> I just had a crazy idea: what if Kaveri got APU co-processors over PCIe?
> Just throw two more Kaveri dies on a PCB: that would give roughly 7950-class performance and six modules, then add in some R9 290s and you have a killer rig.
> It's certainly not going to happen that way, but if they put just one Kaveri on a PCIe card to serve the enthusiast market, that would be so cool and awesome (though their PCIe direct interconnect has only been used for the R9 290(X) so far).


We all want that. And a dual-Kaveri board. Or a quad. Who knows, maybe AMD will step out and suddenly enable a hidden feature in Kaveri for managing cache coherency? One can dream, but then again, we all know AMD does have a habit of including hidden features.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *DaveLT*
> 
> We all want that. And a dual-Kaveri board. Or a quad. Who knows, maybe AMD will step out and suddenly enable a hidden feature in Kaveri for managing cache coherency? One can dream, but then again, we all know AMD does have a habit of including hidden features.


Dear Mr. Unicorn:

Thank you for your interest in our products. I'm happy that we have people like you in our community. We had almost forgotten to tell everybody about the dual-socket FM2+ motherboards that are being launched with Kaveri on January 14, 2014. As a thank you, please accept this prototype 4M/8C Kaveri chip, named the "A11-7950K" (this one goes up to 11!), and be sure to look out for the 1024-shader A11-7900K. Both should be released this summer, and attached is the revised roadmap. As you can see, Carrizo will support 6M/12C at launch and will have what is essentially an integrated 7970. These are exciting times in the microprocessor industry!

Go Red Team,

Rory Read

...Okay, we can dream, right?

None of that will happen, unfortunately; a single 7850K is the best we'll get.


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Dear Mr. Unicorn:
> 
> Thank you for your interest in our products. I'm happy that we have people like you in our community. We had almost forgotten to tell everybody about the dual-socket FM2+ motherboards that are being launched with Kaveri on January 14, 2014. As a thank you, please accept this prototype 4M/8C Kaveri chip, named the "A11-7950K" (this one goes up to 11!), and be sure to look out for the 1024-shader A11-7900K. Both should be released this summer, and attached is the revised roadmap. As you can see, Carrizo will support 6M/12C at launch and will have what is essentially an integrated 7970. These are exciting times in the microprocessor industry!
> 
> Go Red Team,
> 
> Rory Read
> 
> ...Okay, we can dream, right?
> 
> None of that will happen, unfortunately; a single 7850K is the best we'll get.


'Tis the season for materialism


----------



## AlphaC

I think people are looking at this the wrong way though.

If Kaveri is better than an i3 with an Nvidia GT 630/GT 640, then you could build a slim ITX box instead of needing to slot something into the PCIe slot. As long as you're restricted to a two-slot GPU, or a single-slot card with a cooler that protrudes into a second slot, you need a bigger case.

It's unfortunate that all the Thin Mini-ITX boards are so crappy (iron chokes and all), and Kaveri's TDP is high enough that you can't expect a Thin Mini-ITX build for it since you'd still need a decent CPU cooler, but AMD has a decent shot with regular ITX.


----------



## lolslk

I was just thinking they could enable it, as the 7790 already has all those DSPs on its chip, just like the 260X does.


----------



## Nintendo Maniac 64

Quote:


> Originally Posted by *AlphaC*
> 
> It's unfortunate that all the Thin Mini-ITX boards are so crappy (iron chokes and all), and Kaveri's TDP is high enough that you can't expect a Thin Mini-ITX build for it since you'd still need a decent CPU cooler, but AMD has a decent shot with regular ITX.


I would think the bigger issue would be the PSU. If you get the top-end Kaveri, you can't make do with a physically small PSU, so it would most likely be the largest thing in the build outside of the mobo itself.


----------



## maarten12100

Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> I would think the bigger issue would be the PSU. If you get the top-end Kaveri, you can't make do with a physically small PSU, so it would most likely be the largest thing in the build outside of the mobo itself.


Just use a laptop-style 12V DC power brick and, if needed, a Pico PSU.


----------



## Nintendo Maniac 64

Oh, I didn't realize they had Pico PSUs up to 160w; I thought they maxed out at 90w. Nevermind then!


----------



## maarten12100

Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> Oh, I didn't realize they had Pico PSUs up to 160w; I thought they maxed out at 90w. Nevermind then!


There is even a 192W 12V DC version with 90% efficiency or so. Totally great.


----------



## opty165

Quote:


> Originally Posted by *Nintendo Maniac 64*
> 
> Oh, I didn't realize they had Pico PSUs up to 160w; I thought they maxed out at 90w. Nevermind then!


Check the link in my sig.

The PSU is super tiny. Just got it in the mail this weekend!


----------



## Faelore

http://www.overclock.net/t/1449629/amd-a10-kaveri-imagine-if-there-was-an-8-core-that-we-could-delid#post_21351990

That's a link to my request asking AMD to let us delid Kaveri chips, just like Ivy Bridge and Haswell, to increase cooling headroom. It would also let us overclock the iGPU better, which would make APUs even more attractive and hurt Nvidia's sales; and if AMD plays ball by allowing CrossFire with discrete GPUs, they'd make more cash on both ends of the market.


----------



## Nintendo Maniac 64

Um... if it can't be delidded safely, then the IHS is soldered anyway. There would be minimal gains from delidding a soldered chip.

Disclaimer: I run a delidded Brisbane.


----------



## Cyro999

Delidding doesn't help thermal transfer significantly (like NM64 said); it just brings terrible heat transfer back to comparable, "normal" levels.


----------



## maarten12100

Quote:


> Originally Posted by *opty165*
> 
> Check the link in my sig.
> 
> The PSU is super tiny. Just got it in the mail this weekend!


Wanna do some clash of the mTitans?
I shall continue on my Mac mini soon, actually. I have the shell; drilling it was such a pain.


----------



## dcubek

Quote:


> Originally Posted by *maarten12100*
> 
> There is even a 192W 12V Dc version with 90% eff or so totally great


I'm planning on doing a mini Steam box: Streacom F1CWS Evo, one hard drive, an A10-7850K, and 8GB of 2300MHz RAM. Not sure whether a Pico PSU would handle this. What would you recommend?


----------



## opty165

Quote:


> Originally Posted by *maarten12100*
> 
> Wanna do some clash of the mTitans?
> I shall continue on my Mac mini soon, actually. I have the shell; drilling it was such a pain.


Damn.... That is quite the project you have with the Mac mini! I just shrugged off the water cooling of my Kaveri build, but maybe I'll rethink it now... haha


----------



## maarten12100

Quote:


> Originally Posted by *dcubek*
> 
> I'm planning on doing a mini Steam box: Streacom F1CWS Evo, one hard drive, an A10-7850K, and 8GB of 2300MHz RAM. Not sure whether a Pico PSU would handle this. What would you recommend?


There are Pico PSUs that go up to 120W, so it should be fine. I don't use a PSU myself, however, as my board can run off 12V directly.
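For anyone weighing this up, here's a rough back-of-envelope DC power budget for that build; every component figure below is a ballpark assumption on my part, not a measured number:

```python
# Rough DC power budget for the proposed Pico-PSU Steam box
# (A10-7850K + one hard drive). All figures are ballpark assumptions.
components_w = {
    "A10-7850K (95W TDP)": 95,
    "3.5in hard drive": 10,
    "RAM, motherboard, fans": 15,
}
total_w = sum(components_w.values())

print(total_w)           # 120
print(total_w <= 120)    # True, but with zero headroom on a 120W unit
print(total_w <= 160)    # True, comfortable on the 160W/192W units
```

On these assumptions a 120W unit is cutting it very fine under full load, so one of the 160W or 192W units mentioned earlier would be the safer pick.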


----------



## Nintendo Maniac 64

Quote:


> Originally Posted by *maarten12100*
> 
> There are Pico PSUs that go up to 120W


As I just learned a bit ago, there are 160W Pico PSUs as well.


----------



## ryana

If the R7 260X is PCIe 3.0 and XDMA, it shouldn't require a CrossFire connector even though it has one? Maybe that connector is just there so it can CrossFire with the R7 260 (non-X) and R7 280? Could it potentially run CrossFire/Dual Graphics with an A10?


----------



## ladcrooks

A lot of people out there don't game, so a combined CPU/GPU does have its attractions.


----------



## DaveLT

Quote:


> Originally Posted by *ladcrooks*
> 
> A lot of people out there don't game, so a combined CPU/GPU does have its attractions.


Even for medium-level gamers it's still a hell of a fine rig from an APU alone.


----------



## CynicalUnicorn

Yeah, I mean, I was perfectly happy with just my single 7850K. I got the second card because it seemed like fun, not because I needed it. 512 + 896 shaders from a 7850K and a 260X, paired with what is essentially an older i5, is enough for anything at 1080p short of Crysis.


----------



## ximage

Kaveri, with GCN graphics to show off Ruby and the Mantle software boost, is just a short time away, and AMD fans are already rocking the house in anticipation with a song called the "AMD Fanboy Song"!


----------



## lightsout

lol !!!!!!!!!!!!!!


----------



## NaroonGTX

Lol, Elric is awesome.


----------



## Ultracarpet

My anthem lol


----------



## NaroonGTX

What the eff: https://twitter.com/AMDRadeon/status/411586461381562368

I guess I'll watch it. AMD on Jimmy's show??


----------



## monstercameron

just awesome.


----------



## CynicalUnicorn

Ah, late night TV. I'll wait for your reports tomorrow.

I am totes adding that song to my library o' stupid funny crap, and so should you! It doesn't work for videos, but that's what TubeEnhancer Plus is for.


----------



## CynicalUnicorn

Quote:


> Originally Posted by *ksiiid*
> 
> I mean, look at the 7790. That thing is a 90W part, yet if it was made with GCN 1.0 parts it would have been closer to the 7850's 125W. Or look at the R7 250.


Theoretically it should be a bit shy of 110W if its 14 CUs, versus the 7850's 16, scale thermally linearly. However, with half the ROPs rather than seven-eighths of them, the heat should drop even more, and the reference 1GB of VRAM (not sure whether 1GB or 2GB is reference for the 7850) would bring it to about 105W. Against that, GCN 1.1 still offers a 10-15% power consumption reduction, which is quite good.
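The linear-scaling estimate above works out like this; the 125W/16-CU figures for the 7850 are as stated in the thread, while the deduction for fewer ROPs and less VRAM is a rough guess, just as in the post:

```python
# Back-of-envelope TDP estimate: scale the 7850's 125W linearly
# from 16 CUs down to 14, then knock off a rough allowance for
# half the ROPs and 1GB less VRAM (that allowance is a guess).
HD7850_TDP_W = 125
HD7850_CUS = 16
TARGET_CUS = 14

cu_scaled_w = HD7850_TDP_W * TARGET_CUS / HD7850_CUS   # linear CU scaling
rop_vram_savings_w = 4                                 # assumed deduction
estimate_w = cu_scaled_w - rop_vram_savings_w

print(round(cu_scaled_w, 1))   # 109.4 -> "a bit shy of 110W"
print(round(estimate_w, 1))    # 105.4 -> the ~105W figure
```

Comparing that ~105W estimate to the 7790's quoted 90W rating is where the 10-15% GCN 1.1 saving comes from.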


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Theoretically it should be a bit shy of 110W if its 14 CUs, versus the 7850's 16, scale thermally linearly. However, with half the ROPs rather than seven-eighths of them, the heat should drop even more, and the reference 1GB of VRAM (not sure whether 1GB or 2GB is reference for the 7850) would bring it to about 105W. Against that, GCN 1.1 still offers a 10-15% power consumption reduction, which is quite good.


You probably mean 20-30%.

With a performance boost, no less.

And a much more scalable clock.


----------



## CynicalUnicorn

I'm just extrapolating from a scaled-down 7850 to a 7790. You probably have, ya know, numbers and data and stuff.

I find it improbable that it is that much better, but the 7790 came out, what, a year and a half after the initial launch of the 7000 series? That's enough time for a refreshed architecture; just look at Intel (I'm not sure if that's an observation or a snarky comment...).


----------



## DaveLT

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> I'm just extrapolating from a scaled-down 7850 to a 7790. You probably have, ya know, numbers and data and stuff.
> 
> I find it improbable that it is that much better, but the 7790 came out, what, a year and a half after the initial launch of the 7000 series? That's enough time for a refreshed architecture; just look at Intel (I'm not sure if that's an observation or a snarky comment...).


More like a rehashed architecture that did a magical 5% boost in performance for 10% increased power on desktop. WOAH. SUCH FANTASTIC


----------

