# [HWLUXX] Computex: AMD Fiji aka Fury X slower than GeForce GTX 980 Ti



## Waro

Quote:


> Again, the installed memory was confirmed to have a capacity of 4 GB. Clock speeds of the GPU and memory were sadly not revealed. The cards also can't run in their current form, as no BIOS is present. They can therefore only be switched on, but no image will appear on a screen. The Radeon Fury X cards will have a BIOS switch, just like the "Hawaii" cards. As for display outputs, the card had an HDMI 2.0 port and three DisplayPort 1.2 ports.
> 
> One can also roughly estimate the size of the GPU, based on what AMD has already said about the size of the HBM chips. The "Fiji" GPU is likely to be around 520 mm².
> 
> The partner did hint at the performance: apparently, the Radeon Fury X is going to be slower than the GeForce GTX 980 Ti. Currently, AMD is still trying to achieve higher clock rates and is making adjustments to driver performance. Performance relative to power consumption is therefore likely to be a critical issue for "Fiji".


Source

This is the English version of the website. The German version is pretty big and reputable; they wouldn't lie about what they've been told.

But still keep in mind:
Quote:


> Everything about the Radeon Fury X continues to be based on hearsay. Especially any information relating to performance has to be treated with caution. AMD certainly will still be able to improve numerous details. Neither the BIOS nor the drivers are likely to have been finalised.


----------



## The Stilt

4 GB of VRAM, and heads will roll at AMD once again.
Using HBM when limited to just 4 GB is a huge mistake, especially when it brings no additional performance. It will certainly reduce the power consumption and make the PCB design simpler and cheaper to produce, but still.

They should have stayed with 512-bit GDDR5 interface.


----------



## Cybertox

Is it going to be at least better than the 290X?








Rather disappointing news, if true...


----------



## Asus11

I don't want to read, I want to see.

AMD, just release it, finished or unfinished. Do it.

It's still going to be the same outcome...

How can you have Nvidia wait so long? They get bored, so they release the answer to your new GPU without you even giving them the question!

SMH... might invest in AMD stocks while they're low... hoping for Samsung to overtake them.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *The Stilt*
> 
> 4 GB of VRAM, and heads will roll at AMD once again.
> Using HBM when limited to just 4 GB is a huge mistake, especially when it brings no additional performance. It will certainly reduce the power consumption and make the PCB design simpler and cheaper to produce, but still.
> 
> They should have stayed with 512-bit GDDR5 interface.


Agreed, at least until HBM Gen2 is perfected. Twice the speed and 16 times the capacity.

BTW, this is the second source that has used the "physical size" method to determine that the interposer is not large enough to hold more than 4 GB of memory, so it seems more and more likely to be true with every passing rumor.


----------



## Alatar

I've been saying that before buying into the "60-70% faster than the 290X" hype, people should first look at what kind of scaling that would require from Fiji's extra resources.

Slightly slower than the 980 Ti would put the thing at ~40% faster than the 290X. Slightly faster than the Titan X would be ~48% faster than the 290X or so. Assuming a rough 1 GHz clock, I would personally put the thing in the 40%-faster-than-290X range.

But who knows.
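Alatar's arithmetic above can be sanity-checked in a few lines. A minimal sketch; the shader counts (2816 for Hawaii, 4096 for Fiji) are the rumored figures circulating at the time, not confirmed specs:

```python
# Back-of-the-envelope check of the scaling argument above.
# All figures are rumored/assumed: 290X "Hawaii" = 2816 SPs,
# "Fiji" = 4096 SPs, both at roughly similar (~1 GHz) clocks.
hawaii_sps, fiji_sps = 2816, 4096

sp_ratio = fiji_sps / hawaii_sps  # ~1.45x the shader resources

def per_sp_scaling_needed(perf_gain):
    """Per-SP efficiency change needed to hit a given gain over the 290X."""
    return (1 + perf_gain) / sp_ratio

print(f"{sp_ratio - 1:.0%} more SPs")                      # -> 45% more SPs
print(f"+40%: {per_sp_scaling_needed(0.40):.2f}x per-SP")  # -> 0.96x (plausible)
print(f"+65%: {per_sp_scaling_needed(0.65):.2f}x per-SP")  # -> 1.13x (a stretch)
```

In other words, a 40% gain fits within linear scaling of the extra SPs, while a 65% gain would also require every SP to get ~13% faster.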


----------



## Olivon

Quote:


> The partner did hint at the performance: apparently, the Radeon Fury X is going to be slower than the GeForce GTX 980 Ti. Currently, AMD is still trying to achieve higher clock rates and is making adjustments to driver performance. Performance relative to power consumption is therefore likely to be a critical issue for "Fiji".


Really hope it's not true. If so, it will be quite a disappointment.


----------



## Clocknut

Quote:


> Originally Posted by *Alatar*
> 
> I've been saying that before buying into the "60-70% faster than the 290X" hype, people should first look at what kind of scaling that would require from Fiji's extra resources.
> 
> Slightly slower than the 980 Ti would put the thing at ~40% faster than the 290X. Slightly faster than the Titan X would be ~48% faster than the 290X or so. Assuming a rough 1 GHz clock, I would personally put the thing in the 40%-faster-than-290X range.
> 
> But who knows.


The problem is that HBM is the wild card, so we can't really take the shader count as a way to guess its performance. I guess we have to see the actual reviews before we jump to conclusions.


----------



## Cybertox

If this is indeed going to be the case then I will just wait for Nvidia to release their Pascal architecture. I am not desperate to upgrade, the 290X still serves me well enough.


----------



## Superplush

Quote:


> Originally Posted by *Olivon*
> 
> Really hope it's not true. If so, it will be quite a disappointment


We're all hoping and we're all hanging on the edge of our seats with these rumours. Let's face it, we want *actual physical* benchmarks. I'm waiting with bated breath too, and might get one; however, with HBM Gen2 not too far off, will it even be worth buying these cards for the, what, year you'd get out of them?

I would like for others to get their hands on them; they've taken long enough to engineer. Who knows? We might be surprised by how fast these things are. Still holding out hope atm, until a proper statement of numbers and speeds is released from a few sources.
Quote:


> Originally Posted by *Clocknut*
> 
> The problem is that HBM is the wild card, so we can't really take the shader count as a way to guess its performance. I guess we have to see the actual reviews before we jump to conclusions.


Who knows, twice the bandwidth might mean it comes out at twice the speed. I know that's a baseless overestimation; we can make educated guesses, but that's all we have atm, guesses.


----------



## kot0005

If this is true then there's no way the card will be $850. Probably back to $550 or $600, like the 290X.


----------



## jmcosta

It might be slower, but cheaper.
They have to change their image of being just the value option.


----------



## hawker-gb

Tomorrow is the press conference.
I guess we will know then.


----------



## Alatar

Quote:


> Originally Posted by *Clocknut*
> 
> The problem is that HBM is the wild card, so we can't really take the shader count as a way to guess its performance. I guess we have to see the actual reviews before we jump to conclusions.


It's not really a wild card in that sense. We know that it's going to bump up the memory bandwidth by a big chunk. That doesn't change the fact that unless you're really BW-bottlenecked, you're still going to be dealing with a 45% bump in SPs. And extracting 65% more perf from 45% extra SPs is going to be tough.

Where HBM's wild card comes in is power consumption and the space taken up on the GPU die. I really hope either AMD releases a straight-up old-style die shot of Fiji, or someone like Chipworks does their magic to provide us with one.


----------



## John Shepard

If this is more expensive than the 980 Ti, then AMD is done.

I don't like this either. A monopoly would be a bad thing.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *hawker-gb*
> 
> Tomorrow is press conference.
> I guess we will know then.


Can't come fast enough. Thankfully we only have to wait until 10pm EST tonight for the 10am Wednesday press conference.

Tired of the waiting and the guessing and all the rumors that appear to be truer and truer with every passing month, with AMD not saying a dang thing.

Oh well, off to work I go.


----------



## Alatar

I honestly doubt we'll see anything about Fiji tomorrow.

E3 is where I'd show the thing if I were AMD. Much bigger gamer audience.


----------



## hawker-gb

Quote:


> Originally Posted by *Alatar*
> 
> I honestly doubt we'll see anything about Fiji tomorrow.
> 
> E3 is where I'd show the thing if I were AMD. Much bigger gamer audience.


They need to give us something to stop the rumours.
I think the Fury X will be shown live tomorrow.


----------



## Woundingchaney

Unfortunately, they are in a situation where more information right now would be better. The 980 Ti has solidified Nvidia's lineup, and they now have very relevant and very capable cards in every price bracket, so potential buyers have ample opportunity to find a product in Nvidia's lineup. If AMD can draw attention to their upcoming flagship, then some consumers may be willing to wait a few more weeks. Given that AMD has recently missed the releases of very large titles such as GTA V and TW3, hopefully they can have this card ready for the next AAA game (which should be Batman).


----------



## Blackops_2

I thought AMD was competitive with the 290X; the issue has been getting it to market on time. If Hawaii had come out at an appropriate time, i.e. if we hadn't had to wait for Nvidia to release GK110 to get a response from AMD, I don't think they would be in so much of the situation they're in.

Though truth be told, I haven't seen a reason to upgrade from my 780s. I have yet to move to 1440p either, but still.

Either way, I hope it provides some healthy competition in the market, but AMD needs a good jump ahead of Nvidia, not in between Nvidia's two flagships or barely better than their flagship.


----------



## Rickles

My 980 Ti that arrives tomorrow is looking better and better...

I was hoping Fury would be faster than titan x...


----------



## t00sl0w

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Can't come fast enough. Thankfully we only have to wait until 10pm EST tonight for the 10am Wednesday press conference.
> 
> Tired of the waiting and the guessing and all the rumors that appear to be truer and truer with every passing month, with AMD not saying a dang thing.
> 
> Oh well, off to work I go.


Well, we know AMD hype = an incredible letdown.
So maybe a lack of hype = something better than expected?


----------



## raghu78

Quote:


> Originally Posted by *Alatar*
> 
> It's not really a wild card in that sense. We know that it's going to bump up the memory bandwidth by a big chunk. Doesn't change the fact that unless you're really BW bottlenecked you're still going to be dealing with a 45% bump in SPs. And extracting 65% more perf from 45% extra SPs is going to be tough.
> 
> Where HBM's wild card comes in is power consumption and die space taken on the GPU die. I really hope either AMD releases a straight up old style die shot of Fiji or someone like chipworks do their magic in order to provide us with one.


It's all speculation and rumours. Nobody knows anything about performance. People seem to believe that AMD cannot improve perf/SP at all; that's my fundamental issue. If Nvidia can increase perf per core by 35%, I am sure AMD has enough improvements to extract at least 15-20%. Tonga improved all the supporting systems: tessellation, ROPs and bandwidth. But AMD left the shader core untouched. My guess is AMD made those changes to facilitate perf and efficiency scaling when it makes improvements to the shader core. Anyway, people can say what they want, but it does not matter till launch-day reviews are seen. BTW, you will get a die shot from a site like TechPowerUp; they always show the board layout and die when they review the card.








Quote:


> Originally Posted by *t00sl0w*
> 
> well, we know AMD hype = an incredible let down.
> so maybe a lack of hype = something better than expected?


Exactly. We have seen AMD hype it up and fail with products like Bulldozer, but this time there's none of that. They talked about HBM a month before product launch, and that's because they are the inventors of HBM. Definitely, credit goes to AMD for designing the next-gen graphics memory. As for the product, no drama except for a few teasers starting in late May. Again, marketing a month before product launch is very normal.


----------



## svenge

Quote:


> Originally Posted by *t00sl0w*
> 
> maybe a lack of hype = something better than expected?


Either that or everyone in PR has already been laid off...


----------



## LancerVI

Quote:


> Originally Posted by *hawker-gb*
> 
> They need to give us something to stop the rumours.
> I think Fury X will be shown live tomorrow.


Yes, they do. Because simple-minded fools like me are like, with each passing moment, my switch to the green side becomes more complete!!! I can even hear Emperor Jen-Hsun laughing in the back of my mind.


----------



## Blackops_2

Quote:


> Originally Posted by *t00sl0w*
> 
> well, we know AMD hype = an incredible let down.
> so maybe a lack of hype = something better than expected?


Sound logic to me


----------



## Alatar

Quote:


> Originally Posted by *raghu78*
> 
> BTW, you will get a die shot from a site like TechPowerUp; they always show the board layout and die when they review the card.


I mean a proper die shot like the good old GK110 one:



[GK110 die shot image]

Not just a DSLR shot of the bare die sitting in its packaging, but a die shot that actually gives you an idea of how much space different things take up on the die.

And the only sources for those that I've seen have been either AMD/Intel/NV themselves or Chipworks. Sadly, in recent times AMD has stopped releasing GPU die shots, and with Maxwell, NV also first covered their GM204 die with fake crap and then failed to release a GM200 shot entirely.


----------



## Redwoodz

Yeah, the thing has no driver and can only turn on, but performance is lower than the Titan X.


----------



## Clocknut

Quote:


> Originally Posted by *Alatar*
> 
> It's not really a wild card in that sense. We know that it's going to bump up the memory bandwidth by a big chunk. Doesn't change the fact that unless you're really *BW bottlenecked* you're still going to be dealing with a 45% bump in SPs. And extracting 65% more perf from 45% extra SPs is going to be tough.
> 
> Where HBM's wild card comes in is power consumption and die space taken on the GPU die. I really hope either AMD releases a straight up old style die shot of Fiji or someone like chipworks do their magic in order to provide us with one.


We don't know that, because there is no 512-bit Bonaire card to test it with.


----------



## LancerVI

Quote:


> Originally Posted by *Redwoodz*
> 
> Yeah, the thing has no driver and can only turn on, but performance is lower than the Titan X.


LOL. Funny trick that!


----------



## WorldExclusive

If it's slower than the second-fastest card from Nvidia... oh boy, AMD, get ready to hold your ankles.


----------



## Pantsu

Not sure how they derived the performance if they didn't even get a picture out of the damn card.







Rumors come and rumors go; there's zero reason to react to any of it, and in truth hardly anyone cares outside of a few forum fanboys. What matters is what's ultimately delivered, and when. I'm sure that whatever AMD does or doesn't do, people here will spin it in a way that makes it look bad.


----------



## raghu78

Quote:


> Originally Posted by *Alatar*
> 
> I mean a proper die shot like the good old GK110 one:
> 
> 
> [GK110 die shot image]
> 
> Not just a DSLR shot of the bare die sitting in the packaging. A die shot that actually gives you an idea how much different things take space on the die.
> 
> And the only sources for those that I've seen have been either AMD/Intel/NV themselves or Chipworks. Sadly, in recent times AMD has stopped releasing GPU die shots, and with Maxwell, NV also first covered their GM204 die with fake crap and then failed to release a GM200 shot entirely.


Oh, you mean that one. That has not been seen from Nvidia for GM204 and GM200. I doubt AMD will show that, as it will give Nvidia clues about the HBM memory controller layout. You don't want to give any info to the competition, especially when you invented HBM and are first to market with a 4096-bit HBM controller. Nvidia will figure it out eventually, but whether they do a good job the first time round remains to be seen. With Fermi, Nvidia's GDDR5 memory controller was not as good as AMD's; with Kepler, though, they had a very good GDDR5 memory controller.








Quote:


> Originally Posted by *Pantsu*
> 
> Not sure how they derived the performance if they didn't even get a picture out of the damn card.
> 
> 
> 
> 
> 
> 
> 
> Rumors come and rumors go, there's zero reason to react to any of it, and hardly anyone in truth cares outside a few forum fanboys. *What matters is what's ultimately delivered and when. I'm sure whatever AMD might do or not do people here will spin it in a way that'll look bad*.


Well said.







The rumour mill has been working overtime for six months now, with actual credible leaks/articles being very few.


----------



## akromatic

All those AMD fans bashing Nvidia and banking on a Titan X killer with 8 GB of VRAM are gonna go red in the face.


----------



## SpeedyVT

While the source is supposedly reputable, less reputable things have come from reputable sites, so I take this with a grain of salt. From this point on, Windows 10 matters.


----------



## jeffro37

I usually buy green team, but I'll wait until they actually have a working card to see which way I'll go this time. How can they really say it's slower when the BIOS and drivers are still being worked on? Sounds like more rumors till we have actual reviews.


----------



## SuprUsrStan

If the Fury X is indeed a short and stubby graphics card, then Nvidia's future GPUs will probably also be short and stubby with the introduction of second-generation HBM.

Based on that alone, I think I might have to pick up three GTX 980 Tis for a fuller look.


----------



## BinaryDemon

If this is true, AMD is going to have to sell it for a lot less than the rumored price. At $850 there isn't going to be much of a market for a 4 GB card, even with massive bandwidth and advanced compression techniques. It seems that developers recently haven't been doing any VRAM optimization when it comes to Ultra settings. Of course, the 980 Ti was originally rumored at $800, so let's see.


----------



## EniGma1987

Quote:


> Originally Posted by *The Stilt*
> 
> 4GB of VRAM and heads will roll at AMD once again.
> Using HBM when limited to just 4GB is a huge mistake especially when it brings no additional performance. It will certainly reduce the power consumption and make the PCB design more simple and cheaper to produce but still.
> 
> They should have stayed with 512-bit GDDR5 interface.


You know AMD; they are always trying to do something radical with their GPUs and hype the feature up, only to neglect other incredibly important areas of the GPU.
Quote:


> Originally Posted by *Clocknut*
> 
> The problem is that HBM is the wild card, so we can't really take the shader count as a way to guess its performance. I guess we have to see the actual reviews before we jump to conclusions.


It is only a wild card for performance if we were bandwidth-limited in the first place. Lower-end cards are bandwidth-limited, but when you are already on a 512-bit, high-clock GDDR5 setup you are not, so doubling or tripling the bandwidth won't give you more than a tiny fraction more performance. You have to actually have more resources on the GPU that consume bandwidth to make use of any more of it.
Quote:


> Originally Posted by *Alatar*
> 
> I honestly doubt we'll see anything about Fiji tomorrow.
> 
> E3 is where I'd show the thing if I were AMD. Much bigger gamer audience.


That would be a terrible decision IMO. The show is all about console gaming. Now, AMD is trying to bring PC back into it, so of course they will make a big deal about their GPUs there, but AMD will just shut even more people out of PC gaming if they do that. Console people buy their console not just because "it works" and supports all the games that are released, but because the console is only a few hundred bucks. If AMD puts on a whole big show and then says you need their new $750+ GPU to play the games on PC, console people will just laugh in their faces at how absurd that is and go back to playing their Xbox.

But still, we should wait and see how fast this is when it actually releases. Though I expect it to be a power-hungry monster, given that there is no significant architectural change from the last-gen cards and no process change. If we look at the history of GPUs, we see that an arch gets released and built on for 2-3 generations, usually with a continued rise in power draw with each new iteration of the same big arch until it simply can't be sustained any longer; then a major architecture revision is done to bring power draw back down to a new starting point. Fiji is at the "nearing the end of the big arch" stage of this timeline.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Alatar*
> 
> I honestly doubt we'll see anything about Fiji tomorrow.
> 
> E3 is where I'd show the thing if I were AMD. Much bigger gamer audience.


Maybe, maybe not.

IMO, if they don't do something official now to stop the flood of GTX 980 Ti purchases, it won't matter; no one (or I should say, very few) is going to buy a 980 Ti and then buy a 390X three weeks later, unless the 390X is another 15% or more faster, especially with less memory.

So honestly, if AMD doesn't talk about the 390X in some pretty good detail, then I will pretty much assume the rumors are true. If they do, then they can dispel the rumors, get people to stop buying 980 Tis, and actually stand a chance.


----------



## Mel0ns

Didn't Johan Andersson tweet a picture of the Fiji card like two weeks ago? If there isn't a working BIOS, would it even make sense to send someone a card for development? "Impressive & sweet GPU" hopefully means these rumors aren't true.


----------



## royalkilla408

I'd rather wait, but if this is true then bye-bye AMD. If the price of the card and the performance aren't there to compete with Nvidia, then I just don't see how AMD is going to survive much longer. I think it would be a good thing, because they are run by incompetent people: all hype, but they always let you down. Still, let's really hope this isn't true and they release a killer card with a good price to hurt Nvidia, but also make some money. They need income, big time.


----------



## iamhollywood5

If this is true, we might as well assume AMD is sunk. They really needed to hit it out of the park with this one, and it's been shaping up to be the opposite at this point. With the huge die, the interposer, likely poor yields from 4096 cores AND poor yields from HBM, and the water cooler stuck on top of it all, the card will be extremely expensive to produce and will really squeeze margins. Then you've got the 4 GB limit, and the fact that it's apparently slower than both the Titan X and the 980 Ti... all while Nvidia's GM200 cards are cheaper to make, giving them larger profit margins. AMD is a sinking ship; anyone still holding stocks should dump them. GG AMD. I hope Samsung takes over.


----------



## Anateus

Please change the title. It's misleading.


----------



## Particle

The statement "______ or else AMD is doomed/sunk/dead" occurs so often that it should qualify as a meme.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Maybe, maybe not.
> 
> IMO, if they don't do something official now to stop the flood of GTX 980 Ti purchases, it won't matter; no one (or I should say, very few) is going to buy a 980 Ti and then buy a 390X three weeks later, unless the 390X is another 15% or more faster, especially with less memory.
> 
> So honestly, if AMD doesn't talk about the 390X in some pretty good detail, then I will pretty much assume the rumors are true. If they do, then they can dispel the rumors, get people to stop buying 980 Tis, and actually stand a chance.


I totally agree with that. These rumors are TERRIBLE for AMD. Slower than a Titan X, hell, slower than a 980 Ti, with LESS VRAM and it's rumored to be expensive. That's not good for the on-the-fencers who stay informed on these topics.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Mel0ns*
> 
> Didn't Johan Andersson tweet a picture of the Fiji card like two weeks ago? If there isn't a working BIOS, would it even make sense to send someone a card for development? "Impressive & sweet GPU" hopefully means these rumors aren't true.


It would be easy for them to control leaks by providing the video card with no BIOS, plus a device that allows a BIOS (a .ROM file, for example) to be flashed to it. Something like this, for example: http://www.amazon.com/Signstek-Universal-MiniPro-Programmer-Interface/dp/B00K756PB6/ref=sr_1_2?ie=UTF8&qid=1433251550&sr=8-2&keywords=BIOS+programmer


----------



## Prophet4NO1

Well, I was hoping for something good, but the rumors have been pretty solid for the last few GPU launches. So things are not looking so good.


----------



## raghu78

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I totally agree with that. These rumors are TERRIBLE for AMD. Slower than a Titan X, hell, slower than a 980 Ti, with LESS VRAM and it's rumored to be expensive. That's not good for the on-the-fencers who stay informed on these topics.


Anybody who has common sense will not get carried away by rumours and will wait for actual reviews. The people who are flaming over any negative information about AMD products are the usual culprits. Also, people seem to be talking as if Nvidia and AMD only sell graphics cards on launch day or in the first 2-3 weeks. If that were the case, how does Nvidia show the same revenue quarter after quarter, roughly in the range of USD 1 billion? Something to ponder.


----------



## Ghoxt

Cough "Bulldozer"...

Seriously, I really hope not, and I'm rocking Titan X SLI. We really need competition here.

OT:

Oh, never mind, I have no credibility left after Nvidia shipped the 980 Ti three months later with 99% of the performance of the TX for nearly half the price...







Basically, I paid $500.00 extra per card for 6 GB of memory and one SMX.

To see how ridiculous that sounds: what would a GPU with one SMX and 6 GB of memory cost? Yeah, I know, $500...


----------



## p4inkill3r

Quote:


> Originally Posted by *Particle*
> 
> The statement "______ or else AMD is doomed/sunk/dead" occurs so often that it should classify as a meme.


"Things aren't proceeding at a pace that I prefer therefore...."


----------



## harney

One can only hope, re: AMD. I want them to succeed, as I do not want to be in a GPU market dominated by Nvidia, where the drip feed would be even slower...

http://247wallst.com/technology-3/2015/06/02/could-amd-go-to-zero/

So I wish the best for AMD with this new card, whatever they're calling it, but they must get the selling price just right... It's clear Nvidia rushed the 980 Ti out the door, looking at distribution supplies, so they know something we don't... And looking at all the reviews of the 980 Ti, most of them state to hang tight till AMD shows their card... so for all I know, hardware review sites may have the new card and are hinting as best as possible without saying much...

Good luck, AMD. Looks like you need it...


----------



## Woundingchaney

Quote:


> Originally Posted by *raghu78*
> 
> Anybody who has common sense will not get carried away by rumours and will wait for actual reviews. The people who are flaming over any negative information about AMD products are the usual culprits. Also, people seem to be talking as if Nvidia and AMD only sell graphics cards on launch day or in the first 2-3 weeks. If that were the case, how does Nvidia show the same revenue quarter after quarter, roughly in the range of USD 1 billion? Something to ponder.


Common sense is a rare commodity. Rumors can definitely hurt AMD at this point, particularly when there are so many prevalent positive reviews of Nvidia's product line. With no concrete information available, many consumers are simply going to make a purchase based on what is currently known, more so the less informed consumers, who represent the majority of the market.


----------



## Menta

What is AMD doing? They might as well hand over the market on a silver platter.


----------



## Kinaesthetic

Quote:


> Originally Posted by *raghu78*
> 
> Anybody who has common sense will not get carried away by rumours and will wait for actual reviews. The people who are flaming over any negative information about AMD products are the usual culprits. Also, people seem to be talking as if Nvidia and AMD only sell graphics cards on launch day or in the first 2-3 weeks. If that were the case, how does Nvidia show the same revenue quarter after quarter, roughly in the range of USD 1 billion? Something to ponder.


So, I hate to break it to you, but that means you also get carried away. Because when a rumor is going well for AMD, you lap it up, but when a rumor for AMD isn't going well, you move heaven and earth to say it isn't true. You are kind of like the reverse Alatar.

Then again, you probably cannot see the irony in your statements yourself.


----------



## Blameless

Quote:


> Originally Posted by *Waro*
> 
> Source
> 
> This is the english version of the website. The german version is pretty big and reputable, they wouldn't lie about what they've been told.
> 
> But still keep in mind:


No mention of the specific SKU being used, and the die size figure seems a bit low. Though if that ~520mm^2 is accurate, I certainly would not expect the part to perform better than a ~630mm^2 Maxwell.

Anyway, somewhat of a disappointment, if true... unless prices are low to match.
Quote:


> Originally Posted by *Superplush*
> 
> Who knows, twice the bandwidth might mean it comes to twice the speeds. I know that's a baseless over-estimation but we can take educated guesses but that's all we have atm, guesses.


Faster memory doesn't bring appreciably more performance unless you were memory bandwidth or latency limited in the first place.

Hawaii was not wanting for memory performance. Going from 1250MHz to 1500MHz (the next step up that maintains the similarly low latency of the default clocks) on my 290X adds essentially nothing to game performance.

To double GPU performance, you'd need to double ROP, TMU, and shader performance.
Quote:


> Originally Posted by *Alatar*
> 
> Doesn't change the fact that unless you're really BW bottlenecked you're still going to be dealing with a 45% bump in SPs. And extracting 65% more perf from 45% extra SPs is going to be tough.


The same comment about bottlenecks can apply to SPs as well. My 290X is usually more limited by fill rate than by shader performance. I would likely see a very significant increase in performance in many games if there were ~50 more ROPs and TMUs, even if the SPs were not touched.
Quote:


> Originally Posted by *Rickles*
> 
> My 980 Ti that arrives tomorrow is looking better and better...


A nonsensical statement. Your 980Ti will perform like a 980Ti regardless of the existence or performance of an AMD competitor, and you already bought your 980Ti.
Quote:


> Originally Posted by *Clocknut*
> 
> We don't know that, because there is no 512-bit Bonaire card to test it with.


Memory bottlenecks are easy to test for.

Reduce memory clock. If performance doesn't go down proportionally, no bottleneck.
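Blameless's procedure can be sketched with a toy model; the clocks and fps-per-MHz constants below are invented purely for illustration, not measurements from any real card:

```python
# Toy model of the bottleneck test described above: frame rate is
# capped by whichever subsystem, compute or memory bandwidth, runs
# out first (all numbers are hypothetical).
def fps(core_mhz, mem_mhz, fps_per_core_mhz=0.06, fps_per_mem_mhz=0.10):
    return min(core_mhz * fps_per_core_mhz, mem_mhz * fps_per_mem_mhz)

stock = fps(1000, 1250)        # compute-limited: min(60, 125) = 60 fps
downclocked = fps(1000, 1000)  # memory -20%: min(60, 100) = still 60 fps

# Performance did not drop proportionally (here it didn't drop at all),
# so this hypothetical card is not bandwidth-bound, and extra bandwidth
# (HBM included) would not help this workload.
print(stock, downclocked)  # -> 60.0 60.0
```

The same check run on a bandwidth-bound card would show frame rate falling roughly in step with the memory clock.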


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *raghu78*
> 
> Anybody who has common sense will not get carried away by rumours and will wait for actual reviews. The people who are flaming over any negative information about AMD products are the usual culprits. Also, people seem to be talking as if Nvidia and AMD only sell graphics cards on launch day or in the first 2-3 weeks. If that were the case, how does Nvidia show the same revenue quarter after quarter, roughly in the range of USD 1 billion? Something to ponder.


Yes, but sometimes "I WANT IT NOW, I CAN'T WAIT ANY LONGER" beats common sense (unfortunately). Consumers who purchased 980 Tis will receive them shortly and talk about how great they are, which can sway those looking to spend $700 on a new card, since they can get one right now; and if the rumors turn out to be false, it's not like they made a terrible mistake getting a 980 Ti, as I don't think there will be too drastic a difference between the price/performance of the two cards.


----------



## Ha-Nocri

AMD is closer to NV than ever before. Think about it: when AMD had the strongest GPU, NV would respond very quickly with something even stronger. It was usually NV that held the crown. If Fiji is faster than the TX, then for the first time AMD will have the fastest card on the market for a year or so. So even if Fiji is a bit slower (which I expect), it's not the end of the world. AMD is catching up.


----------



## zealord

It is quite funny. I actually expect the flagship Fury to be faster than the Titan X. Everything else would be a huge disappointment by now.

I mean, the card has never been officially delayed (due to never being officially announced, duh), but I think June 2015 is way too late. Like 8-9 months too late. We've had rumours for so long, and I am 99% sure AMD had the card ready for a long time, but little problems came up with HBM and such.

AMD shouldn't have gone with HBM and should have released this card with GDDR5 right around the time the 980 and 970 launched.

Whatever the card turns out to be, it probably can't live up to the expectations I've built in my mind. By now the card needs to be like 600$ with 8GB of HBM and 5% faster than the 980 Ti to be interesting for me. (People probably think these are very steep and unrealistic expectations, but truth be told, look at how long AMD takes to release a driver they long promised, and FreeSync isn't working quite like it should.)


----------



## HeadlessKnight

I pray and hope Fiji is not another bulldozer from AMD.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *zealord*
> 
> Whatever the card is going to be like it probably can't live up to the expectation I've made in my mind. By now the cards needs to be like 600$ with 8GB HBM and 5% faster than the 980 Ti to be interesting for me. (People probably think that this are very steep and unrealistic expecatations, but truth be told the time AMD takes to release a driver they long promised and FreeSync isn't working quite like it should )


Such is the problem with anything (graphics card, CPU, video game, or car) that is OVERLY hyped. It will never live up to the expectations that people have built up in their own minds. Even if it is good, odds are it won't be as good as what someone imagined, and thus it will be seen as a failure.

Heck, I can name at least 2 dozen video games over the past 2 years that are "good", but have crashed and burned because they were hyped all to hell and couldn't possibly live up to the hype.


----------



## rdr09

Quote:


> Originally Posted by *zealord*
> 
> It is quite funny. I actually expect the Flagship Fury to be faster than the Titan X. Everything else would be a hug disappointment by now.
> 
> I mean the card has never been officially delayed (due to never being officially announced duh), but I think June 2015 is way too late. Like 8-9 months to late. We've had rumours for so long and I am 99% sure AMD had the card quite ready for a long time but little problems came up with HBM and stuff.
> 
> AMD shouldn't have went with HBM and released this card with GDDR5 right around the time the 980 and 970 launched.
> 
> Whatever the card is going to be like it probably can't live up to the expectation I've made in my mind. By now the cards needs to be like 600$ with 8GB HBM and 5% faster than the 980 Ti to be interesting for me. (People probably think that this are very steep and unrealistic expecatations, but truth be told the time AMD takes to release a driver they long promised and FreeSync isn't working quite like it should )


Quit hoping. Get a 980 Ti, a Gsync monitor, and the driver that works.


----------



## sugalumps

Quote:


> Originally Posted by *raghu78*
> 
> its all speculation and rumours. Nobody knows anything about performance. People seem to believe that AMD cannot improve perf/sp at all. Thats my fundamental issue. If Nvidia can increase 35% perf/cc I am sure AMD has enough improvements to extract atleast 15-20%. Tonga improved all the supporting systems - tesselation, ROP and bandwidth. But AMD left the shader core untouched. My guess is AMD made those changes to facilitate perf and effiicency scaling when it makes improvements to the shader core. Anyway people can say what they want but it does not matter till launch day reviews are seen. btw you will get a die shot from a site like techpowerup. They always show the board layout and die when they review the card.
> 
> 
> 
> 
> 
> 
> 
> 
> exactly we have seen AMD hype it up and fail with products like bulldozer. but this time none of that. they talked about HBM a month before product launch and thats because they are the inventors of HBM. Definitely credit goes to AMD for designing the next gen graphics memory. As for the product no drama except for few teasers starting from late May. Again marketing a month before product launch is very normal.


You kidding? They put a massive digital hype ad up in Times Square, and said "hunting titans" or something along those lines.


----------



## Leopard2lx

the pasta sauce guy must be having a heart attack now...


----------



## Mel0ns

According to http://www.sweclockers.com/nyhet/20607-amd-radeon-fiji-positioneras-mot-geforce-gtx-980-ti Fiji is no longer supposed to compete against the TX, but the Ti instead, at a 600$ price point. I have a feeling performance is a bit lacking, but for me this is a good move. More cards around 600-650$ rather than 850-1000$ is always welcome.


----------



## criminal

I hope the rumor is not true, but if it is I will still buy it if it is $599 or less. They can't afford to price it higher (new tech or not) if it is slower than the 980Ti.

I am trying to give AMD money this round... they just need to release something tangible on the cards already.
Quote:


> Originally Posted by *Mel0ns*
> 
> According to http://www.sweclockers.com/nyhet/20607-amd-radeon-fiji-positioneras-mot-geforce-gtx-980-ti Fiji is no longer suppose to compete against TX, but TI instead, at a 600$ price point.


I'd take one at that price with that level of performance. I don't need more than 4GB of VRAM for the foreseeable future either.


----------



## Kommanche

Quote:


> Originally Posted by *sugalumps*
> 
> You kidding? They put a massive digital hype add up in times, and said "hunting titans" or something to those lines.


That was a fanmade mock-up on WCCF. Not actual AMD marketing


----------



## Ha-Nocri

Quote:


> Originally Posted by *criminal*
> 
> I hope the rumor is not true, but if it is I will still buy it if it is $599 or less. They can't afford to price it higher (new tech or not) if it is slower than the 980Ti.
> 
> I am trying to give AMD money this round... they just need to release something tangible on the cards already.


And it will be a WC'ed card, so temps should be really low from day 1. No need to wait for non-reference cards.


----------



## raghu78

Quote:


> Originally Posted by *sugalumps*
> 
> You kidding? They put a massive digital hype add up in times, and said "hunting titans" or something to those lines.


So yeah, the ad was AMD - Processing the Revolution. What's wrong with an ad? Don't Nvidia and Intel spend on ads too?

http://arstechnica.co.uk/gadgets/2015/05/amd-takes-next-gen-gpu-battle-to-giant-half-acre-billboard-in-times-square/

The hunting Titans bit was some image found on rumour articles.


----------



## sugalumps

Quote:


> Originally Posted by *Kommanche*
> 
> That was a fanmade mock-up on WCCF. Not actual AMD marketing


Oh but it looks so official









What about?


Quote:


> Originally Posted by *raghu78*
> 
> So yeah the ad was AMD - Processing the Revolution. Whats wrong with an ad ? Don't Nvidia and Intel spend on ads.
> 
> http://arstechnica.co.uk/gadgets/2015/05/amd-takes-next-gen-gpu-battle-to-giant-half-acre-billboard-in-times-square/
> 
> The hunting Titans bit was some image found on rumour articles.


So one of the biggest ad spaces calling their card a revolution is not hype?

Also, you have been the chief of hype for this card for months now, every single thread saying how Nvidia are going to be in major trouble, especially in the long term. You have a lot to answer for if this does not live up to the hype.









I mean, if it does not even live up to the 980 Ti, are you going to do a golden tiger on us and disappear?!


----------



## Menta

Maxwell actually lived up to the hype; not saying it was groundbreaking in all aspects.


----------



## epic1337

Which tier is the Fury X? The $849 chip or the $599 chip? The latter would sound awesome considering the 980 Ti is marked $799, but the former is ridiculous. Regardless of HBM or whatever, it's still 4GB vs 6GB; even the $749 tier still sounds ridiculous.

HBM supposedly gave AMD a big advantage over GDDR5, and they sacrificed capacity for it.
Quote:


> Originally Posted by *DarkLiberator*
> 
> Also the same guy on chiphell posted this:
> 
> R9 380 4gb 199
> R9 390 8gb 299
> R9 390x 8gb 399
> Fiji pro 4gb hbm 599
> Fiji xt 4gb hbm 749
> Fiji xt /xtx 8gb hbm 849
> Fiji vr 2x8gb hbm 1399-1499
> 
> the leaker might be a distributor and not an internal worker


----------



## Alatar

Quote:


> Originally Posted by *Mel0ns*
> 
> According to http://www.sweclockers.com/nyhet/20607-amd-radeon-fiji-positioneras-mot-geforce-gtx-980-ti Fiji is no longer suppose to compete against TX, but TI instead, at a 600$ price point. I have a feeling performance is a bit lacking, but for me this is a good move. More cards around 600-650$ rather than 850-1000$ is always welcome.


Sweclockers has usually been a trustworthy site and almost all of their "our sources say X" articles about GPUs have been correct.


----------



## zealord

Quote:


> Originally Posted by *epic1337*
> 
> which tier is Fury X? the $849 chip or the $599 chip? *the latter would sound awesome considering 980Ti is marked $799.*
> but the former is ridiculous, regardless of HBM or whatever, its still 4GB vs 6GB, even the $749 tier still sounds ridiculous.
> 
> HBM supposedly gave AMD a big advantage over GDDR5, and they sacrificed capacity for it.


?







?

980 Ti is 649$

Quote:


> Originally Posted by *rdr09*
> 
> Quit hoping. Get a 980 Ti, a Gsync monitor, and the driver that works.


I got my 290X for 200€, and that is quite amazing performance/€, so it is really hard for me to consider spending 1500€ on just a GPU and a monitor to get 35% more fps in games, and actually no more fps if I jump up to a 1440p G-Sync screen. I still think Nvidia cards are too expensive, no matter how bad AMD currently is. I can wait. I don't have to upgrade. I love fancy tech, but it is non-essential for my survival. The 980 Ti is nice and I can understand that people buy it, but the games that come out don't get better or better optimized. I feel like high-end PC users have been neglected by the gaming companies. Medium and ultra textures look nearly identical. No AA support in games. No real tessellation. Witcher 3 on PS4 looks like PC ultra. Tomb Raider Definitive Edition on consoles actually looks better than PC ultra (yeah, I just found out yesterday).

Also, can we really expect the 849$ Fury to be slower than the 650$ 980 Ti? Why would anyone pay 200$ more for less and go with AMD? Truth be told (I am currently an AMD user), if two identical GPUs (one from AMD and one from Nvidia) cost the same, then 90% of people are more likely to go with Nvidia. It is just the way it works. Kind of like US people only buying EVGA cards. I haven't figured out yet why they do that. It is unreal, like a religion or something. It freaks me out a bit to be honest, but they have good support, or so I've heard.

Either way, AMD can't win right now if the Fury is slower than the 980 Ti. If it is, then they would have to release a super expensive HBM card that sits below the 980 Ti, no matter how fancy HBM sounds.


----------



## StereoPixel

This "news" is based on rumors posted by *[email protected]*, a member of the 3Dcenter forum:
http://www.forum-3dcenter.org/vbulletin/showthread.php?p=10649436#post10649436 (German)

But you should also read another member's reply: http://www.forum-3dcenter.org/vbulletin/showthread.php?p=10649856#post10649856


----------



## DerkaDerka

Quote:


> Originally Posted by *Prophet4NO1*
> 
> Well, i was hoping for something good, but rumors have been pretty solid the last few GPU launches. So, things are not looking so good.


Yeah, the rumors have been solid, kind of like the one that said the 980 Ti was going to be $800? Oh wait.

Not buying into these rumors; I need actual benchmarks first. Hoping AMD releases something at least half decent to hold me over until the second round of HBM cards.

They really do need to release some information on this thing, or risk their already minuscule market share shrinking even more.


----------



## kingduqc

If it's true, AMD might become the zombie we know from the CPU world: dead but still walking. I hope it's all made up. Honestly, I'm getting worried; they are taking their sweet time and it's not in their favor.


----------



## zealord

wait I haven't even noticed this. 4GB is actually confirmed now lol









Oh boy this card is spelling disaster


----------



## 47 Knucklehead

Quote:


> Originally Posted by *zealord*
> 
> wait I haven't even noticed this. 4GB is actually confirmed now lol
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Oh boy this card is spelling disaster


AMD hasn't confirmed ANYTHING. Price, speed, memory, release date, water block... hell, they've barely confirmed the name.

Multiple rumors from multiple reliable sources ... yes.


----------



## Prophet4NO1

Quote:


> Originally Posted by *DerkaDerka*
> 
> Yeah the rumors have been solid, kind of like that one that said the 980TI was going to be $800? Oh wait.
> 
> Not buying into these rumors, need actual benchmarks first. Hoping AMD releases something at least half decent to hold me over until the second round of HBM cards.
> 
> They really do need to release some information on this thing or risk their already miniscule market share shrink even more.


Pricing has been all over the place, but rough specs have been pretty much what we end up getting. And with so many negative rumors piling up from varied sources, it is not looking good.


----------



## Creator

I'm not sure how this can be slower. If it has 45% more cores, a 10% higher base clock, and even some optimizations to the architecture, shouldn't it be ~60% faster than the 290X?
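The arithmetic behind that ~60% figure is a simple compound estimate. This sketch spells it out; the 10% clock figure is the poster's assumption (actual Fiji clocks were unknown at the time), and the result is an ideal upper bound that assumes perfect scaling with no ROP, geometry, or bandwidth bottleneck.

```python
# Naive upper-bound scaling estimate: more shaders and a clock bump
# compound multiplicatively, *if* nothing else becomes the bottleneck.
core_scale = 1.45    # e.g. 4096 SPs vs the 290X's 2816 (rumored)
clock_scale = 1.10   # assumed ~10% base clock increase (hypothetical)

ideal_speedup = core_scale * clock_scale - 1.0
print(f"ideal speedup over 290X: ~{ideal_speedup * 100:.1f}%")
```

Real-world scaling is almost always worse than this ideal, which is exactly why estimates in the thread land closer to 40-50% once bottlenecks are factored in.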


----------



## zealord

Quote:


> Originally Posted by *47 Knucklehead*
> 
> AMD hasn't confirmed ANYTHING.
> 
> Multiple rumors from multiple reliable sources ... yes.


Yeah, OK, you are right; "confirmed" was a bad choice of word. It is not _officially confirmed_, but I am currently reading information from sources that have fumbled around with the card at Computex, and they all say 4GB.









Also, is the 980 Ti now 649$ or more? In Europe it is 740€+ currently.


----------



## rdr09

Quote:


> Originally Posted by *zealord*
> 
> ?
> 
> 
> 
> 
> 
> 
> 
> ?
> 
> 980 Ti is 649$
> I got my 290X for 200€ and that is quite an amazing performance/€ so it is really hard for me to consider spending 1500€ on just a GPU and a monitor to get 35% more fps in games and actually not more fps if I jump up the resolution to a 1440p g-sync screen. I still think Nvidia cards are too expensive no matter how bad AMD currently is. I can wait. I don't have to upgrade. I love fancy tech but it is non-essential for my survival. The 980 Ti is nice and I can understand that people buy it, but the games that come out don't get better or are better optimized. I feel like the high-end PC users have been neglected by the gaming companies. Medium and ultra textures look nearly identical. No AA support in games. No real tesselation. Witcher 3 PS4 looks like PC ultra. Tomb Raider Definitive Edition on consoles look actually better than PC ultra (yeah I just found out yestereday).
> 
> Also can we really expect the 849$ Fury to be slower than the 650$ 980 Ti? Why would anyone pay 200$ more for less and go with AMD? Truth be told ( I am currently an AMD user ) and if 2 identical GPUs (one from AMD and one from NVIDIA) cost the same then 90% of the people are more likely to go with Nvidia. It is just the way it works. Kind of like US people only buying EVGA cards. I haven't figured out yet why they do that. It is unreal like a religion or something. It freaks me out a bit to be honest, but they have good support or so I've heard.
> 
> Either way AMD can't win right now if the Fury is slower than the 980 Ti. If it is then they have to release a super expensive HBM card below the 980 Ti no matter how fancy HBM sounds.


Skip the G-Sync monitor first. You are getting Titan X performance for almost half the price. Rumor or not, the 6GB at that price is . . .


----------



## hawker-gb

Quote:


> Originally Posted by *zealord*
> 
> wait I haven't even noticed this. 4GB is actually confirmed now lol
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Oh boy this card is spelling disaster


Did AMD confirm it?


----------



## decimator

Quote:


> Originally Posted by *Creator*
> 
> I'm not sure how this can be slower. If it has 45% more cores, 10% higher base clock, and even some optimizations to the architecture, it should be ~60% faster than the 290X?


The topic is about how it's slower than the GTX 980 Ti, not previous-gen AMD GPU's. The Fury X is basically guaranteed to wipe the floor with the 290X.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *DerkaDerka*
> 
> Yeah the rumors have been solid, kind of like that one that said the 980TI was going to be $800? Oh wait.
> 
> Not buying into these rumors, need actual benchmarks first. Hoping AMD releases something at least half decent to hold me over until the second round of HBM cards.
> 
> They really do need to release some information on this thing or risk their already miniscule market share shrink even more.


The specs on the 980 Ti have been pretty dang close to the rumors for a long time now, as have the rumors on the 390X. Price is harder to nail down because you can easily eat into your profit margin (or even take a loss), thus it is harder to predict.

Price can be changed overnight, hardware specs can't.

Remember when AMD came out with the $1500 R9 295X2? Is it still $1500? No. How about the FX-9590 for $1000? Nope. Both OVERNIGHT had their prices slashed by nearly 50%.


----------



## boot318

N/A

Didn't mean to post.


----------



## xxdarkreap3rxx

If the Fury X WCE is $600, then I can imagine the cut-down version, the Fury, will come in @ $450-500 (as it will be air cooled, which is even cheaper), and that would DESTROY the GTX 980.


----------



## thegreatsquare

Poor AMD. This and Intel kicking it in the IGP too?

...not a good place to be.


----------



## zealord

Quote:


> Originally Posted by *hawker-gb*
> 
> Did AMD confirm it?


No, sorry, my bad (no official AMD confirmation), but people who are at Computex right now and had first-hand experience with the card (Fury X) are saying that it is in fact 4GB.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *zealord*
> 
> yeah ok you are right. confirmed was a bad choice of word. It is not _officially confirmed_, but I am currently reading information from sources that have fumbled around with the card on computex and they all say 4GB
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also is the 980 Ti now 649$ or more? In europe it is 740€+ currently


Yes, the 980 Ti is $649, that is confirmed ... and people have ordered them and are having them delivered tomorrow.

Europe and Australia prices are ... well ... different. Such is what you get with a 17% VAT and radically different regulations and other government issues. People need to take that issue up with their respective governments.


----------



## harney

Quote:


> Originally Posted by *boot318*
> 
> N/A
> 
> Didn't mean to post.










me neither


----------



## zealord

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Yes, the 980Ti is $649, that is confirmed ... and people have ordered them and having them delivered tomorrow.
> 
> Europe and Australia prices are ... well ... different. Such is what you get with a 17% VAT on imports and radically different regulations and other government issue. People need to that that issue up with their respective governments.


Yeah, you are probably right, but normally € = $ for most stuff. I feared Nvidia made some last-minute adjustments to the price after they heard the 4GB Fury X is slower than the 980 Ti.


----------



## Alatar

Quote:


> Originally Posted by *zealord*
> 
> no sorry my bad (no official AMD confirmation), but people that are at computex right now and had first hand experience with the card (Fury X) are saying that it is infact 4GB


Are you referring to some site / person who is saying this publicly on some website?

If so then you wouldn't happen to have the links?


----------



## Jedi Mind Trick

Quote:


> Originally Posted by *Alatar*
> 
> I've been saying that before buying into the 60-70% faster than 290X hype people should first look at what kind of scaling that would require from Fiji's extra resources.
> 
> slightly slower than 980Ti would put the thing at ~40% faster than 290X. Slightly faster than TX would be 48% faster than 290X or so. Assuming a rough 1GHz clock I would personally put the thing in the 40% faster than 290X range.
> 
> But who knows.


Quote:


> Originally Posted by *decimator*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Creator*
> 
> I'm not sure how this can be slower. If it has 45% more cores, 10% higher base clock, and even some optimizations to the architecture, it should be ~60% faster than the 290X?
> 
> 
> 
> The topic is about how it's slower than the GTX 980 Ti, not previous-gen AMD GPU's. The Fury X is basically guaranteed to wipe the floor with the 290X.

The above quote by Alatar is what he is bringing up, if I am not mistaken. At 60% faster than a 290X it would be faster than a TX (which isn't that much faster than the 980 Ti, which is, going by Alatar's numbers, slightly more than 40% faster than the 290X).

I'm hoping for this, but I really don't care about this gen of GPUs; my 290s will last me a while at 1080p.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Price can be changed overnight, hardware specs can't.
> 
> Remember when AMD came out with the $1500 R9 295X2? Is it still $1500? No. How about the FX-9590 for $1000? Nope. Both OVERNIGHT had their prices slashed by nearly 50%.


To add, the 290X was in the $500+ range when the 900 series launched, and the 970 was competing with the 290X @ 1080p. All of a sudden, prices of the 290X dropped quickly afterwards:

http://camelcamelcamel.com/XFX-Dissipation-R9-290X-EDFD-512-Bit-CrossFireX/product/B00HHIPN5A?context=browse
http://camelcamelcamel.com/MSI-Computer-Corp-290X-GAMING/product/B00HPS4AHE?context=browse
http://camelcamelcamel.com/Sapphire-Version-PCI-Express-Graphics-11226-00-40G/product/B00HJOKARI?context=browse
http://camelcamelcamel.com/MSI-Graphics-R9-290X-LIGHTNING/product/B00IZNE2ZS?context=browse


----------



## zealord

Quote:


> Originally Posted by *Alatar*
> 
> Are you referring to some site / person who is saying this publicly on some website?
> 
> If so then you wouldn't happen to have the links?


source : http://www.hardwareluxx.de/index.php/news/hardware/grafikkarten/35572-amd-fiji-alias-fury-x-langsamer-als-geforce-gtx-980-ti.html
Quote:


> Der verbaute Speicher wurde uns gegenüber mit einer Kapazität von 4 GB bestätigt.


It translates roughly to: they (one of AMD's partners) confirmed to us that the installed memory has a capacity of 4GB.


----------



## maltamonk

So the rumor mill has it that an unfinished product is slower than a finished product. There's a shocker.


----------



## Majin SSJ Eric

If Fiji really is slower than the 980 Ti, then it would seem that Nvidia goofed selling the 980 Ti at its $650 price point. No need for them to have sold it for less than $800 if AMD's only competition is slower, especially since they've now all but made the Titan X irrelevant.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *zealord*
> 
> Yeah you are probably right, but normally € = $ for most stuff. I feared Nvidia made some last minute adjustments to the price after they have heard 4GB Fury X is slower than 980 Ti


I completely agree. If the rumors are true, and honestly, there are "industrial spies" everywhere in today's business world (and the term "spy" is really heavy-handed; all it takes is a little schmoozing and some loose lips), Nvidia knows 90% of what AMD is doing and AMD knows 90% of what Nvidia is doing.

Odds are they dropped the price per unit in an attempt to slaughter AMD once again in market share. Selling 800,000 units at $1000 each brings in the same revenue as selling 1,000,000 units at $800 each, but at the lower price you also deny your competition the chance to sell those extra 200,000 units. End result: you make the same money, but you hurt your competition more.

Business 101.
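The pricing argument above checks out in numbers. This sketch uses the post's own example figures (which are illustrative, not actual sales data):

```python
# Equal revenue at two price points, but the cheaper one moves 200,000
# more units that the competitor can no longer sell.
revenue_high = 800_000 * 1_000     # $800M at a $1000 price point
revenue_low = 1_000_000 * 800      # $800M at an $800 price point
units_denied = 1_000_000 - 800_000

print(revenue_high == revenue_low, units_denied)  # True 200000
```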


----------



## BigMack70

Second coming of Faildozer???


----------



## xxdarkreap3rxx

I hope this doesn't refer to TDP: "375 Watt"

http://www.hardwareluxx.de/index.php/news/hardware/grafikkarten/35572-amd-fiji-alias-fury-x-langsamer-als-geforce-gtx-980-ti.html


----------



## DADDYDC650

I'm guessing the Fury will be as fast as a 980 Ti and will come with 4GB of HBM and cost $599-$630. I'm HOPING for a card that's slightly faster than a Titan X and will feature 8GB via 2.5D stacking for around $850-$1000.


----------



## zealord

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> If Fiji really is slower than the 980Ti then it would seem that Nvidia really goofed selling the 980Ti at its $650 price point. No need for them have sold it for less than $800 if AmDs only competition is slower especially since they've now all but completely made the TX irrelevant.


That also depends on how many GM200 chips they have. The difference between 800$ and 650$ is night and day for a GPU, imho. I am pretty sure they make more profit overall going with 650$. Also, more high-end GPUs sold = more G-Sync modules sold.

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I hope this doesn't refer to TDP: "375 Watt"
> 
> http://www.hardwareluxx.de/index.php/news/hardware/grafikkarten/35572-amd-fiji-alias-fury-x-langsamer-als-geforce-gtx-980-ti.html


Nah, it doesn't. It just means that 2x 8-pin + PCIe slot = a theoretical maximum power draw of 375W per the specifications (not what the card actually needs).
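Where that 375W figure comes from is just the PCI Express connector budget, not a measured TDP:

```python
# PCI-SIG power budget: each 8-pin PEG connector is rated for 150W,
# and the PCIe slot itself supplies up to 75W.
EIGHT_PIN_W = 150
SLOT_W = 75

spec_limit = 2 * EIGHT_PIN_W + SLOT_W
print(spec_limit)  # 375
```

So "375W" describes the maximum the board's connector configuration may draw while staying in spec, which any 2x 8-pin card shares regardless of its actual consumption.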


----------



## Ha-Nocri

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> If Fiji really is slower than the 980Ti then it would seem that Nvidia really goofed selling the 980Ti at its $650 price point. No need for them have sold it for less than $800 if AmDs only competition is slower especially since they've now all but completely made the TX irrelevant.


NV doesn't want to allow AMD to earn more money, so they narrowed the gap between the 980's and 980 Ti's prices. If I were AMD, I would price Fiji to compete against the 980, so NV would be forced to lower the 980's price. That way we all win.


----------



## harney

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I hope this doesn't refer to TDP: "375 Watt"
> 
> http://www.hardwareluxx.de/index.php/news/hardware/grafikkarten/35572-amd-fiji-alias-fury-x-langsamer-als-geforce-gtx-980-ti.html


Yep, saw that one. Scary if true.


----------



## Alatar

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I hope this doesn't refer to TDP: "375 Watt"
> 
> http://www.hardwareluxx.de/index.php/news/hardware/grafikkarten/35572-amd-fiji-alias-fury-x-langsamer-als-geforce-gtx-980-ti.html


No it's just saying that with two 8-pin connectors the theoretical maximum that AMD could set while staying within spec is 375W.

Quite irrelevant info as far as I'm concerned. However I do think that we might see 300W+ TDP.


----------



## sugarhell

So.

The German version says this (rough translation):

Some statements about performance were made. The Radeon Fury X is said to be slower than the GeForce GTX 980 Ti. Currently, AMD is still trying to optimize for higher clock rates and is making adjustments to driver performance. Performance relative to power consumption is therefore likely to be a critical issue for "Fiji".

And the English version says this:

The partner did hint at the performance. Apparently the Radeon Fury X ought to be slower than the GeForce GTX 980 Ti. Currently, AMD is still trying to optimize higher clock rates and is making adjustments to the driver's performance. Performance related to power consumption is therefore likely to be a critical issue for "Fiji".


----------



## harney

http://wccftech.com/amd-radeon-r9-fury-fiji-xt-gpu-slower-gtx-980-ti/


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *sugarhell*
> 
> So.
> 
> The german version says this:
> 
> Some performance data are made. So the Radeon Fury X should be slower than the GeForce GTX 980 Ti. Currently, AMD is still trying to optimize on higher clock rates and adjustments to the driver's performance. Especially the power depending on the power consumption is therefore likely to be a critical issue for "Fiji".
> 
> And the english version says this :
> 
> The partner did hint at the performance. Apparently the Radeon Fury X ought to be slower than the GeForce GTX 980 Ti. Currently, AMD is still trying to optimize higher clock rates and is making adjustments to the driver's performance. Performance related to power consumption is therefore likely to be a critical issue for "Fiji".


Sounds similar to what they did with the 290. "AMD's Last Minute 290 Revision" Source: http://www.anandtech.com/show/7481/the-amd-radeon-r9-290-review/2


----------



## Rickles

Quote:


> Originally Posted by *47 Knucklehead*
> 
> AMD hasn't confirmed ANYTHING. Price, speed, memory, release date, water block, hell, they barely have confirmed the name.
> 
> Multiple rumors from multiple reliable sources ... yes.


Quote:


> Originally Posted by *hawker-gb*
> 
> Did AMD confirm it?


If it has 4 GB, you can bet they won't confirm it; that would drive 980 Ti sales like crazy.

If it really is a 4 GB card, they won't say anything until release, so they can have review sites show how it performs at 4K and other high resolutions.

I'd expect the Fury X to edge out the Titan X at 4K+ resolutions, or trade blows, but at 1080p or 1440p I think it will be behind the 980 Ti.


----------



## zealord

Quote:


> Originally Posted by *Rickles*
> 
> If it had 4 GB you can bet they wouldn't confirm it, it would drive 980 Ti sales like crazy.
> 
> If it really is a 4 GB card they won't say anything until release so they can have review sites show how it performs at 4k and other high resolutions.
> 
> I'd expect the Fury X to edge out Titan X in 4k+ resolutions, or trade blows, but at 1080p or 1440p I think it will be behind the 980 TI.


I think this sounds like a reasonable expectation. I hope we get frametime benchmarks too. They are very important, and previous tests have shown that while the average framerate is not impacted, frametimes can be worse on cards with 4GB compared to cards with 6GB.
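The point about averages hiding frametime problems can be shown with a toy example. All numbers here are made up for illustration: two frame-time traces with identical average frame rates but very different worst frames.

```python
# Why average FPS can hide VRAM-related stutter: same average, very
# different worst-case frametimes. Hypothetical traces, 100 frames each.
import statistics

smooth = [16.7] * 100              # steady ~60 FPS
stutter = [15.0] * 99 + [185.0]    # same average, but one 185 ms hitch

for name, frametimes in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000.0 / statistics.mean(frametimes)
    print(f"{name}: {avg_fps:.1f} avg FPS, worst frame {max(frametimes):.0f} ms")
```

Both traces report the same average FPS, but the second contains a hitch an order of magnitude longer than a normal frame, which is exactly what percentile frametime benchmarks are designed to catch.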


----------



## BinaryDemon

I bet Fiji is faster, but the 4GB of VRAM will make it a tough sell. It's more a psychological deterrent than a performance one, since the R9 295X2 still holds its own vs the 980 Ti and Titan very well. Still, no one wants to buy something that high-end with only 4GB.


----------



## Alatar

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Sounds similar to what they did with the 290. "AMD's Last Minute 290 Revision" Source: http://www.anandtech.com/show/7481/the-amd-radeon-r9-290-review/2


With the 290 you could raise performance just by giving it better cooling; you didn't even have to officially raise clocks.

This time AMD is going to have an AIO on the thing, so any change is likely going to come from an outright increase in clocks.

In the end, for people who overclock, it'd actually be preferable if AMD didn't increase the stock clocks and had to price the card lower because of it. Increasing the stock clocks doesn't really affect an overclocker's performance, but the price of the card will go up.
Quote:


> Originally Posted by *harney*
> 
> http://wccftech.com/amd-radeon-r9-fury-fiji-xt-gpu-slower-gtx-980-ti/


wccf comments sections are funny. It's like youtube comments but worse.


----------



## sugalumps

Quote:


> Originally Posted by *BinaryDemon*
> 
> I bet Fiji is faster, but the 4gb vram will make it a tough sell. More a psychological deterrent than a performance one since the R9 295x2 still holds it's own vs 980TI and Titan very well. *Still, no one wants to buy something that highend with only 4gb*.


Tell that to my now obsolete 980


----------



## Mand12

Quote:


> Originally Posted by *sugalumps*
> 
> Tell that to my now obsolete 980


It's been nine months. You can't reasonably expect one card to maintain supremacy for that long. I'm in the same boat you are, I'm running 980s. They'll last me through Pascal nicely.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *sugalumps*
> 
> Tell that to my now obsolete 980


Obsolete is a pretty strong word. I'm sticking with my 980 (I've been planning to pick up a second one for cheap once the 980 Ti and 390X come out) and will ride them out until 2016 when Pascal drops.

The 980 is hardly an obsolete card, by any stretch of the imagination. Sure, it isn't "king of the hill", but then again, we all knew it wouldn't stay that way once the 980 Ti and 390X arrived; Titans were always better. The 980 is just as fast as it was when we bought it (actually, after the latest driver, mine is faster in the games I play).

I dare say the 980 is STILL faster than what 90% or more of the gamers out there have. Hell, there are plenty of gamers still rocking old 660's and 7770's ... or older.


----------



## De_stroyer

Or we could all just wait till it launches?


----------



## sugalumps

Quote:


> Originally Posted by *Mand12*
> 
> It's been nine months. You can't reasonably expect one card to maintain supremacy for that long. I'm in the same boat you are, I'm running 980s. They'll last me through Pascal nicely.


That is true, but I'm more peeved at the "there will be no Ti variant this gen" line ... OK, I'll buy a 980. "Oh, by the way, get hyped, a Ti is coming soon." Not only is it better, it's unexpectedly even better in price/performance.

I'll hold onto this fossil now till both teams have their full lineup out next gen; I'm not jumping on the first "high end" card that comes out this time, only for it to be the mid-tier of that gen.

The 980 is still an amazing, up-to-date card. I'm just messing about.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *sugalumps*
> 
> That is true but I am more peeved at the "there will be no ti variant this gen"......


Ok, I'll bite. Where/when did nVidia say that? I'd really be interested in a link to that.


----------



## ZealotKi11er

4 GB, slower than the GTX 980 Ti, $550 price incoming.


----------



## BinaryDemon

Quote:


> Originally Posted by *sugalumps*
> 
> Tell that to my now obsolete 980


At least you have 4 GB! I only got 3.5 GB.









I'm hoping that DX12 magic will bring my pooled VRAM to 7 GB, although I'll probably have new GPUs before that happens.


----------



## MadRabbit

Quote:


> Originally Posted by *ZealotKi11er*
> 
> 4GB, Slower then GTX980 Ti, $550 price incoming.


Again, this hasn't been confirmed by anyone.

Just some "I heard it from Lisa Su's cat" rumors.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *MadRabbit*
> 
> Again, this hasn't been confirmed by anyone.
> 
> Just some "I heard it from Lisa Su's cat" rumors.


Speaking of Lisa Su ... seems she was spot on when she spread the rumor about Microsoft releasing Windows 10 on July 29th.

But hey, I guess rumors are only true when AMD spreads them, right?


----------



## MadRabbit

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Speaking of Lisa Su ... seems she was spot on when she spread the rumor about Microsoft releasing Windows 10 on July 29th.
> 
> But hey, I guess rumors are only true when AMD spreads them, right?


That wasn't her cat saying that tho...









Also, so I should believe someone who "knows someone who knows someone"? Yeah, I'll skip.


----------



## rt123

Quote:


> Originally Posted by *Alatar*
> 
> With the 290 you could raise the perf by giving it better cooling. Didn't even have to officially raise clocks.
> 
> This time AMD is going to have an AIO on the thing. So any change is likely going to come from an outright increase in clocks.
> 
> In the end for people who overclock it'd actually be preferable if AMD didn't increase the stock clocks and had to price the card lower because of it. *Increasing the stock clocks doesn't really affect an overclocker's performance but the price of the card will go up.*
> wccf comments sections are funny. It's like youtube comments but worse.


Lol what...?
How does that make sense.


----------



## Stealth Pyros

Sigh... these guys just have no drive anymore.


----------



## decimator

Quote:


> Originally Posted by *rt123*
> 
> Lol what...?


He's talking about the speeds at which the GPUs are shipped -- factory overclocks. It's true: most overclockers here will be able to blow past factory OC frequencies on reference cards without breaking a sweat.


----------



## Cyro999

Quote:


> Originally Posted by *Alatar*
> 
> I've been saying that before buying into the 60-70% faster than 290X hype people should first look at what kind of scaling that would require from Fiji's extra resources.
> 
> slightly slower than 980Ti would put the thing at ~40% faster than 290X. Slightly faster than TX would be 48% faster than 290X or so. Assuming a rough 1GHz clock I would personally put the thing in the 40% faster than 290X range.
> 
> But who knows.


Titan X only ~40-45% faster than the 290/290X? That sounds too low, unless you're looking at GM200 at 1200-1300 MHz instead of cranked with a 350 W+ power limit.
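The percentage chains being argued over can be sanity-checked with some quick arithmetic. A minimal sketch; all index values below are illustrative assumptions, not benchmark data:

```python
# Rough sanity check of the "X% faster than the 290X" chains discussed
# above. All index values are illustrative assumptions, not measurements.
r9_290x = 1.00        # baseline
gtx_980_ti = 1.45     # assume ~45% faster than a 290X at stock
titan_x = 1.48        # assume ~48% faster than a 290X at stock

# "Slightly slower than a 980 Ti" (say 3% behind) would then imply:
fiji = 0.97 * gtx_980_ti
gain_over_290x = (fiji / r9_290x - 1) * 100
print(f"Implied Fiji gain over the 290X: ~{gain_over_290x:.1f}%")

# "Slightly faster than a Titan X" would instead put it near:
print(f"Titan X itself sits at ~{(titan_x - 1) * 100:.0f}% over the 290X")
```

With those assumed indices, "a bit behind the 980 Ti" lands Fiji at roughly 40% over the 290X, which is where the estimate in the quote comes from; change the assumed 980 Ti or Titan X index and the implied number moves with it.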


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *rt123*
> 
> Lol what...?


A higher-clocked reference card does better in comparisons against other cards, so they can price it higher. If you're overclocking anyway, that's just more money you end up spending for no added benefit.

Say a card can overclock to 1400 MHz core on air and the base clock is 900 MHz. Now they raise the base clock to 1100 MHz to get more performance out of the card and price it higher. It can still only OC to 1400 MHz, though. It's like EVGA with their regular cards and "superclocked" cards: you're just paying more cash for something you can do in software (unless the GPUs are binned higher).


----------



## ZealotKi11er

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Higher clocked reference card = does better in comparison to other cards so they can price it higher. If you're OCing anyway, it's just more money that you end up spending for no added benefit.
> 
> Say a card can overclock to 1400 MHz core on air and the base clock is 900 MHz. Now they raise the base clock to 1100 MHz to get more performance out of the card and price it higher. It can still only OC to 1400 though. It's like EVGA with their regular cards and "superclocked" cards. You're just paying more cash for something you can do in software (unless the GPUs are binned higher).


But then the card will not sell as well. They should clock it as high as possible and price it as high as possible; overclockers are a small percentage of their sales. Stock performance is what shapes people's mindset.


----------



## scotthulbs

I wonder how much the AIO cooler adds to the cost of this card? The EVGA 980 Hybrid is $100 more than stock.

I also wonder whether there will be a cheaper non-AIO version, or if the AIO is basically essential, as it would have been on the 290X.

I just hope it comes out ahead of the competition; we need to keep NV honest with some serious competition, something we haven't seen since the 7970.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *ZealotKi11er*
> 
> But then the card will not sell as well. They should clock it as high as possible and price it as high as possible. Overclockers are a small % of their sales. Stock performance = Mindset of the people.


Then you get people complaining about noise because it's generating so much heat that needs to be dissipated. There needs to be a balance of noise, performance, and price.


----------



## rt123

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Higher clocked reference card = does better in comparison to other cards so they can price it higher. If you're OCing anyway, it's just more money that you end up spending for no added benefit.
> 
> Say a card can overclock to 1400 MHz core on air and the base clock is 900 MHz. Now they raise the base clock to 1100 MHz to get more performance out of the card and price it higher. It can still only OC to 1400 though. It's like EVGA with their regular cards and "superclocked" cards. You're just paying more cash for something you can do in software (unless the GPUs are binned higher).


Ahh.
It performs better out of the gate so you can price it higher. Now it makes sense.

I was confused there for a second.


----------



## BinaryDemon

Quote:


> Originally Posted by *Cyro999*
> 
> T-X only ~40-45% faster than 290/290x? That sounds too low unless you're looking at gm200 at 1200-1300mhz instead of cranked with a 350w+ power limit


That sounds about right; the R9 295X2 still scores a lot of wins vs a Titan X, so a Titan X definitely isn't 50% faster (stock clocks assumed on both).


----------



## SkateZilla

Let's judge performance on a card running without a BIOS.

So basically everything is running at a failsafe clock, which is likely nowhere near the default 3D clocks.

GDDR5 is dead, let it go. A 4096-bit interface, lower voltage, a smaller card; 4 GB doesn't bother me. I run 3/5-screen setups from my crossfired Lightning 7970s.


----------



## SkateZilla

Quote:


> Originally Posted by *scotthulbs*
> 
> I wonder how much the AIO cooler adds to the cost of this card? The EVGA 980 Hybrid is $100 more than stock.
> 
> Also wonder if there will be a cheaper non AIO cooled version or if the AIO is basically essential as it would have been on the 290X?
> 
> I just hope it comes out ahead of the competition, we need to keep NV honest with some serious competition something we haven't seen since the 7970.


$30-50; it's likely an OEM Asetek AIO.


----------



## joeh4384

There are probably too many rumors about Fiji to know how it is going to perform, other than that there will be an AIO version and that it will have HBM. It is really hard to predict.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *SkateZilla*
> 
> GDDR5 is dead, let it go, 4096-Bit interface less mV, smaller card, 4GB doesnt bother me, I run 3/5 screen setups from my crossfired Lightning 7970s.


Yeah, GDDR5 is dead because you say so.









Hate to break the news to you, but even the vast majority of AMD's own 300 series line is still GDDR5 based, not to mention all of nVidia's. That doesn't sound "dead" to me ... especially given the limitations of HBM Gen 1 and ESPECIALLY its cost and limited availability right now.

Will GDDR5 eventually die? Sure, say by mid-to-late 2016, but no, GDDR5 is NOT dead now, not by ANY stretch of the imagination.


----------



## hawker-gb

Well, GDDR5 is not dead, but it's very nearly dead.


----------



## zipper17

Slower than the 980 Ti, but they could just release it at a good value price, forcing Nvidia to lower the 980 Ti's price. That's good news for 980 Ti buyers.


----------



## Alatar

Nvidia (Jensen) poking AMD a bit:
Quote:


> The GeForce GTX 980 Ti uses the same Maxwell GPU, known internally as the GM200, as the company's $1,000 Titan X. That means it delivers high performance, but is *"quiet and cool" with no need for liquid cooling, Huang joked.*


Quote:


> Huang said that for now HBM is too expensive and in limited availability,


http://www.zdnet.com/article/computex-2015-nvidia-talks-geforce-gtx-980-ti-android-gaming-and-self-driving-cars/


----------



## 47 Knucklehead

Quote:


> Originally Posted by *hawker-gb*
> 
> Well,GDDR is not dead but very near dead.


Odds are by mid to late 2016 you will see both AMD using HBM on their high end cards, but GDDR5 will still most likely be used for their lower end (sub $250 cards). I fully expect HBM to totally kill off GDDR5 by say 2017-2018, but not really before.


----------



## harney

Quote:


> Originally Posted by *Alatar*
> 
> Nvidia (Jensen) poking AMD a bit:
> 
> http://www.zdnet.com/article/computex-2015-nvidia-talks-geforce-gtx-980-ti-android-gaming-and-self-driving-cars/


Love this part ... Huang said that there are some 100-200 million GeForce gamers worldwide, many of them using the older GeForce GTX 680 and GTX 780/780 Ti and due for upgrades.

Love the fact that he says the 780 Ti is due for an upgrade ... ouch.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *harney*
> 
> Love this part ....Huang said that there are some 100-200 million GeForce gamers worldwide-many of them using the older GeForce GTX 680 and GTX 780/780 Ti and due for upgrades.
> 
> Love the fact he states 780 ti due for an upgrade ...ouch


Why not? Look around OCN: you have people lining up to sell off their 970's and 980's (not to mention their 4GB and 8GB R9 290X's) for a 980 Ti.

He's right.


----------



## SoloCamo

Quote:


> Originally Posted by *Alatar*
> 
> Nvidia (Jensen) poking AMD a bit:
> Quote:
> 
> 
> 
> The GeForce GTX 980 Ti uses the same Maxwell GPU, known internally as the GM200, as the company's $1,000 Titan X. That means it delivers high performance, but is *"quiet and cool" with no need for liquid cooling, Huang joked.*
> 
> 
> 
> Quote:
> 
> 
> 
> Huang said that for now HBM is too expensive and in limited availability,
> 
> 
> http://www.zdnet.com/article/computex-2015-nvidia-talks-geforce-gtx-980-ti-android-gaming-and-self-driving-cars/

Except the Titan X and 980 Ti aren't exactly quiet and cool... sure, they're not 290X levels of loud or hot, but in reality they're not exactly far behind on those reused reference coolers.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *SoloCamo*
> 
> Except the titan-x and 980ti aren't exactly quiet and cool... Sure, they are not 290x levels loud or hot but in reality they are not exactly far behind on those reused reference coolers


He isn't poking fun at the Titan or 980Ti ... he is poking fun at the loud R9 290X and the R9 295X2 water cooled and especially the upcoming R9 390X water cooled.


----------



## Ha-Nocri

So having a WC'ed card from day 1 is somehow worse now?! Nice.









I would applaud NV if they had done the same with the 480, but no, you could fry eggs on that card.


----------



## decimator

Quote:


> Originally Posted by *harney*
> 
> Love this part ....Huang said that there are some 100-200 million GeForce gamers worldwide-many of them using the older GeForce GTX 680 and GTX 780/780 Ti and due for upgrades.
> 
> Love the fact he states 780 ti due for an upgrade ...ouch


Yeah, um, no, Jen-Hsun...









Just speaking for myself, of course, but these SLI GTX 780 Ti Classy's will last me until Pascal at the very least. I only game at 1080p, and with Windows 10 and DX12 with VRAM stacking coming soon, I should be able to squeeze some more life out of them. In any case, I need to upgrade my CPU setup before I even think about upgrading my GPU setup...


----------



## GorillaSceptre

No BIOS, can only be switched on but no image..









Am I missing something? How the hell do they know it's slower?


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Ha-Nocri*
> 
> So having WC'ed card from day 1 is somehow worse now?! Nice


Having a water-cooled card from day 1 isn't worse ... having it be REQUIRED from day one because of heat and noise IS.

Come on, people, you know what he is saying; stop being intentionally obtuse.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *GorillaSceptre*
> 
> No BIOS, can only be switched on but no image..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Am i missing something? How the hell do they know if it's slower?


Because that wasn't their ONLY source. Not to mention, just because a source gives you a card and allows you to take off the heat sink doesn't mean that's the ONLY card the source has.


----------



## Exilon

There's a difference between having a water cooling stock setup due to necessity or as a luxury add on.


----------



## SkateZilla

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Yeah, GDDR5 is dead because you say so.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Hate to break the news to you, but even the vast majority of AMD's own 300 series line is still GDDR5 based, not to mention all of nVidia's. That doesn't sound "dead" to me ... especially given the facts of the limitations of HBM GEN1 and ESPECIALLY the costs and limited availability of it right now.
> 
> Will GDDR5 eventually die? Sure, say by mid to end 2016, but no, GDDR5 is NOT dead now, not by ANY stretch of the imagination.


Of course, because they are all rebrands...


----------



## Clockster

lol I'm not gonna say anything...

Just wait and see


----------



## Ha-Nocri

Quote:


> Originally Posted by *Exilon*
> 
> There's a difference between having a water cooling stock setup due to necessity or as a luxury add on.


I would always have one.


----------



## SoloCamo

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Quote:
> 
> 
> 
> Originally Posted by *SoloCamo*
> 
> Except the titan-x and 980ti aren't exactly quiet and cool... Sure, they are not 290x levels loud or hot but in reality they are not exactly far behind on those reused reference coolers
> 
> 
> 
> He isn't poking fun at the Titan or 980Ti ... he is poking fun at the loud R9 290X and the R9 295X2 water cooled and especially the upcoming R9 390X water cooled.

Exactly my point... all I'm saying is he's calling the 980 Ti/Titan X "quiet and cool" when we all know that's hardly the case on those reused reference coolers.

Anyway, I'm likely sitting out this generation, as I've said (I need to keep telling myself to, or I'll impulse buy). If AMD released a 4 GB card even beating the Titan X by 25% (which won't happen), I still wouldn't buy it, simply due to the VRAM limit.

If I do for whatever reason get a card from this just-released gen, it will likely be a used 980 Ti so I can stick to my roughly $500 budget.

Edit: If I hadn't gone 4K, I wouldn't even be contemplating upgrading yet... 4K is a gift and a curse at this point, because once you see the clarity difference, 1080p looks horrible, and at the same time, to even play most new games at 4K you pretty much need the fastest single GPUs, at minimum, for even medium-high settings. I was really hoping the 980 Ti/Titan X/390X class cards would handle it better than they do, but I'd probably be wisest to wait for second-gen HBM... at least for those of us on a 780 Ti/290X/Titan/980 or higher.


----------



## Imburnal

I'll believe it when I see it, until then.....


----------



## GorillaSceptre

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Because that wasn't their ONLY SOURCE. Not to mention just because a source give you a card and allows you to take off the heat sink doesn't mean that is the ONLY card that the source has.


You want to post that source then? It's a clickbait article with no substance whatsoever; shoddy journalism, if you ask me.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *SoloCamo*
> 
> Exactly my point... all I'm saying is he is calling the 980ti/Titan X 'quiet and cool' when we all know that's hardly the case on those reused reference coolers.


Several reviews put the 980 Ti with its stock cooler at the EXACT same noise level as the Titan X, 980, 970, Titan Black, 780 Ti, 590, and 650 Boost ... and all are quieter than AMD's higher-end cards. And it runs a full 9C cooler than the R9 290X.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *GorillaSceptre*
> 
> You want to post that source then? It's a clickbait article with no substance whatsoever, shoddy journalism if you ask me.


Why don't you click on the link, create an account, and ask the people who their source is.


----------



## GorillaSceptre

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Why don't you click on the link, create an account, and ask the people who their source is.


You said "it wasn't their only source". That implies you already know.

So i'm asking.


----------



## Ghoxt

My question is how much GPU driver overhead is present in the HBM setup, both to tear a 3D object down to be put into memory across 1024 lanes and to put it back together. I get the bus speed itself, but the drivers are where I have questions. This is all new, of course ... hence my question.


----------



## Lex Luger

AMD wasn't able to match Kepler in performance or efficiency last generation. That's why they (the Kepler cards) were priced much higher.

I don't know why people would expect their newest cards to be able to match Maxwell, which only made the efficiency gap even larger.

Nvidia's GPU research department is much better funded than AMD's. Don't expect miracles and you won't be disappointed. As long as the price is right (under $500), they will still sell just fine.


----------



## Ha-Nocri

Quote:


> Originally Posted by *Ghoxt*
> 
> My question is how much GPU driver overhead is present in the HBM GPU setup to both tear down a 3D object to be put into memory by 1024 lanes, and the overhead to put it back together. I get the bus speed itself. But the drivers is where I question. This is all new of course... hence my question.


Maybe there is long forgotten data that is just taking up space and nothing more...


----------



## keikei

So the rumored $850 price is thrown out the window based on this rumor.







Assuming this is true, I would expect pricing to adjust accordingly. Whoever mentioned $550 may have hit the nail on the head.


----------



## Ha-Nocri

Quote:


> Originally Posted by *Lex Luger*
> 
> AMD wasnt able to match Kepler in performance or efficiency last generation. Thats why they (Kepler cards) were priced much higher.
> 
> I dont know why people would expect their newest cards to be able to match Maxwell which only made the gap in efficiency even larger.
> 
> Nvidia GPU research dept is much better funded than AMD's. Don't expect miracles and you won't be disappointed. At long as the price is right (under 500) they will still sell just fine.


+1

Exactly


----------



## criznit

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Why don't you click on the link, create an account, and ask the people who their source is.


Their "source" is a guy who knows a guy who knows a guy who has a friend who just bought a 980 Ti and came down with buyer's remorse when he realized the 390X will dominate it.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *keikei*
> 
> So the rumored $850 price is thrown out the window based on this rumor.
> 
> 
> 
> 
> 
> 
> 
> Assuming this is true, i would expect pricing to adjust accordingly. Whoever mentioned $550 may have hit the nail on the head.


Doubtful.

You aren't going to recoup the R&D costs of a brand new and revolutionary memory system at $550.


----------



## OostBlokBoys

Quote:


> Originally Posted by *Clockster*
> 
> lol I'm not gonna say anything...
> 
> Just wait and see


I've seen you post this like 20 times in different threads already, you make me more excited than AMD does lol.


----------



## Clockster

Quote:


> Originally Posted by *OostBlokBoys*
> 
> I've seen you post this like 20 times in different threads already, you make me more excited than AMD does lol.












Nearly there, not long to wait.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Clockster*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Nearly there, not long to wait.


9 hours and 40 minutes.


----------



## criznit

Quote:


> Originally Posted by *Lex Luger*
> 
> AMD wasnt able to match Kepler in performance or efficiency last generation. Thats why they (Kepler cards) were priced much higher.
> 
> I dont know why people would expect their newest cards to be able to match Maxwell which only made the gap in efficiency even larger.
> 
> Nvidia GPU research dept is much better funded than AMD's. Don't expect miracles and you won't be disappointed. At long as the price is right (under 500) they will still sell just fine.


If you fast forward a year or so, you will see the 290x matching or beating the 780 ti in games (when the driver is updated)

http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_980_ti_review,15.html

http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_980_ti_review,13.html

I picked these games since they were just released, but the Hawaii architecture has aged pretty well.


----------



## darealist

So this explains why AMD didn't even bother leaking performance numbers like Nvidia did. HBM only matters when Nvidia uses it.


----------



## Ultracarpet

Watch them announce the Fury and have it be slightly below expectations: 4 GB, slower than the 980 Ti... then be like, "oh, oops, that's just the cut-down version... here is the xxx69xxx420xxx Fury X..." *mic drop*. Then all the kids salivate and take their parents' credit cards.


----------



## rt123

Quote:


> Originally Posted by *darealist*
> 
> So this explains why AMD didn't even bother leaking the performance like nvidia did. HBM only matters when nvidia uses it.


And if AMD didn't invent it with Hynix, you would never use it.


----------



## Clockster

Quote:


> Originally Posted by *47 Knucklehead*
> 
> 9 hours and 40 minutes.


A lot of people are gonna be surprised at the performance. The price, on the other hand... Mmmm, lol.


----------



## magnek

OK, I know we all hate WCCF, so I won't link anything, but I thought I'd pass along their update...
Quote:


> Update: Our sources close to AMD have reported that any reports of performance issues at this time are pointless. *We have concrete confirmation that the GPU will launch at E3* - so we have something to look forward to very soon and will get a chance to see its final performance as well.


----------



## dboythagr8

Even if it's slower... I'm glad AMD is doing something. Competition is good for us; there's a reason NVIDIA released the 980 Ti only 3 months after the Titan X, and at that price. We need these companies to push each other. Sadly, I feel AMD's timing has wasted too many opportunities to really stick it to NVIDIA with this card.


----------



## Rickles

Quote:


> Originally Posted by *scotthulbs*
> 
> I wonder how much the AIO cooler adds to the cost of this card? The EVGA 980 Hybrid is $100 more than stock.
> 
> Also wonder if there will be a cheaper non AIO cooled version or if the AIO is basically essential as it would have been on the 290X?
> 
> I just hope it comes out ahead of the competition, we need to keep NV honest with some serious competition something we haven't seen since the 7970.


If you look at something like the NZXT G10 bracket plus an AIO, that puts you at around $100; the Accelero Hybrids used to run $150, and a DIY bracket/mod will run you $50ish depending on the cooler you go with.

Personally, I think the looks of the EVGA Hybrid are really nice, but as far as options go, you're better off getting a blower-style cooler and purchasing the AIO/shroud separately from EVGA, imo. (It will cost like $9 more, though.)


----------



## Blameless

Quote:


> Originally Posted by *zealord*
> 
> people that are at computex right now and had first hand experience with the card (Fury X) are saying that it is infact 4GB


Proves a 4GiB SKU, but doesn't rule out an 8GiB one.
Quote:


> Originally Posted by *SoloCamo*
> 
> Except the titan-x and 980ti aren't exactly quiet and cool... Sure, they are not 290x levels loud or hot but in reality they are not exactly far behind on those reused reference coolers


If you prevent a reference 290X from throttling, you get an extremely loud card. Far louder than a reference Titan X/980Ti ever gets.

I hit 65% fan speed in games (and past 40% the card is the loudest thing in my decidedly unsilent system), and 100% fan speed is needed to keep the card below the 94C forced throttle point in stress tests like FurMark or OCCT...which is blow dryer loud.


----------



## Pantsu

Yeah, it's very unlikely we'll see anything about Fiji at Computex. Most likely all they'll talk about is Carrizo.


----------



## criminal

Quote:


> Originally Posted by *Blameless*
> 
> Proves a 4GiB SKU, but doesn't rule out an 8GiB one.
> If you prevent a reference 290X from throttling, you get an extremely loud card. Far louder than a reference Titan X/980Ti ever gets.
> 
> I hit 65% fan speed in games (and past 40% the card is the loudest thing in my decidedly unsilent system), and 100% fan speed is needed to keep the card below the 94C forced throttle point in stress tests like FurMark or OCCT...which is blow dryer loud.


I couldn't stand the noise from my 980, so I strapped an AIO cooler to it. I don't know how you can even stand your situation!

And yes, even Maxwell benefits from having an AIO cooler. I am glad AMD gives that as an option from the start. Seems like a plus to me.


----------



## Clockster

I've had so many people pm and ask me already









Official reveal is E3, I'm not sure if they are planning on showing anything at computex.


----------



## criminal

Quote:


> Originally Posted by *Clockster*
> 
> I've had so many people pm and ask me already
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Official reveal is E3, I'm not sure if they are planning on showing anything at computex.


Yeah, I understand. I was just pulling your leg!


----------



## BigMack70

I did not own 290X reference cards, but if they were louder than the Titan X... wow. Keeping my Titan X SLI setup from throttling horribly when overclocked required a fan profile that at times would push the fans above 80%, and I could hear them two rooms away in my house. They were at least as loud as, if not louder than, my overclocked 7970 Lightning crossfire setup, which was also unbearably loud.

IMO, for overclocking, some form of custom cooling is required for the Titan X, not optional. I suspect the custom coolers on the 980 Ti will do a great job keeping temps/noise in check, but the reference cooler is just not good enough for a card sucking down 250-280 W of power.

I think it's great if all the Fiji SKUs come with a 120mm water cooler attached. Nearly everyone can fit a 120mm rad in their case; heck, lots of ITX cases are even starting to incorporate that compatibility.


----------



## Forceman

Quote:


> Originally Posted by *Clockster*
> 
> a lot of people are gonna be surprised at the performance. The price on the other hand...Mmmm lol


Meh, anything they announce/show today is just going to be marketing numbers and slides, and we all know how those can be "tweaked". Real reviews will be needed, especially for the 4GB version.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Clockster*
> 
> I've had so many people pm and ask me already
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Official reveal is E3, I'm not sure if they are planning on showing anything at computex.


Two-plus weeks of unanswered GTX 980 Ti sales ... stick a fork in the 390X series.

I mean seriously, fewer and fewer people are willing to wait for it. Only the reddest of team red will blindly wait for the 390X without any official word.

The silence is deafening.


----------



## maarten12100

I'm just going to disregard this for now, and in two weeks laugh at the people who agreed with such claims. Or maybe I'll apologize to those who turned out to be right.
Slower than a Titan X sounds like a joke considering it uses a good 300W.


----------



## SuprUsrStan

Quote:


> Originally Posted by *maarten12100*
> 
> Slower than Titan X sounds like a joke considering it uses a good 300W.


Joke's gonna be on you in two weeks. Quoted for proof.


----------



## Clockster

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Two plus weeks of unanswered GTX 980Ti sales ... stick a fork in the 390X series.
> 
> I mean seriously, fewer and fewer people are willing to wait for it all the time. Only the reddest of team red will blindly wait for the 390X without any official word.
> 
> Silence is deafening.


I agree with that; Nvidia were very smart to release the 980 Ti in this window.

AMD are planning on focusing on the gaming market which is why they chose E3 for the reveal.
I dare say more gamers check out E3, so maybe that was a smart move but the silence is annoying lol


----------



## SoloCamo

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Clockster*
> 
> I've had so many people pm and ask me already
> 
> Official reveal is E3, I'm not sure if they are planning on showing anything at computex.
> 
> 
> 
> Two plus weeks of unanswered GTX 980Ti sales ... stick a fork in the 390X series.
> 
> I mean seriously, fewer and fewer people are willing to wait for it all the time. Only the reddest of team red will blindly wait for the 390X without any official word.
> 
> Silence is deafening.

Two weeks isn't a long wait to at least have both options on the table. Especially since most people buying these already have more than capable GPUs. But yes, for those on the fence who have a need-it-now desire or the upgrade itch, it's really a no-brainer. I just don't understand AMD at times... it's like they intentionally shoot themselves in the foot at every chance.

Why wait for E3? The crowd at E3, minus a select few, *do not care* about PC hardware, only the games themselves. Why is it so hard to at least confirm the name and the specs? Do a freaking paper launch at least so people know what is truly under the hood.

I'm led to believe the price is too high to compete at this point, and it only makes sense because Nvidia could easily sell even the 980 Ti at $500 if they wanted; launching it at $650 is telling. Nvidia have simply made the better decision in this game of GPU chess. It almost feels like the king is the only piece left and Nvidia can just chase it around the board all day, knowing that if it wanted to it could end it whenever it wanted.


----------



## hawker-gb

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Two plus weeks of unanswered GTX 980Ti sales ... stick a fork in the 390X series.
> 
> I mean seriously, fewer and fewer people are willing to wait for it all the time. Only the reddest of team red will blindly wait for the 390X without any official word.
> 
> Silence is deafening.


They will announce something today, at least a hint to dampen the rumours.

Personally, I will wait, and I am not the reddest among the red.


----------



## Master__Shake

Quote:


> Originally Posted by *Exilon*
> 
> There's a difference between having a water cooling stock setup due to necessity or as a luxury add on.


I would rather have peak performance at all times and no throttling when the temps go up.


----------



## harney

Quote:


> Originally Posted by *SoloCamo*
> 
> Two weeks isn't a long wait to at least have both options on the table. Especially since most people buying these already have more than capable GPUs. But yes, for those on the fence who have a need-it-now desire or the upgrade itch, it's really a no-brainer. I just don't understand AMD at times... it's like they intentionally shoot themselves in the foot at every chance.
> 
> Why wait for E3? The crowd at E3, minus a select few, *do not care* about PC hardware, only the games themselves. Why is it so hard to at least confirm the name and the specs? Do a freaking paper launch at least so people know what is truly under the hood.
> 
> I'm led to believe the price is too high to compete at this point, and it only makes sense because Nvidia could easily sell even the 980 Ti at $500 if they wanted; launching it at $650 is telling. Nvidia have simply made the better decision in this game of GPU chess. It almost feels like the king is the only piece left and Nvidia can just chase it around the board all day, knowing that if it wanted to it could end it whenever it wanted.


+1

I am about to fall off this damn fence I am on and just get the 980 Ti...

AMD are being far too quiet.


----------



## erocker

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Two plus weeks of unanswered GTX 980Ti sales ... stick a fork in the 390X series.
> 
> I mean seriously, fewer and fewer people are willing to wait for it all the time. Only the reddest of team red will blindly wait for the 390X without any official word.
> 
> Silence is deafening.


Because EVERYBODY is scrambling to buy a new GPU RIGHT NOW!!!

I'll never understand some folks' thinking.


----------



## GMcDougal

This isn't great news ahead of the launch, but it's not a total loss if the price is VERY competitive. If they get anywhere close to 980 Ti prices with lower performance, this is going to end badly for AMD.


----------



## darealist

Now it makes sense why AMD wants E3 so badly. They want to wow those misinformed consolers with Fiji. Computex is too tech-savvy for that.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Two plus weeks of unanswered GTX 980Ti sales ... stick a fork in the 390X series.
> 
> I mean seriously, fewer and fewer people are willing to wait for it all the time. Only the reddest of team red will blindly wait for the 390X without any official word.
> 
> Silence is deafening.


Oh I don't know, all the smug TX and 980Ti owners will probably start to really wish they waited the two weeks if Fiji starts relegating all of their precious benching numbers down to SECOND place!


----------



## BigMack70

Quote:


> Originally Posted by *GMcDougal*
> 
> This isn't great news ahead of the launch, but *this is not a total loss* if the price is VERY competitive. If they get anywhere close to 980 Ti prices with lower performance, this is going to end badly for AMD.


I kind of disagree. If AMD cannot compete with Nvidia's best while incorporating a brand-new memory technology (HBM), I believe that puts the nail in the coffin for any hope that AMD will ever compete for the performance crown again. And that is a total loss in every way for consumers.

My expectations are still that Fiji XT will be Titan X + 10% performance; no idea what to expect on pricing and availability, though.


----------



## Clockster

Quote:


> Originally Posted by *darealist*
> 
> Now it makes sense why AMD wants E3 so badly. They want to wow those misinformed consolers with Fiji. Computex is too tech-savvy for that.


Why do you bother posting if this is your attitude? Fanboyism is bad for the industry.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Oh I don't know, all the smug TX and 980Ti owners will probably start to really wish they waited the two weeks if Fiji starts relegating all of their precious benching numbers down to SECOND place!


Fury's memory excels at higher resolutions.
4K is where it's at









Edit: And now I'm keeping quiet lol


----------



## HiTechPixel

Quote:


> Originally Posted by *Clockster*
> 
> Fury's memory excels at higher resolutions.
> 4K is where it's at


HBM won't excel anywhere but resolutions lower than 3840x2160 due to the rumoured 4GB constraint.


----------



## criminal

Yeah, I really don't get the extreme fanboyish posts. We want both GPU vendors to do well so prices stay lower and the performance increases between generations get bigger.


----------



## Wishmaker

inb4 another bulldozer moment


----------



## keikei

Quote:


> Originally Posted by *darealist*
> 
> Now it makes sense why AMD wants E3 so badly. They want to wow those misinformed consolers with Fiji. Computex is too tech-savvy for that.


They want to reveal Fiji at the biggest gaming stage.

Quote:


> Originally Posted by *Clockster*
> 
> I've had so many people pm and ask me already
> 
> 
> Official reveal is E3, I'm not sure if they are planning on showing anything at computex.


Can I be your best friend?!


----------



## Anateus

Quote:


> Originally Posted by *HiTechPixel*
> 
> HBM won't excel anywhere but resolutions lower than 3840x2160 due to the rumoured 4GB constraint.


You know nothing about HBM performance. Neither does anyone in this thread.


----------



## Blameless

Quote:


> Originally Posted by *BigMack70*
> 
> If AMD cannot compete with Nvidia's best when they are incorporating a brand new memory technology (HBM), I believe that puts the nail in the coffin for any hopes AMD will ever compete for the performance crown again.


And what makes you believe this?

Early adoption of a new technology does not mean it is needed, or leveraged, right away, and both AMD/ATI and NVIDIA have had generations where they failed to retake a performance crown lost to the other, even while introducing substantial new technologies.
Quote:


> Originally Posted by *Anateus*
> 
> You know nothing about HBM performance. Neither does anyone in this thread.


We know significantly more about HBM than we do about Fiji/Fury.


----------



## Wishmaker

Quote:


> Originally Posted by *Anateus*
> 
> You know nothing about HBM performance. Neither does anyone in this thread.


Sounds like we are all Jon Snow and know nothing!


----------



## HiTechPixel

Quote:


> Originally Posted by *Anateus*
> 
> You know nothing about HBM performance. Neither does anyone in this thread.


I know enough about game engines to know that 4GB won't cut it at 3840x2160 and above. No matter how fast the memory is, 4GB is 4GB, and that is a bottleneck at high resolutions even if you don't have any anti-aliasing on.


----------



## raghu78

Quote:


> Originally Posted by *SoloCamo*
> 
> Two weeks isn't a long wait to atleast have both options on the table. Especially most people buying these already have more than capable gpu's. But yes, for those on the fence and have a need it now desire or the upgrade itch it's really a no brainier. I just don't understand AMD at times... it's like they intentionally shoot themselves in the foot at every chance.
> 
> Why wait for E3? The crowd at E3 minus a select few *do not care* about PC hardware, but the games themselves. Why is it so hard to atleast confirm the name and the specs. Do a freaking paper launch at least so people at least know what is truly under the hood.
> 
> I'm led to believe the price is too high to compete at this point and it only makes sense because Nvidia could easily sell even the 980ti at $500 if they wanted and launching it at $650 is telling. Nvidia have simply made the better decision in this game of gpu chess. It almost feels like the king is the only piece left and nvidia can just chase it around the board all day, knowing if it wanted to it could end it whenver it wanted.


Do people think a GPU sells only on launch day and in the first 2-3 weeks? This battle against Nvidia is not for 2-3 weeks but for the next 2 years. AMD's next-gen R9 3xx has to be a very successful launch for them to regain market share, so executing a near-perfect launch is important. As always, the software side of AMD will be the one that leaves long-lasting impressions. AMD needs competitive performance and consistent frametimes in both single and multi-GPU setups (especially in Gameworks titles, which have the ability to distort the competitive situation), working CrossFire FreeSync, etc.

People need to understand that 16/14nm FinFET GPUs are going to take until well into 2017 to become the higher-volume node (vs. 28nm GPUs), so what AMD launches now will drive their complete GPU volume for roughly 15 months and the vast majority of their GPU sales even after that, until mid-2017. Getting the launch right is more important than caving in to rampant misinformation and clickbait trash.


----------



## rt123

Quote:


> Originally Posted by *Wishmaker*
> 
> inb4 another bulldozer moment


Keep dreaming, because that is practically impossible.









Bulldozer was such a big fail because not only was it significantly slower than its competition, but its single-threaded performance was lower than their own previous-gen chip.

Here, Fiji will be faster than Hawaii just based on the extra cores it has, not to mention HBM. The worst-case scenario is that it's 5-10% slower than the 980 Ti. This is far from the Bulldozer moment you Green Goblins have been wishing for.

As I said, keep on dreaming.


----------



## Wishmaker

Quote:


> Originally Posted by *raghu78*
> 
> Do people think that a GPU is sold only on launch date to the first 2-3 weeks. This battle against Nvidia is not for 2-3 weeks but for the next 2 years. AMD's next gen R9 3xx has to be a very successful launch for them to regain market share and so executing a near perfect launch is important. As always the software aspect of AMD will be the one which could leave long lasting impressions. AMD needs competitive performance and consistent frametimes both in single and multi GPU (especially in Gameworks titles which have the ability to distort the competitive situation) , working CF Freesync etc.
> 
> People need to understand that *16/14nm FINFET GPUs are going to take well into 2017* to become the higher volume node (wrt 28nm GPUs) so what AMD launches now will drive the complete GPU volume for roughly 15 months and the vast majority of their GPU sales even after that till mid 2017. Getting their launch right is more important than caving into rampant misinformation and click bait trash.


Meanwhile, Intel showboats 14nm iGPUs because they can.


----------



## Anateus

Quote:


> Originally Posted by *Wishmaker*
> 
> Sounds like we are all Jon Snow and know nothing!


Come on. You guys MIGHT know something about its architecture or specifications, but nobody here has tested it in-game.


----------



## Blameless

AMD and NVIDIA should have Intel build their GPUs.


----------



## darealist

Quote:


> Originally Posted by *Blameless*
> 
> AMD and NVIDIA should have Intel build their GPUs.


AMD should sell ATI to Intel. #1 GPU company in a year.


----------



## Ha-Nocri

Quote:


> Originally Posted by *darealist*
> 
> AMD should sell ATI to Intel. #1 GPU company in a year.


Using a 14nm process alone would make them #1.


----------



## epic1337

Quote:


> Originally Posted by *HiTechPixel*
> 
> HBM won't excel anywhere but resolutions lower than 3840x2160 due to the rumoured 4GB constraint.


Actually, not quite: if you hold off on 4xMSAA or above, VRAM usage stays under 4GB.
Although yes, at the rate of game development, 4GB won't hold out in the long run,
for the most part the card will suffice for now.

On the other hand, HBM wouldn't contribute much to performance, so it's rather questionable to take 4GB of HBM over 6GB of 384-bit or 8GB of 512-bit GDDR5.
As for the price difference, I had expected the $/GB of HBM to be drastically higher than GDDR5, so that too is a questionable choice.
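For a rough sense of the MSAA claim, here's a back-of-envelope sketch of render-target memory at 4K. The byte sizes and the four-target G-buffer layout are assumptions for illustration; real engines vary widely and also keep textures, geometry, and shadow maps in VRAM, so treat these as lower bounds:

```python
# Rough estimate of render-target memory at a given resolution and MSAA level.
# Assumes 4 bytes/pixel colour, 4 bytes/pixel depth/stencil, and a
# hypothetical deferred-style G-buffer of 4 extra targets.
def render_target_mb(width, height, msaa=1, gbuffer_targets=4, bytes_per_pixel=4):
    samples = width * height * msaa
    color_and_depth = samples * bytes_per_pixel * 2          # colour + depth/stencil
    gbuffer = samples * bytes_per_pixel * gbuffer_targets    # extra render targets
    return (color_and_depth + gbuffer) / (1024 ** 2)

# 3840x2160 without and with 4x MSAA
print(round(render_target_mb(3840, 2160, msaa=1)))  # ~190 MB
print(round(render_target_mb(3840, 2160, msaa=4)))  # ~759 MB
```

The point being argued: the MSAA multiplier, not the base resolution, is what eats hundreds of extra megabytes, which is why skipping 4xMSAA can keep a 4K workload under 4GB.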


----------



## schmotty

Quote:


> Originally Posted by *darealist*
> 
> AMD should sell ATI to Intel. #1 GPU company in a year.


Yes, and then AMD goes bankrupt and Intel's profit margins exceed Apple's... because they can.


----------



## raghu78

Quote:


> Originally Posted by *Wishmaker*
> 
> Meanwhile, Intel showboats 14nm iGPUs because they can.


But we all know even the mighty Intel struggled with 14nm yield issues last year and spent six months selling 82 mm² chips at 5W TDP. Only now are we seeing broad availability of 14nm Core i5/Core i7 desktop and notebook SKUs. Even now I don't expect Intel's 14nm CPU volume to overtake their 22nm (Haswell) volume until Skylake starts to ship. But when Skylake ramps along with 14nm Broadwell Xeons in Q1 2016, 14nm will finally become their primary CPU volume node. As for 16/14nm discrete GPUs, consider yourself lucky if AMD or Nvidia launch a flagship 16/14nm GPU (250-300 mm²) with HBM2 by late Q3 2016. Even then demand will vastly outstrip supply, and there will be major price gouging from retailers like Newegg and Amazon.

As for 16/14nm GPU availability at all price points, it's going to take at least until mid-2017.


----------



## Kuivamaa

Intel? Do you really wanna pay $400 for a GTX 960 or R9 285? Because with Intel margins the price would be something like that.


----------



## maarten12100

Quote:


> Originally Posted by *Syan48306*
> 
> Joke's gonna be on you in two weeks. Quoted for proof.


Game on!

Twilight of the gods. Historically, the cards they released forced a response from the competitor, but Nvidia has no bigger chip than the Titan X, and neither side can go bigger on 28nm. Whoever wins holds the crown for a year.
Quote:


> Originally Posted by *HiTechPixel*
> 
> HBM won't excel anywhere but resolutions lower than 3840x2160 due to the rumoured 4GB constraint.


The keyword here is rumoured



Quote:


> Originally Posted by *darealist*
> 
> AMD should sell ATI to Intel. #1 GPU company in a year.


Some people just want to watch the world burn. Intel would stomp Nvidia, and then we'd have Intel in a monopoly position on two fronts.

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Using a 14nm process alone would make them #1.


Yeah, because Intel's 14nm was a huge success, right?
It's gonna get better, but at this point it's the node you really, really don't want. It doesn't even have higher density, and the gating is horrible. Maybe in a year from now, but at this point, no thanks.


----------



## hawker-gb

Quote:


> Originally Posted by *Kuivamaa*
> 
> Intel? Do you really wanna pay $400 for a GTX 960 or R9 285? Because with Intel margins the price would be something like that.


Yes, they want that. There are a lot of stupid people.


----------



## fatmario

My guess is AMD is not trying to beat Titan X performance; instead they are aiming for GTX 980 performance, maybe slightly better, with a decent price tag of $400-$600. AMD will probably release another GPU in a year to compete with Titan X performance.


----------



## h2spartan

Quote:


> Originally Posted by *fatmario*
> 
> My guess is AMD is not trying to beat Titan X performance; instead they are aiming for GTX 980 performance, maybe slightly better, with a decent price tag of $400-$600. AMD will probably release another GPU in a year to compete with Titan X performance.


...and Nvidia has them right where they want them. By then, Nvidia will probably have their next line up waiting to go when AMD makes their move. It seems Nvidia is always one step ahead as of late.


----------



## criminal

Quote:


> Originally Posted by *fatmario*
> 
> My guess is AMD is not trying to beat Titan X performance; instead they are aiming for GTX 980 performance, maybe slightly better, with a decent price tag of $400-$600. AMD will probably release another GPU in a year to compete with Titan X performance.


980Ti you mean? The 290x already competes with the 980.


----------



## rt123

Quote:


> Originally Posted by *h2spartan*
> 
> ...and Nvidia has them right where they want them. By then, Nvidia will probably have their next line up waiting to go when AMD makes their move. It seems Nvidia is always one step ahead as of late.


Nvidia doesn't have anything else on 28nm.

Next from Nvidia is going to be Pascal 14/16nm. Which will come in 2016. They are done this year.


----------



## spacin9

I knew it. If they had something spectacular, they would have released it. With 4GB of VRAM... I say $499. I wouldn't pay a penny more. Actually, I wouldn't pay $499 for it if it's slower than a Ti. Time for AMD to stop thinking "beat Nvidia" and start thinking "price-performance king", like they've always been anyway.


----------



## Olivon

Quote:


> Originally Posted by *rt123*
> 
> Nvidia doesn't have anything else on 28nm.
> 
> Next from Nvidia is going to be Pascal 14/16nm. Which will come in 2016. They are done this year.


I won't be surprised if nVidia brings another cut-down SKU each for GM204 and GM200 to the table this year.


----------



## rt123

Quote:


> Originally Posted by *Olivon*
> 
> I won't be surprised if nVidia brings another cut-down SKU each for GM204 and GM200 to the table this year.


Cut-down? Yes.
But more than the GM200 Titan X? No.

Nvidia can't push performance above the Titan X this year. So if AMD loses, they are done; if they win, Nvidia is done. There isn't gonna be a moment like last year, when Nvidia stole the 290X's thunder by launching the 780 Ti.


----------



## fatmario

Quote:


> Originally Posted by *h2spartan*
> 
> ...and Nvidia has them right where they want them. By then, Nvidia will probably have their next line up waiting to go when AMD makes their move. It seems Nvidia is always one step ahead as of late.


Well, you may be right or wrong on that, but just look at AMD's 7950/7970 graphics cards: the people who bought them at release are still getting nice performance boosts to this day.
AMD has improved performance a lot since the 7950/7970 release through driver updates, and those cards still compete with Nvidia's mid-range GPUs. Even if Nvidia releases another SKU lineup on 28nm, I am sure AMD's 300 series will see the same performance gains over time from driver updates, just like the 7000 series did.


----------



## Ganf

Meh... wouldn't be the first time AMD disappointed, but we'll just have to wait and see. It is odd, though, that a rep would say their halo card is slower than the competition's $650 cut-down die, without even being badgered or having any working cards to show.


----------



## michaelius

Quote:


> Originally Posted by *rt123*
> 
> Cut-down? Yes.
> But more than the GM200 Titan X? No.
> 
> Nvidia can't push performance above the Titan X this year. So if AMD loses, they are done; if they win, Nvidia is done. There isn't gonna be a moment like last year, when Nvidia stole the 290X's thunder by launching the 780 Ti.


Zotac has already announced a 980 Ti with 20% increased clock speeds, so there's plenty Nvidia can do.


----------



## szeged

Don't want to read through 24 pages. Is the thread just pointless arguing between AMD fans and Nvidia fans, or did we actually get any civil discussion going about the truthfulness of the article?


----------



## Anateus

Quote:


> Originally Posted by *szeged*
> 
> Don't want to read through 24 pages. Is the thread just pointless arguing between AMD fans and Nvidia fans, or did we actually get any civil discussion going about the truthfulness of the article?


Step away sir, nothing to see here


----------



## h2spartan

Quote:


> Originally Posted by *rt123*
> 
> Nvidia doesn't have anything else on 28nm.
> 
> Next from Nvidia is going to be Pascal 14/16nm. Which will come in 2016. They are done this year.


Yeah, I didn't mean Nvidia releasing anything else this year. I was responding to the other poster's hypothetical that AMD would release a Titan X competitor in a year or so (obviously pure speculation), by which point Pascal would be ready to stomp it. It just feels like AMD has been lagging behind a lot lately.


----------



## rt123

Quote:


> Originally Posted by *michaelius*
> 
> Zotac has already announced a 980 Ti with 20% increased clock speeds, so there's plenty Nvidia can do.


And AMD's AIB partners will release a higher-clocked custom card too. Anything else?

Releasing an OCed card isn't a move; a bigger die (i.e., more GPU cores) is. The only thing Nvidia has left in the tank is to allow custom Titan Xs.
A Titan X Lightning would be interesting.


----------



## ozlay

If you look at the performance of the 285 vs. the 280/280X, you can see that AMD did fix some memory issues, so if they integrate some of the 285's features into their new GPUs, we will see better memory usage and 4GB may be enough. They will just have trouble selling it to customers who look at the VRAM size thinking more is better, which isn't always true.

Quote:


> Originally Posted by *michaelius*
> 
> Zotac has already announced a 980 Ti with 20% increased clock speeds, so there's plenty Nvidia can do.


You mean that triple-slot card that actually looks like it might be quad-slot? Put a massive cooler on any card and you're looking at a 15-20% overclock.


----------



## SuprUsrStan

Quote:


> Originally Posted by *Blameless*
> 
> AMD and NVIDIA should have Intel build their GPUs.


If Intel ever buys Nvidia, AMD would just pack up and close up shop.


----------



## rt123

Quote:


> Originally Posted by *h2spartan*
> 
> Yeah, I didn't mean Nvidia releasing anything this year again. I was responding to the other posters hypothetical situation that AMD would be releasing a Titan X competitor in a year or so (obviously pure speculation) which by then pascal would be ready to stomp it. Just feels like AMD has been lagging behind a lot lately.


Agreed. I thought you meant this year.

I don't think AMD can do much more on 28nm either. The only possible thing would be an 8GB HBM card later on, if this one comes with 4GB.
They catch the Titan X now or they don't catch it at all, because if Fury doesn't cut it, AMD will need the next node to catch up, and Nvidia is almost guaranteed to get there first. So they will just raise the bar.


----------



## EniGma1987

Quote:


> Originally Posted by *rdr09*
> 
> Quit hoping. Get a 980 Ti, a Gsync monitor, and the driver that works.


For all the people who hate on AMD, it is sad that the same situation DEFINITELY applies to Nvidia when you want to use certain features: you need a driver that even works properly with that feature. A couple of releases ago G-Sync was kinda broken, Shadowplay often gets broken in certain situations on driver releases, and 3D also requires specific drivers. Then there is the Chrome issue... Oh, and the downclock issue, and the black screen issue, and once every other year Nvidia releases a "brick driver" that causes 10% of cards to overheat and die to drum up more business.


----------



## BinaryDemon

Quote:


> Originally Posted by *criminal*
> 
> 980Ti you mean? The 290x already competes with the 980.


Not very well, but the improved 390x will probably do much better.


----------



## Olivon

Quote:


> Originally Posted by *rt123*
> 
> Cut-down? Yes.
> But more than the GM200 Titan X? No.
> 
> Nvidia can't push performance above the Titan X this year. So if AMD loses, they are done; if they win, Nvidia is done. There isn't gonna be a moment like last year, when Nvidia stole the 290X's thunder by launching the 780 Ti.


nVidia has an easy 10% remaining in GM200 performance; the Titan X is quite conservative on clocks.
Drivers can improve it a little bit, too.

Regarding Fiji,
Quote:


> In any event, in light of our discussions, it seems clear that the positioning map "best price / performance ratio" is not excluded at the moment. AMD tries to keep the secret as much as possible at this level and is currently working hard on recent optimizations in drivers.


*Hardware.fr (translated)*


----------



## Vesku

Quote:


> Originally Posted by *47 Knucklehead*
> 
> The spec's on the 980Ti have been pretty dang close to the rumors for a long time now, as have the rumors on the 390X. Price is harder to nail down because you can easily eat into your profit margin (or even take a loss), thus it is harder to predict.
> 
> Price can be changed overnight, hardware specs can't.
> 
> Remember when AMD came out with the $1500 R9 295X2? Is it still $1500? No. How about the FX-9590 for $1000? Nope. Both OVERNIGHT had their prices slashed by nearly 50%.


Biggest recent example of that was Titan Z, slashed in half and basically buried.


----------



## Cyro999

Quote:


> Shadowplay often times gets broken in certain situations on driver releases


Didn't you notice the huge "beta" tag that's still on it?

What? it was released 2 years ago? oh nvidia


----------



## Menta

When will people learn that AMD failing isn't a good thing? There is room for both, and for even more, companies. This is why prices are so inflated, and still there are folks so used to it that they say it's cheap and fair.


----------



## Ganf

Quote:


> Originally Posted by *szeged*
> 
> Don't want to read through 24 pages. Is the thread just pointless arguing between AMD fans and Nvidia fans, or did we actually get any civil discussion going about the truthfulness of the article?


Hard to tell about any truthfulness other than "We talked to a guy, he said things. There was a card, it didn't work."

4GB of HBM we already knew. The size we already knew, the design we already knew. I think it was only considered new to German sites, but I could easily be mistaken.

Edit: Oh, he did mention that the cooler was made by CoolIT. Not sure if that's a good or a bad thing.


----------



## Blameless

Quote:


> Originally Posted by *Syan48306*
> 
> If Intel ever buys Nvidia, AMD would just pack up and close up shop.


I'm not talking about Intel buying anyone, I'm talking about the major discrete GPU designers using Intel's, rather than TSMC's, foundries.

Intel has built chips for third parties before, and they have what is arguably the most advanced CMOS manufacturing process on the planet.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *SoloCamo*
> 
> Two weeks isn't a long wait to atleast have both options on the table. Especially most people buying these already have more than capable gpu's. But yes, for those on the fence and have a need it now desire or the upgrade itch it's really a no brainier. I just don't understand AMD at times... it's like they intentionally shoot themselves in the foot at every chance.
> 
> Why wait for E3? The crowd at E3 minus a select few *do not care* about PC hardware, but the games themselves. Why is it so hard to atleast confirm the name and the specs. Do a freaking paper launch at least so people at least know what is truly under the hood.
> 
> I'm led to believe the price is too high to compete at this point and it only makes sense because Nvidia could easily sell even the 980ti at $500 if they wanted and launching it at $650 is telling. Nvidia have simply made the better decision in this game of gpu chess. It almost feels like the king is the only piece left and nvidia can just chase it around the board all day, knowing if it wanted to it could end it whenver it wanted.


Yes, but most people don't go to E3 or Computex; they do, however, look at reviews, rumors, and release information. So the sooner they give out real information (or allow it to be released), the more people will get it and make their choice.

But let's get serious here for a second, it really isn't about 2 weeks from a technical point of view. Computex or E3 really won't matter as far as the hardware goes. Unless the 390X isn't going to be ready and the design is still in flux, 2 weeks won't matter at all. But from a marketing point of view (and really, that is all that Computex and E3 really are), 2 weeks is HUGE, especially when your competitor has beat you to the punch for a product announcement and is selling their product ALREADY.

So let's assume for a minute that the 390X is done, the factories have already made boards, chips, coolers, manuals, boxes, etc. What is the logical reason for NOT giving information when your competitor is selling product and for every card they sell, that means that is one LESS card that they will buy from you?


----------



## rt123

Quote:


> Originally Posted by *Olivon*
> 
> nVidia got an easy 10% remaining regarding GM200 performance. Titan-X is quite conservative regarding clocks.
> Drivers can improve it a little bit too.
> 
> Regarding Fiji,
> *Hardware.fr (translated)*


Same argument as the other guy. I know there is overclocking headroom.

Quite a few TitanX owners are running 1450-1500Mhz on air & 1600Mhz with full cover block on water. But Fiji can be overclocked too. I don't remember a recent card from either of the companies that didn't have an OC headroom.

Driver optimizations can add another 5% maybe.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *EniGma1987*
> 
> For all the people who hate on AMD, it is sad that the situation DEFINITELY applies to Nvidia when you want to use certain features, about needing a driver that even works properly with that feature. A couple of releases ago G-Sync was kinda broken, Shadowplay oftentimes gets broken in certain situations on driver releases, and 3D also requires specific drivers. Then there is the Chrome issue... Oh, and the downclock issue, and the black screen issue, and once every other year Nvidia releases a "brick driver" that causes 10% of cards to overheat and die to drum up more business.


Funny, I have nVidia cards and G-Sync drivers and my machine has been working all the time. Some drivers aren't as good as others (but then again, nVidia isn't the only company that happens to), but you just do a rollback and move forward.

I've been rocking G-Sync (on both monitors) on my card for some time now (and before I sold my 780 Classified, I was rocking G-Sync with 2 video cards), NEITHER of which FreeSync can do. Further, I don't have to pack up my monitor and send it back to BenQ and wait for them to fix it because FreeSync was busted and needed a BIOS flash to fix it.


----------



## ozlay

Quote:



> Originally Posted by *Syan48306*
> 
> If Intel ever buys Nvidia, AMD would just pack up and close up shop.


Well, Intel may be rich, but not that rich. I don't think we will run into that issue for a long time.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Menta*
> 
> When will people learn that if AMD fails it ain't a good thing? There is room for both, and even more companies... this is why prices are way overpriced, and still there are folks so used to this that they say it's cheap and fair.


You know, it really is getting old to keep saying "If AMD fails, everyone suffers, so keep propping up a failing company just for the good of everyone."

Honestly, maybe it's time to let AMD fail and let someone step in and pick up their pieces (say like Samsung) and give nVidia some REAL competition for a change.

Sometimes you have to pull the band-aid off and deal with the pain for the wound to get better faster.

I'm just saying.


----------



## SoloCamo

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Quote:
> 
> 
> 
> Originally Posted by *SoloCamo*
> 
> Two weeks isn't a long wait to at least have both options on the table. Especially since most people buying these already have more than capable GPUs. But yes, for those on the fence with a need-it-now desire or the upgrade itch, it's really a no-brainer. I just don't understand AMD at times... it's like they intentionally shoot themselves in the foot at every chance.
> 
> Why wait for E3? The crowd at E3 minus a select few *do not care* about PC hardware, but the games themselves. Why is it so hard to at least confirm the name and the specs? Do a freaking paper launch so people at least know what is truly under the hood.
> 
> I'm led to believe the price is too high to compete at this point, and it only makes sense because Nvidia could easily sell even the 980 Ti at $500 if they wanted, and launching it at $650 is telling. Nvidia have simply made the better decision in this game of GPU chess. It almost feels like the king is the only piece left and Nvidia can just chase it around the board all day, knowing that if it wanted to, it could end it whenever it wanted.
> 
> 
> 
> Yes, but most people don't go to E3 or Computex, they do, however, look at reviews, rumors, and release information. So the sooner they give out real information (or allow it to be released) the more people will get it and make their choice.
> 
> But let's get serious here for a second, it really isn't about 2 weeks from a technical point of view. Computex or E3 really won't matter as far as the hardware goes. Unless the 390X isn't going to be ready and the design is still in flux, 2 weeks won't matter at all. But from a marketing point of view (and really, that is all that Computex and E3 really are), 2 weeks is HUGE, especially when your competitor has beat you to the punch for a product announcement and is selling their product ALREADY.
> 
> So let's assume for a minute that the 390X is done, the factories have already made boards, chips, coolers, manuals, boxes, etc. What is the logical reason for NOT giving information when your competitor is selling product and for every card they sell, that means that is one LESS card that they will buy from you?

Exactly, E3 means nothing to the majority of the people that would actually be buying the product. IMO it would be more effective to just drop the card early, and market it 1,000% the same day on tech sites, industry players having reviews up, etc. Go ahead and still do E3 to get it to the casuals, but the card needs to drop literally as soon as it's in a box at the warehouse.

Nvidia did this with the 980 Ti and it works: it caught people by surprise by being early, had a better-than-expected price point, and the performance was more than anticipated. The hype more than built itself, and as far as I've seen, Nvidia didn't even have to market it much.

If AMD could achieve even one of the above, it would be a win for them in the grand scheme of things for this card.


----------



## GorillaSceptre

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Yes, but most people don't go to E3 or Computex, they do, however, look at reviews, rumors, and release information. So the sooner they give out real information (or allow it to be released) the more people will get it and make their choice.
> 
> But let's get serious here for a second, it really isn't about 2 weeks from a technical point of view. Computex or E3 really won't matter as far as the hardware goes. Unless the 390X isn't going to be ready and the design is still in flux, 2 weeks won't matter at all. But from a marketing point of view (and really, that is all that Computex and E3 really are), 2 weeks is HUGE, especially when your competitor has beat you to the punch for a product announcement and is selling their product ALREADY.
> 
> So let's assume for a minute that the 390X is done, the factories have already made boards, chips, coolers, manuals, boxes, etc. What is the logical reason for NOT giving information when your competitor is selling product and for every card they sell, that means that is one LESS card that they will buy from you?


Don't know why you're making such a big deal over 2 weeks.

Whoever wins, be it AMD or Nvidia, that king of the hill card will be the king for over a year. 2 weeks of potential lost sales in the scheme of things means nothing.

E3 is a big place to launch because it involves millions of non-PC gamers; there are a lot of potential new buyers there.


----------



## zealord

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Don't know why you're making such a big deal over 2 weeks.
> 
> Whoever wins, be it AMD or Nvidia, that king of the hill card will be the king for over a year. 2 weeks of potential lost sales in the scheme of things means nothing.
> 
> *E3 is a big place to launch because it involves millions of non-PC gamers; there are a lot of potential new buyers there*.


Well, console gamers that are blind and barely able to afford a $399 console don't suddenly need a $700 GPU. I think Computex would've been way better for AMD.

E3 = console show imho


----------



## Awsan

Boy oh boy, new GPU + new ROG Swift + Steam sales


----------



## keikei

Quote:


> Originally Posted by *zealord*
> 
> Well, console gamers that are blind and barely able to afford a $399 console don't suddenly need a $700 GPU. I think Computex would've been way better for AMD.
> 
> E3 = console show imho


The key word there is gamers.


----------



## GorillaSceptre

Quote:


> Originally Posted by *zealord*
> 
> Well, console gamers that are blind and barely able to afford a $399 console don't suddenly need a $700 GPU. I think Computex would've been way better for AMD.
> 
> E3 = console show imho


E3 will be huge this year. For the first time ever, PC will have its own conference. Oculus is going to have a massive presence there, and AMD having their GPUs driving it will have a big impact.

But I agree with you.

I would much rather see what this thing can do tomorrow, but it looks like we have to wait a bit longer.


----------



## Shogon

Quote:


> Originally Posted by *GorillaSceptre*
> 
> *E3 is a big place to launch because it involves millions of non-PC gamers*; there are a lot of potential new buyers there.


Quote:


> Originally Posted by *keikei*
> 
> The key word there is *gamers*.


Why would non-PC gamers, or gamers in general, spend $600+ on a video card?

Also, for the sake of AMD, after losing its iGPU crown to Intel, I hope Fiji isn't performing under a 980 Ti. If so, what a waste of resources, and of hoping HBM would be the heaven-sent reward AMD wanted.


----------



## keikei

Quote:


> Originally Posted by *Shogon*
> 
> Why would non-PC gamers, or gamers in general, spend $600+ on a video card?
> 
> Also, for the sake of AMD, after losing its iGPU crown to Intel, I hope Fiji isn't performing under a 980 Ti. If so, what a waste of resources, and of hoping HBM would be the heaven-sent reward AMD wanted.


You assume the Fiji card(s) are the only ones in their lineup. How many console gamers also own a PC? How many gamers game on multiple platforms? Just the fact that they are revealing the card at the show is big.


----------



## Ganf

Quote:


> Originally Posted by *zealord*
> 
> Well consoles gamers that are blind and are barely able to afford a 399$ console don't suddenly need a 700$ GPU. I think computex would've been way better for AMD.
> 
> E3 = console show imho


Fury isn't the only card they'll be showcasing at E3; they have their entire lineup left to launch, and from what we've seen there are at least 10 cards there.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Shogon*
> 
> Why would non-PC gamers, or gamers in general, spend $600+ on a video card?
> 
> Also, for the sake of AMD, after losing its iGPU crown to Intel, I hope Fiji isn't performing under a 980 Ti. If so, what a waste of resources, and of hoping HBM would be the heaven-sent reward AMD wanted.


I don't expect non-pc gamers to buy a flagship for $600 or more.

I think you're underestimating what "halo" cards do for a brand's image. Having the best performing card makes your whole lineup of cards look better.

When NV/AMD fanboys argue about who's better, they don't use benchmarks of the low-end.


----------



## bucdan

Looks like a repeat of GDDR4. Only AMD was on it, then both Nvidia and AMD jumped to GDDR5. Now Fury will be on HBM1, and next gen everyone will be on HBM2.


----------



## erocker

Quote:


> Originally Posted by *GorillaSceptre*
> 
> I don't expect non-pc gamers to buy a flagship for $600 or more.
> 
> I think you're underestimating what "halo" cards do for a brand's image. Having the best performing card makes your whole lineup of cards look better.
> 
> When NV/AMD fanboys argue about who's better, they don't use benchmarks of the low-end.


No. Do you know when AMD/Nvidia do best? When they have a good price/performance ratio... every single time. Check the earnings reports for the past 10 years. These "halo" cards have never done a thing to boost sales.


----------



## Blameless

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Honestly, maybe it's time to let AMD fail and let someone step in and pick up their pieces (say like Samsung) and give nVidia some REAL competition for a change.


I agree with businesses sinking or swimming on merit and ability to be successful businesses, rather than being artificially propped up by fanboy subsidies some would wish on their favored team.

However, I'm not at all convinced AMD is incapable of seriously competing with NVIDIA.


----------



## GorillaSceptre

Quote:


> Originally Posted by *erocker*
> 
> No. Do you know when AMD/Nvidia do best? When they have a good price/performance ratio... Every single time.. Check earnings reports for the past 10 years. These "halo" cards have never done a thing to boost sales.


All earnings reports will show me is what I already know: cheaper products sell more.

If all that mattered was a good price/performance ratio, then AMD would be wiping the floor with Nvidia. Brand image is just as important as one specific product.

It's a fact that "halo" products boost a brand. That goes for nearly any industry.


----------



## joeh4384

Quote:


> Originally Posted by *erocker*
> 
> No. Do you know when AMD/Nvidia do best? When they have a good price/performance ratio... every single time. Check the earnings reports for the past 10 years. These "halo" cards have never done a thing to boost sales.


They do well when their cards are first to market, too. AMD is not doing themselves any favors giving Nvidia free rein on the high-end market, even though the 290X is competitive with the 970 and not too far from the 980.


----------



## Tivan

Quote:


> The cards also can't run in their current form, as no BIOS is present. It can therefore only be switched on, but no image will appear on a screen.


Quote:


> Apparently the Radeon Fury X ought to be slower than the GeForce GTX 980 Ti.


I don't get it. It 'ought' to be slower due to not being operational? lol.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Blameless*
> 
> I agree with businesses sinking or swimming on merit and ability to be successful businesses, rather than being artificially propped up by fanboy subsidies some would wish on their favored team.
> 
> However, I'm not at all convinced AMD is incapable of seriously competing with NVIDIA.


Honestly, IMO, the best thing for AMD is to sell off their graphics division (aka ATI) to someone like Samsung and keep going as a CPU/APU company, so they can focus and compete with Intel (and not totally send the world into chaos over the x86/x64 license agreement). As part of that deal with Samsung, they could work out an arrangement so AMD can keep making APUs for as long as they want.

AMD just doesn't have the resources or R&D and marketing budgets to fight a "two front war" alone against nVidia AND Intel.


----------



## Shogon

Quote:


> Originally Posted by *keikei*
> 
> You assume the Fiji card(s) is the only one in their line up. How many console gamers also own a PC? How many gamers game on multiple platforms? Just the fact they are revealing the card at the show is big.


Well, yeah. Aren't the rest of the cards simply rebrands? Luckily, though, nobody knows code names like Grenada, Hawaii, or Tonga. At this point AMD revealing anything is big, because it's finally coming™.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> I don't expect non-pc gamers to buy a flagship for $600 or more.
> 
> I think you're underestimating what "Halo" cards do for a brands image. Having the best performing card makes your whole lineup of cards look better.
> 
> When NV/AMD fanboys argue about who's better, they don't use benchmarks of the low-end.


You're right, they just buy the GTX 960 and move on lol. I know 2 individuals who are die-hard AMD fans that invested in AMD 8-core CPUs less than 2 months ago, and what cards did they buy? The 960. I was shocked they didn't consider an R9 290 at the least, or even a 280X.

Also, if Fiji doesn't perform better than the competition's halo product, does that signal their entire lineup is mediocre? Works both ways, unfortunately, doesn't it?
Quote:


> Originally Posted by *GorillaSceptre*
> 
> All earnings reports will show me is what i already know, cheaper products sell more..
> 
> If all that mattered was a good price/performance ratio, then AMD would be wiping the floor with Nvidia. Brand image is just as important as one specific product.
> 
> It's a fact that "halo" products boost a brand. That goes for nearly any industry.


That price-to-performance ratio hasn't been helping AMD at all, even with those price cuts. All you have to do is look at the market share numbers to see that. The 290 is the same price as a 4GB 960 but blows it away, yet the 960 is more attractive, and even the 970 is with its gimped memory system.


----------



## GorillaSceptre

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Honestly, IMO, the best thing for AMD is to sell off their graphics division (aka ATI) to someone like Samsung and keep going as a CPU/APU company, so they can focus and compete with Intel (and not totally send the world into chaos over the x86/x64 license agreement). As part of that deal with Samsung, they could work out an arrangement so AMD can keep making APUs for as long as they want.
> 
> AMD just doesn't have the resources or R&D and marketing budgets to fight a "two front war" alone against nVidia AND Intel.


If that happened then say goodbye to Nvidia too.


----------



## robertparker

Quote:


> Originally Posted by *Blameless*
> 
> I agree with businesses sinking or swimming on merit and ability to be successful businesses, rather than being artificially propped up by fanboy subsidies some would wish on their favored team.
> 
> However, I'm not at all convinced AMD is incapable of seriously competing with NVIDIA.


I think that is becoming clearer as time goes by. Unless something miraculous happens, AMD is going to continue to spiral into a weaker and weaker company. They are in a tough position right now even for good leadership, much less what they have had over the last few years.


----------



## p4inkill3r

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Honestly, IMO, the best thing for AMD is to sell off their graphics division (aka ATI) to someone like Samsung and keep going as a CPU/APU company, so they can focus and compete with Intel (and not totally send the world into chaos over the x86/x64 license agreement). As part of that deal with Samsung, they could work out an arrangement so AMD can keep making APUs for as long as they want.
> 
> AMD just doesn't have the resources or R&D and marketing budgets to fight a "two front war" alone against nVidia AND Intel.


...or they can get their finances together and exist as a comfortable #2, much like Burger King has against McDonald's. They can make money and pressure Intel/Nvidia without selling off ATI; it is just going to take some luck and better stewardship of the company.


----------



## Blameless

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Honestly, IMO, the best thing for AMD is to sell off their graphics division (aka ATI) to someone like Samsung and keep going as a CPU/APU company, so they can focus and compete with Intel (and not totally send the world into chaos over the x86/x64 license agreement). As part of that deal with Samsung, they could work out an arrangement so AMD can keep making APUs for as long as they want.
> 
> AMD just doesn't have the resources or R&D and marketing budgets to fight a "two front war" alone against nVidia AND Intel.


AMD is as much a graphics company as a CPU company, and they can compete more readily in the GPU arena than in the CPU one.

I don't think it would be easy or practical to split up AMD again, nor would they be able to sell effective APUs without direct control over their graphics components. Anyone interested in buying AMD's graphics division is going to want to leverage that purchase, and probably would not be willing to subsidize or work closely enough with AMD to give them a credible APU product.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Shogon*
> 
> Well, yeah. Aren't the rest of the cards simply rebrands? Luckily though nobody knows code names like Grenada, Hawaii, or Tonga. At this point AMD revealing anything is big because it's finally coming™.
> You're right, they just buy the GTX 960 and move on lol. I know 2 individuals who are die-hard AMD fans that invested in AMD 8-core CPUs less than 2 months ago, and what cards did they buy? The 960. I was shocked they didn't consider an R9 290 at the least, or even a 280X.
> 
> Also, if Fiji doesn't perform better than the competition's halo product, does that signal their entire lineup is mediocre? Works both ways, unfortunately, doesn't it?
> 
> That price-to-performance ratio hasn't been helping AMD at all, even with those price cuts. All you have to do is look at the market share numbers to see that. The 290 is the same price as a 4GB 960 but blows it away, yet the 960 is more attractive, and even the 970 is with its gimped memory system.


Imo, yes. Nvidia is seen as the better/premium brand. AMD has a lot of work to do to change consumers' opinions of them. I think this year and 2016 are the biggest chance they'll have at doing that.


----------



## Rickles

Quote:


> Originally Posted by *GorillaSceptre*
> 
> If that happened then say goodbye to Nvidia too.


No, that's not how it would work. The EU and the US wouldn't just break up Nvidia overnight. If AMD filed for bankruptcy, they would immediately be looking for potential buyers for the company; their patent portfolio alone would be enough to generate plenty of interest.

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Imo, yes. Nvidia is seen as the better/premium brand. AMD has a lot of work to do to change consumers' opinions of them. I think this year and 2016 are the biggest chance they'll have at doing that.


This year and 2016 could actually be their last chance as they are currently structured... perhaps another reason altogether to avoid purchasing from them; you never know, if they get bought out, whether RMAs will be issued or warranties honored.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Rickles*
> 
> No, that's not how it would work. The EU and the US wouldn't just break up Nvidia overnight. If AMD filed for bankruptcy, they would immediately be looking for potential buyers for the company; their patent portfolio alone would be enough to generate plenty of interest.
> 
> This year and 2016 could actually be their last chance as they are currently structured... perhaps another reason altogether to avoid purchasing from them; you never know, if they get bought out, whether RMAs will be issued or warranties honored.


That's not what i meant.

I was saying if Samsung came into the GPU space, then they would slaughter Nvidia. It would be like the Intel/AMD CPU situation.


----------



## Tojara

Quote:


> Originally Posted by *GorillaSceptre*
> 
> That's not what i meant.
> 
> I was saying if Samsung came into the GPU space, then they would slaughter Nvidia. It would be like the Intel/AMD CPU situation.


Probably an order of magnitude worse. Samsung has an absolutely massive budget and they also stand to gain from having the best GPUs in mobile. Hard to say how interested they would be since their GPU designers already seem rather competent.


----------



## geoxile

Quote:


> Originally Posted by *Tojara*
> 
> Probably an order of magnitude worse. Samsung has an absolutely massive budget and they also stand to gain from having the best GPUs in mobile. Hard to say how interested they would be since their GPU designers already seem rather competent.


I don't think they've ever produced a first-party GPU that went into mass production. They just had a prototype that was shown once and then disappeared, IIRC.


----------



## Xuper

According to Dsogaming :

- R9 380 4GB: $199
- R9 390 8GB: $299
- R9 390X 8GB: $399
- Fiji Pro 4GB HBM: $599
- Fiji XT 4GB HBM: $749
- Fiji XT/XTX 8GB HBM: $849
- Fiji VR 2x8GB HBM: $1399-$1499

So why is the Fury XT slower than the 980 Ti if its price is $749? Either HWLUXX is wrong or this one is.


----------



## Anateus

Quote:


> Originally Posted by *Xuper*
> 
> According to some random user on the internet:
> 
> _- R9 380 4GB: $199_
> _- R9 390 8GB: $299_
> _- R9 390X 8GB: $399_
> _- Fiji Pro 4GB HBM: $599_
> _- Fiji XT 4GB HBM: $749_
> _- Fiji XT/XTX 8GB HBM: $849_
> _- Fiji VR 2x8GB HBM: $1399-$1499_


Fixed that for you.


----------



## Kuivamaa

Quote:


> Originally Posted by *Xuper*
> 
> According to Dsogaming :
> 
> _- R9 380 4GB: $199_
> _- R9 390 8GB: $299_
> _- R9 390X 8GB: $399_
> _- Fiji Pro 4GB HBM: $599_
> _- Fiji XT 4GB HBM: $749_
> _- Fiji XT/XTX 8GB HBM: $849_
> _- Fiji VR 2x8GB HBM: $1399-$1499_
> 
> So why is the Fury XT slower than the 980 Ti if its price is $749? Either HWLUXX is wrong or this one is.


The price difference between the 4GB and 8GB flagships is just 13%? Doesn't pass the sniff test.


----------



## MerkageTurk

The community seems not to understand that HBM is new, which obviously makes it a bit more expensive than GDDR5. If nVidia can sell old tech (GDDR5) at a higher price, and not even on their flagship model, why can't AMD?

Because of this perception AMD seems cheap and can never exceed that image, as people think they should be a couple of hundred below the competition, even with newer tech.

Watch the precedent nVidia created:

1) faster at launch and in the short run
2) AMD better in the long run


----------



## OostBlokBoys

Quote:


> Originally Posted by *MerkageTurk*
> 
> The community seems not to understand that HBM is new, which obviously makes it a bit more expensive than GDDR5. If nVidia can sell old tech (GDDR5) at a higher price, and not even on their flagship model, why can't AMD?
> 
> Because of this perception AMD seems cheap and can never exceed that image, as people think they should be a couple of hundred below the competition, even with newer tech.
> 
> Watch the precedent nVidia created:
> 
> 1) faster at launch and in the short run
> 2) AMD better in the long run


Why can't Samsung sell its phones for more than Apple does? Marketing. AMD's marketing... I'm not even going to go there.
Yes, HBM is new, maybe even too new. Its performance and stability are unknown.
AMD makes great performance/price cards. At the highest level, though, they don't compete with Nvidia (yet).
AMD seems cheap because they created this image for themselves. The latest 290 and 290X were aimed to be great price/performance cards.
If they price their cards as high as Nvidia's, they'd better make sure the performance is better.


----------



## criminal

Quote:


> Originally Posted by *MerkageTurk*
> 
> The community seems to not understand the fact that HBM is new, which obviously is a bit more expensive then gddr5, if nVidia can sell old tech (GDDR5) at a higher price and also not their flagship model, Why cannot AMD?
> 
> Because of this perspective AMD seems cheap and can never exceed, as people think it should be couple of hundred below the competition, even for newer tech.
> 
> Watch the precedent nVidia created,
> 
> 1) fast at launch and short run
> 2) Amd better at the long run


Doesn't matter, new tech or not. AMD has to have competitive pricing now because they need to regain market share. True, the top-tier card will not help with market share, but they need to be competitive across the whole line as far as I am concerned.

Edit: I know AMD doesn't want to be known as the "cheaper brand" any longer, but they can't expect that to happen overnight just by raising prices. If it were that easy, I am sure they would have taken advantage of it long ago.


----------



## carlhil2

Quote:


> Originally Posted by *HeadlessKnight*
> 
> I pray and hope Fiji is not another bulldozer from AMD.


+1. I hope Fury comes in at about 5% faster than the Titan X/980 Ti; then AMD could price that card between the two nVidia cards, say $700-800. That would be a win for AMD, and for consumers in general.


----------



## Tarifas

Quote:


> Originally Posted by *criminal*
> 
> Doesn't matter, new tech or not. AMD has to have competitive pricing now because they need to regain market share. True, the top-tier card will not help with market share, but they need to be competitive across the whole line as far as I am concerned.
> 
> Edit: I know AMD doesn't want to be known as the "cheaper brand" any longer, but they can't expect that to happen overnight just by raising prices. If it were that easy, I am sure they would have taken advantage of it long ago.


You are right, they do need to regain market share; the only way for them not to be the "cheap brand" this time around is if they force Nvidia to cut prices across the board.


----------



## Vesku

Fingers crossed that this upcoming AMD Computex event, just a few hours from now, is the hardware reveal, and that E3 will be about how good they are in DX12 and Vulkan games.


----------



## Tarifas

It might just be a paper launch today, who knows?


----------



## zealord

Quote:


> Originally Posted by *Vesku*
> 
> Fingers crossed that this upcoming AMD Computex event, just a few hours from now, is the hardware reveal, and that E3 will be about how good they are in DX12 and Vulkan games.


Maybe they show it, but I don't think we will see any performance numbers or reviews. It seems like they don't even have good drivers for it yet, and no final clocks.

E3 is very likely.


----------



## Nvidia Fanboy

Quote:


> Originally Posted by *BinaryDemon*
> 
> I bet Fiji is faster, but the 4GB of VRAM will make it a tough sell. More a psychological deterrent than a performance one, since the R9 295X2 still holds its own vs. the 980 Ti and Titan very well. Still, no one wants to buy something that high-end with only 4GB.


Yeah, the 295X2 having 4GB isn't the main reason why it doesn't sell well. It's the fact that it's a CrossFire card. Lots of people don't like CrossFire or SLI.


----------



## carlhil2

Quote:


> Originally Posted by *michaelius*
> 
> Zotac has already announced a 980 Ti with 20% increased clock speeds, so there's plenty nVidia can do.


990 Ti Black = full chip, 6 gigs, bumped clocks = AIB goodness = $800.00...


----------



## harney

Quote:


> Originally Posted by *zealord*
> 
> Maybe they show it, but I don't think we will see any performance numbers or reviews. It seems like they don't even have good drivers for it yet, and no final clocks.
> 
> E3 is very likely.


Don't forget: no BIOS, too.


----------



## Olivon

Quote:


> Originally Posted by *harney*
> 
> Don't forget: no BIOS, too.


Having mock-up cards for a Computex presentation is not a crime.
I do hope partners already have BIOSes and functional cards; the launch date is expected soon according to insistent rumours.


----------



## BulletSponge

Looks like the only solution is for Intel to buy AMD's graphics division and Nvidia to buy their CPU division.


----------



## SlackerITGuy

Quote:


> Originally Posted by *Anateus*
> 
> Fixed that for you.


This.

We need to start reading, people.


----------



## Ganf

AMD is getting more free press by saying nothing and releasing on time than Nvidia ever got by releasing early, just a shame that so much of it is negative.


----------



## azanimefan

Quote:


> Originally Posted by *Ganf*
> 
> AMD is getting more free press by saying nothing and releasing on time than Nvidia ever got by releasing early, just a shame that so much of it is negative.


I just hope they're not tone-deaf.

If they release a card that's slower than the 980 Ti, with less RAM, and for more $$, they deserve all the negative pub they get.


----------



## rt123

Quote:


> Originally Posted by *azanimefan*
> 
> I just hope they're not tone-deaf.
> 
> If they release a card that's slower than the 980 Ti, with less RAM, and for more $$, they deserve all the negative pub they get.


HBM is new tech. It's going to be expensive. I wouldn't blame AMD if it costs more.


----------



## BigMack70

Quote:


> Originally Posted by *rt123*
> 
> HBM is new tech. It's going to be expensive. I wouldn't blame AMD if it costs more.


I would blame them if it performs worse, though...


----------



## rt123

Quote:


> Originally Posted by *BigMack70*
> 
> I would blame them if it performs worse, though...


Everyone will. Myself included.


----------



## magnek

Quote:


> Originally Posted by *azanimefan*
> 
> I just hope they're not tone deaf.
> 
> if they release a card that's slower than the 980 Ti with less RAM and for more $$, they deserve all the negative pub they get.


Has this ever happened before?

Closest I can think of is when AMD got caught with its pants down when the 680 launched, which prompted the 7970 to drop to $450 (I think?) while they rushed out the 7970 GE at $500. At least there the 7970 had OMG MOAR RAMZ!!!11, so it didn't lose on every metric, lol.


----------



## royalkilla408

What time is the conference? Anyone know if we can watch a stream? Thanks!


----------



## BigMack70

Quote:


> Originally Posted by *magnek*
> 
> Has this ever happened before?
> 
> Closest I can think of is when AMD got caught with its pants down when the 680 launched, which prompted the 7970 to drop to $450 (I think?), while they rushed out the 7970 GE @ $500. At least there the 7970 had OMG MOAR RAMZ!!!11 so at least it didn't lose on every metric lol.


That wouldn't really be comparable. Reviewers loved the GK104 cards more than the 7970, but I don't think that was fair at all. A 7970 OC matched a 680 OC and beat it at high resolution, and overclocking was easier on the 7970 than with GPU Boost on the 680. Even CrossFire support was OK as long as you weren't talking about 3+ cards. Two cards were great if you were willing to use framerate limiters to fix microstutter if/when it occurred. AMD was foolish to release the 7970 clocked at 925 MHz with a crappy cooler. I don't think I ever saw a 7970 that wouldn't do 1050 on stock volts, and most launch 7970s would do 1125+ just fine; if they had taken even a little care to make the cooler perform well, it would have been a completely different story.

If they release Fiji XT and it's slower with less VRAM than the 980 ti, then they would deserve all the bad press that would come with it.


----------



## royalkilla408

Never mind, I think it's at 7 PM PST and the link could be: http://m.ustream.tv/channel/amd-2011-computex-press-conference-live


----------



## epic1337

Quote:


> Originally Posted by *magnek*
> 
> Has this ever happened before?
> 
> Closest I can think of is when AMD got caught with its pants down when the 680 launched, which prompted the 7970 to drop to $450 (I think?), while they rushed out the 7970 GE @ $500. At least there the 7970 had OMG MOAR RAMZ!!!11 so at least it didn't lose on every metric lol.


The R9 200 series: the R9 290 and R9 290X came down from $450-550 to $300+ due to the GTX 970 alone.
On the CPU side, just take a look at their FX (over!) 9000 series; just because it could factory-clock to 5 GHz doesn't mean it was worth $800.


----------



## tsm106

There is way too much arguing and too many opinions about others' opinions, aka rumors, these days. Whole threads based on zero facts, lol.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *royalkilla408*
> 
> Never mind I think it's at 7PM PST and the link could be: http://m.ustream.tv/channel/amd-2011-computex-press-conference-live


Some chick is at the lectern now.

Never mind, that stream is from 2011.


----------



## rt123

Quote:


> Originally Posted by *epic1337*
> 
> R9 200 series, R9 290 and R9 290X came from 450~550 down to 300+ due to GTX970 alone.
> on the CPU side, just take a look at their FX ( over! ) 9000 series, just because it could factory clock to 5Ghz doesn't mean it was worth $800.


The R9 200 series brought down Titan prices and actually made Nvidia release the full GK110. That's a stupid comparison.

9590 was ridiculous, agreed.


----------



## magnek

Quote:


> Originally Posted by *BigMack70*
> 
> That wouldn't really be comparable. Reviewers loved the GK104 cards more than the 7970, but I don't think that was fair at all. A 7970 OC matched a 680 OC and beat it at high resolution, and overclocking was easier on the 7970 than with GPU Boost on the 680. Even CrossFire support was OK as long as you weren't talking about 3+ cards. Two cards were great if you were willing to use framerate limiters to fix microstutter if/when it occurred. AMD was foolish to release the 7970 clocked at 925 MHz with a crappy cooler. I don't think I ever saw a 7970 that wouldn't do 1050 on stock volts, and most launch 7970s would do 1125+ just fine; if they had taken even a little care to make the cooler perform well, it would have been a completely different story.
> 
> If they release Fiji XT and it's slower with less VRAM than the 980 ti, then they would deserve all the bad press that would come with it.


Right, but in the end the 7970 got a price cut. I'm saying I can't recall a time AMD released a GPU that was inferior in every single metric and still charged more for it, or didn't at least bother to compete on price when something better came out later and they didn't have an answer to it.
Quote:


> Originally Posted by *epic1337*
> 
> R9 200 series, R9 290 and R9 290X came from 450~550 down to 300+ due to GTX970 alone.
> on the CPU side, just take a look at their FX ( over! ) 9000 series, just because it could factory clock to 5Ghz doesn't mean it was worth $800.


Well, the 980 and 970 dropped almost a year _after_ the 290X, so that's hardly analogous to releasing something priced higher but slower with less VRAM _when you already know your competitor's hand_, because words fail to describe that kind of stupidity. As for the FX-9590, I'm not sure what happened there, except maybe someone wanted to flip a giant bird to the entire world.


----------



## Crouch

If this is true then boy oh boy, AMD is screwed


----------



## s-x

Anyone find it funny that the leaks all pointed to Fury being a monster, and then the day after the 980 Ti is released we get a leak basically saying all those other leaks were wrong and it will perform worse than a 980 Ti? I wonder who might have spread this rumor.


----------



## tsm106

Quote:


> Originally Posted by *s-x*
> 
> Anyone find it funny that the leaks all pointed to fury being a monster, and then the day after the 980 ti is released, we get a leak basically saying all those other leaks were wrong and it will perform worse than a 980 ti. I wonder who might have spread this rumor.


Unpossible! Cannot have rumors trump release reducing sales!


----------



## Majin SSJ Eric

Best part is how people swallow the "slower than 980 Ti" rumors hook, line, and sinker while calling the "faster than TX" ones "just rumors". The timing of this particular rumor is pretty sketchy, combined with the details that the cards "didn't have a BIOS" and could only output a "black screen". Naturally they were able to give the Fury a thorough shakedown in 3DMark with those limitations!

At any rate, I personally cannot wait to see what these cards will do in your hands, TSM! Take it to those TX scores in the benching section, man!


----------



## tajoh111

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Best part is how people swallow the "slower than 980Ti" rumors hook, line, and sinker while calling the "faster than TX" ones "just rumors". The timing of this particular rumor is pretty sketchy, combined with the details that the cards "didn't have a bios" and were only able to output a "black screen". Naturally they were able to give the Fury a thorough shake down in 3dmark with these limitations!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At any rate, I personally cannot wait to see what these cards will do in your hands TSM! Take it to those TX scores in the benching section man!


Lol, there are a lot of people who took the "AMD is faster" rumors seriously based on graphs that came out in January. E.g., remember the Captain Jack rumors: Captain Jack = pirates, pirates = Pirate Islands, Pirate Islands = AMD's lineup codename, and thus the charts had to be true.

People are still holding on to those rumors as if they have substance, when it should be clear the cards were not close to being ready.

Same thing with the 8 GB of memory.

The rumor mill is more likely to be fed from AMD's side than Nvidia's at this point, because Nvidia knows people will buy the GTX 980 Ti at $649. They don't need rumors to help those cards sell out at that price. I think anyone on overclock.net knows this.

The thing is, during this whole Maxwell affair, every time Nvidia released a card after the GTX 980, a leak from AMD popped up. E.g., the GTX 980 launches, and we get the water-cooling pics. I find that to be a rather convenient leak on AMD's side.

The likelihood of these rumors being true is higher than the January ones. The cards should be launching in three weeks, and partners have already been boxing and manufacturing them for months now. Computex is mostly a consumer and press show, where partners show off their goods to these groups. Partners should know.

Not saying the rumors are true, but they are more likely to be true than the ones floated in January showing the Titan X and GTX 980 Ti losing to Fiji in a graph.


----------



## magnek

The Captain Jack "leak" was outed as a photoshop fake from Chiphell quite a while back. As for the water cooling pic, if you meant the purported 390X WC shroud prototype, well that also came from the Chinese Baidu forums. So at least in those 2 instances it's very likely the rumors originated from a bored Chinese gamer who had too much time on his hands.

Not saying AMD has nothing at all to do with any of the leaks, but the more outrageous ones at least I doubt could be traced back to AMD. Especially when you take into account Roy Taylor's unusual silence in 2015, when it's usually him who's the first to conduct the hype train.


----------



## magnek

On a different note, GPU launch @ E3 launch confirmed?

http://www.amd.com/en-us/products/graphics/graphics-technology


----------



## GorillaSceptre

Quote:


> Originally Posted by *magnek*
> 
> On a different note, GPU launch @ E3 launch confirmed?
> 
> http://www.amd.com/en-us/products/graphics/graphics-technology


Man, that's going to be such a damn long wait..


----------



## MapRef41N93W

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Best part is how people swallow the "slower than 980Ti" rumors hook, line, and sinker while calling the "faster than TX" ones "just rumors". The timing of this particular rumor is pretty sketchy, combined with the details that the cards "didn't have a bios" and were only able to output a "black screen". Naturally they were able to give the Fury a thorough shake down in 3dmark with these limitations!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> At any rate, I personally cannot wait to see what these cards will do in your hands TSM! Take it to those TX scores in the benching section man!


Best part is going to be seeing your rationalization once Fury comes out with 4GB and is slower than a Titan X and even the 980ti. After 3 months of your hilariously sour pants posting I can not wait.


----------



## Ganf

Quote:


> Originally Posted by *magnek*
> 
> On a different note, GPU launch @ E3 launch confirmed?
> 
> http://www.amd.com/en-us/products/graphics/graphics-technology


Was there ever any doubt? AMD confirmed it long ago, I don't know why people expect them to suddenly change their mind, especially when it costs millions to get a time slot at E3.


----------



## magnek

Sorry, but all these rumors are slowly making me lose my mind. Where did the Computex launch rumors come from, btw?


----------



## Ganf

Quote:


> Originally Posted by *magnek*
> 
> Sorry but all these rumors are slowly making me lose my mind. Where the Computex launch rumors come from btw?


From January, when we didn't know anything other than that AMD would be at Computex, and didn't yet know they were going to be at E3. The rumors have just kept their steam since then, because most people missed the statement that AMD is skipping Computex for announcing GPUs.

http://www.overclock.net/t/1555379/guru3d-amd-radeon-r9-390x-to-skip-computex-but-launches-at-e3/0_100

Nvidia fanatics couldn't hate on an E3 launch enough to keep that article at the top for long.


----------



## magnek

Btw does this count as a hard launch at Computex?








Quote:


> Lisa shows off #Fiji chip @AMDRadeon


----------



## Forceman

Quote:


> Originally Posted by *magnek*
> 
> Btw does this count as a hard launch at Computex?


That definitely looks faster than a TX.


----------



## rt123

Well how about some pictures of the chip.


----------



## The Stilt

That's one huge package


----------



## Forceman

Quote:


> Originally Posted by *The Stilt*
> 
> That´s one huge package


4GB on that one for sure. Not sure 8 stacks would fit on that interposer either, but kind of hard to tell.


----------



## rt123

Yes. Looks like 4GB.


----------



## magnek

Quote:


> Originally Posted by *The Stilt*
> 
> That´s one huge package


That's what she ~~said~~ held


----------



## Exilon

https://twitter.com/thesamreynolds/status/605928407440891904
Quote:


> AMD was set set to unveil its new Radeon card this morning at #computex2015, but literally at the 11th hour its launch got pushed to E3.


Taking it back to the shop and adding some volts here, some more MHz there?


----------



## magnek

Better angle:










Translated webpage
Quote:


> The boss herself has announced the idea behind AMD's next graphics card: the Fiji-based Radeon R9 is to be presented by mid-June as the fastest card in the world. We have also received new unofficial information.
> 
> AMD CEO Lisa Su showed the chip of the next high-end graphics card in public for the first time at the Computex press conference. *Around the GPU, codenamed Fiji, sit four stacks of video memory; together, this combination is to be the fastest graphics card in the world, said AMD vice president Matt Skinner a few minutes earlier.*
> 
> According to our information, the Fiji-based graphics card will have 4 GB of High Bandwidth Memory; in addition to three DisplayPort outputs, AMD has installed an HDMI 2.0 output. The board is very short at under 20 cm, since the memory stacks sit next to the graphics chip rather than being distributed across the entire board like GDDR5.
> 
> The two 8-pin connectors, together with the PCIe slot, theoretically allow a power consumption of up to 375 watts, but AMD graphics CTO Joe Macri told Golem.de that the Fiji graphics card should not require more energy than the Radeon R9 290X. The upcoming models should thus remain under 300 watts, possibly significantly less.
> 
> *This is due, among other things, to the final GPU frequency, which is to be in excess of 1 GHz.* Whether this means a base clock, as with Nvidia's GeForce cards, or a boost clock, we do not know. This should be clarified on 16 June 2015, when AMD intends to introduce the Fiji Radeon in a live stream.
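As a quick sanity check, the 375-watt figure in the quote is just the standard PCIe budget; a minimal sketch, assuming the usual PCI-SIG limits (75 W from the x16 slot, 150 W per 8-pin, 75 W per 6-pin):

```python
# Theoretical PCIe board power budget (sketch; assumes the common
# PCI-SIG limits: 75 W from the x16 slot, 150 W per 8-pin, 75 W per 6-pin).
PCIE_SLOT_W = 75
EIGHT_PIN_W = 150
SIX_PIN_W = 75

def board_power_limit(eight_pins=0, six_pins=0):
    """Maximum theoretical draw for a card using the slot plus aux connectors."""
    return PCIE_SLOT_W + eight_pins * EIGHT_PIN_W + six_pins * SIX_PIN_W

# Two 8-pin connectors, as on the Fiji card described above:
print(board_power_limit(eight_pins=2))  # 375
```

Actual sustained draw is of course set by the card's power limit, not this ceiling, which is why Macri can promise under 300 watts on a 375-watt connector budget.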


If Matt Skinner actually, literally said "fastest card in the world" and it's not a misquote, "paraphrasing" or lost in translation funny business, then perhaps Fiji just might surprise us


----------



## Ganf

.....

How is May 13 the 11th hour?....


----------



## rt123

Quote:


> Originally Posted by *Exilon*
> 
> https://twitter.com/thesamreynolds/status/605928407440891904
> Taking it back to the shop and adding some volts here, some more MHz there?


The guy is sensationalizing & you are helping.


----------



## Serandur

Quote:


> Originally Posted by *Forceman*
> 
> 4GB on that one for sure. Not sure 8 stacks would fit on that interposer either, but kind of hard to tell.


Almost looks like they've got just enough space to squeeze in 4 more stacks in the picture below, hard to tell for sure. I really hope they manage it.

Quote:


> Originally Posted by *magnek*
> 
> Better angle:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Translated webpage
> If Matt Skinner actually, literally said "fastest card in the world" and it's not a misquote, "paraphrasing" or lost in translation funny business, then perhaps Fiji just might surprise us


Edit: Actually looks like it would be too tight. The HBM stacks look a bit more elongated than the empty spaces on the interposer. They would probably need a differently-sized interposer to fit a full 8 stacks. I'd settle for 6, personally.


----------



## geoxile

Quote:


> Originally Posted by *rt123*
> 
> The guy is sensationalizing & you are helping.


Sensationalizing is an understatement. That guy's a rat.


----------



## p4inkill3r

Quote:


> Originally Posted by *geoxile*
> 
> Sensationalizing is an understatement. That guy's a rat.


I bet he posts here.


----------



## sugalumps

Quote:


> Originally Posted by *s-x*
> 
> Anyone find it funny that the leaks all pointed to fury being a monster, and then the day after the 980 ti is released, we get a leak basically saying all those other leaks were wrong and it will perform worse than a 980 ti. I wonder who might have spread this rumor.


So rumours are only true when they're in AMD's favour? That's kind of the running theme in this thread, it seems. I want to chalk that up to y'all being optimists rather than Team Red partisans.


----------



## magnek

Quote:


> Originally Posted by *rt123*
> 
> The guy is sensationalizing & you are helping.


I don't even


----------



## Alatar

Would Lisa only show the 4GB package if there actually was an 8GB one? Doubtful; if there was ever a point in time to stop the 4GB rumors, it's now, while people are buying 980 Tis.

I honestly think that as a card it's going to be a cool concept, but perhaps not the most realistic thing for someone to buy and hold on to for 2+ years. It seems like it might be the ultimate early adopter's card: big advantages, but also big drawbacks.

Personally I won't buy a 4GB card. I've been saying that since the first HBM rumors.
Quote:


> Originally Posted by *rt123*
> 
> And AMD's custom AIB partners will release a card clocked higher too. Anything else?
> 
> Releasing an OCed card isn't a move; a bigger die (i.e. more GPU cores) is. The only thing Nvidia has left in the tank is to allow custom Titan Xs.
> A Titan X Lightning would be interesting.


Quote:


> Originally Posted by *rt123*
> 
> Same argument as the other guy. I know there is overclocking headroom.
> 
> Quite a few Titan X owners are running 1450-1500 MHz on air and 1600 MHz with a full-cover block on water. But Fiji can be overclocked too. I don't remember a recent card from either company that didn't have OC headroom.
> 
> Driver optimizations can add another 5% maybe.


I'm sorry, but this is mostly nonsense. Not all cards come with equal overclocking headroom.

Fiji is going to be pushed as far as AMD thinks it can be pushed on the water cooler. That point is much closer to the card's maximum OC potential than the clocks of the air-cooled TX/980 Ti.

If Nvidia wants, they can extract more stock performance by 1) going with stock water cooling or 2) releasing a 3072-core 980 Ti equivalent. The question is just: do they need to do either of those?


----------



## Ultracarpet

Yeeeeeeaaa... I'm thinking 4gb is going to be the number. It will be interesting to see how that affects things down the road.


----------



## raghu78

Quote:


> Originally Posted by *Alatar*
> 
> Would Lisa only show the 4GB package if there actually was an 8GB one? Doubtful, if there was a point in time to stop the 4GB rumors it's now that people are buying 980Tis.
> 
> I honestly think that as a card it's going to be a cool concept but perhaps not the most realistic thing for someone to buy and hold on to for 2+ years. It's seems like it might be the ultimate early adopters card. Big advantages, but also big drawbacks.
> 
> Personally I wont buy a 4GB card. Been saying it since the first HBM rumors.


Why don't you wait for reviews before rushing to conclusions? If you are quick to write off a product before it's reviewed, it means you are biased. Reviewers are definitely going to push this card at 4K and see if VRAM issues come up, especially in multi-GPU.
Quote:


> I'm sorry but this is mostly nonsense. Not all cards come with equal overclocking headroom.
> 
> Fiji is going to be pushed as far as AMD thinks it can be pushed on the watercooler. That point is much closer to the maximum OCing potential of the card than the clocks of the aircooled TX/980Ti.
> 
> If Nvidia wants they can extract more stock performance by 1) going with stock watercooling or 2) releasing a 3072 core 980Ti equivalent. The question just is, do they need to do either of those?


Again, you are assuming that Fiji is pushed close to its max OC potential. Why do you think AMD has made no improvements to perf/SP? There is a very good chance that AMD has made significant architectural improvements per SP and that Fiji could be a clear runaway winner, both at stock and OCed.

GF 28SHP is a custom high-performance 28nm process developed for AMD's high-performance APU needs (Kaveri).

http://www.tomshardware.co.uk/fx-7600p-kaveri-apu,review-32965.html

GF 28SHP is a far superior process to TSMC 28HP in terms of leakage.

http://www.anandtech.com/show/7974/amd-beema-mullins-architecture-a10-micro-6700t-performance-preview

"AMD claims a 19% reduction in core leakage/static current for Puma+ compared to Jaguar at 1.2V, and a 38% reduction for the GPU. The drop in leakage directly contributes to a substantially lower power profile for Beema and Mullins."

Beema/Mullins were manufactured on GF 28SHP, while Kabini/Temash were manufactured on TSMC 28HP.

The transistor density of GF 28nm, a gate-first process, is 10-20% higher than that of TSMC 28nm, a gate-last process. See page 7:

http://globalfoundries.com/docs/default-source/brochures-and-white-papers/globalfoundries-stmicro-28nm-mobile-apps.pdf?sfvrsn=2

Fiji looks closer to 600 sq mm, if not greater, from the shots of Lisa Su holding up the Fiji package. So AMD's design does not look like an ultra-high-density design (Hawaii). All this indicates 1050 MHz could be very conservative, with excellent OC headroom.


----------



## rt123

Quote:


> Originally Posted by *Alatar*
> 
> Personally I wont buy a 4GB card. Been saying it since the first HBM rumors.
> 
> I'm sorry but this is mostly nonsense. Not all cards come with equal overclocking headroom.
> 
> Fiji is going to be pushed as far as AMD thinks it can be pushed on the watercooler. That point is much closer to the maximum OCing potential of the card than the clocks of the aircooled TX/980Ti.
> 
> If Nvidia wants they can extract more stock performance by 1) going with stock watercooling or 2) releasing a 3072 core 980Ti equivalent. The question just is, do they need to do either of those?


I know not all cards overclock the same.

But you are saying AMD is going to push the card as far as it thinks it can go; do you have any source (other than rumors) or proof of that? Have you overclocked a Fiji yet?

Or is it just an assumption, like mine? If an assumption doesn't favor you, that doesn't make it nonsense.

And I had previously agreed to the possibility of the scenarios you posted.


----------



## Xuper

LOL!

https://twitter.com/AMDRadeon/status/605938931448737792

Read their comments! Haha, this is AMD's Roy.

Quote:


> it's between you guys and the 980ti. Been amd forever but NVIDIA is hitting it right now. Waiting for the announcement


Then AMD's Roy answered:

Quote:


> oh please can you guys just relax! Sheesh!


Perhaps the AMD Fury is not slower than the GeForce 980 Ti?


----------



## Luciferxy

Quote:


> Originally Posted by *magnek*
> 
> Btw does this count as a hard launch at Computex?


Quote:


> Originally Posted by *The Stilt*
> 
> That´s one huge package


Man... is it me or Dr. Su is just 'hot'









I was wondering how AMD can fit those 4096 (?) SPs (still using 28nm, is it?) in that 'package'.


----------



## epic1337

Quote:


> Originally Posted by *Luciferxy*
> 
> Man... is it me or Dr. Su is just 'hot'
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was wondering, how amd can fit those 4096 (?) SPs (still using 28nm is it ?) in that 'package' ?


Nvidia's Titan X is about half that size by appearance, has 3072 CUDA cores, and is rumored to perform better.


----------



## raghu78

Quote:


> Originally Posted by *epic1337*
> 
> Nvidia's chip is 1/2 of that size by appearance, and performs better.


What are you talking about?

The GPU die is different from the interposer, which houses both the GPU die and the HBM stacks. The GPU die looks to be somewhere in the region of >= 600 sq mm. Btw, what Nvidia chip are you talking about, and how do you know Fiji's performance?


----------



## magnek

Quote:


> Originally Posted by *Luciferxy*
> 
> Man... is it me or Dr. Su is just 'hot'
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was wondering, how amd can fit those 4096 (?) SPs (still using 28nm is it ?) in that 'package' ?


CynicalUnicorn has an excellent post regarding die size and shader density:
Quote:


> Originally Posted by *CynicalUnicorn*
> 
> Cool. So that means that the die size includes things like shaders, ROPs, memory controllers, cache, etc. but no RAM, just like all other GPUs. So have a post:
> I noticed a while ago that larger GPU dies have their shaders packed more densely. Yesterday, I plotted the regression line for number of shaders vs expected die size, using the seven GCN dies (Oland, Cape Verde, Bonaire, Pitcairn, Tahiti, Tonga, and Hawaii) that have been released. That line predicted a much larger 650mm^2 die size, while plotting shaders vs expected shader density yields the much smaller, GK110-scale 564mm^2. So which one is right? Well, I'm going to lean towards the smaller one. With the exception of Pitcairn being oddly dense, the density only increases with shader count.
> 
> Now, judging by the picture from WCCF, ASSUMING THE RAM DIES ARE 5x7mm AND ASSUMING THIS IS WHAT THE FINAL FIJI DIE WILL LOOK LIKE: The main GPU die is about 260x350 pixels while the smaller RAM dies are... covered in goo and difficult to see the edges of. Oops. Well, the ones on the bottom are about 65x90 pixels and the ones up top around 60x85 pixels, so there isn't too much of a parallax. Anyway, that's about 12 pixels per mm and therefore the big GPU die is about 22x29 = 630mm^2.
> 
> So I have no idea what to expect. GM200 was the largest 28nm die yet, I believe, at 601mm^2. I'm not sure TSMC can handle anything bigger with decent yields.


----------



## epic1337

Quote:


> Originally Posted by *raghu78*
> 
> what are you talking about
> 
> 
> 
> 
> 
> 
> 
> The GPU die is different from the interposer which houses both the GPU die and HBM stacks. The GPU die looks somewhere in the region of >= 600 sq mm. btw what Nvidia chip are you talking about and how do you know Fiji performance


Yes, I'm looking at the black box at the center; based on the thumb size, it's roughly 30x35 mm. Although maybe not 1/2 smaller; more like 1/3.

Titan X; see the thread title.


----------



## bmgjet

Quote:


> Apparently the Radeon Fury X ought to be slower than the GeForce GTX 980 Ti


Let me guess: this is at 720p or some other stupidly low res where it's getting held back by the driver.
4K is where this card is going to shine.


----------



## Wishmaker

Quote:


> Originally Posted by *bmgjet*
> 
> Let me guess this is at 720 res or some other stupidly low res where its getting held back by the driver.
> *4K is where this cards going to shine*.


...because last time I checked, 4K is the industry standard and everybody has 4K screens lying all over the place. Do not confuse the niche you find on OCN who stroke those 4K e-peens with the majority who will buy this card but will not game at 4K. So that argument, like I said in the 980 Ti thread, applies in a world where friendship is magic.


----------



## hollowtek

Nvidia: You almost had me? You never had me - you never had your GPU.. Over voltin' not over clockin' like you should. You're lucky that 375watt tdp didn't blow the solder on the heatsink! You almost had me?
Amd wants to be... #fastandfurious


----------



## hawker-gb

Lol, looks like a lot of people have actually tested Fury on this forum.


----------



## Noufel

Quote:


> Originally Posted by *hawker-gb*
> 
> Lol,looks like lot of people actually test Fury on this forum.


OCN is little by little transforming into WCCF


----------



## Forceman

This had better not be another Hawaii situation, where they hold a big press event and then at the end say it won't be available for another month.

If they are still fiddling with the BIOS, that doesn't bode well for retail availability in two weeks.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> Nvidia's TitanX is 1/2 of that size by appearance


The Fiji die itself is only slightly bigger than GM200.
Quote:


> Originally Posted by *epic1337*
> 
> yes i'm looking at the black box at the center, based on the thumb size, its roughly 30x35 in size.


The GPU die isn't 30x35mm.

Are you sure you aren't looking at the whole interposer?


----------



## Nnimrod

Quote:


> Originally Posted by *hollowtek*
> 
> Nvidia: You almost had me? You never had me - you never had your GPU.. Over voltin' not over clockin' like you should. You're lucky that 375watt tdp didn't blow the solder on the heatsink! You almost had me?
> Amd wants to be... #fastandfurious


too funny lol. #fastandfuryus


----------



## Ganf

Quote:


> Originally Posted by *Wishmaker*
> 
> ...because last time I checked, 4K is the industry standard and everybody has 4K screens lying all over the place. Do not confuse the niche you find on OCN who stroke those 4K e-peens with the majority who will buy this card but will not game at 4K. So that argument, like I said in the 980 Ti thread, applies in a world where friendship is magic.


Because last time I checked, halo cards are marketed toward the 1% of enthusiasts who do own 4K...

Powercolor's Fiji teaser, among many others.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Alatar*
> 
> Would Lisa only show the 4GB package if there actually was an 8GB one? Doubtful, if there was a point in time to stop the 4GB rumors it's now that people are buying 980Tis.


Yup, like I've been saying for weeks now, it will be 4GB at launch with the stock interposer. Several months from now they will most likely do a larger interposer with 8GB, but not for launch.

They have to calm the people on the fence and stop them from buying the GTX 980 Ti, because once those sales are done they will not get a crack at those buyers for some time. The AMD fan base is safe, and they will never get the Nvidia fan base. That means the AMD/Nvidia war is to be fought on the battlefield in the middle: the people who are undecided. Nvidia's early launch, and the less-than-flattering rumors about the 390X's speed and memory (not to mention a butt-ugly Devil card) appearing to be true, mean that the late and expensive rollout of HBM and the 390X will ultimately bring even more loss of market share (and less money) and an even worse Q3 bottom line. Mark my words.


----------



## Olivon

June 16th is just a presentation and the rebrand launch. Fiji reviews are expected at the end of June, according to Hardware.fr:

http://forum.hardware.fr/hfr/Hardware/hfr/computex-fiji-photo-sujet_980973_1.htm#t9508501


----------



## Cyclonic

http://nl.hardware.info/nieuws/43996/computex-amd-toont-fiji-videokaart-achter-gesloten-deuren---update

The dutch site of hardware.info got a look behind closed doors, there saying it will only be 4gb and will be slower then Titan X but on par with 980TI.


----------



## MrStim

Quote:


> Originally Posted by *Luciferxy*
> 
> Man... is it me or Dr. Su is just 'hot'
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was wondering, how amd can fit those 4096 (?) SPs (still using 28nm is it ?) in that 'package' ?


its just you bro..


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Cyclonic*
> 
> http://nl.hardware.info/nieuws/43996/computex-amd-toont-fiji-videokaart-achter-gesloten-deuren---update
> 
> The dutch site of hardware.info got a look behind closed doors, there saying it will only be 4gb and will be slower then Titan X but on par with 980TI.


Translation (poorly):
Quote:


> Because first-generation High Bandwidth Memory is limited to 4 GB per GPU, we get the remarkable situation where the new top model carries less memory than the 390X positioned below it. That limit comes about because each HBM die holds 2 gigabit, up to four dies can be stacked, and AMD has placed four such stacks around the GPU.
> 
> We do not expect a Titan X killer: we gathered that the performance level will sit slightly below that of the Nvidia GeForce GTX 980 Ti. To get there, the clock frequency will be raised to more than 1 GHz. The power draw was described as "under 300 watts", from which we can probably deduce that consumption will lie above that of the 980 Ti and Titan X. Both of those cards have a TDP of 250 watts, which is often not reached in practice.


----------



## mltms

Quote:


> Originally Posted by *Cyclonic*
> 
> http://nl.hardware.info/nieuws/43996/computex-amd-toont-fiji-videokaart-achter-gesloten-deuren---update
> 
> The dutch site of hardware.info got a look behind closed doors, there saying it will only be 4gb and will be slower then Titan X but on par with 980TI.


The 980 Ti is only about 3% slower than the Titan X.
What's the big deal about 3%?


----------



## Ganf

Quote:


> Originally Posted by *Cyclonic*
> 
> http://nl.hardware.info/nieuws/43996/computex-amd-toont-fiji-videokaart-achter-gesloten-deuren---update
> 
> The dutch site of hardware.info got a look behind closed doors, there saying it will only be 4gb and will be slower then Titan X but on par with 980TI.


How does it end up slower than the Titan X and on par with the 980ti when those two cards are within a margin of error of each other, and the card hasn't been tested and the spread is always wildly different between AMD and Nvidia cards?

Words are being bandied about.
Quote:


> A Titan X-killer we do not expect: we caught on that the performance level slightly below that of the Nvidia GeForce GTX 980 Ti would be.


----------



## Cyclonic

Quote:


> Originally Posted by *Ganf*
> 
> How does it end up slower than the Titan X and on par with the 980ti when those two cards are within a margin of error of each other, and the card hasn't been tested and the spread is always wildly different between AMD and Nvidia cards?
> 
> Words are being bandied about.


They first said it was slower than the 980 Ti, but they bumped up the clock frequency to be almost on par with the 980 Ti, which makes it slower than the Titan X. Not by much, but slower.


----------



## Ganf

Quote:


> Originally Posted by *Cyclonic*
> 
> They said first it was slower then 980TI but they bumped up the clock freq to be almost on par with 980TI so it makes it slower then Titan X not by much but its slower.


So now you're speculating based on the speculations of someone else overhearing the speculations of someone who has no hands on experience with the card?

YES INTERNET.
*GIVE.
ME.
MORE.
OPINIONS.*


----------



## HiTechPixel

Since some speculate that the card doesn't even have a BIOS yet, I'd take any rumour about performance with a truckload of doubt.


----------



## SuprUsrStan

Quote:


> Originally Posted by *Tarifas*
> 
> It might just be a paper launch today, who knows?


It's fine if it's a paper launch. We just need them to drop NDA so previews and reviews go out. That way we can sleep easy buying our GTX 980Ti's


----------



## Ganf

Quote:


> Originally Posted by *HiTechPixel*
> 
> Since some speculate that the card doesn't even have a BIOS yet, I'd take any rumour about performance with a truckload of doubt.


If one more person speculates in this room we're all going to need a shower.


----------



## EniGma1987

Does this card and the hype about it and the features it has remind anyone else of the 2900XTX?


----------



## Ganf

Quote:


> Originally Posted by *EniGma1987*
> 
> Does this card and the hype about it and the features it has remind anyone else of the 2900XTX?


Nope.

2900XTX was basically a rebrand with GDDR4 over GDDR3, anyone who believed that was going to steal the crown from a card that was 35-50% faster was a fool. There's a pretty big difference between an upgrade that was just better VRAM and an upgrade that is 40% more shaders + entirely new VRAM and VRAM usage design.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Ganf*
> 
> How does it end up slower than the Titan X and on par with the 980ti when those two cards are within a margin of error of each other, and the card hasn't been tested and the spread is always wildly different between AMD and Nvidia cards?
> 
> Words are being bandied about.


Where have you seen that? All the reviews I've seen say the Ti is ~3-5% slower.

I have no idea how AMD is going to increase their profits if their card isn't as good as the 980 Ti unless they release it for $550, but with HBM and an AIO cooler I don't see that being a possibility whatsoever.


----------



## Ganf

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Where have you seen that? All the reviews I've seen say Ti is ~3-5% slower.
> 
> I have no idea how AMD is going to increase their profits if their card isn't as good as the 980 Ti unless they release it for $550 but with HBM and an AIO cooler, I don't see that being a possibility whatsoever.


3% is a driver update. We get 15% variance in performance with the silicon lottery if we're overclocking our cards, minimum. If any two cards are within 3% of each other then the only thing that matters is price.


----------



## EniGma1987

Quote:


> Originally Posted by *Ganf*
> 
> There's a pretty big difference between an upgrade that was just better VRAM and an upgrade that is 40% more shaders + entirely new VRAM and VRAM usage design.


Is it? The 2900 series was built around massive memory bandwidth above all else, which according to ATI would win everything. This card may have a few more shaders squeezed in, but that is natural for a new card, and the big selling point is again that it is geared for massive, overwhelming memory bandwidth.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Ganf*
> 
> 3% is a driver update. We get 15% variance in performance with the silicon lottery if we're overclocking our cards, minimum. If any two cards are within 3% of each other then the only thing that matters is price.


But isn't the Ti basically the same as a Titan X? Wouldn't a driver update that increases the Ti's performance also increase the Titan X's performance?


----------



## Ganf

Quote:


> Originally Posted by *EniGma1987*
> 
> is it? The 2900 series was entirely made to have massive memory bandwidth above all else and that would win everything according to ATI. This card may have a few more shaders squeezed in, but that is natural for the card and the big selling point is the card is geared for massive, overwhelming memory bandwidth.


1200 is not a few, and we're not in 2007 anymore. AMD is pushing VR and 4k along with this card, there were no bleeding edge display techs that required extra bandwidth 8 years ago.

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> But isn't the Ti basically the same as a Titan X? Wouldn't a driver update that increase the Ti's performance also increase the Titan X's performance?


You would think so, but then again we also thought that regular driver updates would keep the 780ti up to speed too, didn't we?


----------



## mouacyk

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> But isn't the Ti basically the same as a Titan X? Wouldn't a driver update that increase the Ti's performance also increase the Titan X's performance?


Not if this is how driver updates are coded:

Code:


if ( g_sGPU == "980TI" ) {
   g_clockTuneDelta += RANDOM( 3.0, 5.0 );   // cuz hot cakes
}
else if ( g_sGeneration == "KEPLER" ) {
   g_clockTuneDelta -= RANDOM( 10.0, 15.0 ); // cuz boss say so
}
else {
   // not worth it
}


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Ganf*
> 
> 1200 is not a few, and we're not in 2007 anymore. AMD is pushing VR and 4k along with this card, there were no bleeding edge display techs that required extra bandwidth 8 years ago.
> You would think so, but then again we also thought that regular driver updates would keep the 780ti up to speed too, didn't we?


But were the updates for Kepler? Like if the 780 Ti got a performance boost wouldn't the Titan Black for example also get a boost?


----------



## provost

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> But were the updates for Kepler? Like if the 780 Ti got a performance boost wouldn't the Titan Black for example also get a boost?


Or all the Kepler cards, since architecture is the same?
Wondering the same myself. The new driver is unstable anyway.

In any event, when do we start to see some benchmarks for 390x?


----------



## Particle

Quote:


> Originally Posted by *Ganf*
> 
> Nope.
> 
> 2900XTX was basically a rebrand with GDDR4 over GDDR3, anyone who believed that was going to steal the crown from a card that was 35-50% faster was a fool. There's a pretty big difference between an upgrade that was just better VRAM and an upgrade that is 40% more shaders + entirely new VRAM and VRAM usage design.


A rebrand of what? The 2900 series marked the introduction of a completely new architecture and the introduction of a 512-bit memory bus. It was AMD's first release that featured a unified shader architecture. It was vastly different from the X1900/X1950 series that came before it. The 2900s also came in two memory configurations with 512 MB cards using GDDR3 and 1 GB cards using GDDR4.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *provost*
> 
> Or all the Kepler cards, since architecture is the same?
> Wondering the same myself. The new driver is unstable anyway.
> 
> In any event, when do we start to see some benchmarks for 390x?


Right exactly, now you have me wondering


----------



## Wishmaker

Quote:


> Originally Posted by *mouacyk*
> 
> Not if this is how driver updates are coded:
> 
> Code:
>
> 
> 
> if ( g_sGPU == "980TI" ) {
> g_clockTuneDelta += RANDOM( 3.0, 5.0 ); // cuz hot cakes
> }
> else if ( g_sGeneration == "KEPLER" ) {
> g_clockTuneDelta -= RANDOM( 10.0, 15 ); // cuz boss say so
> }
> else {
> // not worth it
> }


/approved!


----------



## 47 Knucklehead

Quote:


> Originally Posted by *mouacyk*
> 
> Not if this is how driver updates are coded:
> 
> Code:
>
> 
> 
> if ( g_sGPU == "980TI" ) {
> g_clockTuneDelta += RANDOM( 3.0, 5.0 ); // cuz hot cakes
> }
> else if ( g_sGeneration == "KEPLER" ) {
> g_clockTuneDelta -= RANDOM( 10.0, 15 ); // cuz boss say so
> }
> else {
> // not worth it
> }


Some would say the same about AMD and their "Frame Pacing" patches. They STILL haven't fixed it for DirectX 9 games, which there are still a ton out there that use it, and as such, you still get microstuttering in Crossfire mode.

I'm just saying.


----------



## Ganf

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> But were the updates for Kepler? Like if the 780 Ti got a performance boost wouldn't the Titan Black for example also get a boost?


Yep, but the 780ti wasn't cannibalizing Titan Black sales, they were dead from the beginning. Take a look around the forums and count the owners.

Nvidia has done a great job of hyping both the Titan X and the 980 Ti to give them huge quarterly sales figures in the quarters they were released. People bought the Titan X because it was the best; now people are buying the 980 Ti because it's almost as good, with some of those people even selling their Titan Xs to buy the 980 Ti. But this is it until Pascal, so how is Nvidia going to keep selling both at this pace until summer of 2016? They're going to manipulate the drivers to push sales of whichever card they don't feel is moving enough units, just like they did with the 780 Ti to encourage people to move to Maxwell.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Ganf*
> 
> Yep, but the 780ti wasn't cannibalizing Titan Black sales, they were dead from the beginning. Take a look around the forums and count the owners.
> 
> Nvidia has done a great job of hyping both the Titan X and 980ti to give them huge quarterly sales figures in the quarters they were released. People bought the Titan X because it was the best, now people are buying the 980ti because it's almost as good, some of those people even selling their Titan X's to buy the 980ti. But this is it until pascal, so how is Nvidia going to keep selling both at this pace until summer of 2016? They're going to manipulate the drivers to push sales of whichever card they don't feel is moving enough units, just like they did with the 780ti to encourage people to move to Maxwell.


Even for cards of the same architecture and (basically) the same die? That sounds really grassy knoll tbh.


----------



## Ganf

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Even for cards of the same architecture and (basically) the same die? That sounds really grassy knoll tbh.


Have they shown any boundaries yet on what they'll manipulate to push sales? Don't mistake this for some foretelling, or think I've got inside information; I'm just pointing out that it would fit their pattern of behavior. It's still up to them to decide whether they want to take the risk of being caught doing it.

I should correct that: they will get caught if they do it. But they'll decide whether it's profitable to do it depending on how sales of each card are going, factoring in any perceived costs of getting caught.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Ganf*
> 
> Have they shown any boundaries on what they'll manipulate to push sales yet? Don't make the mistake that this is some foretelling or that I've got inside information or something, just pointing out that this would fit their pattern of behavior. It's still up to them to decide if they want to take the risk of being caught doing it.
> 
> I should correct that. They will get caught if they do it, but they're going to decide whether it would be profitable to do it or not depending on how sales of each card are going, and factor in any perceived costs of getting caught.


Well if the new driver is gimping the Ti they could always go back and use the previous driver.


----------



## headd

AMD R9 Fury X Is Potentially The Fastest GPU in The World - How Fast Can Fiji XT Realistically Be ?

Read more: http://wccftech.com/amd-hbm-fury-x-fastest-world/#ixzz3c0XXtlpE

And that is without HBM benefits and architectural benefits.


----------



## ComputerRestore

Quote:


> Originally Posted by *headd*
> 
> AMD R9 Fury X Is Potentially The Fastest GPU in The World - How Fast Can Fiji XT Realistically Be ?
> 
> Read more: http://wccftech.com/amd-hbm-fury-x-fastest-world/#ixzz3c0XXtlpE
> 
> And that is without HBM benefits and architectural benefits.


45% more Core/Shaders than 390X (5.9TFlops)

50% more Performance than 290X (5.6TFlops VS 8.5TFlops)

Requires temps below 0 Kelvin

Seems plausible.


----------



## Olivon

Quote:


> Originally Posted by *headd*
> 
> AMD R9 Fury X Is Potentially The Fastest GPU in The World - How Fast Can Fiji XT Realistically Be ?
> 
> Read more: http://wccftech.com/amd-hbm-fury-x-fastest-world/#ixzz3c0XXtlpE
> 
> And that is without HBM benefits and architectural benefits.


Who cares about crappy WCCFTurd analysis ?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *headd*
> 
> AMD R9 Fury X Is Potentially The Fastest GPU in The World - How Fast Can Fiji XT Realistically Be ?
> 
> Read more: http://wccftech.com/amd-hbm-fury-x-fastest-world/#ixzz3c0XXtlpE
> 
> And that is without HBM benefits and architectural benefits.


Applying their math on the 980 and 980 Ti:
Quote:


> We then multiply the result by Fiji's clock speed, which is purported to be 1050Mhz vs 1000Mhz for Hawaii. Which makes Fiji's clock speed 1.05 that of Hawaii.
> Performance = core count x clock speed
> = (4096 / 2816) x 1.05
> = 1.45 x 1.05
> = 1.53


GTX 980 = 1216 MHz http://www.techpowerup.com/gpudb/2621/geforce-gtx-980.html
GTX 980 Ti = 1076 MHz http://www.techpowerup.com/gpudb/2724/geforce-gtx-980-ti.html

That's 88.5%, take 1216 * .885 = 1076.16

= (2816 / 2048) x 0.885
= 1.375 x 0.885
= 1.22

Take that 80, multiply by 1.22 = 97.6, close to that 100 number

Edit: Even if the clock speed was 1000 MHz for Fury X, it'd be 1.45 x 1.00 = 1.45, or 45% which would be 72 * 1.45 for 104.4 and have the same *estimated* performance of the Titan X
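For what it's worth, the naive core-count-times-clock scaling used above can be written out in a few lines of Python. This is only a sketch of that estimate; the 80 and 72 relative-performance indices and the 1050 MHz Fiji clock are the numbers assumed in this thread, not measurements:

```python
def scaled_perf(base_index, base_cores, base_clock_mhz, new_cores, new_clock_mhz):
    """Naive estimate: assume performance scales linearly with shader count and clock."""
    return base_index * (new_cores / base_cores) * (new_clock_mhz / base_clock_mhz)

# Sanity check: GTX 980 (index ~80) scaled to GTX 980 Ti specs lands near its ~100 index
print(round(scaled_perf(80, 2048, 1216, 2816, 1076), 1))  # ~97.3

# R9 290X (index ~72) scaled to the rumored Fiji specs (4096 SPs at 1050 MHz)
print(round(scaled_perf(72, 2816, 1000, 4096, 1050), 1))  # ~110.0
```

Of course this ignores ROPs, bandwidth, and architectural changes entirely, which is the whole caveat of the estimate.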


----------



## Redwoodz

Quoted from the source
Quote:


> Performance slightly below Titanium X


, with an immature driver. The source is a guy on a forum quoting another guy... lol. News these days. And 4 GB could very well be more than enough with DirectX 12.


----------



## Wishmaker

Quote:


> Originally Posted by *Olivon*
> 
> Who cares about crappy WCCFTurd analysis ?


Anything related to AMD is relevant from these guys. Chuck in information from INTEL or NVIDIA and we bring in the grain of salt







.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *headd*
> 
> AMD R9 Fury X Is Potentially The Fastest GPU in The World - How Fast Can Fiji XT Realistically Be ?
> 
> Read more: http://wccftech.com/amd-hbm-fury-x-fastest-world/#ixzz3c0XXtlpE
> 
> And that is without HBM benefits and architectural benefits.


So 6 FPS faster on an average of 21 games than a Titan X at 1440p.


----------



## headd

6%-8%... it's not FPS. And that's without HBM benefits and architectural benefits.
We already know it will be GCN 1.2 or 1.3, not the GCN 1.1 in Hawaii, so it could be more.


----------



## Serandur

Quote:


> Originally Posted by *headd*
> 
> AMD R9 Fury X Is Potentially The Fastest GPU in The World - How Fast Can Fiji XT Realistically Be ?
> 
> Read more: http://wccftech.com/amd-hbm-fury-x-fastest-world/#ixzz3c0XXtlpE
> 
> And that is without HBM benefits and architectural benefits.


It would be a better idea to benchmark a 4 GB 285 (Tonga) and multiply the results by 2.286x, imo. Unless 4 GB 285s don't exist, in which case use a 2 GB one for lower resolutions only. Both chips taped out around the same time, after all, and Fiji is probably also GCN 1.2. If desired, overclock the memory on the thing as far as it can go, too.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Serandur*
> 
> It would be a better idea to benchmark a 4 GB 285 (Tonga) and multiply the results by 2.286x, imo. Unless 4 GB 285s don't exist in which case use a 2 GB one for lower resolutions only. Both chips taped out at the same time after all, Fiji's probably also GCN 1.2. If desired, overclock the memory on the thing as far as it can go, too.




2 GB only, but that would be 47 x 2.286 = 107.442


----------



## moldyviolinist

Quote:


> Originally Posted by *Serandur*
> 
> It would be a better idea to benchmark a 4 GB 285 (Tonga) and multiply the results by 2.286x, imo. Unless 4 GB 285s don't exist in which case use a 2 GB one for lower resolutions only. Both chips taped out at the same time after all, Fiji's probably also GCN 1.2. If desired, overclock the memory on the thing as far as it can go, too.


Comparing Anand's benchmarks for Crysis 3, 1080p:

R9 285: 54.8 FPS
Fury: 125.3 FPS
Titan X: 127.7 FPS

Actually looks about right.
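To be clear, the 125.3 figure for Fury here is Serandur's shader-count scaling applied to Anand's measured 285 result, not a benchmark. As a one-liner (4096 SPs for Fiji is the rumored figure, not a confirmed spec):

```python
tonga_fps = 54.8            # R9 285, Crysis 3 at 1080p (measured)
shader_ratio = 4096 / 1792  # rumored Fiji SP count over Tonga's, ~2.286
print(round(tonga_fps * shader_ratio, 1))  # ~125.3, right next to the Titan X's 127.7
```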


----------



## tsm106

Quote:


> Originally Posted by *Redwoodz*
> 
> Quoted from the source
> Quote:
> 
> 
> 
> Performance slightly below Titanium X
> 
> 
> 
> , with immature driver. The source is a guy on a forum quoting another guy....lol. News these days. And 4GB could very well be more than enough with DirectX12.
Click to expand...

That passes muster as a source these days with online tech journalism. Is that a sad state of affairs or what? Shakes head.


----------



## headd

Quote:


> Originally Posted by *moldyviolinist*
> 
> Comparing Anand's benchmarks for Crysis 3, 1080p:
> 
> R9 285: 54.8 FPS
> Fury: 125.3 FPS
> Titan X: 127.7 FPS
> 
> Actually looks about right.


Nope...
The R9 285 has only 1792 SPs at 918 MHz and 32 ROPs.
Fiji: 4096 SPs at 1 GHz+ (1050 MHz rumored, roughly 15% more than Tonga) and maybe 128 ROPs (4x more).
You really can't compare Tonga with Fiji.


----------



## moldyviolinist

Quote:


> Originally Posted by *headd*
> 
> Nope...
> r9 285 have only 1792SP at 918Mhz and 32Rops
> FIJI 4096SP 1GHZ+ and maybe 128Rops


True, I didn't account for the clock speed, but if we assume the 285 is not ROP/bandwidth limited at 1080p, then the shaders will be the only factor by which we should multiply. My estimate is basically a worst case though.


----------



## harney

Quote:


> Originally Posted by *moldyviolinist*
> 
> Comparing Anand's benchmarks for Crysis 3, 1080p:
> 
> R9 285: 54.8 FPS
> Fury: 125.3 FPS
> Titan X: 127.7 FPS
> 
> Actually looks about right.


Well, if the Fury X does sit anywhere near the 980 Ti, then it's all down to the price... but with it being HBM and water-cooled, AMD are going to ride on that and charge a premium: $700, $800, could be even more. But you never know... it might shock us all with the same performance as the 980 Ti and a cheaper price too. That's my wish... then we would have a price war... got me popcorn at the ready...


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Some would say the same about AMD and their "Frame Pacing" patches. They STILL haven't fixed it for DirectX 9 games, which there are still a ton out there that use it, and as such, you still get microstuttering in Crossfire mode.
> 
> I'm just saying.


Yes, and you've "just been saying" it about 40 posts or so in this thread alone.







Why don't you do us all a favor and take a break from the AMD threads for a while. We all know your same tired comments by heart now anyway so there's really no need for you to post "how bankrupt AMD" is or "how they are losing market share" anymore. We get it.


----------



## Blameless

Multiplying performance by the supposed increase in shader count and assuming a clock speed is something we've all done, mentally, if not on paper, since the first rumors hit.

At best it's a vague approximation that doesn't reveal anything new. It may wind up being accurate...or not. We know that Fiji likely has a real shot at competitive performance against GM200, but we need more hard information before being more specific.
Quote:


> Originally Posted by *47 Knucklehead*
> 
> Yup, like I've been saying for weeks now, it will be 4GB on launch with the stock interposer. Several months from now, they will most likely do a larger interposer with 8GB, but not for launch.


Precisely how much later matters.

If the 4GiB Fiji performs well enough, and an 8GiB is announced, I may decide to wait for the 8GiB variant rather than grabbing a 980Ti.

Regardless, I can certainly wait the few weeks at this point to get an idea if Fiji will be truly competitive or not.
Quote:


> Originally Posted by *EniGma1987*
> 
> is it? The 2900 series was entirely made to have massive memory bandwidth above all else and that would win everything according to ATI.


The 2900 series was a radical change from previous ATI cards. Yes it had more memory bandwidth, but that was not the key change. It was ATI's first processor with unified shaders, which was a much bigger deal.
Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> But isn't the Ti basically the same as a Titan X? Wouldn't a driver update that increase the Ti's performance also increase the Titan X's performance?


Yes.
Quote:


> Originally Posted by *47 Knucklehead*
> 
> Some would say the same about AMD and their "Frame Pacing" patches. They STILL haven't fixed it for DirectX 9 games


I doubt they ever will. The incentive to do so decreases over time. Certainly this will leave a bitter taste in the mouths of many customers, myself included, but the portion of people really concerned about getting more DX9 performance out of multi-generation-old hardware run in CFX is dwindling, as is the chance of any positive media attention on the subject.


----------



## Nvidia Fanboy

Quote:


> Originally Posted by *Ganf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *EniGma1987*
> 
> Does this card and the hype about it and the features it has remind anyone else of the 2900XTX?
> 
> 
> 
> Nope.
> 
> 2900XTX was basically a rebrand with GDDR4 over GDDR3, anyone who believed that was going to steal the crown from a card that was 35-50% faster was a fool. There's a pretty big difference between an upgrade that was just better VRAM and an upgrade that is 40% more shaders + entirely new VRAM and VRAM usage design.
Click to expand...

Did he mean the 2900 XT? I don't remember a 2900 XTX ever being released. Plenty of AMD fans were expecting big things from the 2900 XT. AMD had allowed the 8800 GTX and 8800 GTS 640/320 free rein over the market for quite some time.


----------



## CrazyElf

Let's review what we know so far:

- Probably 4 GB at launch, perhaps with an 8 GB version coming later in the fall
- It looks like a big die - perhaps as much as ~650 mm², unless the mockup in the image was much bigger than the real thing
- If this thing is on the GF 28nm SHP, there could be some potential for gains from that alone in the performance-per-watt category
- There will be massive bandwidth, but the problem is that it only has 4 GB (high resolutions are when you need the bandwidth)
- Edit: If FP64 performance is omitted, we could see somewhat better performance per watt too.

The real question then becomes the performance per stream processor and how much AMD has improved it. If this thing really does have 4096 stream processors (possible with that giant die size) and is combined with the 28nm GF SHP, I think there is potential for some pretty huge performance increases.

Availability-wise, I think we're looking at maybe July-August if mid-June is the paper launch; then perhaps later in Q4 we will see an 8 GB version, and perhaps some custom PCB variants? It may take until early 2016 for us to get 8 GB variants with custom PCBs.


----------



## magnek

Quote:


> Originally Posted by *Olivon*
> 
> June 16th is just a presentation and the rebranding launch. Fiji reviews are expected end of june according to Hardware.fr :
> 
> http://forum.hardware.fr/hfr/Hardware/hfr/computex-fiji-photo-sujet_980973_1.htm#t9508501


Doubt it, especially when HBM takes up 1/3 of AMD's official promotional page

http://www.amd.com/en-us/products/graphics/graphics-technology
Quote:


> Originally Posted by *Cyclonic*
> 
> http://nl.hardware.info/nieuws/43996/computex-amd-toont-fiji-videokaart-achter-gesloten-deuren---update
> 
> The dutch site of hardware.info got a look behind closed doors, there saying it will only be 4gb and will be slower then Titan X but on par with 980TI.


Why do I get the feeling they're simply ripping off HardwareLuxx and Golem.de and adding their own little spin?

Rough translation of Golem.de:
Quote:


> The boss herself has announced the next AMD graphics card: built on the Fiji chip, the Radeon R9 is to be presented in mid-June as the fastest card in the world. We have meanwhile received new unofficial information.
> 
> AMD CEO Lisa Su showed the chip of the next high-end graphics card in public for the first time at the Computex press conference. *Around the GPU, codenamed Fiji, sit four stacks of video memory,* and together this combination is said to make the fastest graphics card in the world - so said AMD vice president Matt Skinner a few minutes earlier.
> 
> According to our information, the Fiji-based graphics card will have 4 GB of High Bandwidth Memory; alongside three DisplayPort outputs, AMD has fitted an HDMI 2.0 output. The board is very short at less than 20 cm, as the memory stacks sit next to the graphics chip instead of being distributed across the entire board like GDDR5 memory.
> 
> The two 8-pin connectors together with the PCIe slot theoretically allow a power draw of up to 375 watts, but AMD's Joe Macri told Golem.de that the Fiji graphics card should not require more energy than the Radeon R9 290X. The upcoming models should thus remain under 300 watts - possibly even significantly less.
> 
> *This is due, among other things, to the final GPU frequency, which is to be in excess of 1 GHz.* Whether this means a base clock, as with Nvidia's GeForce cards, or a boost clock, we do not know. That should be clarified on June 16th, 2015: AMD plans to introduce the Fiji Radeon in a live stream.


----------



## Master__Shake

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Some would say the same about AMD and their "Frame Pacing" patches. They STILL haven't fixed it for DirectX 9 games, which there are still a ton out there that use it, and as such, you still get microstuttering in Crossfire mode.
> 
> I'm just saying.


.

who in the flock cares about dx9 these days?

it's 2015 not 2005

besides there is no comparison of a company supporting 10 year old games compared to a company that barely supports it's own year and a half old gpus.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *magnek*
> 
> Doubt it, especially when HBM takes up 1/3 of AMD's official promtional page
> 
> http://www.amd.com/en-us/products/graphics/graphics-technology
> Why do I get the feeling they're simply ripping off HardwareLuxx and Golem.de and adding their own little spin?
> 
> Rough translastion of Golem.de:


It's almost like all these Euro sites are simply trolling AMD. I just think it's pretty irresponsible of a supposed "news" source to post hearsay, innuendo, and conjecture about a product just before launch. You don't see the more reputable sites posting crap like this right before the launch of a new product with no proof whatsoever.

If you know the card performs poorly, post the proof of it. Otherwise, if you are unwilling to break NDA then keep your mouth shut about performance and let AMD launch the card without tainting the public opinion of it...


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Master__Shake*
> 
> who in the flock cares about dx9 these days?


People who don't view video games as "Frisbees" to be thrown away at the end of the year.

Oh, and people who play MMOs ... a good portion of which STILL use DX9.

That's who.


----------



## Majin SSJ Eric

Haha, DX9 MMO's don't need two cards to play them. If you still had a 7970 you'd be getting way more than 100FPS in games like WoW. Plus, you are talking about a pre-2007 API. Meanwhile, in the Kepler driver thread we are being told by people like you that Nvidia simply has no need to optimize drivers for our "ancient" Kepler cards (that were still Nvidia's flagship cards less than 1 year ago). So which is it?


----------



## hawker-gb

Which MMO is unplayable on AMD cards?


----------



## GorillaSceptre

Quote:


> Originally Posted by *47 Knucklehead*
> 
> So 6 FPS faster on an average of 21 games than a Titan X at 1440p.


%, and that's a worst case. That's if they did absolutely no work to the architecture, which i doubt after 2 years. Not to mention HBM on top of that.


----------



## Blameless

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Haha, DX9 MMO's don't need two cards to play them. If you still had a 7970 you'd be getting way more than 100FPS in games like WoW.


I don't play Planetside 2 (which is DX9) any more, but it wasn't that long ago that I did, and I sure as hell wasn't getting anywhere near 100 fps in larger battles...and that was on a 290X. Before the 290X I played the game on 7950s. Crossfire would definitely have helped, if it had working frame pacing.

Of course, few DX9 games are as demanding, but there are certainly a few.


----------



## harney

That is a better photo


----------



## ebduncan

so many posts.

guess on the 6/16/15 the forum will implode.

I am just wondering what the models will be. 3 cards? Fury, Fury x, Fury vr?


----------



## harney

Quote:


> Originally Posted by *ebduncan*
> 
> so many posts.
> 
> guess on the 6/16/15 the forum will implode.
> 
> I am just wondering what the models will be. 3 cards? Fury, Fury x, Fury vr?


Fury MAXX + X


----------



## EniGma1987

Quote:


> Originally Posted by *harney*
> 
> That is a better photo


mmmmmmmm

Quote:


> Originally Posted by *harney*
> 
> Fury MAXX + X


The third X added on means we get naked Ruby logo on the card?


----------



## magnek

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> It's almost like all these Euro sites are simply trolling AMD. I just think it's pretty irresponsible of a supposed "News" source to post hearsay, innuendo, and conjecture about a product just before launch. You don't see the more reputable sites posting crap like this right before the launch of a new product with no proof whatsoever.
> 
> If you know the card performs poorly, post the proof of it. Otherwise, if you are unwilling to break NDA, then keep your mouth shut about performance and let AMD launch the card without tainting public opinion of it...


Well it's much easier to go with the popular opinion than it is to go against the grain

You know, the thing I really don't get is this: by all accounts, before Computex the rumors were that Fiji was supposed to be a Titan X killer, right? If AMD designed Fiji to be a Titan X killer, then Titan X's performance (possibly 4K performance only) and price are the only relevant metrics for them. A card that's a hair slower at stock, like the 980 Ti, isn't going to throw them off. If you aimed to beat your competitor's best, what relevance does their second best have to you with regards to performance? Quite frankly, the real surprise for me regarding the 980 Ti was its price, which seemed very aggressive. So personally I think in the next 2 weeks AMD may have their bean counters come up with a still-somewhat-profitable price for Fiji, but that's it.


----------



## Master__Shake

Quote:


> Originally Posted by *47 Knucklehead*
> 
> People who don't view video games as "Frisbee's" and throw them away at the end of the year.
> 
> Oh, and people who play MMO's ... which a good portion of them STILL use DX9.
> 
> That's who.


i stand by what i said.

why don't you respond to my statement about support?

10 year old game...1.5 year old gpu.


----------



## hawker-gb

Conspiracy theorists out there have almost two more weeks for all kinds of crazy speculation.
This card has turned out to be the most anticipated news.


----------



## criminal

Quote:


> Originally Posted by *Master__Shake*
> 
> i stand by what i said.
> 
> why don't you respond to my statement about support?
> 
> 10 year old game...1.5 year old gpu.


Because it doesn't fit his agenda. And someone had the nerve to say he wasn't biased the other day. Whatever.


----------



## BigMack70

Quote:


> Originally Posted by *criminal*
> 
> Because it doesn't fit his agenda. And someone had the nerve to say he wasn't biased the other day. Whatever.


He's easily the most biased pro-nvidia anti-AMD regular user on these forums. I don't know why anyone interacts with him on these topics.


----------



## criminal

Quote:


> Originally Posted by *BigMack70*
> 
> He's easily the most biased pro-nvidia anti-AMD regular user on these forums. I don't know why anyone interacts with him on these topics.


Me either. And I wish people would stop quoting him so I don't have to see his posts.


----------



## Redwoodz

Quote:


> Originally Posted by *BigMack70*
> 
> He's easily the most biased pro-nvidia anti-AMD regular user on these forums. I don't know why anyone interacts with him on these topics.


Because someone else will take up his mantle if he stops posting. He is doing his job after all.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Master__Shake*
> 
> i stand by what i said.
> 
> why don't you respond to my statement about support?
> 
> 10 year old game...1.5 year old gpu.


He never responds. He spreads a bunch of FUD, then moves on to the next thread.


----------



## hawker-gb

I've never seen such hate from the green side on this forum.

Looks like AMD will really deliver something good in two weeks.

And if they deliver a card 100% faster than Titan, the haters will still hate. They are damaged beyond repair.


----------



## DividebyZERO

Quote:


> Originally Posted by *hawker-gb*
> 
> I've never seen such hate from the green side on this forum.
> 
> Looks like AMD will really deliver something good in two weeks.
> 
> And if they deliver a card 100% faster than Titan, the haters will still hate. They are damaged beyond repair.


If it is faster than titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have Gsynch, Physx, or super duper hairdos. Edit: oh and memory size.


----------



## FallenFaux

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Haha, DX9 MMO's don't need two cards to play them. If you still had a 7970 you'd be getting way more than 100FPS in games like WoW. Plus, you are talking about a pre-2007 API. Meanwhile, in the Kepler driver thread we are being told by people like you that Nvidia simply has no need to optimize drivers for our "ancient" Kepler cards (that were still Nvidia's flagship cards less than 1 year ago). So which is it?


WoW has actually been DX11 since 2010.


----------



## magnek

Quote:


> Originally Posted by *DividebyZERO*
> 
> If it is faster than titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have Gsynch, Physx, or super duper hairdos. Edit: oh and memory size.


Nailed it.


----------



## Robenger

Quote:


> Originally Posted by *DividebyZERO*
> 
> If it is faster than titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have Gsynch, Physx, or super duper hairdos. Edit: oh and memory size.


Don't forget how ugly and unaesthetic the card is.


----------



## Devnant

Quote:


> Originally Posted by *DividebyZERO*
> 
> If it is faster than titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have Gsynch, Physx, or super duper hairdos. Edit: oh and memory size.


Pretty much this.


----------



## prjindigo

Remember that this card is designed for DX12, not DX11, and that there is little chance the comparison is honest. If it was running DX11 code, then the memory bandwidth went unused, as it had to emulate a DX11 card.

I sincerely doubt anybody's gonna be concerned when the 390, 380, AND the Fury all outrun 1080p monitors at 144Hz anyway.

Betcha we'll see a full DX12 implementation at 4K on the 16th...


----------



## tpi2007

HBM is a wildcard, it could help out in a distinct way or it could just be an enabler for the increased number of cores, meaning, it will just be doing its job, but with lower power consumption giving that headroom to the GPU die itself, all the while freeing up PCB real estate.

From what we can see right now with the Tonga architectural improvements, and unless they made some more for Fiji, I'd say it's going to be close, with driver fine tuning and final clockspeed telling which company has the fastest GPU.

3072 Maxwell CUDA Cores have 90% of the performance of 4608 Kepler Cuda Cores (128 Maxwell cores have 90% of the performance of 192 Kepler cores), thus equivalent to 4147.2 Kepler cores. Now, the R9 290X has been competing with the 780 Ti and they have around the same number of cores - 2816 (AMD) vs 2880 (Nvidia). Meanwhile Fiji is rumoured to have 4096 cores, which is slightly less than 4147, so both the relative and absolute difference is even smaller this time. This is all rough estimation, but I'd say that clockspeeds and driver tuning will end up saying who's the winner.
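The arithmetic checks out as a quick sketch (keep in mind the 90% per-core figure is the rough estimate above, and the 4096-core Fiji count is still a rumor):

```python
# Core-equivalence estimate: 128 Maxwell cores ~ 90% of 192 Kepler cores
# (the poster's assumption), Fiji core count per the rumors.
maxwell_cores = 3072                       # Titan X (GM200)
kepler_per_maxwell = (192 / 128) * 0.9     # Kepler-equivalents per Maxwell core
titan_x_in_kepler = maxwell_cores * kepler_per_maxwell  # ~4147.2

fiji_cores = 4096                          # rumored
ratio = fiji_cores / titan_x_in_kepler     # ~0.988, i.e. within about 1.2%
```

So on raw core-equivalents the two chips land within roughly 1% of each other, which is exactly why final clocks and driver tuning would decide the winner.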

And they better have a DX 11 overhead optimized driver waiting to be released with the card. People want performance now, not in months or years. And especially, people want performance in games that are out now and won't be upgraded to DX 12.


----------



## tsm106

Quote:


> Originally Posted by *tpi2007*
> 
> HBM is a wildcard, it could help out in a distinct way or it could just be an enabler for the increased number of cores, meaning, it will just be doing its job with lower power consumption giving that headroom to the GPU die itself, all the while freeing up PCB real estate.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> From what we can see right now with the Tonga architectural improvements, and unless they made some more for Fiji, I'd say it's going to be close, with driver fine tuning and final clockspeed telling which company has the fastest GPU.
> 
> 3072 Maxwell CUDA Cores have 90% of the performance of 4608 Kepler Cuda Cores (128 Maxwell cores have 90% of the performance of 192 Kepler cores), thus equivalent to 4147.2 Kepler cores. Now, the R9 290X has been competing with the 780 Ti and they have around the same number of cores - 2816 (AMD) vs 2880 (Nvidia). Meanwhile Fiji is rumoured to have 4096 cores, which is slightly less than 4147, so both the relative and absolute difference is even smaller this time. This is all rough estimation, but I'd say that clockspeeds and driver tuning will end up saying who's the winner.
> 
> And they better have a DX 11 overhead optimized driver waiting to be released with the card. People want performance now, not in months or years. And especially, people want performance in games that are out now and won't be upgraded to DX 12.


Not quite. It's not an enabler for freeing up die space but that is a bonus logistically. Instead HBM tackles a different but profound problem.

Quote:


> But as graphics chips grow faster, their appetite for fast delivery of information ("bandwidth") continues to increase. GDDR5's ability to satisfy those bandwidth demands is beginning to wane as the technology reaches the limits of its specification. Each additional gigabyte per second of bandwidth is beginning to consume too much power to be a wise, efficient, or cost-effective decision for designers or consumers. Taken to its logical conclusion, GDDR5 could easily begin to stall the continued performance growth of graphics chips. HBM resets the clock on memory power efficiency, offering >3X the bandwidth per watt of GDDR5.


----------



## tpi2007

Quote:


> Originally Posted by *tsm106*
> 
> Not quite. It's not an enabler for freeing up die space but that is a bonus logistically. Instead HBM tackles a different but profound problem.


That's exactly what I said, or at least wanted to say. I added a comma and a "but" to make the sentence clearer:

Quote:


> or it could just be *an enabler for the increased number of cores, meaning, it will just be doing its job*, but with lower power consumption giving that headroom to the GPU die itself, all the while freeing up PCB real estate.


----------



## tsm106

Quote:


> Originally Posted by *tpi2007*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tsm106*
> 
> Not quite. It's not an enabler for freeing up die space but that is a bonus logistically. Instead HBM tackles a different but profound problem.
> 
> 
> 
> That's exactly what I said, or at least wanted to say. I added a comma and a "but" to make the sentence clearer:
> 
> Quote:
> 
> 
> 
> or it could just be *an enabler for the increased number of cores, meaning, it will just be doing its job*, but with lower power consumption giving that headroom to the GPU die itself, all the while freeing up PCB real estate.
> 

But the problem isn't the die space. GPUs, according to AMD, will hit a power-draw wall with GDDR5. The chips already take up way too much board space, and there are losses through their connectivity.


----------



## SoloCamo

Quote:


> Originally Posted by *tsm106*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tpi2007*
> 
> HBM is a wildcard, it could help out in a distinct way or it could just be an enabler for the increased number of cores, meaning, it will just be doing its job with lower power consumption giving that headroom to the GPU die itself, all the while freeing up PCB real estate.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> From what we can see right now with the Tonga architectural improvements, and unless they made some more for Fiji, I'd say it's going to be close, with driver fine tuning and final clockspeed telling which company has the fastest GPU.
> 
> 3072 Maxwell CUDA Cores have 90% of the performance of 4608 Kepler Cuda Cores (128 Maxwell cores have 90% of the performance of 192 Kepler cores), thus equivalent to 4147.2 Kepler cores. Now, the R9 290X has been competing with the 780 Ti and they have around the same number of cores - 2816 (AMD) vs 2880 (Nvidia). Meanwhile Fiji is rumoured to have 4096 cores, which is slightly less than 4147, so both the relative and absolute difference is even smaller this time. This is all rough estimation, but I'd say that clockspeeds and driver tuning will end up saying who's the winner.
> 
> And they better have a DX 11 overhead optimized driver waiting to be released with the card. People want performance now, not in months or years. And especially, people want performance in games that are out now and won't be upgraded to DX 12.
> 
> 
> 
> 
> 
> 
> Not quite. It's not an enabler for freeing up die space but that is a bonus logistically. Instead HBM tackles a different but profound problem.
> 
> Quote:
> 
> 
> 
> But as graphics chips grow faster, their appetite for fast delivery of information ("bandwidth") continues to increase. GDDR5's ability to satisfy those bandwidth demands is beginning to wane as the technology reaches the limits of its specification. Each additional gigabyte per second of bandwidth is beginning to consume too much power to be a wise, efficient, or cost-effective decision for designers or consumers. Taken to its logical conclusion, GDDR5 could easily begin to stall the continued performance growth of graphics chips. HBM resets the clock on memory power efficiency, offering >3X the bandwidth per watt of GDDR5.
> 

The issue I have with AMD's Fury and 390X at this point is the timing. Is the performance of the card going to need that much more bandwidth? For example, let's say the 390X is, as assumed, a 290X clocked at 1050/1500 with 8GB of RAM. I've run tests at these clocks and it's really not much different than a 290X at stock, even at 4K. I actually saw more performance at 4K using 1100/1250 than 1050/1500.

So factor in that HBM will have huge bandwidth gains with a stronger GPU behind it: will it really matter, given that it's limited to 4GB? Until the 8GB cards come out I don't see this really taking off. Basically, Hawaii wasn't exactly bandwidth limited as-is with the 512-bit bus and decent clocks, so will the stronger GPU actually require that much more bandwidth?
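To put rough numbers on the bandwidth question (a sketch; the GDDR5 clocks are the ones discussed above, and the HBM figures assume the rumored Fiji config of 4 stacks with a 1024-bit interface each):

```python
def gddr5_bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    # GDDR5 is quad-pumped: effective data rate = 4 x memory clock.
    return mem_clock_mhz * 4 * bus_width_bits / 8 / 1000

def hbm_bandwidth_gbs(clock_mhz, bus_width_bits):
    # First-generation HBM is double-pumped (DDR).
    return clock_mhz * 2 * bus_width_bits / 8 / 1000

print(gddr5_bandwidth_gbs(1250, 512))  # 320.0 GB/s -- 290X at stock 1250 MHz
print(gddr5_bandwidth_gbs(1500, 512))  # 384.0 GB/s -- the 1050/1500 overclock
print(hbm_bandwidth_gbs(500, 4096))    # 512.0 GB/s -- rumored Fiji: 4 x 1024-bit stacks
```

By these numbers HBM would land around 512 GB/s, only about a third more than an overclocked 512-bit Hawaii, so whether the GPU can actually use the extra bandwidth is a fair question.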


----------



## tpi2007

Quote:


> Originally Posted by *tsm106*
> 
> But the problem isn't the space. GPUs, according to AMD, will hit a power-draw wall with GDDR5. They already take up way too much space, with losses through their connectivity.


Who said it was? The way I phrased it, I hoped it would come across that the PCB space gains were a side benefit.

The real problem at the end of the line isn't power consumption either (it is a problem, but not the ultimate one). It's the fact that there isn't any faster memory than 8 GHz GDDR5, and it's not practical or viable to put more than a 512-bit memory bus on a card, so the writing is on the wall for theoretical GDDR5 performance. Even at the top end it's really not economically feasible anymore, say 8 GB or more of 8 GHz GDDR5 with a 512-bit memory bus, so going HBM is inevitable at the high-end.

What I said, and thus put in bold, is that HBM in Fiji may not add anything distinctive in performance; it may just be doing its job in enabling the 4096 cores to do theirs, just as 5 GHz memory over a 512-bit bus allows the 2816 cores in Hawaii to do theirs. In essence, HBM not being a bottleneck, allowing the 4096 cores to do all they have the potential to do.

Was that clear now?


----------



## sugarhell

The 290X is ROP limited even with 64 ROPs. But more ROPs demand more bandwidth, and in GCN the ROPs generally get starved for memory bandwidth. Using HBM not only lowers power consumption but also allows the use of 100+ ROPs, which in turn means a higher shader count. If they haven't changed how the ROPs work in GCN, then just by using HBM I expect to see 4K shaders, 16 ACEs, and 120 ROPs or more.
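As a rough sketch of why ROPs chew through bandwidth (assumed numbers: one 32-bit color write per ROP per clock, ignoring blending, Z traffic, and compression):

```python
def rop_write_demand_gbs(rops, clock_ghz, bytes_per_pixel=4):
    # Peak color-write traffic: one pixel write per ROP per clock, 32-bit color.
    # Simplified: real workloads add Z/blend traffic and subtract idle cycles.
    return rops * clock_ghz * bytes_per_pixel

print(rop_write_demand_gbs(64, 1.0))   # 256.0 GB/s -- near Hawaii's 320 GB/s total
print(rop_write_demand_gbs(128, 1.0))  # 512.0 GB/s -- needs HBM-class bandwidth
```

Even 64 ROPs at 1 GHz can ask for ~256 GB/s of color writes alone, so doubling the ROP count without HBM-class bandwidth wouldn't buy much.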


----------



## tsm106

Quote:


> Originally Posted by *tpi2007*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tsm106*
> 
> But the problem isn't the space. GPU's according to AMD will hit a power draw wall with GDD5. They already take up way too much space and losses thru their connectivity.
> 
> 
> 
> Who said it was? The way I phrased it I hoped it would have come across that the PCB space gains was a side benefit.
> 
> The real problem at the end of the line isn't power consumption either, it's that fact that there isn't any faster memory than 8 Ghz GDDR5 and it's not practical or viable to put more than a 512-bit memory bus on a card, so the writing is on the wall for theoretical GDDR5 performance, and even at the top it's really not economically feasible anymore, say 8GB or more of 8 Ghz GDDR5 with a 512-bit memory bus, so going HBM is inevitable at the high-end.
> 
> What I said, and thus put in bold is that HBM in Fiji may not be anything distinctive in added performance, it may just be doing its job in enabling the 4096 cores to do their job, just as the 5 Ghz over a 512-bit memory bus allow the 2816 cores in Hawaii to do their job. In essence, HBM not being a bottleneck, allowing the 4096 to do all they have the potential to do.
> 
> *Was that clear now?*

Quote:


> HBM is a wildcard, *it could help out in a distinct way or it could just be an enabler* for the increased number of cores, meaning, it will just be doing its job, but with lower power consumption giving that headroom to the GPU die itself, all the while freeing up PCB real estate.


Your point, or points, are actually confusing and a bit dismissive. So which is it, one would ask? HBM could help, or it could just be a space saver... hmm? I'm not trying to argue with you, but I'm not sure you're saying anything much. It could be something or nothing, or it could be allowing the cores to do their job?

My point, or AMD's point, with HBM is that it is the next step, the evolution, because according to them the ROI on GDDR is about to fall off. Also on that point, Nvidia apparently saw it coming too, as they tried to push HMC.


----------



## Master__Shake

Quote:


> Originally Posted by *DividebyZERO*
> 
> If it is faster than titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have Gsynch, Physx, or super duper hairdos. Edit: oh and memory size.


have to agree with this.

no matter what amd does it's not good enough compared to nvidia.

how sad.


----------



## tpi2007

Quote:


> Originally Posted by *tsm106*
> 
> Your point, or points, are actually confusing and a bit dismissive. So which is it, one would ask? HBM could help, or it could just be a space saver... hmm? I'm not trying to argue with you, but I'm not sure you're saying anything much. It could be something or nothing, or it could be allowing the cores to do their job?
> 
> My point, or AMD's point, with HBM is that it is the next step, the evolution, because according to them the ROI on GDDR is about to fall off. Also on that point, Nvidia apparently saw it coming too, as they tried to push HMC.


Let's get one thing straight: I didn't say "HBM could help, or it could just be a space saver". Space saving is a fact, a side benefit; we all got that. What I was addressing is that many sites are saying that estimating the final performance of the card is difficult because the HBM factor could be deciding. What I was saying is that it could be deciding, for example through lower latency enabling better frametimes, or at the end of the day it could just be what 4096 cores need to work properly. Which is to say, if AMD had the power headroom, conventional 7 GHz GDDR5 on a 512-bit memory bus could achieve the same goal. HBM could just be the appropriate solution to the given circumstance (i.e. GCN being more power hungry than Maxwell, and thus AMD needing all the power savings they can get right now, ahead of HBM2, and thus jumping in early, because they really have to in order to stay competitive and deliver a card with a manageable TDP), but it may not be anything more than that.

It's just that some sites seem to be giving HBM too much credit in the performance estimation, when they probably should just assume that HBM is just there to not bottleneck the increased number of cores, nothing more than that.


----------



## ToTheSun!

Quote:


> Originally Posted by *Master__Shake*
> 
> Quote:
> 
> 
> 
> Originally Posted by *DividebyZERO*
> 
> If it is faster than titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have Gsynch, Physx, or super duper hairdos. Edit: oh and memory size.
> 
> 
> 
> have to agree with this.
> 
> no matter what amd does it's not good enough compared to nvidia.
> 
> how sad.

That's because, speaking objectively, Nvidia have a lot going for them. Even assuming the Fiji flagship has the Titan X beat, there is still a lot to overcome, namely lower DX11 overhead and VRR superiority of Nvidia.


----------



## Master__Shake

Quote:


> Originally Posted by *ToTheSun!*
> 
> That's because, speaking objectively, Nvidia have a lot going for them. Even assuming the Fiji flagship has the Titan X beat, there is still a lot to overcome, namely lower DX11 overhead and VRR superiority of Nvidia.


case in point.

ya it's a better card...but you know reasons


----------



## Ganf

Quote:


> Originally Posted by *tpi2007*
> 
> Let's get one thing straight, I didn't say "HBM could help or or it could just be a space saver". Space saving is a fact, a side benefit, we all got that. What I was addressing is the fact that many sites are saying that estimating the final performance of the card is difficult because the HBM factor could be deciding. What I was saying is that it could be deciding, for example in lower latency, enabling better frametimes, or at the end of the day it could just be what 4096 cores need to work properly. Which is to say, if AMD had the power headroom, a conventional 7 GHz GDDR5 running on 512-bit memory bus could achieve the same goal, meaning HBM could just be the appropriate solution to the given circumstance (i.e. GCN being more power hungry than Maxwell and thus AMD needing all the power savings they can get right now, ahead of HBM2, and thus jumping in early, because they really have to in order to stay competitive and deliver a card with a manageable TDP), but it may not be anything more than that.
> 
> It's just that some sites seem to be giving HBM too much credit in the performance estimation, when they probably should just assume that HBM is just there to not bottleneck the increased number of cores, nothing more than that.


Let's not forget some of the crazy claims that AMD has made about VR in conjunction with HBM and LiquidVR, such as 1ms motion-to-visualization response time (essentially 1ms frametime), better CrossFire scaling in VR than traditional CrossFire scaling, better CPU/GPU parallel processing, asynchronous image warping and time warping, etc...

I have a feeling they're going to play that up as the ultimate reason to buy HBM, and that these features are closely tied to HBM.


----------



## GorillaSceptre

What's the general feeling on whether 4GB vram is enough for 1440p?

I won't be going 4k until Volta's time frame, so "future proofing" for 4k is a non-issue. Just 1440p for the near future.


----------



## ToTheSun!

Quote:


> Originally Posted by *Master__Shake*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ToTheSun!*
> 
> That's because, speaking objectively, Nvidia have a lot going for them. Even assuming the Fiji flagship has the Titan X beat, there is still a lot to overcome, namely lower DX11 overhead and VRR superiority of Nvidia.
> 
> 
> 
> case in point.
> 
> ya it's a better card...but you know reasons

Your entire experience involves every single aspect. Running a benchmark and looking at the FPS counter doesn't necessarily a good experience make (remember the microstutter thing 2-3 years ago?).

Then again, that is all ASSUMING the flagship actually beats the Titan X.


----------



## Master__Shake

Quote:


> Originally Posted by *ToTheSun!*
> 
> Your entire experience involves every single aspect. Running a benchmark and looking at the FPS counter doesn't necessarily a good experience make.
> 
> Then again, that is all ASSUMING the flagship actually beats the Titan X.


nope it really doesn't

if the card is faster it's faster.

that's how benchmarks work.

there is no congeniality award.


----------



## Forceman

Quote:


> Originally Posted by *GorillaSceptre*
> 
> What's the general feeling on whether 4GB vram is enough for 1440p?
> 
> I won't be going 4k until Volta's time frame, so "future proofing" for 4k is a non-issue. Just 1440p for the near future.


Unless you are trying to choke it with crazy settings or texture packs, it should be enough even for upcoming games.
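For perspective, a back-of-envelope check shows the render targets themselves are a rounding error at 1440p; it's textures and effect buffers that actually eat into the 4GB:

```python
# Raw cost of a single 32-bit render target at 1440p (back-of-envelope only;
# real games add many intermediate buffers and, above all, textures).
width, height, bytes_per_pixel = 2560, 1440, 4
framebuffer_mb = width * height * bytes_per_pixel / 2**20
print(framebuffer_mb)  # 14.0625 MB per buffer -- tiny next to 4 GB
```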


----------



## ToTheSun!

Quote:


> Originally Posted by *Master__Shake*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ToTheSun!*
> 
> Your entire experience involves every single aspect. Running a benchmark and looking at the FPS counter doesn't necessarily a good experience make.
> 
> Then again, that is all ASSUMING the flagship actually beats the Titan X.
> 
> 
> 
> nope it really doesn't
> 
> if the card is faster it's faster.
> 
> that's how benchmarks work.
> 
> there is no congeniality award.

Alright. It's been a pleasure discussing this with you.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> Unless you are trying to choke it with crazy settings or texture packs, it should be enough even for upcoming games.


Thanks bud.

Without sounding entitled, if I'm spending this much on a high-end card, I am looking to use crazy settings.

Damn.. the 4GB is a real buzz-kill.

If the Fury isn't noticeably faster, then I think I'll go with the 980 Ti.


----------



## Shogon

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Thanks bud.
> 
> Without sounding entitled, if I'm spending this much on a high-end card, I am looking to use crazy settings.
> 
> *Damn.. the 4GB is a real buzz-kill*
> 
> If the Fury isn't noticeably faster, then I think I'll go with the 980 Ti.


It's supposed to have compression techniques like the ones Maxwell introduced, if I recall. But it all depends on how bad, or good, the games are in terms of VRAM, and what resolution you play at. Maybe you've played Shadow of Mordor before, maybe not. Either way it could hint at what future titles will be like in terms of VRAM usage/allocation, or whatever those people say to discredit high amounts of on-board VRAM.


----------



## Forceman

Quote:


> Originally Posted by *Shogon*
> 
> It's supposed to have compression techniques like Maxwell introduced if I recall. But it all depends on how bad, or good the games are in terms of Vram and what resolution you play at. Maybe you've played Shadow of Mordor before, maybe not. Either way it could mean what future titles will be like in terms of Vram usage/allocation, or whatever those people say to discredit high amounts of on-board Vram.


That compression only works in transfers, so bandwidth, not capacity - unless they've changed something (which I doubt). They probably do have some room for better memory management though, as Nvidia seems to use less VRAM at the same settings.
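A toy sketch of that distinction (not the actual GPU algorithm, just the idea): deltas between neighboring pixels are cheap to move across the bus, but the buffer you store is still full size:

```python
# Toy illustration of transfer compression: delta-encode a row of pixel
# values, move the (mostly zero) deltas, decode on the other side.
row = [100, 101, 101, 103, 103, 103, 100, 100]

# Delta-encode: first value, then differences between neighbors.
deltas = [row[0]] + [b - a for a, b in zip(row, row[1:])]
nonzero = sum(1 for d in deltas if d != 0)  # few nonzero deltas -> less to move

# Decode by cumulative sum.
decoded = [deltas[0]]
for d in deltas[1:]:
    decoded.append(decoded[-1] + d)

assert decoded == row             # same data after the transfer...
assert len(decoded) == len(row)   # ...still occupying the same capacity
```

Bandwidth saved in flight, zero bytes saved at rest, which is why 4GB stays 4GB.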


----------



## Master__Shake

Quote:


> Originally Posted by *ToTheSun!*
> 
> Alright. It's been a pleasure discussing this with you.


well geez.

you mean there is more than one metric to a video card benchmark?

are we supposed to judge a product by the way the reviewer feels at that moment about the card?

what are you trying to say?


----------



## rdr09

Quote:


> Originally Posted by *Master__Shake*
> 
> case in point.
> 
> ya it's a better card...but you know reasons


and don't forget . . . flawless drivers. no need to revert to an old one.


----------



## xSociety

Quote:


> Originally Posted by *Master__Shake*
> 
> well geez.
> 
> *you mean there is more than one metric to a video card benchmark?*
> 
> are we supposed to judge a product by the way the reviewer feels at that moment about the card?
> 
> what are you trying to say?


Yes. Just because one card gets higher frames than the other doesn't make it the better choice. I look at temps and power just as much as FPS. I'm also already sold on G-Sync and ShadowPlay, two things I can't live without now.


----------



## Shogon

Quote:


> Originally Posted by *Forceman*
> 
> That compression only works in transfers, so bandwidth, not capacity - unless they've changed something (which I doubt). They probably do have some room for better memory management though, as Nvidia seems to use less VRAM at the same settings.


I see now, sort of. Hopefully it does something though, because all that added bandwidth doesn't mean much if you start using system memory.
Quote:


> Originally Posted by *xSociety*
> 
> Yes. Just because one card gets higher frames than the other doesn't make it the better choice. I look at temps and power just as much as FPS. I'm also already sold on G-Sync and ShadowPlay, two things I can't live without now.


I guess frame times could be another metric, but AMD has mostly fixed that, apart from certain scenarios. I need to use ShadowPlay a bit more; lately I've been called "a beast" in RO2 by some people, and some situations would have been cool to watch again. But yes, GSYNC, some of us are slaves to its power and manipulation.


----------



## DividebyZERO

Quote:


> Originally Posted by *xSociety*
> 
> Yes. Just because one card gets higher frames than the other doesn't make it the better choice. I look at temps and power just as much as FPS. I'm also already sold on G-Sync and ShadowPlay, two things I can't live without now.


Yeah, I know how that goes; I'm stuck on 2x2 Eyefinity, CrossFire, and VSR all rolled into one.


----------



## Tarifas

Seriously, now I want to go single 1440p with max-ish settings and maybe up to 4x MSAA, and that 4GB gives me the creeps.


----------



## prjindigo

Quote:


> Originally Posted by *Master__Shake*
> 
> have to agree with this.
> 
> no matter what amd does it's not good enough compared to nvidia.
> 
> how sad.


...unless you count nearly fifteen years of forcing Nvidia, and twenty years of forcing Intel, to do more than simple 20MHz incremental yearly "upgrades". Do that and you'll find that AMD is a major driving force behind innovation.

Beyond a doubt, the Fury-4 being only a little slower than the best nV can bring to market, combined with its comparatively tiny size and its CLEAR upper-end horsepower, is a foreshadowing of what AMD has in store. That it is fully OpenCL compliant and will be able to fit 4x to a system STOCK, without cooling conundrums, indicates that someone at AMD has their act together.

Did you know that AMD cards display more actual detail per frame? nV cards intentionally actively sacrifice detail.


----------



## Master__Shake

Quote:


> Originally Posted by *prjindigo*
> 
> ...unless you count nearly fifteen years of forcing nVidia and 20 years of forcing intel to do more than simple 20mhz incremental yearly "upgrades". Do that and you'll find that AMD is a major driving force behind innovation.
> 
> Beyond a doubt the Fury-4 being a little slower than the best nV can bring to market combined with its comparatively tiny size and its CLEAR upper end horsepower is a foreshadowing of what AMD has in store in its innovation whip. That it is fully OCl compliant and will be able to fit 4x to a system STOCK without cooling conundrums indicates that someone at AMD has their act together.
> 
> Did you know that AMD cards display more actual detail per frame? nV cards intentionally actively sacrifice detail.


Umm... I don't think you read what I was agreeing to.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *GorillaSceptre*
> 
> What's the general feeling on whether 4GB vram is enough for 1440p?
> 
> I won't be going 4k until Volta's time frame, so "future proofing" for 4k is a non-issue. Just 1440p for the near future.


In my experience, anyway, 4GB is plenty for 1440p. I run all my games at that res, fully maxed, with even 8x MSAA (even though it's not really necessary), and I usually top out around 3-3.5GB memory usage. Now granted, I don't have the widest range of games that I play (mostly FPS like Crysis and BF), but there it is. I'm sure you can push 1440p beyond 4GB usage if you really wanted to in certain games, but at that point you could just lower AA a smidge and be fine. Also remember that most people read memory usage off AB or other such software, and that there is a big difference between "allocation" and actual usage.
Quote:


> Originally Posted by *Tarifas*
> 
> Seriously now I want to go single 1440p with max-ish setting, and maybe up to 4x MSAA, and that 4GB gives me the creeps


Um, that's exactly my setup and I have no issues whatsoever. Never once have I even approached 6GB usage (though I don't play modded Skyrim or GTA, so that's an unknown to me)....
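The allocation-versus-usage distinction is easy to model. A hypothetical engine-side cache (names and sizes invented for illustration) happily fills whatever budget it is given, so a tool that reads total allocation reports far more than any single frame actually touches:

```python
from collections import OrderedDict

class TextureCache:
    """Hypothetical cache that keeps old textures resident until evicted,
    so tools reading total allocation over-report the real working set."""
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()  # name -> size_mb, in LRU order

    def touch(self, name, size_mb):
        if name in self.resident:
            self.resident.move_to_end(name)
            return
        # Evict least-recently-used textures until the new one fits.
        while sum(self.resident.values()) + size_mb > self.budget_mb:
            self.resident.popitem(last=False)
        self.resident[name] = size_mb

    def allocated_mb(self):
        return sum(self.resident.values())

cache = TextureCache(budget_mb=4096)
# Stream in far more texture data than one frame ever samples:
for i in range(100):
    cache.touch(f"tex_{i}", 64)
frame_working_set = 64 * 10  # suppose a frame actually samples 10 textures
print(cache.allocated_mb(), frame_working_set)
```

Here the monitoring tool would report the full 4096 MB "used" even though a frame only needs 640 MB; dropping the cached-but-unused textures costs nothing but a reload later.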


----------



## zealord

I wonder if we will make a VRAM jump again soon. I remember my 680 was fine at launch and I was like "oh man, 2GB is plenty", and a year later I couldn't use ultra settings because I got stutter from running out of VRAM in a couple of games. And now, a year later again, my 290X uses up to 3.6GB at 1080p in games like GTA V.


----------



## Blameless

Quote:


> Originally Posted by *GorillaSceptre*
> 
> What's the general feeling on whether 4GB vram is enough for 1440p?


4GiB is enough for almost any game/settings at 1440p and likely will be for at least the near future.
Quote:


> Originally Posted by *ToTheSun!*
> 
> there is still a lot to overcome, namely lower DX11 overhead and VRR superiority of Nvidia.


DX11 overhead is improving in Windows 10. Freesync needs to work on more GPU/display configurations, which is something AMD can improve. The issues with overdrive are more in the hands of display manufacturers.
Quote:


> Originally Posted by *DividebyZERO*
> 
> If it is faster than Titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have G-Sync, PhysX, or super duper hairdos. Edit: oh and memory size.


Some of these are legitimate concerns; some are complete non-issues. Those that focus on the non-issues have already made their choice. Those that focus on the legitimate issues have reason to, and could be persuaded if they were resolved.

As for funky hairdos, that's not limited to NVIDIA, and even when it's done through HairWorks it will run on AMD parts. If AMD has bolstered geometry performance in Fiji, HairWorks may well run just as well, if not better, than on NVIDIA parts, and even if geometry performance hasn't been increased that much, HairWorks with a sensible tessellation cap already works pretty well.
Quote:


> Originally Posted by *Shogon*
> 
> It's supposed to have compression techniques like Maxwell introduced if I recall.


Doesn't necessarily mean that uncompressed assets aren't stored in VRAM.
Quote:


> Originally Posted by *xSociety*
> 
> I'm also already sold on G-Sync and ShadowPlay, two things I can't live without now.


No viable alternative to G-Sync from AMD yet, but with that Haswell-E you've got, software video capture will often be superior to ShadowPlay by most metrics.
Quote:


> Originally Posted by *DividebyZERO*
> 
> Yeah, I know how that goes; I'm stuck on 2x2 Eyefinity, CrossFire, and VSR all rolled into one.


Not really the same thing, as NVIDIA has equivalents to each of these.

AMD is a bit behind with video encoding (though VCE isn't far off and both are worse than just software x264 for most people with fast CPUs) and variable refresh support.


----------



## Majin SSJ Eric

Lol, my first discrete card was a 560 Ti and I thought 1GB was plenty at the time. Of course, I used to think 2GB of system memory was plenty back in the days of 500MB systems, so there you go...


----------



## zealord

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Lol, my first discrete card was a 560Ti and I thought 1GB was plenty at the time. Of course I used to think 2GB of system memory was plenty back in the days of 500MB systems so there you go...


Yeah, I mean, it all happened so fast (probably has something to do with the next-gen consoles, PS4 and Xbox One). Like:

High-end cards like the 480 and 580 had 1.5GB of VRAM
The 680 had 2GB of VRAM
The 780 had 3GB

Now we are at 12GB, and I personally wouldn't buy a 4GB GDDR5 card anymore, though I don't think we will make a big jump again. I don't think we need like 20GB of VRAM next year.


----------



## magnek

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> In my experience, anyway, 4GB is plenty for 1440p. I run all my games at that res, fully maxed, with even 8x MSAA (even though it's not really necessary), and I usually top out around 3-3.5GB memory usage. Now granted, I don't have the widest range of games that I play (mostly FPS like Crysis and BF), but there it is. I'm sure you can push 1440p beyond 4GB usage if you really wanted to in certain games, but at that point you could just lower AA a smidge and be fine. *Also remember that most people read memory usage off AB or other such software, and that there is a big difference between "allocation" and actual usage*.


THIS.

Some games (like COD:AW) are notorious for caching as much as possible. Per Guru3D:
Quote:


> Here is a fun fact: the game engine fills whatever amount of graphics memory is available.
> 
> So if you have a 4 GB Radeon R9 290X or GeForce GTX 980 then it'll slowly fill that 4GB memory with textures, and then slowly but steadily in a minute or so (if you have disabled pre-caching) your graphics memory utilization will start to rise until nearly maxed out as shaders load up in video memory. Above you can see the Titan Black for example, already filling 5.4 GB of video memory.


Quote:


> Originally Posted by *Blameless*
> 
> DX11 overhead is improving in Windows 10. Freesync needs to work on more GPU/display configurations, which is something AMD can improve. The issues with overdrive are more in the hands of display manufacturers.


Indeed. Prime example right there (guy used a 3570K @ 4GHz): +30% performance under Win10. Wonder if this is a Win10-specific optimization, or whether it could trickle down to older OSes?


----------



## DividebyZERO

Quote:


> Originally Posted by *Blameless*
> 
> Not really the same thing as NVIDIA has equivalents to each of these.


Show me someone who uses 2x2 Surround with SLI and DSR active at the same time.


----------



## Slaughterem

Quote:


> Originally Posted by *Tarifas*
> 
> Seriously now I want to go single 1440p with max-ish setting, and maybe up to 4x MSAA, and that 4GB gives me the creeps


If I were to offer you a 6GB GDDR3 card or a 4GB GDDR5 card, which would you select?


----------



## GorillaSceptre

Quote:


> Originally Posted by *Slaughterem*
> 
> If I were to offer you a 6GB GDDR3 card or a 4GB GDDR5 card, which would you select?


That's why the Fury would have to significantly outperform the 980 Ti for me to get one.

If it's within ±10% of a Ti, then Nvidia is the best bet, imo. If it outperforms it by 20%+, then I'm sold.


----------



## Slaughterem

Quote:


> Originally Posted by *GorillaSceptre*
> 
> That's why the Fury would have to significantly outperform the 980 Ti for me to get one.
> 
> If it's within ±10% of a Ti, then Nvidia is the best bet, imo. If it outperforms it by 20%+, then I'm sold.


So price would not matter? If the card is 10% faster and at $599, you would still buy the higher-priced card?


----------



## ZealotKi11er

Quote:


> Originally Posted by *Slaughterem*
> 
> So price would not matter? If the card is 10% faster and at $599, you would still buy the higher-priced card?


I am @ 1440p and don't see myself going 4K any time soon, so 4GB vs 6GB is not a big deal. I never play games with MSAA to begin with. If you look at most games today, only the bad ones use a lot of VRAM, aka Ubicrap.


----------



## Blameless

Quote:


> Originally Posted by *magnek*
> 
> +30% performance under Win10. Wonder if this is a Win10 specific optimization, or whether it could trickle down to older OS's?


I'm pretty sure it depends on WDDM 2.0, which is limited to Windows 10.

I'm sure there is room for improvements in Windows 7/8.1, but I'm not sure if AMD will ever make them a priority.
Quote:


> Originally Posted by *DividebyZERO*
> 
> Show me someone who uses 2x2 Surround with SLI and DSR active at the same time.


As far as I know, DSR does not work with surround, but prior methods of downsampling still should.

I hadn't realized you were using them (the AMD equivalents) all simultaneously at the time of my post, and this is currently unique to AMD as an official combination of supported features.
Quote:


> Originally Posted by *Slaughterem*
> 
> If I was to offer you a 6GB card with gddr3 or a 4GB Gddr5 card which would you select?


Depends on how high the GDDR3 could clock and what the memory bus width was.

If bandwidth is sufficient, I'd want the most VRAM possible. If VRAM quantity is sufficient, I'd want the fastest type possible. Both facets have minimums below which they are not viable, but this is often more of a hard cutoff when it comes to quantity.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Slaughterem*
> 
> So price would not matter? If the card is 10% faster and at $599, you would still buy the higher-priced card?


My situation is different to a lot of you guys'. I have to import my card, I can't send it back, and the PC market isn't big enough where I live to sell it.

$50 would be an easy choice for me for peace of mind. If game x comes out in a few months and I have to turn down settings because I run out of VRAM, then I'm stuck until I upgrade again.

But if the Fury has a significant performance advantage, then it might be worth the risk.


----------



## ZealotKi11er

Quote:


> Originally Posted by *GorillaSceptre*
> 
> My situation is different to a lot of you guys'. I have to import my card, I can't send it back, and the PC market isn't big enough where I live to sell it.
> 
> $50 would be an easy choice for me for peace of mind. If game x comes out in a few months and I have to turn down settings because I run out of VRAM, then I'm stuck until I upgrade again.
> 
> But if the Fury has a significant performance advantage, then it might be worth the risk.


If it uses more than 4GB, then that means only people with $650+ GPUs can run that game. 4GB is going to be good for a good while. We have seen the console damage. Things will remain static until we see new consoles.


----------



## DividebyZERO

Quote:


> Originally Posted by *Blameless*
> 
> I'm pretty sure it depends on WDDM 2.0, which is limited to Windows 10.
> 
> I'm sure there is room for improvements in Windows 7/8.1, but I'm not sure if AMD will ever make them a priority.
> As far as I know, DSR does not work with surround, but prior methods of downsampling still should.
> 
> I hadn't realized you were using them (the AMD equivalents) all simultaneously at the time of my post, and this is currently unique to AMD as an official combination of supported features.
> Depends on how high the GDDR3 could clock and what the memory bus width was.
> 
> If bandwidth is sufficient, I'd want the most VRAM possible. If VRAM quantity is sufficient, I'd want the fastest type possible. Both facets will have minimums beyond which are not viable, but this is often more of a hard cut off when it comes to quantity.


I already knew this, which is why I made my statement. I haven't messed with Nvidia since Kepler, but I question even the ability to have 2x2 Surround. I don't have to use any third-party software for it to work, either. Which amazes me, since AMD drivers are "so bad" that almost all their features work in tandem.


----------



## GorillaSceptre

Quote:


> Originally Posted by *ZealotKi11er*
> 
> If it uses more than 4GB, then that means only people with $650+ GPUs can run that game. 4GB is going to be good for a good while. We have seen the console damage. Things will remain static until we see new consoles.


That's a good point. But can you imagine seeing a Fury get bottlenecked by VRAM while a cheaper 390X doesn't? Games like SOM are already using more than 4GB, aren't they?

Why couldn't it be HBM and 8GB? It would have been such an easy choice.

There's also no way I'm waiting another three months for an 8GB version.


----------



## Slaughterem

4GB has been fine to this point. If NV felt differently, they would have given the 970 and 980 more. We also do not know how HBM will perform; will 4GB, due to its speed, be as capable as 6GB of GDDR5? No one can say for sure. Another thing: when was the last time you saw NV come out with a card earlier than planned that made their flagship irrelevant after 3 months, and priced it at 35% less? They usually wait for AMD to come out with their card first and then respond with a better card to keep the crown. At least that is what I have seen over the past few years. NV, I would think, knows what these cards are capable of. So they release quickly at a price that will sell as many as possible, IMO, and when the competition is better they will reduce prices.


----------



## tpi2007

Quote:


> Originally Posted by *Slaughterem*
> 
> 4GB has been fine to this point. If NV felt differently, they would have given the 970 and 980 more. We also do not know how HBM will perform; will 4GB, due to its speed, be as capable as 6GB of GDDR5? No one can say for sure. Another thing: when was the last time you saw NV come out with a card earlier than planned that made their flagship irrelevant after 3 months, and priced it at 35% less? They usually wait for AMD to come out with their card first and then respond with a better card to keep the crown. At least that is what I have seen over the past few years. NV, I would think, knows what these cards are capable of. So they release quickly at a price that will sell as many as possible, IMO, and when the competition is better they will reduce prices.


The 980 and 970 were released more than 8 months ago. There was no GTA V for the PC, nor The Witcher 3, nor the upcoming Batman game. Buying decisions made more than 8 months ago had a different set of premises. 4 GB cards are fine for now at a certain price point and for a given amount of GPU horsepower. 4 GB is fine for 1440p, for example, and so is the 980. But get into 4K, and more horsepower and VRAM are the obvious choice in the long run. That's where flagships come into play. And AMD's Fury X is the flagship. You can't promote a card as a flagship and then pretend that it's only for 1440p. People have every right to expect best-in-class performance at 4K.

I've said in another thread that I feel the 980 is still overpriced after the announced price reduction. People willing to give $500 for a 980 will be much better served if they save up a bit more and get the much better 980 Ti, so the 980 really needs to drop a further $50, to $450, in order to be competitive within the lineup.

Anyway, AMD's flagship is supposed to be quite a bit better than a 980, and it will most probably have the horsepower to use more than 4 GB of VRAM. People buying a flagship in the middle of 2015 (versus September of 2014) expect it to perform accordingly. If people use two in Crossfire at 4K and start having stuttering due to running out of VRAM, then we will have a problem. Having lots of bandwidth but not enough capacity to leverage the advantage will only make people think that Nvidia's offer is better in the long run.


----------



## BinaryDemon

For the 4gb version, pricing is going to be key.


----------



## raghu78

Quote:


> Originally Posted by *BinaryDemon*
> 
> For the 4gb version, pricing is going to be key.


No. Their driver software is going to be the key. If AMD do a damn good job of optimizing VRAM usage, can handle the latest games at 4K with ease, and beat the 980 Ti / Titan X by 15%, then nobody gives a damn about the GTX 980 Ti's 6GB VRAM or the Titan X's 12GB VRAM. For the guys who want multi-monitor 4K, the Fury with 8GB should do fine.


----------



## magnek

Quote:


> Originally Posted by *tpi2007*
> 
> The 980 and 970 were released more than 8 months ago. There was no GTA V for the PC or The Witcher 3 and the upcoming Batman game. Buying decisions made more than 8 months ago had a different set of premises. 4 GB cards are fine for now at a certain price point and for a given GPU horsepower. 4 GB is fine for 1440p, for example, and so is the 980. But get into 4K and more horsepower and VRAM is the obvious choice in the long run. That's where flagships come into play. And AMD's Fury X is the flagship. You can't promote a card as a flagship and then pretend that it's only for 1440p. People have every right to expect best in class performance at 4K.
> 
> I've said in another thread that I feel that the 980 is still overpriced after the announced price reduction. People willing to give $500 for a 980 will be much better served if they save up a bit more and get the much better 980 Ti, so the 980 really needs to drop a further $50 to $450 in order to be competitive within the lineup.
> 
> Anyway, AMD's flagship is supposed to be quite a bit better than a 980, and it will most probably have the horsepower to use more than 4 GB of VRAM. People buying a flagship in the middle of 2015 (versus September of 2014) expect it to perform accordingly. If people use two in Crossfire at 4K and start having stuttering due to running out of VRAM, then we will have a problem. Having lots of bandwidth but not enough capacity to leverage the advantage will only make people think that Nvidia's offer is better in the long run.


I agree with the general points you're making (especially about CrossFire), but I'd also like to point out that neither GTA V nor The Witcher 3 is particularly VRAM heavy. In fact, TPU said The Witcher 3 was "very modest in its VRAM requirements, which is quite surprising given how beautiful the graphics are".

Likewise, GTA V doesn't go over 4GB until 2x MSAA at 4K.

(Chart: Radeon R9 290X 8GB - in dark blue, the VRAM measured in AfterBurner; in light blue, what the game indicates as VRAM usage with our settings. Overall, in the city the VRAM requirement can run higher than prognosed.)

If anything, I think Shadow of Mordor and COD:AW are the poster children for eating VRAM, although in the case of the latter it's simply due to the game engine caching as much as possible, and having only 4GB doesn't negatively impact performance even at 4K.

----------



## Slaughterem

Quote:


> Originally Posted by *tpi2007*
> 
> The 980 and 970 were released more than 8 months ago. There was no GTA V for the PC or The Witcher 3 and the upcoming Batman game. Buying decisions made more than 8 months ago had a different set of premises. 4 GB cards are fine for now at a certain price point and for a given GPU horsepower. 4 GB is fine for 1440p, for example, and so is the 980. *But get into 4K and more horsepower and VRAM is the obvious choice in the long run. That's where flagships come into play. And AMD's Fury X is the flagship. You can't promote a card as a flagship and then pretend that it's only for 1440p. People have every right to expect best in class performance at 4K.
> *
> I've said in another thread that I feel that the 980 is still overpriced after the announced price reduction. People willing to give $500 for a 980 will be much better served if they save up a bit more and get the much better 980 Ti, so the 980 really needs to drop a further $50 to $450 in order to be competitive within the lineup.
> 
> Anyway, AMD's flagship is supposed to be quite a bit better than a 980, and it will most probably have the horsepower to use more than 4 GB of VRAM. People buying a flagship in the middle of 2015 (versus September of 2014) expect it to perform accordingly. If people use two in Crossfire at 4K and start having stuttering due to running out of VRAM, then we will have a problem. Having lots of bandwidth but not enough capacity to leverage the advantage will only make people think that Nvidia's offer is better in the long run.


Do you have a source that backs up your claim that 4GB of HBM is not enough for 4K? Do moderators here at OCN have early releases of HBM tech? Do you also have a source that AMD is promoting a flagship card while pretending that Fury is for 1440p gaming only? People have a right to know the facts, and I would think as a moderator you would be neutral. Maybe the best card to save for is the 390X. Why would you advise someone to purchase a 980 Ti? You may be right; nobody knows for sure. I guess this will be settled in the next month or so.


----------



## SpeedyVT

DX12 has a lot of GPU RAM-saving features, like tiled texture resources; suddenly you can shrink 2GB of textures to 1/5 the size without losing quality.

Current games might be more bloated at 4K, but we've got to look at this as a DX12 situation and opportunity.
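For a rough sense of what tiled resources buy you: a reserved texture exposes its whole mip chain as address space, but only the tiles the app actually maps consume VRAM. A back-of-envelope sketch, where the tile count and residency fraction are invented numbers (the 64 KB tile size is the standard D3D tiled-resource tile):

```python
# Sketch of tiled-resource residency: only mapped 64 KB tiles cost VRAM.
TILE_KB = 64  # standard tile size for D3D tiled resources

def resident_mb(total_tiles: int, mapped_fraction: float) -> float:
    """VRAM consumed when only a fraction of the tiles is mapped."""
    mapped_tiles = int(total_tiles * mapped_fraction)
    return mapped_tiles * TILE_KB / 1024

full_tiles = 32768  # a ~2 GB texture set at 64 KB per tile

print(resident_mb(full_tiles, 1.0))  # everything resident
print(resident_mb(full_tiles, 0.2))  # only the visible tiles mapped
```

Fully resident, that set costs 2048 MB; at 20% residency it drops to roughly 410 MB, which is the kind of saving being pointed at here.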


----------



## HiTechPixel

Quote:


> Originally Posted by *Slaughterem*
> 
> Do you have a source that backs up your claim that 4GB of HBM is not enough for 4K? Do moderators here at OCN have early releases of HBM tech? Do you also have a source that AMD is promoting a flagship card while pretending that Fury is for 1440p gaming only? People have a right to know the facts, and I would think as a moderator you would be neutral. Maybe the best card to save for is the 390X. Why would you advise someone to purchase a 980 Ti? You may be right; nobody knows for sure. I guess this will be settled in the next month or so.


Dude, it's obvious if you take a look at several reviews of 4K. Or heck, just take a look at Baasha's 5K results and you'll know even more!


----------



## The Stilt

Quote:


> Originally Posted by *SpeedyVT*
> 
> DX12 has a lot of GPU ram saving features like tile texture asset suddenly you can shrink 2gigs of textures without loosing quality to 1/5 the size.
> 
> Current games might be more bloaty at 4k but we've got to look at this as a DX12 situation and opportunity.


Can you provide a source for this, and for Windows 10 utilizing HSA as well, please?

All AMD GPUs released during the last decade have advanced frame buffer compression technologies implemented in them. Like any other data, however, textures cannot be compressed infinitely. Also, better compression algorithms not only require significantly more resources from the GPU but can add some lag too.


----------



## anujsetia

I think there is a lot of misinformation in this thread.

Is 4 GB of HBM = 4 GB of GDDR5? No one knows at this point. So there is no point in speculating that it will not be enough for 4K.

It's like apples to oranges.

I can guarantee even 8 GB of DDR3 will not do justice at 4K compared to 4 GB of GDDR5, as they are different technologies.

Similarly, HBM is different from GDDR5. How different is something the reviews will tell.


----------



## Neo_Morpheus

4 gigs of RAM is kind of child's play and not enough for 4K or even 1440p; we will all be laughing about this in another 3 years.

I'm not sure about the Fury pricing, so it has to be cheaper than the Ti? I'm guessing $650-700.


----------



## bkvamme

You can now sign up for the webcast on June 16th.

http://www.amd.com/en-us/products/graphics/graphics-technology#signup

Can't wait to see if they have something up their sleeve. The headline is "AMD Presents: The New Era of PC Gaming", so we'll see what they mean by that.


----------



## HiTechPixel

Quote:


> Originally Posted by *anujsetia*
> 
> I think there is a lot of misinformation in this thread.
> 
> Is 4 GB HBM = 4 GB GDDR5. No-one knows at this point. So there is no point in speculating that it will not be enough for 4K.
> 
> Its like apples to oranges.
> 
> I can guarantee even 8 GB DDR3 will not be able to do justice at 4K compared to 4 GB GDDR5 as both are different technologies.
> 
> Similarly, HBM is different to GDDR5. How much different is something that reviews will tell.


They're both fundamentally the same kind of VRAM. 4GB HBM won't magically hold more information than 6GB or 8GB GDDR5.


----------



## ToTheSun!

Quote:


> Originally Posted by *Master__Shake*
> 
> well geez.
> 
> you mean there is more than one metric to a video card benchmark?
> 
> are we supposed to judge a product by the way the reviewer feels at that moment about the card?
> 
> what are you trying to say?


That's not what I'm saying at all. I don't take into consideration subjective statements from reviewers.

What I'm saying is that a card is more than its shaders, memory, etc. A card's worth should also be measured by its feature support and implementation. If a card is more powerful, both in raw specs and framerate, but is unable to give the user a better experience, what good is the card's "superiority"?
Quote:


> Originally Posted by *Blameless*
> 
> DX11 overhead is improving in Windows 10. Freesync needs to work on more GPU/display configurations, which is something AMD can improve. The issues with overdrive are more in the hands of display manufacturers-


All good and well. Display manufacturers might end up fixing the issues and improving the implementation by themselves eventually. However, Nvidia is speeding things up and creating value by working with the manufacturers to make sure the end product works as well as possible and is as feature-rich as it can be.

Is the customer supposed to ignore their efforts? They're spending resources to make sure their products, and the products associated with them, work and have quality. That's a form of Quality Control/Assurance, and that has market value.

I've stuck to AMD and defended their merits, when genuinely warranted, for 3 years. Right now, when I'm nearing a GPU upgrade, I find it hard to justify buying one of their cards.


----------



## julizs

Quote:


> Originally Posted by *bkvamme*
> 
> You can now sign up for the webcast on June 16th.
> 
> http://www.amd.com/en-us/products/graphics/graphics-technology#signup
> 
> Can't wait to see if they have something up their sleeve. The headline is "AMD Presents: The New Era of PC Gaming", so we'll see what they mean by that.


Wish I could be there live. I remember I got so jealous when all the people at the presentation of the R9 285 just got a 290 for free.


----------



## flopper

Quote:


> Originally Posted by *Xuper*
> 
> LOL!
> 
> Perhaps AMD Fury is not Slower than Geforce 980Ti ?


Fury X fastest.

Buyer's remorse is going to happen.
Quote:


> Originally Posted by *HiTechPixel*
> 
> They're both fundamentally the same kind of VRAM. 4GB HBM won't magically hold more information than 6GB or 8GB GDDR5.


Better than the 970's 3.5GB, though.


----------



## bkvamme

Quote:


> Originally Posted by *julizs*
> 
> Wish I could be there live. I remember I got so jealous when all the people at the presentation of the R9 285 just got a 290 for free


Wow, that would've been nice! Being there live would've kicked ass in any case.


----------



## SpeedyVT

Quote:


> Originally Posted by *The Stilt*
> 
> Can you provide source for this and the Windows 10 utilizing HSA as well please?
> 
> All AMD GPUs released during the last decade have advanced frame buffer compression technologies implemented in them.
> Like any other data the textures however cannot be compressed infinitely. Also better compression algorithms not only require significantly more resources from the GPU but can add some lag too.


http://www.anandtech.com/show/8544/microsoft-details-direct3d-113-12-new-features

Since that's the only source most people believe in.

All you do is cast doubt on any of my statements; the simplest Google search would totally fry your brain.


----------



## gunzkevin1

Quote:


> Originally Posted by *SpeedyVT*
> 
> http://www.anandtech.com/show/8544/microsoft-details-direct3d-113-12-new-features
> 
> Since that's the only source most people only believe in.
> 
> All you do is bear doubt on any of my statements the simplest google search would totally fry your brain.


* Drops mic *


----------



## Vesku

Quote:


> Originally Posted by *DividebyZERO*
> 
> If it is faster than Titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have G-Sync, PhysX, or super duper hairdos. Edit: oh and memory size.


The "4GB is not enough" campaign has already started. If anything, that is the strongest indicator that AMD's Fiji will probably be quite interesting.


----------



## prjindigo

On ShadowPlay: since we'll be able to directly read a clone of the framebuffer in DX12, I'm pretty sure special capture software is a thing of the past.


----------



## Blameless

Quote:


> Originally Posted by *anujsetia*
> 
> Is 4 GB HBM = 4 GB GDDR5. No-one knows at this point. So there is no point in speculating that it will not be enough for 4K.


Capacity wise, it's the same.

Sure there are some variables, like how the driver prioritizes what to keep and what it evicts, but 4GiB of HBM will hold exactly the same amount of data as 4GiB of GDDR5, or 4GiB of anything else.
Quote:


> Originally Posted by *anujsetia*
> 
> Its like apples to oranges.


It's more like the difference in volume between 500ml of orange juice and 500ml of apple juice...


----------



## prjindigo

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Yes, and you've "just been saying" it about 40 posts or so in this thread alone.
> 
> 
> 
> 
> 
> 
> 
> Why don't you do us all a favor and take a break from the AMD threads for a while. We all know your same tired comments by heart now anyway so there's really no need for you to post "how bankrupt AMD" is or "how they are losing market share" anymore. We get it.


Yeah, I get microstutters in Farming Simulator 15 at 4K with my 295X2; it's horrible. Or realistic, considering most farm equipment vibrates like a mofo.


----------



## harney

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I am @ 1440p and dont see myself going 4K any time soon so 4GB vs 6GB is not a big deal. I never play games with MSAA to begin with. If you looks at most games today only the bad ones use a lot of vRAM aka Ubicrap.


I get confused re this 1440p: there are two, 2560x1440 and 3440x1440.
I am running a heavily overclocked 970 (a little faster than a stock 980) at 3440x1440, and I can just about get away with 60fps, but there is no room for movement, so I am either going 980 Ti or the Fury.

...I was on 1200p before with the 970 and that was perfect @ 60fps, but since the monitor upgrade the 970 is just about keeping up at that res.


----------



## harney

Quote:


> Originally Posted by *Slaughterem*
> 
> 4 gig has been fine to this point. If NV felt differently they would have made the 970 and 980 higher. We also do not know how HBM will perform, will 4 gig due to its speed be as compatible as 6 g of gddr5? No one can say for sure. Another thing When was the last time you saw NV come out with a card earlier than planned that made their flagship irrelevant after 3 months and price it at 35% less? They usually wait for AMD to come out with there card first and then respond with a better card to keep the crown. At least that is what I have seen over the past few years. NV I would think knows what these cards are capable of. So they release quick at a price that will sell as many as possible IMO and when the competition is better they will reduce prices.


+1

Yes, they have rushed the 980 Ti out; it was a surprise to most, and the supply here in the UK and the rest of Europe is very low, all pre-orders... I was under the impression it was set for release in July/August, not the beginning of June...


----------



## prjindigo

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Applying their math on the 980 and 980 Ti:
> GTX 980 = 1216 MHz http://www.techpowerup.com/gpudb/2621/geforce-gtx-980.html
> GTX 980 Ti = 1076 MHz http://www.techpowerup.com/gpudb/2724/geforce-gtx-980-ti.html
> 
> That's 88.5%, take 1216 * .885 = 1076.16
> 
> = (2816 / 2048) x 0.885
> = 1.375 x 0.885
> = 1.22
> 
> Take that 80, multiply by 1.22 = 97.6, close to that 100 number
> 
> Edit: Even if the clock speed was 1000 MHz for Fury X, it'd be 1.45 x 1.00 = 1.45, or 45% which would be 72 * 1.45 for 104.4 and have the same *estimated* performance of the Titan X


This is all starting to look like the math they used in Arena to determine the winning team except y'all keep leaving the monkey out of the loop for some reason.


----------



## prjindigo

Quote:


> Originally Posted by *magnek*
> 
> Nailed it.


You forgot "having to find a place to mount the radiator" and that the hoses aren't 4 feet long...


----------



## harney

On the note of VRAM: I am no expert here, so someone with better knowledge can come forth and explain.

Usage of VRAM in some games just fills up the VRAM for no need at all,

while other games use it more efficiently, using less.

So I assume the peeps crying that 2, 3 or 4GB is not enough are the ones seeing all their VRAM chewed up by games that are less optimized in their usage of VRAM.

I do know there are no real gains in performance between two of the same card, one with 2GB and one with 4GB etc. ...However, I am interested in how the VRAM is managed from game to game...


----------



## prjindigo

Mild but reasonable speculation on the 4gb vs 8gb HBM memory on the interposer:

It looks to me like what people are measuring is the heat spreaders, not the actual RAM chips underneath. If the chips underneath are only a percentage larger than 256MB ICs, then you can probably fit 8 stacks down the sides of the GPU's position (4GB being 4 x 1GB HBM stacks, each made of four 256MB ICs stacked inside).

So probably what we're seeing is the heat spreader for both the 4GB _and_ the 8GB macrostructure, which would use larger 512MB ICs in stacks of 4, whereas the 16GB structure would necessarily have to have 8 stacks of four 512MB ICs?

But to add fuel to the fire that is the baiting: is HBM2 going to use 512MB ICs with two stacks per position, or are they going to go for 8 x 256MB ICs per position first and require more height in the heat spreaders alongside the GPU?

I'm betting that increasing the complexity of the interposer may cause a large increase in discards of that component. They've clearly got the 256MB-per-die interposer working at this time. It'll be interesting to see if they switch to a 512MB-per-die system.


----------



## Blameless

Quote:


> Originally Posted by *prjindigo*
> 
> It looks to me like what people are measuring is the heat spreaders, not the actual ram chips underneath.


Those are quite clearly the dies, not heatspreaders.

There are no heatspreaders in the leaked images, and the final part almost certainly will not be equipped with one either. AMD doesn't put heatspreaders on their GPUs: they increase packaging costs while harming thermal performance, so there is no need for one unless you are in a situation where the cooling solution stands a real chance of damaging something. With a die this wide and a large shim, that is unlikely.


----------



## prjindigo

Quote:


> Originally Posted by *Blameless*
> 
> Those are quite clearly the dies, not heatspreaders.
> 
> There are no heatspreaders on the leaked images, and the final part almost certainly will not be equipped with a heatspreader either. AMD doesn't put heatspreaders on their GPUs and they increase packaging costs while harming thermal performance, so there is no need for one unless you are in a situation where the cooling solution stands a real chance of damaging something. With a die this wide and a large shim, this is unlikely.


Yeah, see, those are epoxy covers, not the actual dies. So they're bigger than the chips and thus a heat spreader. Though with factory-fitted water cooling, eh.


----------



## Blameless

Quote:


> Originally Posted by *prjindigo*
> 
> Yeah, see, those are epoxy covers, not the actual dies.


No they aren't. You are mistaking the remaining encapsulant, which has been mostly lapped off, for a cover. There is no cover on those dies, and certainly no heatspreader.

You can see the silicon on all the dies shown, and can even see the top metal layers of the interposer's die.


----------



## raghu78

Quote:


> Originally Posted by *prjindigo*
> 
> Mild but reasonable speculation on the 4gb vs 8gb HBM memory on the interposer:
> 
> It looks to me like what people are measuring is the heat spreaders, not the actual ram chips underneath. If the chips underneath are only a percentage larger in size than 256MB IC's then you can probably fit 8 stacks down the sides of the GPU's position. (4gb being 4 1gb HBM being made of 4 256MB IC stacked inside)
> 
> So probably what we're seeing is the heat spreader for both the 4GB _and_ the 8GB macrostructure which would use larger 512MB IC in stacks of 4, whereas the 16GB structure would necessarily have to have 8 stacks of 4 512MB IC's?
> 
> But to add fire to the fuel that is the baiting: Is HBM2 going to use by-512MB IC with two stacks totaling 8bits or are they going to go for 8x256MB IC per position first and require more height in the heat spreaders along-side the GPU?
> 
> I'm betting that increasing the complexity of the interposer may cause a large increase in discards of that component. They've clearly got the by-256MB interposer working at this time. It'll be interesting to see if they switch to a by-512MB system.


Quote:


> Originally Posted by *prjindigo*
> 
> Yeah, see, those are epoxy covers, not the actual dies. So they're bigger than the chips and thus a heat spreader. Tho in factory manufactured water cooling eh


Yeah, you are right. Those are heatspreader covers and not the actual HBM chips. The actual HBM lies underneath.

http://videocardz.com/55259/sk-hynix-shows-off-hbm1-and-hbm2





As for capacity, Hynix has said 4-Hi HBM1 is coming first, followed by 8-Hi HBM1. There is a small mistake in that slide: the individual dies are 2 Gb, connected to two 128-bit channels, so per channel the capacity is 1 Gb. A single stack has 8 channels (8 x 128 = 1024-bit), so 8 x 1 Gb = 1 GB per stack. 8-Hi doubles capacity (by stacking 8 dies instead of 4) but keeps the same 8 x 128-bit channels. (1 Gb = 1 gigabit; 1 GB = 1 gigabyte.)
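The per-stack arithmetic above can be sketched in a few lines of Python (a minimal illustration of the publicly stated HBM1 figures; the 2 Gb die size and 4-Hi/8-Hi stacking come from the Hynix slides, nothing AMD-specific):

```python
def stack_capacity_gb(dies_per_stack: int, die_gbit: int = 2) -> float:
    """Capacity of one HBM stack in gigabytes (8 bits per byte)."""
    return dies_per_stack * die_gbit / 8

# 4-Hi HBM1: 4 dies x 2 Gb = 8 Gb = 1 GB per stack
print(stack_capacity_gb(4))        # 1.0
# Fiji with four 4-Hi stacks:
print(4 * stack_capacity_gb(4))    # 4.0 GB total
# 8-Hi stacks would double that:
print(4 * stack_capacity_gb(8))    # 8.0 GB total
```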



btw, there was a leaked slide of a dual-link interposer tech which allows 2 x 1 GB per stack using 4-Hi stacks. So if that's true (which is not known yet, as it could be a fake slide), you could get to 16 GB using 8-Hi stacks and a dual-link interposer.

http://videocardz.com/55146/amd-radeon-r9-390x-possible-specifications-and-performance-leaked



I believe it's true because AMD needs 16 GB for their FirePros. Their current flagship FirePro W9100, based on Hawaii, has 16 GB of GDDR5. AMD invented HBM and worked with Hynix to bring it to market, so they must have known the limitations of HBM1 from the time they started designing Fiji. It's very likely that AMD thus designed a workaround solution (dual-link interposer) which solves the memory size problem for both Radeon and FirePro.


----------



## harney

Quote:


> Originally Posted by *Blameless*
> 
> No they aren't. You are mistaking the remaining encapsulant, which has been mostly lapped off, for a cover. There is no cover on those dies, and certainly no heatspreader.
> 
> You can see the silicon on all the dies shown, and can even see the top metal layers of the interposer's die.


Yep look like dies to me


----------



## BinaryDemon

Quote:


> Originally Posted by *flopper*
> 
> Fury X fastest
> Buyers remorse is going to happen
> better than 970 3.5gb tho


Don't I know it. I've repeatedly said I think Fury will be faster. I still think the 4GB version is gonna sell like crap unless it undercuts the GTX 980 Ti in pricing.
Quote:


> Originally Posted by *Vesku*
> 
> The 4GB is not enough campaign has already started. If anything that is the strongest indicator that AMD's Fiji will probably be quite interesting.


I'm not part of some ANTI-AMD campaign, it's just how I feel as a consumer.

I agree that _currently in most situations_, 4gb of vram is enough. This is evidenced by how well the R9 295x2 does against the Titan X. But when I spend $600 or more on a GPU, I'm trying to anticipate future vram usage for the next year or two as well. I don't have statistics but I think vram usage has increased dramatically in the past year or two, probably due to more vram available on the XB1 and PS4. Now maybe it will stagnate for the next few years since we are stuck with those consoles, but it might not- maybe developers will continue to use more for PC versions of games to distinguish them graphically from consoles.

I'm betting Fury 4gb outperforms GTX980TI, but I don't think I'd buy that version at $650 or more. I'd wait for the 8gb version.


----------



## xxdarkreap3rxx

Given TPU's summary at 2560x1440 res, the Fury X will need to be ~39% faster than the 290X to match a 980 Ti. Given AMD's track record with their past two flagship cards (7970 & 290X), it doesn't seem like a stretch:

Original 7970 -> Original 290X
108/74 = ~46% increase



Original 6970 -> Original 7970
100/72 = ~39% increase
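For what it's worth, the generation-over-generation figures above are just relative-performance ratios; a tiny sketch (the 108/74 and 100/72 numbers are TPU's summary scores quoted in the post, used here as illustrative inputs):

```python
def pct_increase(new: float, old: float) -> float:
    """Percent increase implied by two relative-performance scores."""
    return (new / old - 1) * 100

print(round(pct_increase(108, 74)))  # 7970 -> 290X: ~46
print(round(pct_increase(100, 72)))  # 6970 -> 7970: ~39
```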


----------



## Master__Shake

Quote:


> Originally Posted by *ToTheSun!*
> 
> That's not what i'm saying at all. I don't take into consideration subjective statements from reviewers.
> 
> What i'm saying is that a card is more than its shaders, memory, etc. A card's worth should also be measured by its feature support and implementation. If a card is more powerful, both in raw specs and framerate, but is unable to give the user a better experience, what good is the card's "superiority"?


and that is when you made this statement true.
Quote:


> Originally Posted by *DividebyZERO*
> 
> If it is faster than titan, expect to hear about power consumption and heat and bad drivers. Also don't forget bankruptcy and market share and bad marketing. Then last but not least it doesn't have Gsynch, Physx, or super duper hairdos. Edit: oh and memory size.


it's been a pleasure going in circles with you.


----------



## ZealotKi11er

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Given TPU's summary at 2560x1440 res, the Fury X will need to be ~39% faster than the 290X to match a 980 Ti. Given AMD's track record with their past two flagship cards (7970 & 290X), it doesn't seem like a stretch:
> 
> Original 7970 -> Original 290X
> 108/74 = ~46% increase
> 
> 
> 
> Original 6970 -> Original 7970
> 100/72 = ~39% increase


Considering the HD 7970 to 290X went from 2048 to 2816 shaders, which does not seem like a lot, but other changes made the difference.


----------



## Forceman

Quote:


> Originally Posted by *raghu78*
> 
> yeah you are right. Those are heatspreader covers and not the actual HBM chip . The actual HBM lies underneath.


Does Hawaii have a heatspreader? Because the Hawaii die looks an awful lot like that picture.

Those are clearly the actual die, and not an external heatspreader. Why would you put individual heatspreaders on the interposer anyway?


----------



## The Stilt

There is no heatspreader in either of those images (Hawaii/Grenada or Fiji), just the bare silicon.
On AMD ASICs the interposer is exposed, whereas Nvidia's first samples have the interposer embedded into the substrate.
Based on the Fiji die shot, the size is around 550mm² (excl. HBM dies & interposer), which is pretty much what was expected.

Each HBM stack is x1024 wide, so unless there is a way to split the effective width (similar to GDDR5), Fiji will be limited to 4GB of DRAM until HBM2 arrives.
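As a rough sketch of why the x1024 width matters, assuming the published HBM1 figures of 1 Gbps per pin and four stacks (illustrative back-of-the-envelope only):

```python
STACKS = 4
BITS_PER_STACK = 1024   # each HBM1 stack exposes an x1024 interface
GBPS_PER_PIN = 1.0      # HBM1's stated per-pin data rate

total_width = STACKS * BITS_PER_STACK              # aggregate bus width in bits
bandwidth_gb_s = total_width * GBPS_PER_PIN / 8    # bits/s -> bytes/s

print(total_width)      # 4096
print(bandwidth_gb_s)   # 512.0 (GB/s)
```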


----------



## ebduncan

Meh, 4GB is enough for me.

It's perfect for 1440p and 4K. I've yet to see an example of a new game which used more than 4GB of VRAM at 4K. I don't care what level of AA they had to use to make it possible to exceed 4GB; you don't need AA/MSAA at 4K.

Then there is the DX12 argument, where Crossfire/SLI setups will pool the GPU memory. For example, two Fury X cards in Crossfire would sport 8GB of VRAM, more than enough.

Let's face it, it still takes a lot of GPU power to run 4K AAA titles at respectable settings with a single card, even the Titan X. Two cards for 4K is the norm.


----------



## ToTheSun!

Quote:


> Originally Posted by *Master__Shake*
> 
> it's been a pleasure going in circles with you.


You're just being obtuse on purpose now.

Your initial post reflected a tone of disapproval, as if being a superior card by means other than pure framerate was a bad thing that Nvidia should be ashamed of ("how sad").
That's why I commented. I never replied to that other dude directly. It's surreal that we're even arguing this when you're on 980s in SLI. You bought them for a reason, right? Even if the Fiji flagship provides 10% more framerate than the 980 Ti, it cannot be objectively classified as a better purchase, regardless of price, simply because Nvidia is better in many other aspects.

It's obvious that people will talk about all other aspects because you're not paying $650+ for framerate; you're paying any such amount for the experience as a whole.


----------



## MerkageTurk

Nope, 4GB is not enough; you will start to get stuttering etc.

This is like the 780 too, as it was limited by its 3GB.


----------



## tpi2007

Quote:


> Originally Posted by *Slaughterem*
> 
> Do you have a source that backs up your claim that 4 gig of HBM is not enough horsepower for 4K? Do moderators here at OCN have early releases of HBM tech? Do you also have a source that AMD is promoting a flagship card and pretending that Fury is for 1440P gaming only? People have a right to know the facts and I would think as a moderator you would be neutral. Maybe the best card to save for is the 390X. Why would you advise someone to purchase a 980TI? You may be right, nobody knows for sure I guess this will be settled in the next month or so.


Let me be clear, what I post is just my opinion on the matter, based on publicly available data, I don't have any special information that you guys don't have. I'm trying to be as neutral as ever, I've criticized and praised both brands on numerous occasions. What I want is for them to explain how it's suddenly easy to optimize HBM VRAM usage with a couple of engineers, if it's an HBM peculiarity that can't also be applied to GDDR5, because the rumours are also saying that the Hawaii refresh is coming out with 8 GB of GDDR5 VRAM (and that there is an 8 GB HBM version of Fiji slated for August and that slide mentioning dual-link interposing allowing for higher capacities). All I want is more information. See this post I wrote in another thread for more. As to 4 GB not being enough for 4K, it's not if you want to max out games, especially in Crossfire, which I think people have the right to expect when buying a flagship, and increasingly it's not going to be if you factor in that you will be buying a card now and expecting it to last two or three years. Like BinaryDemon said below.

Quote:


> Originally Posted by *BinaryDemon*
> 
> I'm not part of some ANTI-AMD campaign, it's just how I feel as a consumer.
> 
> I agree that _currently in most situations_, 4gb of vram is enough. This is evidenced by how well the R9 295x2 does against the Titan X. But when I spend $600 or more on a GPU, I'm trying to anticipate future vram usage for the next year or two as well. I don't have statistics but I think vram usage has increased dramatically in the past year or two, probably due to more vram available on the XB1 and PS4. Now maybe it will stagnate for the next few years since we are stuck with those consoles, but it might not- maybe developers will continue to use more for PC versions of games to distinguish them graphically from consoles.
> 
> I'm betting Fury 4gb outperforms GTX980TI, but I don't think I'd buy that version at $650 or more. I'd wait for the 8gb version.


----------



## Redwoodz

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Thanks bud.
> 
> Without sounding entitled, if i'm spending this much on a high-end card, i am looking to use crazy settings
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Damn.. the 4GB is a real buzz-kill
> 
> 
> 
> 
> 
> 
> 
> If the Fury isn't noticeably faster, then i think i'll go with the 980Ti.


Quote:


> Originally Posted by *Shogon*
> 
> It's supposed to have compression techniques like Maxwell introduced if I recall. But it all depends on how bad, or good the games are in terms of Vram and what resolution you play at. Maybe you've played Shadow of Mordor before, maybe not. Either way it could mean what future titles will be like in terms of Vram usage/allocation, or whatever those people say to discredit high amounts of on-board Vram.


Quote:


> Originally Posted by *Tarifas*
> 
> Seriously now I want to go single 1440p with max-ish setting, and maybe up to 4x MSAA, and that 4GB gives me the creeps


Quote:


> Originally Posted by *tpi2007*
> 
> The 980 and 970 were released more than 8 months ago. There was no GTA V for the PC or The Witcher 3 and the upcoming Batman game. Buying decisions made more than 8 months ago had a different set of premises. 4 GB cards are fine for now at a certain price point and for a given GPU horsepower. 4 GB is fine for 1440p, for example, and so is the 980. But get into 4K and more horsepower and VRAM is the obvious choice in the long run. That's where flagships come into play. And AMD's Fury X is the flagship. You can't promote a card as a flagship and then pretend that it's only for 1440p. People have every right to expect best in class performance at 4K.
> 
> I've said in another thread that I feel that the 980 is still overpriced after the announced price reduction. People willing to give $500 for a 980 will be much better served if they save up a bit more and get the much better 980 Ti, so the 980 really needs to drop a further $50 to $450 in order to be competitive within the lineup.
> 
> Anyway, AMD's flagship is supposed to be quite a bit better than a 980, and it will most probably have the horsepower to use more than 4 GB of VRAM. People buying a flagship in the middle of 2015 (versus September of 2014) expect if to perform accordingly. If people use two in Crossfire at 4K and start having stuttering due to running out of VRAM, then we will have a problem. Having lots of bandwidth and not enough amount to leverage the advantage will only make people think that Nvidia's offer is better in the long run.


Quote:


> Originally Posted by *The Stilt*
> 
> Can you provide a source for this, and for Windows 10 utilizing HSA as well, please?
> 
> All AMD GPUs released during the last decade have advanced frame buffer compression technologies implemented in them.
> Like any other data the textures however cannot be compressed infinitely. Also better compression algorithms not only require significantly more resources from the GPU but can add some lag too.


Quote:


> Originally Posted by *HiTechPixel*
> 
> They're both fundamentally the same kind of VRAM. 4GB HBM won't magically hold more information than 6GB or 8GB GDDR5.


Quote:


> Originally Posted by *Neo_Morpheus*
> 
> 4 gigs of ram is kind of child's play and not enough for 4k or even 1440p, we will all be laughing about this in another 3 years
> 
> 
> 
> 
> 
> 
> 
> 
> I not sure about the fury pricing, so it has to be cheaper than the ti? I guessing $650-700


Quote:


> Originally Posted by *tpi2007*
> 
> Let me be clear, what I post is just my my opinion on the matter, based on publicly available data, I don't have any special information that you guys don't have. I'm trying to be as neutral as ever, I've criticized and praised both brands on numerous occasions. What I want is for them to explain how it's suddenly easy to optimize HBM VRAM usage with a couple of engineers, if it's an HBM peculiarity that can't also be applied to GDDR5, because the rumours are also saying that the Hawaii refresh is coming out with 8 GB of GDDR5 VRAM (and that there is an 8 GB HBM version of Fiji slated for August and that slide mentioning dual-link interposing allowing for higher capacities). All I want is more information. See this post I wrote in another thread for more. As to 4 GB not being enough for 4K, it's not if you want to max out games, especially in Crossfire, which I think people have the right to expect when buying a flagship, and increasingly it's not going to be if you factor in that you will be buying a card now and expecting it to last two or three years. Like BinaryDemon said below.


Everyone is missing the important factor when considering VRAM usage with HBM: HBM does not need the same size buffer as GDDR5. It is much, much faster, with much less latency. Not only that, for future usage, DX12 with Windows 10 has VRAM stacking, so a dual-card setup will have 8GB usable. This is a non-issue.
Yesterday at Computex, Ron Myers, VP of corporate marketing at AMD, was asked about this; his response: "Ron Myers assured that 4GB of high bandwidth memory is plenty for 4K gaming so it is not just a question of the amount of memory, but also the speed and bandwidth. Put it another way, the requirement for 8GB (or whatever) is in combination with the current, slower bandwidth."
http://www.kitguru.net/components/graphic-cards/anton-shilov/amd-teases-fiji-but-refuses-to-reveal-fury/

Nvidia knew it was behind the 8-ball, so to speak, with the Titan X/980 Ti vs the new AMD card, which is why they put 12GB on the full card, hoping to sway buyers with a humongous VRAM amount. It was the only thing they had to counter with besides power consumption, especially after the 970 3.5GB debacle. Let's see how many people are duped again.


----------



## RagingCain

Quote:


> Originally Posted by *tpi2007*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Slaughterem*
> 
> Do you have a source that backs up your claim that 4 gig of HBM is not enough horsepower for 4K? Do moderators here at OCN have early releases of HBM tech? Do you also have a source that AMD is promoting a flagship card and pretending that Fury is for 1440P gaming only? People have a right to know the facts and I would think as a moderator you would be neutral. Maybe the best card to save for is the 390X. Why would you advise someone to purchase a 980TI? You may be right, nobody knows for sure I guess this will be settled in the next month or so.
> 
> 
> 
> Let me be clear, what I post is just my my opinion on the matter, based on publicly available data, I don't have any special information that you guys don't have. I'm trying to be as neutral as ever, I've criticized and praised both brands on numerous occasions. What I want is for them to explain how it's suddenly easy to optimize HBM VRAM usage with a couple of engineers, if it's an HBM peculiarity that can't also be applied to GDDR5, because the rumours are also saying that the Hawaii refresh is coming out with 8 GB of GDDR5 VRAM (and that there is an 8 GB HBM version of Fiji for slated for August and that slide mentioning dual-link interposing allowing for higher capacities). All I want is more information. See this post I wrote in another thread for more. As to 4 GB not being enough for 4K, it's not and increasingly it's not going to be if you factor in that you will be buying a card now and expecting it to last two or three years. Like BinaryDemon said below. You know, the previous consoles had 512 MB shared (Xbox 360) / 256 MB dedicated (PS3) of (V)RAM, and as little as over a year after the release of the PS3 we were already using 1 GB of VRAM on the PC side in AAA games.
> 
> Quote:
> 
> 
> 
> Originally Posted by *BinaryDemon*
> 
> I'm not part of some ANTI-AMD campaign, it's just how I feel as a consumer.
> 
> I agree that _currently in most situations_, 4gb of vram is enough. This is evidenced by how well the R9 295x2 does against the Titan X. But when I spend $600 or more on a GPU, I'm trying to anticipate future vram usage for the next year or two as well. I don't have statistics but I think vram usage has increased dramatically in the past year or two, probably due to more vram available on the XB1 and PS4. Now maybe it will stagnate for the next few years since we are stuck with those consoles, but it might not- maybe developers will continue to use more for PC versions of games to distinguish them graphically from consoles.
> 
> I'm betting Fury 4gb outperforms GTX980TI, but I don't think I'd buy that version at $650 or more. I'd wait for the 8gb version.
> 
> Click to expand...
Click to expand...

The frame buffer hasn't changed in 14 years; full 1080p, double buffered with 4 pre-rendered frames, only needs around 64 MB of VRAM. *Everything else is up to developers/driver engineers to use efficiently.*

1920 x 1080 pixels x (32-bit color + 24-bit Z buffer + 8-bit stencil) = 2,073,600 x 8 bytes ≈ 16.6 MB. The basic calculation per frame still applies.

HBM is nothing special in that regards.
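The napkin math above, as a runnable sketch (generic render-target arithmetic, nothing HBM- or vendor-specific; the packed 24-bit depth + 8-bit stencil assumption is the common D24S8 layout):

```python
WIDTH, HEIGHT = 1920, 1080
COLOR_BYTES = 4          # 32-bit RGBA color target
DEPTH_STENCIL_BYTES = 4  # 24-bit Z + 8-bit stencil, packed (D24S8)

pixels = WIDTH * HEIGHT
frame_bytes = pixels * (COLOR_BYTES + DEPTH_STENCIL_BYTES)

print(round(frame_bytes / 1e6, 1))  # ~16.6 MB per frame
```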


----------



## maltamonk

Quote:


> Originally Posted by *ToTheSun!*
> 
> Even if the Fiji flagship provides 10% more framerate than the 980ti, it cannot be objectively classified as a better purchase, regardless of price, simply because Nvidia are better in many other aspects.
> 
> It's obvious that people will talk about all other aspects because you're not paying $650+ for framerate; you're paying any such amount for the experience as a whole.


May I ask what other aspects are better?


----------



## sugarhell

In other news the full tonga use actually a 384 bit bus interface


----------



## youra6

Quote:


> Originally Posted by *maltamonk*
> 
> May I ask what other aspects are better?


Ignore him. Some people won't see truth even if was presented to them in plain sight.


----------



## criminal

Quote:


> Originally Posted by *sugarhell*
> 
> In other news the full tonga use actually a 384 bit bus interface


Source?

NM


----------



## BinaryDemon

Quote:


> Originally Posted by *RagingCain*
> 
> *Everything else is up to developers/driver engineers to use efficiently.*


And this is exactly why I feel the way I do. It seems that, given the massive amounts of VRAM available to developers on the consoles, they have gotten lazy. Developers rely on tools provided by Nvidia/AMD to optimize their code, and then you get things like HairWorks defaulting to 64x tessellation, etc.


----------



## Ganf

Quote:


> Originally Posted by *RagingCain*
> 
> The fame buffer hasn't changed in 14 years, full 1080P double buffered with 4 pre-rendered frames only needs 64 MB of VRAM. *Everything else is up to developers/driver engineers to use efficiently.*
> 
> 1920x1080 x 32 bit color x 24 bit Z Buffer x 8 bit Stencil / 1000000 = 8.26 MiB. Basic calculation per frame still applies.
> 
> HBM is nothing special in that regards.


VR takes all of that napkin theory you just did and throws it out the window after blowing its nose on it. I don't have the rest of the day to post links on just how much has been going on in that area in the last 2 years; please google it.


----------



## RagingCain

Quote:


> Originally Posted by *BinaryDemon*
> 
> Quote:
> 
> 
> 
> Originally Posted by *RagingCain*
> 
> *Everything else is up to developers/driver engineers to use efficiently.*
> 
> 
> 
> And this is exactly why I feel the way I do. Seems that given massive amounts of vram available to developers on the consoles, that they have gotten lazy. Developers rely on tools provided by Nvidia / AMD to optimize their coding, and then you get things like Hairworks defaulting to 64x tessellation, ect.
Click to expand...

I have been pushing that message for a long time. I don't think anyone really cares.

Just to be clear, "nVidia's GameWorks tools for optimizing games" was a misrepresentation of facts. The tools don't auto-optimize code for the developer; nVidia provides tools for developers to optimize their own code. If the developer doesn't do their due diligence, it doesn't get done by anything, nVidia or AMD regardless. nVidia basically has a huge extension for Visual Studio called Nsight, and I have used it.
Quote:


> Originally Posted by *Ganf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *RagingCain*
> 
> The frame buffer hasn't changed in 14 years, full 1080P double buffered with 4 pre-rendered frames only needs 64 MB of VRAM. *Everything else is up to developers/driver engineers to use efficiently.*
> 
> 1920x1080 x 32 bit color x 24 bit Z Buffer x 8 bit Stencil / 1000000 = 8.26 MiB. Basic calculation per frame still applies.
> 
> HBM is nothing special in that regards.
> 
> 
> 
> VR takes all of that napkin theory you just did and throws it out the window after blowing it's nose on it. I don't have the rest of the day to post links on just how much has been going on in that area in the last 2 years, please google it.
Click to expand...

Why don't you ever provide factual sources that support your arguments? I am still waiting on your "information" from a thread about two weeks ago. Not to mention, virtual reality is off topic and not the context of my mathematics, which is how you calculate a basic frame buffer.


----------



## HiTechPixel

Quote:


> Originally Posted by *Redwoodz*
> 
> Everyone is not seeing the important factor when considering VRAM usage with HBM. HBM does not need the same size buffer as GDDR5. It is much, much faster with much less latency. Not only that, for future usage, DX12 with Windows 10 has VRAM stacking, so a dual card setup will have 8GB usable. This is a non-issue.
> Ron Myers, VP of corporate marketing at AMD, was asked about this yesterday at Computex; his response: "Ron Myers assured that 4GB of high bandwidth memory is plenty for 4K gaming so it is not just a question of the amount of memory, but also the speed and bandwidth. Put it another way, the requirement for 8GB (or whatever) is in combination with the current, slower bandwidth."
> http://www.kitguru.net/components/graphic-cards/anton-shilov/amd-teases-fiji-but-refuses-to-reveal-fury/
> 
> Nvidia knew it was behind the 8-ball, so to speak, with the Titan X/980 Ti vs the new AMD card, which is why they put 12GB on the full card, hoping to sway buyers with a humongous VRAM amount. It was the only thing they had to counter with besides power consumption, especially after the 970 3.5GB debacle. Let's see how many people are duped again.


The VRAM stacking feature in Windows 10 is not natively implemented nor available for every card out of the box. It's been stated several times that it's up to the individual game developers to use that feature.


----------



## sugarhell

Quote:


> Originally Posted by *criminal*
> 
> Source?
> 
> NM


https://forum.beyond3d.com/threads/amd-pirate-islands-r-3-series-speculation-rumor-thread.55600/page-71


----------



## Ganf

Quote:


> Originally Posted by *RagingCain*
> 
> I have been pushing that message for a long time. I don't think anyone really cares.
> 
> Just to be clear, nVidia's GameWorks tools for optimizing games was a misrepresentation of facts. The tools don't auto-optimize code for the developer, nVidia provides tools for developers to optimize their code. If the developer doesn't do his due diligence, it doesn't get done by anything, nVidia or AMD regardless. nVidia basically has a huge extension for Visual Studio called Nsight and I have used it.
> Why don't you provide factual sources that ever support your arguments? I am still waiting on your "information" from a thread about two weeks ago. Not to mention, Virtual Reality is off topic and not the context of my mathematics, which is how you calculate a basic frame buffer.


VR relies on a huge frame buffer.

http://www.tomshardware.com/news/amd-liquidvr-virtual-reality,28682.html

Takes like 5 seconds to google this and read up on what AMD is pushing, which Nvidia is now copying in gameworks VR, there are only about 40 different sites talking about it. You could at least try.


----------



## RagingCain

Quote:


> Originally Posted by *Ganf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *RagingCain*
> 
> I have been pushing that message for a long time. I don't think anyone really cares.
> 
> Just to be clear, nVidia's GameWorks tools for optimizing games was a misrepresentation of facts. The tools don't auto-optimize code for the developer, nVidia provides tools for developers to optimize their code. If the developer doesn't do his due diligence, it doesn't get done by anything, nVidia or AMD regardless. nVidia basically has a huge extension for Visual Studio called Nsight and I have used it.
> Why don't you provide factual sources that ever support your arguments? I am still waiting on your "information" from a thread about two weeks ago. Not to mention, Virtual Reality is off topic and not the context of my mathematics, which is how you calculate a basic frame buffer.
> 
> 
> 
> VR relies on a huge frame buffer.
> 
> http://www.tomshardware.com/news/amd-liquidvr-virtual-reality,28682.html
> 
> Takes like 5 seconds to google this and read up on what AMD is pushing, which Nvidia is now copying in gameworks VR, there are only about 40 different sites talking about it. You could at least try.
Click to expand...

Again, VR is not the context of what I am talking about. Where is your source that my math is bad?


----------



## Forceman

Quote:


> Originally Posted by *Redwoodz*
> 
> Everyone is not seeing the important factor when considering VRAM usage with HBM. *HBM does not need the same size buffer as GDDR5.* It is much, much faster with much less latency. Not only that, for future usage, DX12 with Windows 10 has VRAM stacking, so a dual card setup will have 8GB usable. This is a non-issue.
> Ron Myers, VP of corporate marketing at AMD, was asked about this yesterday at Computex; his response: "Ron Myers assured that 4GB of high bandwidth memory is plenty for 4K gaming so it is not just a question of the amount of memory, but also the speed and bandwidth. Put it another way, the requirement for 8GB (or whatever) is in combination with the current, slower bandwidth."
> http://www.kitguru.net/components/graphic-cards/anton-shilov/amd-teases-fiji-but-refuses-to-reveal-fury/


Why does it not need as much? This claim is bandied about whenever the 4GB limit comes up, but no one has yet explained how an HBM GB is larger than a GDDR5 GB. Games don't know what kind of VRAM is on the card, and they will try to use whatever they are going to use. If that is more than 4GB (or more precisely, if the asset they want isn't stored in the 4GB on the card) then it has to be fetched across PCIe, and a billion GB/s of bandwidth on the card isn't going to help you. If it's so obvious that HBM is different from GDDR5 in that regard, why can no one explain it?

As for the DX12 stalking horse, VRAM stacking is reportedly a feature, but not one that is going to magically appear in a DX12 game. It is going to need to be explicitly coded for by developers. Mantle can also support VRAM stacking, but do you see any games taking advantage of it?

And what would you expect the VP of _Marketing_ to say? Oh yeah, we're screwed on this VRAM situation?
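Forceman's point can be put in rough numbers. A minimal sketch; the bandwidth constants are approximate public peak figures, and the 256 MB asset is a hypothetical miss, not a real measurement:

```python
# Compare how long it takes to move an asset that is NOT resident in
# VRAM versus one that is. Peak bandwidth figures are approximate.
PCIE3_X16_GBS = 15.75  # PCIe 3.0 x16, per direction
GDDR5_GBS = 320.0      # a Hawaii-class GDDR5 card (illustrative)
HBM1_GBS = 512.0       # first-generation HBM, 4 stacks (illustrative)

def ms_to_move(megabytes, gbps):
    """Milliseconds to move `megabytes` at `gbps` gigabytes per second."""
    return megabytes / 1000.0 / gbps * 1000.0

asset_mb = 256  # hypothetical texture set that missed the on-card pool
for name, bw in [("PCIe 3.0 x16", PCIE3_X16_GBS),
                 ("GDDR5", GDDR5_GBS), ("HBM", HBM1_GBS)]:
    print(f"{name:12s}: {ms_to_move(asset_mb, bw):6.2f} ms")
```

Whichever memory type is on the card, the miss costs roughly 16 ms over the bus, an entire 60 fps frame, which is exactly the argument that capacity, not on-card bandwidth, is the ceiling.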


----------



## Slaughterem

Quote:


> Originally Posted by *ebduncan*
> 
> meh, 4gb is enough for me.
> 
> its perfect for 1440p and 4k. I've yet to see an example of a new game which used more than 4gb vram at 4k. I don't care what level of AA they had to use to make it possible to exceed 4gb. You don't need AA/MSAA at 4k.
> 
> Then there is the DX12 argument where Crossfire/SLI setups will pool the gpu memory. For example, two Fury X cards in crossfire would sport 8gb vram, more than enough.
> 
> *Let's face it, it still takes a lot of GPU power to run 4k AAA titles at respectable settings with a single card, even the Titan X. 2 cards for 4k is the norm*.


I agree.

Even a 980 Ti with 6 GB of VRAM or the Titan X with 12 GB cannot reach 60 FPS in Crysis 3 at High quality with FXAA; they both sit around 40 FPS. An R9 295X2, with 4 GB of VRAM per GPU, does better at 51 FPS. And having two GPUs with 4 GB of VRAM each does not give you 8 GB of capacity total as far as I know, though DX12 will let dual-GPU VRAM act as a single pool in the future. Same with Shadow of Mordor: the 980 Ti with 6 GB and the Titan X with 12 GB average 48 FPS while the R9 295X2 averages 60. So to all the "4GB is not enough for 4K" crowd, here is a news flash: it appears from this review that GPU power is more important than VRAM capacity.
http://www.anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review/5


----------



## Slaughterem

Quote:


> Originally Posted by *tpi2007*
> 
> Let me be clear, what I post is just my opinion on the matter, based on publicly available data, I don't have any special information that you guys don't have. I'm trying to be as neutral as ever, I've criticized and praised both brands on numerous occasions. What I want is for them to explain how it's suddenly easy to optimize HBM VRAM usage with a couple of engineers, if it's an HBM peculiarity that can't also be applied to GDDR5, because the rumours are also saying that the Hawaii refresh is coming out with 8 GB of GDDR5 VRAM (and that there is an 8 GB HBM version of Fiji slated for August and that slide mentioning dual-link interposing allowing for higher capacities). All I want is more information. See this post I wrote in another thread for more. *As to 4 GB not being enough for 4K, it's not if you want to max out games*, especially in Crossfire, which I think people have the right to expect when buying a flagship, and increasingly it's not going to be if you factor in that you will be buying a card now and expecting it to last two or three years. Like BinaryDemon said below.


You are correct, 4 GB is not enough. You should also point out that 6 GB and 12 GB are also not enough if you want to max out games at 4K.


----------



## tpi2007

Quote:


> Originally Posted by *RagingCain*
> 
> The frame buffer hasn't changed in 14 years, full 1080P double buffered with 4 pre-rendered frames only needs 64 MB of VRAM. *Everything else is up to developers/driver engineers to use efficiently.*
> 
> 1920x1080 x 32 bit color x 24 bit Z Buffer x 8 bit Stencil / 1000000 = 8.26 MiB. Basic calculation per frame still applies.
> 
> HBM is nothing special in that regards.


That's what I want to know, whether HBM is indeed nothing special in that regard. AMD makes it seem like it was easy _on their end_ to optimize usage.

In any case, those numbers you provided look good on paper, but then reality kicks in. You need a lot more data in VRAM than the 4 pre-rendered frames in the buffer if you want any sort of fluid gameplay. Imagine a streaming game like GTA V running at 60 fps: the moment you have to fetch data over PCIe for immediate display, your fps tanks. That's why you need a lot more space to account for this.
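To put a number on the streaming point, here is a sketch of how little data fits through PCIe inside a 60 fps frame (assuming ~15.75 GB/s peak for PCIe 3.0 x16, one direction; the figures are illustrative):

```python
# How much data can be streamed over PCIe per frame without blowing
# the 60 fps budget? Rough sketch with an approximate peak bandwidth.
FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame at 60 fps
PCIE_GBS = 15.75                 # PCIe 3.0 x16, per direction

def mb_per_frame(fraction_of_budget):
    """MB transferable if this fraction of the frame is spent on PCIe."""
    seconds = FRAME_BUDGET_MS / 1000.0 * fraction_of_budget
    return PCIE_GBS * seconds * 1000.0  # GB/s * s -> GB, then -> MB

print(f"10% of the frame: {mb_per_frame(0.10):.0f} MB")
print(f"the whole frame:  {mb_per_frame(1.0):.0f} MB")
```

Even devoting the entire frame to transfers moves only ~260 MB, which is why streaming engines keep a large working set resident in VRAM instead of fetching on demand.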


----------



## Redwoodz

Quote:


> Originally Posted by *HiTechPixel*
> 
> The VRAM stacking feature in Windows 10 is not natively implemented nor available for every card out of the box. It's been stated several times that it's up to the individual game developers to use that feature.


I'm pretty sure it will be implemented for the most advanced and possibly most powerful GPU ever; the rest of the cards don't matter. A game developer would be pretty stupid not to use it if they have high VRAM usage.

Quote:


> Originally Posted by *Forceman*
> 
> Why does it not need as much? This claim is bandied about whenever the 4GB limit comes up, but no one has yet explained how a HBM GB is larger than a GDDR5 GB. Games don't know what kind of VRAM is on the card, and they will try to use whatever they are going to use. If that is more than 4GB (or more correctly, if the asset they want to use isn't stored in the 4GB on the card) then it has to fetch it across PCIe - and a billion GB/s of bandwidth on card isn't going to help you. If it's so obvious that HBM is different than GDDR5 in that regard, why can no one explain it?
> 
> As for the DX12 stalking horse, VRAM stacking is reportedly a feature, but not one that is going to magically appear in a DX12 game. It is going to need to be explicitly coded for by developers. Mantle can also support VRAM stacking, but do you see any games taking advantage of it?
> 
> And what would you expect the VP of _Marketing_ to say? Oh yeah, we're screwed on this VRAM situation?


I thought I explained it fairly simply, but I will try again. Nowhere did I say 1GB of HBM is bigger than 1GB of GDDR5.
GDDR5 has to have bigger storage because the speed of the interconnect is slower, so you have to put more in storage ahead of time to avoid performance penalties. Lower latency and superior speeds mean you don't have to pre-load as much info. Think of it like an HDD vs an SSD: if you have to access random files on an HDD, you'd like to have as much data cached in memory as possible. With the access times of SSDs, it doesn't matter as much.


----------



## tsm106

Quote:


> Originally Posted by *Redwoodz*
> 
> Quote:
> 
> 
> 
> Originally Posted by *HiTechPixel*
> 
> The VRAM stacking feature in Windows 10 is not natively implemented nor available for every card out of the box. It's been stated several times that it's up to the individual game developers to use that feature.
> 
> 
> 
> I'm pretty sure it will be implemented for the most advanced and possibly most powerful GPU ever; the rest of the cards don't matter. *A game developer would be pretty stupid not to use it if they have high VRAM usage*.
Click to expand...

I'm not sure stupid is a good way to put it. It's more a question of whether the game is suited to SFR rendering. SFR is not about speed or high fps; it's about fluidity and consistency of framerate, aka it's all about the minimums. I can't see SFR being applied to games like hotcakes. For example, with Mantle and Crossfire enabled in Civ:BE, that game runs in SFR mode. On triple screens in Crossfire, it's pretty awesome. You don't get the tearing or fps issues, like when the fps counter reads way high yet it feels like something is wrong. Instead, you don't notice the framerate anymore; it's so consistent that you don't have to think about fps and can instead delve more deeply into the game.


----------



## ToTheSun!

Quote:


> Originally Posted by *maltamonk*
> 
> May I ask what other aspects are better?


Yes, you may!

First and foremost, G-Sync itself and all the accompanying features. Even if retired staff tell you to ignore that fact, a sensible person would find it hard to argue that AMD's VRR implementation is better than Nvidia's.

Then you have the whole "better and more frequent drivers" thing that's been discussed on OCN since the dawn of time, which people running single-GPU setups will have no problem contesting over and over again (which happens to be the case for me; in that regard, I have absolutely no complaints; my 7970 has been good to me).

Then, probably the least of the concerns: power draw, efficiency, etc.

Of course, that's not to say AMD is all faults; all I'm saying is that AMD's merits don't outweigh their cons, IMO.


----------



## RagingCain

Quote:


> Originally Posted by *tpi2007*
> 
> Quote:
> 
> 
> 
> Originally Posted by *RagingCain*
> 
> The frame buffer hasn't changed in 14 years, full 1080P double buffered with 4 pre-rendered frames only needs 64 MB of VRAM. *Everything else is up to developers/driver engineers to use efficiently.*
> 
> 1920x1080 x 32 bit color x 24 bit Z Buffer x 8 bit Stencil / 1000000 = 8.26 MiB. Basic calculation per frame still applies.
> 
> HBM is nothing special in that regards.
> 
> 
> 
> That's what I want to know, if HBM is indeed nothing special in that regard. AMD makes it seems like it was easy _on their end_ to optimize usage.
> 
> In any case, those numbers you provided look good on paper, but reality then kicks in. You need a lot more data than that corresponding to 4 pre-rendered frames in the buffer if you want to have any sort of fluid gameplay. Imagine a streaming game like GTA V, moving at 60 fps. The moment you have to fetch data over PCIe to display immediately your fps tanks, that's why you need a lot more space to account for this.
Click to expand...

Well, first off, you are correct that performance tanks when the RAM-side cache is called into play.

However, the entire memory management system is there for a reason. PCs are mostly NUMA, not UMA like the consoles. All the recent console ports show grotesque memory bloat because accepting "good enough" performance and storage usage cuts down on development time significantly. Drivers do most of the GPU-specific storage management, but sometimes they just have to notify the developer that something has to give.

A PC has to prioritize the primary cache and the background cache; an image would show this more simply. UMA consoles don't rely on the background cache as much, since most assets are loaded to memory on startup/load.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *tpi2007*
> 
> Let me be clear, what I post is just my opinion on the matter, based on publicly available data, I don't have any special information that you guys don't have. I'm trying to be as neutral as ever, I've criticized and praised both brands on numerous occasions. What I want is for them to explain how it's suddenly easy to optimize HBM VRAM usage with a couple of engineers, if it's an HBM peculiarity that can't also be applied to GDDR5, because the rumours are also saying that the Hawaii refresh is coming out with 8 GB of GDDR5 VRAM (and that there is an 8 GB HBM version of Fiji slated for August and that slide mentioning dual-link interposing allowing for higher capacities). All I want is more information. See this post I wrote in another thread for more. As to 4 GB not being enough for 4K, it's not if you want to max out games, especially in Crossfire, which I think people have the right to expect when buying a flagship, and increasingly it's not going to be if you factor in that you will be buying a card now and expecting it to last two or three years. Like BinaryDemon said below.


I'm no expert, but I keep hearing the same mantra: that 4GB is 4GB, and if you run out, you run out. Problem is, nobody here has ever tested HBM, so we are basing this "truth" on our experiences with GDDR5 and how that handles memory capacities. Who's to say that the massive increase in bandwidth between the GPU and the memory chips won't allow HBM cards to store fewer assets in the buffer at any one time because the transfer rate is so much higher than GDDR5's? I mean, if data can go back and forth 5 times more quickly, wouldn't that mean the data doesn't have to accumulate in the buffer?


----------



## dieanotherday

but the point is: why, all else being equal (price/perf, heat, noise), would people choose AMD over Nvidia? AMD drivers are horrible and Crossfire is horrible.


----------



## tsm106

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'm no expert but I keep hearing the same mantra: That 4GB is 4GB and if you run out you run out. Problem is, nobody here has ever tested HBM so we are basing this "truth" off of our experiences with GDDR5 and how that handles memory capacities. Who's to say that the massive increase in bandwidth between the GPU and the memory chips won't allow HBM cards to store less assets in the buffer at any one time because the transfer rate is so much higher than GDDR5? I mean, if data can go back and forth 5 times quicker wouldn't that mean that the data doesn't have to accumulate in the buffer?


Hey now, they're all experts with no actual experience on a product that doesn't exist in retail yet, whilst under NDA. Give them a break.


----------



## Ganf

Quote:


> Originally Posted by *RagingCain*
> 
> Again, VR is not the context of what I am talking about. Where is your source that my math is bad?


I didn't say it was bad; I said it doesn't apply to VR, and AMD is pushing VR hard, along with HBM as the optimal solution for VR. If there is any concern about 4GB not being enough, or about HBM's bandwidth not being relevant to the frame buffer, then the aspect you need to look at is the amount of frame buffer required for VR: not only is it much more intensive and requires a lot more storage, but AMD is claiming 1ms response times on THAT frame buffer.

So if HBM's performance is in question, it should be questioned in respect to all uses. How does GDDR5 compare in that same scenario?


----------



## Forceman

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'm no expert but I keep hearing the same mantra: That 4GB is 4GB and if you run out you run out. Problem is, nobody here has ever tested HBM so we are basing this "truth" off of our experiences with GDDR5 and how that handles memory capacities. Who's to say that the massive increase in bandwidth between the GPU and the memory chips won't allow HBM cards to store less assets in the buffer at any one time because the transfer rate is so much higher than GDDR5? I mean, if data can go back and forth 5 times quicker wouldn't that mean that the data doesn't have to accumulate in the buffer?


The transfer rate you mention is between the GPU and VRAM, though; it doesn't help you get assets from the system any faster. That's the piece AMD, etc., haven't explained: how does having HBM and high on-card transfer rates help you if the assets aren't already on the card? Less VRAM on card means fewer assets on card, and that increases the likelihood that you have to get said assets from the system over PCIe. If the answer is better VRAM management by drivers (as AMD implies/says), then why doesn't that also help GDDR5? That type of optimization should be memory-type agnostic. I'm curious how faster VRAM is suddenly going to act like more VRAM.

Didn't AMD have a big HBM conference recently? Was any of this explained there? So far, all of AMD's HBM talk (that I've seen) sounds more like marketing/damage control than an actual engineering explanation.


----------



## raghu78

Quote:


> Originally Posted by *tsm106*
> 
> Hey now, they're all experts with no actual experience on a product that doesn't exist in retail yet, whilst under NDA. Give them a break.


the forums are filled with too many experts

and they are all very busy judging an unreleased product using a brand-new memory technology about which only its inventor AMD and their partner Hynix know anything.


----------



## RagingCain

Could HBM cards have/use some suddenly more optimized caching algorithm? Absolutely.

I just wanted to be clear that such a thing can exist without HBM. The bandwidth from GPU to VRAM hasn't really been the problem; it's the bottleneck between VRAM and system RAM that's too slow. From all we know about HBM, nothing reported directly addresses that problem. It's still up to developers to be responsible with the resources.


----------



## Wishmaker

Quote:


> Originally Posted by *SpeedyVT*
> 
> DX12 has a lot of GPU RAM-saving features, like tiled texture assets; suddenly you can shrink 2 gigs of textures to 1/5 the size without losing quality.
> 
> Current games might be more bloated at 4k, but we've got to look at this as a DX12 situation and opportunity.


You make it sound like, from the 29th of July when W10 is out, everything will be DX12. With every new DX version the transition took a while, and I would not be surprised if DX12 only becomes widespread when NVIDIA brings their own version of HBM.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'm no expert but I keep hearing the same mantra: That 4GB is 4GB and if you run out you run out. Problem is, nobody here has ever tested HBM so we are basing this "truth" off of our experiences with GDDR5 and how that handles memory capacities. Who's to say that the massive increase in bandwidth between the GPU and the memory chips won't allow HBM cards to store less assets in the buffer at any one time because the transfer rate is so much higher than GDDR5? I mean, if data can go back and forth 5 times quicker wouldn't that mean that the data doesn't have to accumulate in the buffer?


Let's take this one step further: open a 4 GB TIFF in Photoshop on 6 GB of RAM at 1333 with super loose timings, then open the same picture on 6 GB at 2133 with tighter timings. Will the faster RAM show less usage as I continue editing layers than the slower RAM?


----------



## Serandur

Quote:


> Originally Posted by *Forceman*
> 
> The transfer rate you mention is between the GPU and VRAM though, it doesn't help you get assets from the system any faster. That's the piece AMD, etc, haven't explained - how does having HBM and high on-card transfer rates help you if the assets aren't already on the card? Less VRAM on card means fewer assets on card, and that increases the likelihood that you have to get said assets from the system over PCIe. If the answer is better VRAM management by drivers (as AMD implies/says) then why don't that also help GDDR5? That type of optimization should be memory type agnostic. *I'm curious how faster VRAM is suddenly going to act like more VRAM.*


As to that final point, we should have noticed something like that already with the ever-increasing video and system RAM speeds over the years if it were significant or automatic.


----------



## Ganf

Quote:


> Originally Posted by *RagingCain*
> 
> Could HBM cards have/use some suddenly more optimized caching algorithm ? Absolutely.
> 
> I just wanted to be clear that such a thing can exist without HBM. The bandwidth from GPU to VRAM hasn't really been the problem; it's the bottleneck between VRAM and system RAM that's too slow. From all we know about HBM, nothing reported directly addresses that problem. It's still up to developers to be responsible with the resources.


It has been a problem for VR. No one has gotten a sub-10ms frametime yet, but AMD says they've nailed it. The only possible way they've done that is HBM and their new API, and you can bet they'll be showcasing both at E3.


----------



## Alatar

Capacity is still capacity. Doesn't matter if your memory is DDR3, GDDR3, GDDR5 or HBM. 4GB of HBM is not more than 4GB of GDDR5, nor does it effectively act as more.

If you exceed your VRAM capacity you're going to be pulling data across the pci-e bus at extremely low speeds (compared to VRAM that is). And that pci-e bus speed is the same for all pci-e 3.0 cards. HBM doesn't magically make that simpler or easier or faster.

What HBM does is make moving data between the GPU and the VRAM faster (compared to GDDR5). Again, it does not make it faster or somehow more viable for good performance to pull data through the pci-e bus. And that's the problem. You're going to be moving data faster than a 6GB GDDR5 card as long as the amount of data that you have to move and store constantly in VRAM isn't over 4GB. However once you go over 4GB you're going to be pulling stuff through the slow pci-e bus.

*HBM does not mean that you can pull stuff through the pci-e bus without the same consequences as with GDDR5. VRAM capacity is VRAM capacity no matter what type of VRAM it is. Once you run out you deal with the exact same pci-e bandwidth as everyone else.*

Now AMD does say that the way they've been storing stuff in their VRAM with previous cards has been inefficient (they've done zero work on it) and that they can improve it to alleviate the problems with drivers and some OS work. However we'll have to see if this claim actually holds up in the real world. But you also have to remember that if you have to shuffle data around the HBM stacks in order to deal with a capacity bottleneck you're also going to lose some of your effective bandwidth since it's going to be used for dealing with the capacity.

But I guess we'll see what their actual implementation is and whether it'll work. As of now though as far as every consumer is concerned AMD is delivering a 4GB card and 4GB cards are going to be limited by pci-e bus bandwidth when going over 4GB usage.
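Alatar's PCIe-wall argument can be made concrete with a toy model: treat the miss fraction (the share of bytes that have to come over the bus) as a parameter and compute the blended bandwidth. The 512 and 15.75 GB/s figures are approximate peaks, not measurements, and the model deliberately ignores latency and transfer overlap:

```python
# Effective bandwidth when some fraction of bytes spill over PCIe.
# Simple harmonic-mean model; ignores latency and overlap.
def effective_gbps(vram_gbps, pcie_gbps, miss_fraction):
    """Blend of VRAM and PCIe bandwidth, weighted by bytes served."""
    return 1.0 / ((1.0 - miss_fraction) / vram_gbps
                  + miss_fraction / pcie_gbps)

for miss in (0.0, 0.01, 0.05, 0.10):
    bw = effective_gbps(512.0, 15.75, miss)
    print(f"{miss:4.0%} over PCIe -> {bw:6.1f} GB/s effective")
```

On this model, spilling even 5% of traffic over the bus cuts an HBM card's ~512 GB/s to under 200 GB/s, below a plain GDDR5 card that stays within its capacity.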


----------



## tsm106

lol, so now that everyone is over the VRAM size, it's "let's disprove that they can handle the 4GB VRAM size." AMD says they can make 4GB go farther, and y'all anti-AMD folks are like "no they cannot, cuz I say so." Ok... There's nowhere to go; just wait for AMD to put up or shut up.


----------



## tpi2007

Quote:


> Originally Posted by *RagingCain*
> 
> Well, first off, you are correct that performance tanks when the RAM-side cache is called into play. However, the entire memory management system is there for a reason. PCs are mostly NUMA, not UMA like the consoles. All the recent console ports show grotesque memory bloat because accepting "good enough" performance and storage usage cuts down on development time significantly. Drivers do most of the GPU-specific storage management, but sometimes they just have to notify the developer that something has to give.


Sure, I'm not going to argue against that; console-to-PC ports will often take the path of least effort. But the way AMD said it was fairly easy, with a couple of engineers, to optimize VRAM usage makes me sceptical, and also makes me question whether it only applies to HBM. They make it sound like they can solve the problem on their end without the game developers' involvement. In any case, that involvement would primarily only work for future titles anyway; very few will bother to change games that are already out.

Quote:


> Macri admitted that in the past very little effort was put into measuring and improving the utilization of the graphics memory system, calling it "exceedingly poor." The solution was to just add more memory - it was easy to do and relatively cheap. With HBM that isn't the case as there is a ceiling of what can be offered this generation. Macri told us that with just a couple of engineers it was easy to find ways to improve utilization and he believes that modern resolutions and gaming engines will not suffer at all from a 4GB graphics memory limit.


http://www.pcper.com/reviews/General-Tech/High-Bandwidth-Memory-HBM-Architecture-AMD-Plans-Future-GPUs

If the rumour about the R9 390X (Hawaii refresh) with 8 GB GDDR5 is true, why not keep it at 4, optimize its usage and use the power headroom to clock the cores higher?

On the other hand, together with more GPU power, we will need more VRAM, both for traditional gaming and especially VR. Take the dreaded pop-in as the most important example of why we need more. We are still having to deal with it and it completely breaks the immersion when you see trees, NPCs, walls, cars, etc, suddenly appearing on-screen.


----------



## Ganf

Quote:


> Originally Posted by *Alatar*
> 
> *Now AMD does say that the way they've been storing stuff in their VRAM with previous cards has been inefficient (they've done zero work on it) and that they can improve it to alleviate the problems with drivers and some OS work. However we'll have to see if this claim actually holds up in the real world.* But you also have to remember that if you have to shuffle data around the HBM stacks in order to deal with a capacity bottleneck you're also going to lose some of your effective bandwidth since it's going to be used for dealing with the capacity.
> 
> But I guess we'll see what their actual implementation is and whether it'll work. As of now though as far as every consumer is concerned AMD is delivering a 4GB card and 4GB cards are going to be limited by pci-e bus bandwidth when going over 4GB usage.


That's the meat and potatoes of it all right there. Either they addressed the issue, or they didn't, nothing else is relevant until we have an answer on that question.


----------



## Blameless

Quote:


> Originally Posted by *raghu78*
> 
> Those are heatspreader covers and not the actual HBM chip


No, they aren't. The HBM chip in the picture you show isn't any different; it's just flipped over.
Quote:


> Originally Posted by *Redwoodz*
> 
> GDDR5 has to have bigger storage because the speed of the interconnect is slower,so you have to put more in storage ahead of time to avoid performance penalties.Lower latency and superior speeds mean you don't have to pre-load as much info.


This is completely backwards.

Faster but smaller is worse if the data you need to have locally doesn't fit.
Quote:


> Originally Posted by *RagingCain*
> 
> Well first off you are correct that performance tanks when the RAM side Cache is being call on deck.
> 
> However, the entire memory management system is there for a reason. PCs are mostly NUMA, not UMA like the consoles. All the recent console ports show grotesque memory bloat because it cuts down on development time significantly by simply accepting good enough performance and storage usage. Drivers do most of the GPU specific storage management but sometimes they just have to notify dev that something has to give.
> 
> A PC has to prioritize the primary cache and the background cache. An image describes it simpler. UMA consoles dont have the background cache as much since most is loaded to memory on startup/load.


Yes, the driver and app itself can influence how much local VRAM a GPU needs.

However, memory type and performance does not influence this, unless it's already so slow that accessing non-local memory isn't appreciably slower.
Quote:


> Originally Posted by *tsm106*
> 
> Hey now, they're all experts with no actual experience on a product that doesn't exist in retail yet, whilst under NDA. Give them a break.


One does not need to be an expert to acknowledge a self-evident truth. 4GiB is 4GiB is 4GiB is 4GiB. Any tweak AMD does to make their GPUs more efficient at utilizing, compressing, or evicting unneeded data from memory cannot change this. However, it can change how that memory is used.

Do we know what changes AMD will make to memory management? No.

Do we know that 4GiB is not enough? No.

Do we know that HBM has exactly nothing to do with whether it's enough or not, except perhaps because its limitations are forcing AMD's driver team to take a closer look at memory management? Yes, we absolutely know this beyond any shadow of a doubt.


----------



## michaelius

Quote:


> Originally Posted by *Ganf*
> 
> It has been a problem for VR. No one has gotten a sub-10ms frametime yet, but AMD says they've nailed it. The only possible way they've done that is HBM and their new API, and you can bet they'll be showcasing both at E3.


Well, there's a much simpler explanation.

VR is a popular buzzword and some people think it's the future of gaming, so marketing departments are trying to show their products are future-proof.


----------



## Alatar

Quote:


> Originally Posted by *Blameless*
> 
> One does not need to be an expert to acknowledge a self-evident truth. 4GiB is 4GiB is 4GiB is 4GiB. Any tweak AMD does to make their GPUs more efficient at utilizing, compressing, or evicting unneeded data from memory cannot change this. However, it can change how that memory is used.
> 
> Do we know what changes AMD will make to memory management? No.
> 
> Do we know that 4GiB is not enough? No.
> 
> Do we know that HBM has exactly nothing to do with whether it's enough or not, except perhaps because its limitations are forcing AMD's driver team to take a closer look at memory management? Yes, we absolutely know this beyond any shadow of a doubt.


Exactly.


----------



## Ha-Nocri

Quote:


> Originally Posted by *Blameless*
> 
> No they aren't. The HBM chip in the picture you show isn't any different, it's just flipped over.
> This is completely backwards.
> 
> Faster but smaller is worse if the data you need to have locally doesn't fit.
> Yes, the driver and app itself can influence how much local VRAM a GPU needs.
> 
> However, memory type and performance does not influence this, unless it's already so slow that accessing non-local memory isn't appreciably slower.
> One does not need to be an expert to acknowledge a self-evident truth. 4GiB is 4GiB is 4GiB is 4GiB. Any tweak AMD does to make their GPUs more efficient at utilizing, compressing, or evicting unneeded data from memory cannot change this. However, it can change how that memory is used.
> 
> Do we know what changes AMD will make to memory management? No.
> 
> *Do we know that 4GiB is not enough? No.*
> 
> Do we know that HBM has exactly nothing to do with whether it's enough or not, except perhaps because its limitations are forcing AMD's driver team to take a closer look at memory management? Yes, we absolutely know this beyond any shadow of a doubt.


Yes, just wait and see. People pretend to know it won't be enough. I hope NV pays them well.


----------



## Ganf

Quote:


> Originally Posted by *michaelius*
> 
> Well there's much simpler explanation.
> 
> VR is popular buzzword and some people think it's future of gaming so marketing departments are trying to show their products are future proof.


If it were just marketing? Sure. But Oculus asked AMD to provide a solution to the latency problem for VR, AMD says they have delivered on that promise, and Oculus is not saying otherwise. The Oculus and all other VR headsets rely on that ability just to be a functional product. I'm sure if AMD weren't able to deliver, they'd have beaten down Nvidia's door and chained them up in the basement to get a solution out of them a year ago.


----------



## xlink

Quote:


> Originally Posted by *The Stilt*
> 
> 4GB of VRAM and heads will roll at AMD once again.
> Using HBM when limited to just 4GB is a huge mistake especially when it brings no additional performance. It will certainly reduce the power consumption and make the PCB design more simple and cheaper to produce but still.
> 
> They should have stayed with 512-bit GDDR5 interface.


I wouldn't be surprised if the tech was originally slated to be released a year or two earlier... at which time that limitation wouldn't have mattered so much.
It would've been an excellent stepping stone to higher capacity versions while simultaneously allowing for marginal cost reductions.

I actually don't think it'll matter for most people. Just lower your textures slightly.


----------



## Blameless

Quote:


> Originally Posted by *xlink*
> 
> I actually don't think it'll matter for most people. Just lower your textures slightly.


I don't think it will matter for most people either, but telling people they need to lower the textures slightly, in any situation/title, on their brand spanking new $650 flagship Fury whatever, is not going to fly with a lot of them.

I'll wait and see how the card does before I make a decision, but I am not spending 150% of what I spent on my Hawaii 18+ months ago only to have to compromise my performance or IQ settings.


----------



## Forceman

Quote:


> Originally Posted by *Ganf*
> 
> If it were just marketing? Sure, but Oculus asked for AMD to provide a solution to the latency problem for VR, and AMD says they have delivered on that promise, and Oculus is not saying otherwise. The Oculus and all other VR headsets are relying on that ability just to be a functional product, I'm sure if AMD wasn't able to deliver, they'd have beat down Nvidia's door and chained them up in the basement to get a solution out of them a year ago.


Are those solutions applicable to normal monitors? If they've suddenly solved the frame time problem, wouldn't that also benefit regular monitors and not just VR headsets, or is there some fundamental difference I'm not aware of? I haven't been following VR very closely.


----------



## RagingCain

Quote:


> Originally Posted by *Ganf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *RagingCain*
> 
> Could HBM cards have/use some suddenly more optimized caching algorithm ? Absolutely.
> 
> I just wanted to be clear that such a thing can exist without HBM. The bandwidth from VRAM to VRAM really hasn't been the problem; it's the bottleneck between VRAM and system RAM that's too slow. For all we know about HBM, nothing has been reported to address that problem. It's still up to devs to be responsible with the resources.
> 
> 
> 
> It has been a problem for VR. No one has gotten a sub-10ms frametime yet, but AMD says they've nailed it. The only possible way they've done that is HBM and their new API, and you can bet they'll be showcasing both at E3.
Click to expand...

AMD quite possibly would be correct. HBM can seriously improve frame times, but only in situations where the frame would otherwise be bandwidth-bound on GDDR5 or whatever exotic memory is in use.

Here are the visual differences, as we know to be true based on released information.


----------



## Slaughterem

Quote:


> Originally Posted by *Blameless*
> 
> I don't think it will matter for most people either, but telling people they need to lower the textures slightly, for any situation/title, on their brand spanking new $650 flagship Fury whatever, is not going to fly wit a lot of them.
> 
> I'll wait and see how the card does before I make a decision, but I am not spending 150% of what I spent on my 18+ month old Hawaii 18+ months ago to have to compromise my performance or IQ settings.


You have to lower your settings on the 980 Ti and Titan X as well to reach decent FPS at 4K, but let's not mention that here because it is NV and OK for them.


----------



## Ganf

Quote:


> Originally Posted by *Forceman*
> 
> Are those solutions applicable to normal monitors? If they've suddenly solved the frame time problem, wouldn't that also benefit regular monitors and not just VR headsets, or is there some fundamental difference I'm not aware of? I haven't been following VR very closely.


They keep saying that it's relying on AMD cards and the LiquidVR API being used together. My guess would be that if you're using a regular monitor LiquidVR will probably treat it differently and may not even be used at all.


----------



## SuprUsrStan

Quote:


> Originally Posted by *Alatar*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Blameless*
> 
> One does not need to be an expert to acknowledge a self-evident truth. 4GiB is 4GiB is 4GiB is 4GiB. Any tweak AMD does to make their GPUs more efficient at utilizing, compressing, or evicting unneeded data from memory cannot change this. However, it can change how that memory is used.
> 
> Do we know what changes AMD will make to memory management? No.
> 
> Do we know that 4GiB is not enough? No.
> 
> Do we know that HBM has exactly nothing to do with whether it's enough or not, except perhaps because its limitations are forcing AMD's driver team to take a closer look at memory management? Yes, we absolutely know this beyond any shadow of a doubt.
> 
> 
> 
> Exactly.
Click to expand...

What if it's 4GB of HBM + 4GB of regular RAM? That would add a whole new spin on this.


----------



## Forceman

Quote:


> Originally Posted by *Syan48306*
> 
> What if it's 4GB HBM + 4 GB of regular ram. That would add a whole new spin on this.


That would kind of defeat the whole purpose of going with HBM in the first place though.


----------



## Blameless

Quote:


> Originally Posted by *Slaughterem*
> 
> You have to lower your settings on the 980 Ti and Titan X as well to reach decent FPS at 4K, but let's not mention that here because it is NV and OK for them.


Try not to presume bias where there is none...
Quote:


> Originally Posted by *Blameless*
> 
> Regarding 4k, a single 980Ti is still falling a bit short of what I'd want


I'm more likely to have to lower my settings because of lack of VRAM with the GPU that has 33% less VRAM. 4K isn't slow on GM200 because GM200 cards lack VRAM; it's slow because the core GPU performance just isn't there yet. Maybe Fiji has more raw performance, maybe not. However, it does have less memory.


----------



## Ha-Nocri

Quote:


> Originally Posted by *Slaughterem*
> 
> You have to lower your settings on the 980 Ti and Titan X as well to reach decent FPS at 4K, but let's not mention that here because it is NV and OK for them.


Well, texture quality is different. It has little to no effect on performance but adds greatly to image quality, so I always leave it at the highest possible setting. 4GB is not a problem atm for 1440p, probably not even for 4K, but in the future it might be. But we have no clue how AMD plans to manage memory. Maybe they will work with game devs to achieve that, maybe they'll improve memory management in their drivers... time will tell.


----------



## Ganf

Quote:


> Originally Posted by *Forceman*
> 
> That would kind of defeat the whole purpose of going with HBM in the first place though.


Not if the drivers treated each appropriately. If AMD programmed the card to treat the GDDR5 as typical DRAM then it's a whole new ball game.

But that's just fantasy. They're not going to mix VRAM architectures.


----------



## Majin SSJ Eric

I'm gonna laugh so hard when Fiji launches and somehow uses less VRAM than GDDR5 cards in games and all the "experts" in this thread scramble to somehow explain it away. But 4GB is 4GB right????


----------



## Forceman

Quote:


> Originally Posted by *Ganf*
> 
> Not if the drivers treated each appropriately. If AMD programmed the card to treat the GDDR5 as typical DRAM then it's a whole new ball game.
> 
> But that's just fantasy. They're not going to mix VRAM architectures.


I meant from a physical/engineering standpoint, not software. You don't eliminate the memory controllers on-die with HBM just to put it back for some kind of hybrid solution.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'm gonna laugh so hard when Fiji launches and somehow uses less VRAM than GDDR5 cards in games and all the "experts" in this thread scramble to somehow explain it away. But 4GB is 4GB right????


I don't think most people are saying that they _can't_ do better VRAM management with drivers (after all, it appears Nvidia has done some), just that there isn't an inherent property of HBM that makes it not follow the same VRAM capacity rules as GDDR5. If HBM supported on-die compression or something, then certainly, but if your solution is to make better use of assets through intelligent drivers, that would seem to apply equally to GDDR5.

A lot is made of the huge bandwidth gains of HBM, but a 4GB card is going to have 512GB/sec, compared to 320 for Hawaii. So it's not like it is an order of magnitude faster or anything. It's only slightly more than the percentage gain from going 980 to 980 Ti. There are latency advantages as well, but from a top-line bandwidth number it's not that much more.
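Those figures fall out of the standard peak-bandwidth formula. A quick sketch (the 5.0 GT/s GDDR5 rate for Hawaii and the 1.0 GT/s per-pin HBM1 rate are the commonly reported values, not confirmed Fiji specs):

```python
def bandwidth_gbps(bus_width_bits, data_rate_gtps):
    # Peak memory bandwidth in GB/s: (pins * transfers per second) / 8 bits per byte
    return bus_width_bits * data_rate_gtps / 8

# Hawaii (R9 290X): 512-bit GDDR5 bus at 5.0 GT/s effective
hawaii = bandwidth_gbps(512, 5.0)     # 320.0 GB/s

# Rumoured Fiji: four HBM1 stacks, 1024 bits each, 1.0 GT/s per pin
fiji = bandwidth_gbps(4 * 1024, 1.0)  # 512.0 GB/s

print(fiji / hawaii)  # 1.6 -- a 60% gain, not an order of magnitude
```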


----------



## pengs

Quote:


> Originally Posted by *BinaryDemon*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Don't I know it. I've repeatedly said I think Fury will be faster. I still think the 4GB version is gonna sell like crap unless it undercuts the GTX 980 Ti in pricing.
> I'm not part of some ANTI-AMD campaign, it's just how I feel as a consumer.
> 
> 
> 
> I agree that _currently in most situations_, 4gb of vram is enough. This is evidenced by how well the R9 295x2 does against the Titan X. But when I spend $600 or more on a GPU, I'm trying to anticipate future vram usage for the next year or two as well. I don't have statistics but I think vram usage has increased dramatically in the past year or two, probably due to more vram available on the XB1 and PS4.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Now maybe it will stagnate for the next few years since we are stuck with those consoles, but it might not- maybe developers will continue to use more for PC versions of games to distinguish them graphically from consoles.
> 
> I'm betting Fury 4gb outperforms GTX980TI, but I don't think I'd buy that version at $650 or more. I'd wait for the 8gb version.


If DX12 is adopted quickly it's hard to say (given that the developer has no hardware target), as it pretty much allows them to use as many assets as they want.

Yeah, you're right, and it shows in next-gen games like Dying Light and GTA V; 4GB will probably do pretty well for the average gamer for a few years.


----------



## HiTechPixel

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'm gonna laugh so hard when Fiji launches and somehow uses less VRAM than GDDR5 cards in games and all the "experts" in this thread scramble to somehow explain it away. But 4GB is 4GB right????


HBM may very well pool assets and cache faster than GDDR5 which will lessen the burden on VRAM, but 4GB is 4GB.


----------



## Blameless

Quote:


> Originally Posted by *Ganf*
> 
> Not if the drivers treated each appropriately. If AMD programmed the card to treat the GDDR5 as typical DRAM then it's a whole new ball game.


A big point of HBM, barely second to performance potential, is to simplify the PCB and lower power. Having off package memory as well as HBM would do exactly the opposite.


----------



## RagingCain

Has there been talk of built-in Maxwell like compression?


----------



## criminal

Quote:


> Originally Posted by *RagingCain*
> 
> Has there been talk of built-in Maxwell like compression?


I believe so.


----------



## Ganf

Quote:


> Originally Posted by *Blameless*
> 
> A big point of HBM, barely second to performance potential, is to simplify the PCB and lower power. Having off package memory as well as HBM would do exactly the opposite.


Thus the word Fantasy was used.


----------



## Blameless

Quote:


> Originally Posted by *RagingCain*
> 
> Has there been talk of built-in Maxwell like compression?


Yes. Tonga had similar compression capabilities; Fiji should inherit them and possibly more.


----------



## Rmerwede

Not exactly news, but a quote from Joe Macri, AMD CTO that sort-of explains why 4GB shouldn't be a limiting factor:
Quote:


> "You're not limited in this world to any number of stacks, but from a capacity point of view, this generation-one HBM, each DRAM is a two-gigabit DRAM, so yeah, if you have four stacks you're limited to four gigabytes. You could build things with more stacks, you could build things with less stacks. Capacity of the frame buffer is just one of our concerns. There are many things you can do to utilise that capacity better. So if you have four stacks you're limited to four [gigabytes], but we don't really view that as a performance limitation from an AMD perspective."
> 
> "If you actually look at frame buffers and how efficient they are and how efficient the drivers are at managing capacities across the resolutions, you'll find that there's a lot that can be done. We do not see 4GB as a limitation that would cause performance bottlenecks. We just need to do a better job managing the capacities. We were getting free capacity, because with [GDDR5] in order to get more bandwidth we needed to make the memory system wider, so the capacities were increasing. As engineers, we always focus on where the bottleneck is. If you're getting capacity, you don't put as much effort into better utilising that capacity. 4GB is more than sufficient. We've had to go do a little bit of investment in order to better utilise the frame buffer, but we're not really seeing a frame buffer capacity [problem]. You'll be blown away by how much [capacity] is wasted."


Makes sense to me...


----------



## criminal

Quote:


> Originally Posted by *Rmerwede*
> 
> Not exactly news, but a quote from Joe Macri, AMD CTO that sort-of explains why 4GB shouldn't be a limiting factor:
> Makes sense to me...


That's an older comment from him, but let's hope he is right and not just blowing smoke. I know I want to be blown away by Fiji.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *criminal*
> 
> That's an older comment from him, but let's hope he is right and not just blowing smoke. I know I want to be blown away by Fiji.


Don't worry, once you see how fast those fans have to spin to dissipate all that heat, you may literally be blown away. /joke

I'm pumped to see what they have in store for us with this new memory technology. I don't think the performance increase will be less than 40%, and that would already put it a smidge above the Ti.


----------



## Forceman

Quote:


> Originally Posted by *Rmerwede*
> 
> Not exactly news, but a quote from Joe Macri, AMD CTO that sort-of explains why 4GB shouldn't be a limiting factor:
> Makes sense to me...


Which makes a lot of sense, but then why are they making 8GB standard on the 390X (if indeed they are)? Makes marketing's job tougher when your flagship card has less VRAM than your "normal" enthusiast card, especially if you are hand-waving the difference away with "better utilization" which would also seem to benefit the 390X.


----------



## tpi2007

Quote:


> Originally Posted by *RagingCain*
> 
> Has there been talk of built-in Maxwell like compression?


Tonga has lossless delta colour compression, so I assume at least that will go into Fiji. Perhaps they have put in more refinements too. But note that this compression serves to reduce the memory bandwidth needed; it doesn't translate to needing less VRAM to store the data once it arrives.
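A toy illustration of that bandwidth-vs-capacity distinction (generic delta encoding, not AMD's actual undisclosed algorithm): the deltas of a smooth gradient are small and pack into few bits on the wire, but the decoded data still occupies its full size.

```python
def delta_encode(pixels):
    # Lossless delta encoding: keep the first value, then store differences.
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def delta_decode(deltas):
    # Reverse the encoding by running-summing the deltas.
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 101, 102, 104, 104, 103]   # a smooth gradient
enc = delta_encode(row)                # [100, 1, 1, 2, 0, -1]
assert delta_decode(enc) == row        # lossless round trip

# The tiny deltas transfer cheaply (bandwidth saved), but after decoding
# the row is still six full values in VRAM (capacity unchanged).
```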

Quote:


> Originally Posted by *Blameless*
> 
> Yes. Tonga had similar compression capabilities; Fiji should inherit them and possibly more.


Similar in goal, yes, but not in performance. The R9 285 still has a 256-bit memory bus versus 128-bit on the 960. They are clocked differently, with the 285 at 5.5 GHz and the 960 at 7 GHz, but the resulting bandwidth is still much higher for the 285 at 176 GB/s versus 112 GB/s for the 960, so Nvidia is probably doing a better job for now. Let's see if Fiji comes with further improvements. Not that they need more with HBM, I suppose.
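For reference, those two figures check out with the usual formula, treating the quoted "clock" as the effective per-pin data rate:

```python
def gbps(bus_width_bits, data_rate_gtps):
    # Peak bandwidth: pins * effective data rate / 8 bits per byte
    return bus_width_bits * data_rate_gtps / 8

r9_285  = gbps(256, 5.5)   # 176.0 GB/s
gtx_960 = gbps(128, 7.0)   # 112.0 GB/s
print(r9_285, gtx_960)
```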

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Don't worry, once you see how fast those fans have to spin to *dissipate all that heat* you may literally be blown away. /joke
> 
> *I'm pumped* to see what they have in store for us with this new memory technology. I don't think the performance increase will be lower than 40% and that will already put it a smidge above the Ti.


AIO WC, I see what you did there.


----------



## Redwoodz

Quote:


> Originally Posted by *Forceman*
> 
> Which makes a lot of sense, but then why are they making 8GB standard on the 390X (if indeed they are)? Makes marketing's job tougher when your flagship card has less VRAM than your "normal" enthusiast card, especially if you are hand-waving the difference away with "better utilization" which would also seem to benefit the 390X.


Because the Hawaii-based cards still use GDDR5, hence 8GB. It's not hard to prove that 4GB of HBM outperforms 8GB of GDDR5. The utilization gains need the faster VRAM system; how is that hard to understand?


----------



## EniGma1987

Quote:


> Originally Posted by *Redwoodz*
> 
> Because the Hawaii based cards still use GDDR5,hence 8GB. It's not hard to prove that 4GB of HBM outperforms 8GB GDDR5. The utilization gains need the faster VRAM system,how is that hard to understand?


We should probably wait to talk about how various parts of the GPU are better utilizing resources until after we have actually seen how the card works.


----------



## Woundingchaney

Quote:


> Originally Posted by *Redwoodz*
> 
> Because the Hawaii based cards still use GDDR5,hence 8GB. It's not hard to prove that 4GB of HBM outperforms 8GB GDDR5. The utilization gains need the faster VRAM system,how is that hard to understand?


It is hard to understand from a marketing standpoint. The vast majority of any consumer base isn't technically informed (yes, even the high-end GPU consumer base). Launching a flagship card with half the memory of your other offerings, and a third less than your competition's flagship, still appears limited in comparison. I have no doubt that they are refining VRAM capabilities, but suggesting that a 4GB VRAM pool is future-proof, even with newer technology, is not going to give many consumers much faith.

For AMD it is more of a perception concern than a technical hurdle (not to suggest that they won't have to do some impressive optimizations with the tech itself).


----------



## xxdarkreap3rxx

Has anyone seen this yet?

http://www.reddit.com/r/Amd/comments/38i077/heres_a_little_jsfiddle_for_you_to_play_and/


----------



## Forceman

Quote:


> Originally Posted by *Redwoodz*
> 
> Because the Hawaii based cards still use GDDR5,hence 8GB. It's not hard to prove that 4GB of HBM outperforms 8GB GDDR5. The utilization gains need the faster VRAM system,how is that hard to understand?


HBM outperforms GDDR5 in speed, obviously, but I'm still waiting for someone to explain how it outperforms in capacity.


----------



## Vesku

Quote:


> Originally Posted by *prjindigo*
> 
> Yeah, see, those are epoxy covers, not the actual dies. So they're bigger than the chips and thus a heat spreader. Tho in factory manufactured water cooling eh


Hynix listed the dimensions "with mold" in their Hotchips paper, 7.29mm x 5.48mm: http://www.hotchips.org/wp-content/uploads/hc_archives/hc26/HC26-11-day1-epub/HC26.11-3-Technology-epub/HC26.11.310-HBM-Bandwidth-Kim-Hynix-Hot%20Chips%20HBM%202014%20v7.pdf

If those HBM dies on the revealed interposer are much bigger than that, then that would make Fiji 700+ mm², which seems unlikely.

The main barrier to >4GB HBM 1 is that technical limitations have meant HBM 1 can only go 4 layers high and each layer is 2Gbit (256MB) for 1GB a stack. Original plans were to have both 4 high (1GB) and 8 high (2GB) HBM 1 stacks. Apparently HBM 1 doesn't have a "clamshell" mode where you share the 1024 bit interface with 2 stacks so the only way to get >4GB on a HBM 1 4096 bit GPU is to have some sort of routing/interleaving chip to handle the memory for the GPU. Based on the chip AMD CEO Lisa Su showed at Computex it doesn't look like the current version will have greater than 4GB.
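The capacity math above is simple to check (a sketch assuming the production HBM1 configuration described: 4-high stacks of 2 Gbit dies, four stacks total):

```python
dies_per_stack = 4    # HBM1 shipped as 4-high stacks
gbit_per_die = 2      # each DRAM layer is 2 Gbit (256 MB)
stacks = 4            # one stack per 1024-bit channel group

gb_per_stack = dies_per_stack * gbit_per_die / 8   # 1.0 GB per stack
total_gb = stacks * gb_per_stack                   # 4.0 GB total
print(total_gb)  # 4.0
```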


----------



## Vesku

Quote:


> Originally Posted by *Woundingchaney*
> 
> It is hard to understand from a marketing standpoint. The vast majority of any consumer base isn't technically informed (yes even the high end gpu consumer base). Launching a flagship card with half the memory of your other offerings and 33% of the offerings of your competitions flagship offering still appears limited in comparison. I have no doubt that they are refining Vram capabilities, but to suggest that 4GB for a Vram pool is future proof even with newer technology is not going to give many consumers much faith.
> 
> For AMD it is more of a perception concern, even more so than a technical hurdle (not to suggest that they wont have to do some impressive optimizations with the tech itself).


GTX 680/670s sold well and they had a third less RAM than 7970/7950. The core metric is game performance.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Forceman*
> 
> HBM outperforms GDDR5 in speed, obviously, *but I'm still waiting for someone to explain how it outperforms in capacity*.


Maybe you'll find out, you know, when the thing is released? Nobody here (or anywhere else really, outside of AMD and Hynix) is an expert on HBM so you won't find those answers here...


----------



## criminal

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Has anyone seen this yet?
> 
> http://www.reddit.com/r/Amd/comments/38i077/heres_a_little_jsfiddle_for_you_to_play_and/


That's pretty cool, but does that actually prove anything I wonder?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *criminal*
> 
> That's pretty cool, but does that actually prove anything I wonder?


No idea, that's why I posted it asking


----------



## criminal

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> No idea, that's why I posted it asking


Gotcha.


----------



## Forceman

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Maybe you'll find out, you know, when the thing is released? Nobody here (or anywhere else really, outside of AMD and Hynix) is an expert on HBM so you won't find those answers here...


I realize that, but an awful lot of people here seem to be accepting that it won't be a problem because AMD says so. Forgive me for being skeptical and not taking AMD's word for it without some kind of explanation.

As an aside, does anyone else get the feeling that this Computex announcement is going to be a repeat of the Hawaii one, where we get a lot of talk about sort-of related stuff (like TrueAudio / LiquidVR) but no real info on the cards? I'm thinking it's going to be a while before these questions get answered.


----------



## Serandur

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Has anyone seen this yet?
> 
> http://www.reddit.com/r/Amd/comments/38i077/heres_a_little_jsfiddle_for_you_to_play_and/


That is incorrect. The script's writer seems to be operating under several false assumptions. One, that VRAM bandwidth has anything to do with the rate data is moved into and out of VRAM from the rest of the system (it doesn't; that would be PCI-E bandwidth, which is external to the onboard memory and memory buses). And two, that the width of an individual chip interface (32-bit or 1024-bit) is the bandwidth, which is wrong because it fails to account for all of the chip interfaces in total and for the clock speed of those chips. The latter point wouldn't matter even if it accurately represented actual bandwidth, however, because of the first point.

Assets in VRAM are stored there (even if not immediately needed) specifically because pulling them from HDDs/SSDs over PCI-E is way too slow. HBM can't mitigate that even if it were 1000x faster.
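The scale of that gap is easy to put numbers on (the HBM1 figure assumes the rumoured four-stack Fiji configuration; the PCI-E figure is the theoretical 3.0 x16 peak):

```python
# PCI-E 3.0 x16: 16 lanes * 8 GT/s per lane, 128b/130b encoding, 8 bits/byte
pcie3_x16_gbps = 16 * 8 * (128 / 130) / 8   # ~15.75 GB/s

# Rumoured Fiji HBM1: 4 stacks * 1024 bits * 1.0 GT/s per pin
hbm1_gbps = 4 * 1024 * 1.0 / 8              # 512.0 GB/s

# Any working set that spills past local VRAM crosses a link ~32x slower.
print(hbm1_gbps / pcie3_x16_gbps)
```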


----------



## jdstock76

Quote:


> Originally Posted by *Asus11*
> 
> SMH... might invest in AMD stocks while low.. hoping for samsung to overtake them


My E*trade wallet waits in anticipation.


----------



## tsm106

Quote:


> Originally Posted by *jdstock76*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Asus11*
> 
> SMH... might invest in AMD stocks while low.. hoping for samsung to overtake them
> 
> 
> My E*trade wallet waits in anticipation.
Click to expand...

Been thinking about moving 10 to liquid in case it's time to move on an uptick. Hmm... we'll know whether it's good or not weeks before it hits the wire.


----------



## geoxile

Quote:


> Originally Posted by *Serandur*
> 
> That is incorrect. The script's writer seems to be under several false assumptions. One, that VRAM bandwidth has anything to do with the rate data is written into and taken out of VRAM (which it doesn't; that would be PCI-E bandwidth which is external to onboard memory and memory buses). And two, that the width size of individual chip interfaces (32-bit and 1024-bit) are the bandwidth, which is completely wrong as it fails to account for all of the chip interfaces in total and fails to account for clock speed of those chips. The latter point wouldn't matter even if it accurately represented actual bandwidth however because of the first point.
> 
> Assets in VRAM are stored there (even if not immediately necessary) specifically because pulling them from HDDs/SSDs over PCI-E is way too slow. HBM can't mitigate that even if it was 1000x faster.


The VRAM stores more than just assets. It also stores stuff like the shaders, vertex buffer, frame buffer, etc.

When you increase the rendering resolution in your game or increase AA and the VRAM usage goes up, it isn't because there are more assets or higher res assets being loaded.


----------



## Majin SSJ Eric

Regardless of what AMD comes out with in a couple weeks, it must be said that Nvidia has released a seriously good card at a decent price in the 980 Ti. It's a helluva lot more interesting card than the 780 Ti was...


----------



## tsm106

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Regardless of what AMD comes out with in a couple weeks, it must be said that Nvidia has released a seriously good card at a decent price in the 980Ti. *Its a helluva lot more interesting card than the 780Ti was.*..


Is it really? It seems to me that it did what the 780 Ti did, except the 980 Ti is slightly slower than its Titan counterpart where the 780 Ti was faster than its.


----------



## Majin SSJ Eric

Yeah, but it's got a much better GPU in it (GM200 is so much beastlier than GK110 ever was) and 6GB of high-speed GDDR5. The cut down from the full GM200 in the 980 Ti barely affects performance at all...


----------



## criminal

Quote:


> Originally Posted by *tsm106*
> 
> Is it really? It seems to me that it did what the 780ti did except the 980ti is slightly slower where the 780ti was faster than its counterpart.


I have to agree with this. 980Ti is boring!


----------



## GorillaSceptre

Quote:


> Originally Posted by *criminal*
> 
> I have to agree with this. 980Ti is boring!


I wouldn't call something that makes a $1000 GPU obsolete boring.

Most of the recent cards have been boring as far as tech goes. The only really exciting thing to come along lately is HBM.

So I guess it's only boring next to a Fury.

I'm just hoping the 4GB won't be its Achilles heel.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Yeah but its got a much better GPU in it (GM200 is so much beastlier than GK110 ever was) and 6GB high speed GDDR5. The cut down from the full GM200 in the 980Ti barely affects performance at all...


The 780 Ti was the answer to the 290X. The 980 Ti is an early answer to whatever AMD comes up with. We can't judge the 980 Ti until Fiji is out.


----------



## Redwoodz

Quote:


> Originally Posted by *Serandur*
> 
> That is incorrect. The script's writer seems to be under several false assumptions. One, that VRAM bandwidth has anything to do with the rate data is written into and taken out of VRAM (which it doesn't; that would be PCI-E bandwidth which is external to onboard memory and memory buses). And two, that the width size of individual chip interfaces (32-bit and 1024-bit) are the bandwidth, which is completely wrong as it fails to account for all of the chip interfaces in total and fails to account for clock speed of those chips. The latter point wouldn't matter even if it accurately represented actual bandwidth however because of the first point.
> 
> Assets in VRAM are stored there (even if not immediately necessary) specifically because pulling them from HDDs/SSDs over PCI-E is way too slow. HBM can't mitigate that even if it was 1000x faster.


He's not illustrating bandwidth,he's showing the effects of wider access(bus) with that script.

This shows the effect of more bandwidth. http://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/
HD 7750 1GB GDDR5 vs HD 7750 2GB DDR3


Bandwidth does matter,as does bus width,to a certain point.


----------



## Ganf

Quote:


> Originally Posted by *GorillaSceptre*
> 
> I wouldn't call something that makes a $1000 GPU obsolete, boring.
> 
> Most of the recent cards have been boring as far as tech goes. The only really exciting thing to come along lately is HBM.
> 
> So I guess it's only boring next to a Fury.
> 
> I'm just hoping the 4GB won't be its Achilles' heel.


Meh, memory stacking is confirmed. If Fury releases at 4gb I'll just buy one now and wait for the 8gb version before buying my second, for a total of 12gb in DX12 games.

I don't own any DX11 games that use 4gb, so why not?


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Ganf*
> 
> Meh, memory stacking is confirmed. If Fury releases at 4gb I'll just buy one now and wait for the 8gb version before buying my second, for a total of 12gb in DX12 games.
> 
> I don't own any DX11 games that use 4gb, *so why not*?


My OCD would never allow me to have mismatched cards in my box...


----------



## Ganf

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> My OCD would never allow me to have mismatched cards in my box...


They all look the same with a waterblock and backplate.

And if they don't? Oh well, no window.


----------



## Majin SSJ Eric

Yeah, but I'd KNOW they weren't the same. In my heart.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Ganf*
> 
> Meh, memory stacking is confirmed. If Fury releases at 4gb I'll just buy one now and wait for the 8gb version before buying my second, for a total of 12gb in DX12 games.
> 
> I don't own any DX11 games that use 4gb, so why not?


If memory stacking is dev-dependent (which I assume it is), then it will never be supported.

The GPU I get just needs to last till the end of next year. I still think this thing will be a monster. Now if only E3 would hurry up so we can see.


----------



## Woundingchaney

Quote:


> Originally Posted by *Ganf*
> 
> Meh, memory stacking is confirmed. If Fury releases at 4gb I'll just buy one now and wait for the 8gb version before buying my second, for a total of 12gb in DX12 games.
> 
> I don't own any DX11 games that use 4gb, so why not?


Memory stacking is confirmed as being possible, and is currently possible depending on the rendering technique. I'm thinking it is a bit premature to believe that all games, or even the majority of games, are going to support SLI/CrossFire memory stacking. DX12 games will slowly start to trickle in, but compatibility with the API isn't necessarily enough on its own. Buying GPUs right now based on the notion of DX12 supporting memory stacking is not necessarily a good choice.


----------



## Woundingchaney

Quote:


> Originally Posted by *Vesku*
> 
> GTX 680/670s sold well and they had a third less RAM than 7970/7950. The core metric is game performance.


1/3 less RAM equated to a difference of 1 GB. Also, there wasn't the shift in the industry then that we are seeing now. Right now VRAM has been getting a lot of attention. Everything from large texture files to high-end resolutions to debacles with split memory segments has taken up a large portion of the gaming media coverage, along with how VRAM usage has gone up and affected performance.

I agree that performance will always be a key indicator, but this isn't something that AMD can really afford right now. They are launching this card in the middle of a blitzkrieg from their competition, which owns somewhere around 75% of the market, has a very loyal following and better marketing, and just released a high-end card that people are literally drooling over. If its performance is only 5%-8% higher than the 980 Ti's at a higher price point, I don't see this card being very well adopted. Even if it is 5%-8% above the 980 Ti at the same price point, I don't expect to see the card sell anywhere near the 980 Ti's numbers.

I have no doubt that if AMD had the ability right now to launch this card with over 4GB of memory, it would be the first bullet point on their docket, simply because they know how important it is to marketing right now.


----------



## Ganf

Quote:


> Originally Posted by *GorillaSceptre*
> 
> If memory stacking is dev-dependent (which I assume it is), then it will never be supported.
> 
> The GPU I get just needs to last till the end of next year. I still think this thing will be a monster. Now if only E3 would hurry up so we can see.


No, it's controlled directly by DX12 from everything that's been said about it so far. I'm actually hunting down a video from AMD's talk at computex where they confirmed it because I want to know how they described it word for word for that very reason.


----------



## Forceman

Quote:


> Originally Posted by *Ganf*
> 
> No, it's controlled directly by DX12 from everything that's been said about it so far. I'm actually hunting down a video from AMD's talk at computex where they confirmed it because I want to know how they described it word for word for that very reason.


Try to find it if you can, I'd like to watch it because everything I've seen indicates that it is developer controlled and game specific. The same ability exists in Mantle, but didn't go anywhere.


----------



## Serandur

Quote:


> Originally Posted by *geoxile*
> 
> The VRAM stores more than just assets. It also stores stuff like the shaders, vertex buffer, frame buffer, etc.
> 
> When you increase the rendering resolution in your game or increase AA and the VRAM usage goes up, it isn't because there are more assets or higher res assets being loaded.


Of course, but space demands for those aren't really affected by bandwidth (or more precisely, individual chip/stack bus width) either and in the modern age especially, assets do take up quite a large share. It is especially prominent with VRAM caching in current and upcoming game engines.

Quote:


> Originally Posted by *Redwoodz*
> 
> He's not illustrating bandwidth,he's showing the effects of wider access(bus) with that script.
> 
> This shows the effect of more bandwidth. http://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/
> HD 7750 1GB GDDR5 vs HD 7750 2GB DDR3
> 
> 
> Bandwidth does matter,as does bus width,to a certain point.


He specifically labelled 32-bit and 1024-bit for GDDR5 and HBM as "bandwidth" and by themselves, they mean nothing with regards to any data transfer. Fiji will be 4 of those 1024-bit stacks at an effective clock rate of 1.25 GHz resulting in 640 GB/s of bandwidth as per rumors whereas Hawaii, for example, totals a 512-bit bus with a much higher effective frequency (5 GHz) totaling 320 GB/s of bandwidth. In terms of actual bandwidth, Fiji is looking to be around 2x Hawaii and GM200/GK110 for that matter rather than the 32x a 32-bit memory bus with an unspecified equivalent frequency would yield.

I also refer back to point #2 from my post which you quoted in that the bandwidth has no bearing on capacity anyway (which is the point of the script, otherwise there is no point because of course everyone already knows 1024 is a bigger number than 32; the point was to demonstrate that HBM's speed means Fiji's 4 GBs aren't at a disadvantage relative to GDDR5). Data stored and kept in VRAM is still there for the same reason regardless of how quickly the GPU can access it: there's no other area the data can be stored where the GPU can effectively access it without regressing to stone-age level transfer speeds. It really doesn't matter what kind it is or how fast the GPU can access it from its local memory, it needs to be stored somewhere. We've gotten up to the 300-400 GB/s point with GDDR5 and it's done no good in that regard either. GM200 doesn't have any advantage in memory capacity demands over GM206 just because it has 3x the bandwidth, for example, and that's a larger gap than even this first-gen HBM is giving Fiji over its contemporaries and predecessor.

As for the HD 7750; well, first off it's important to note that bandwidth and VRAM capacity are two different and differently-altering specifications. There is a point for each card where too much bandwidth can simply go to waste without the shaders to take advantage of it in a given scenario and the same is true for VRAM, but in a different, more hard-limit way. The two points are separate because one measurement (bandwidth) is dealing with an aspect of direct performance (GPU-VRAM communication speeds) and the other (capacity) is dealing with simple space that doesn't alter performance at all until you run out. The former is gradual, the latter is a hard limit as I mentioned.

The HD7750 in particular is a pretty modern and not particularly weak card, but it has a very narrow 128-bit memory bus so high-speed memory is rather crucial. GDDR3 is naturally not sufficient for such a card; it's an ancient technology and a 7750 would be bandwidth-starved by it. Of course, cards from 2004-2005 wouldn't be. That context is important; depends on which specific processor and which specific amounts of bandwidth or capacity to determine which it would benefit from more for a specific task. The 7750 is being tested in a measly 720p resolution for those tests on games/benchmarks (like Heaven, which is much more a test of processing limitations than VRAM) from several years ago; of course bandwidth is a more urgent limitation than a 1 GB memory capacity in that case and even if the capacity wasn't fully sufficient, you'd have a difficult time knowing unless it was severe since drops would manifest themselves as brief periods of stutter and stalled GPU activity, not sustained lower performance as with bandwidth.

*Bandwidth does not equal capacity. They are two completely different specifications and neither is a substitute or band-aid for the other in any way. The points from which different GPUs would benefit more from either depends on the specific GPU, the application, the settings of the application, and on the specific bandwidth or capacities being compared.* It is not an either/or scenario (or at least, on the high-end, it really shouldn't be) and since games are an amalgamation of various rendering demands from texturing to lighting to physics, one cannot be generalized as being more important than the other without these indicators of context. The 2 GB GDDR3 vs 1 GB GDDR5 HD 7750 scenario cannot be accurately extrapolated to all such bandwidth vs capacity comparisons, as I've seen done lately. For example, there's very little reason to think Hawaii or GM200 are anywhere near as constrained by GDDR5 as the 7750 is by GDDR3, but we do see capacity limitations as a rising concern. The difference is HBM is necessary for the future, but not decisively so for the present GPUs.
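
The headline figures traded back and forth in this thread are easy to sanity-check. A quick sketch, using the rumored (unofficial) clocks quoted above:

```python
# Peak memory bandwidth = bus width (bits) x effective data rate per pin / 8.
# Figures below are the rumored specs from this thread, not official numbers.
def bandwidth_gbs(bus_width_bits: int, effective_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a memory interface."""
    return bus_width_bits * effective_gbps_per_pin / 8

hawaii = bandwidth_gbs(512, 5.0)        # 512-bit GDDR5 at 5 Gbps effective
fiji = bandwidth_gbs(4 * 1024, 1.25)    # four 1024-bit HBM stacks at 1.25 Gbps effective

print(hawaii)  # 320.0 (GB/s)
print(fiji)    # 640.0 (GB/s)
```

Which is exactly the ~2x gap described above: the 1024-bit width of a single HBM stack means nothing on its own without the stack count and the per-pin data rate.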


----------



## Boomstick727

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Yeah, but I'd KNOW they weren't the same. In my heart.


Haha, right there with ya, couldn't do it. Haven't run a dual setup in a while though. Found micro stutter a deal breaker.

Anybody know if DX12 will make micro stutter any better / get rid of it?


----------



## Ganf

Quote:


> Originally Posted by *Forceman*
> 
> Try to find it if you can, I'd like to watch it because everything I've seen indicates that it is developer controlled and game specific. The same ability exists in Mantle, but didn't go anywhere.


Nope, I was wrong, the verbiage is right there in the slide that confirms it can be done.



It's up to the developers after all.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Ganf*
> 
> No, it's controlled directly by DX12 from everything that's been said about it so far. I'm actually hunting down a video from AMD's talk at computex where they confirmed it because I want to know how they described it word for word for that very reason.


Quote:


> Originally Posted by *Ganf*
> 
> Nope, I was wrong, the verbiage is right there in the slide that confirms it can be done.
> 
> It's up to the developers after all.


Damn.. Just saw your reply, and you got my hopes up.

That would have changed everything.


----------



## Ganf

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Damn.. Just saw your reply, and you got my hopes up.
> 
> That would have changed everything.


I can see some developers going for it though. It'd be great for local co-op. 2 monitors, 2 cards, one computer. We're just not going to see it in any AAA titles, since EA, Ubisoft and company have decided that local co-op is dead.


----------



## tajoh111

Has AMD ever sorted out its issues with the R9 285 and Mantle? Remember the BF4/Thief Mantle mess with the R9 285? There was significant performance regression on those cards when Mantle was applied vs DirectX 11.

AMD was saying the R9 285 wasn't working properly with Mantle; reviewers were saying it was because games running the Mantle API had heavier memory requirements.

Both are relevant, since DirectX 12 is similar to Mantle and Fiji is likely using a similar architecture to Tonga, so hopefully AMD has worked these issues out. A lot of people were reporting increased memory usage when Mantle was turned on, and it was generally agreed that 3GB of memory was the minimum to have it work properly at *1080p*.


----------



## mouacyk

Quote:


> Originally Posted by *Ganf*
> 
> I can see some developers going for it though. It'd be great for local co-op. 2 monitors, 2 cards, one computer. We're just not going to see it in any AAA titles, since EA, Ubisoft and company have decided that local co-op is dead.


That's exactly the ideal scenario for combined VRAM usage. In all other scenarios, load-balancing the work of rendering the current frame across multiple GPUs becomes the biggest issue, and heuristics are often the only generic solution. Is the cost (complexity and runtime) of the heuristics to split the work (allocate assets correctly) worth the gain...? In general, first-person games become complex quickly when trying to split up work, so only a few GPUs would be feasible. For puzzle games, platformers, etc., where assets tend to stay in the same horizontal or vertical regions, or stay the same size and are static, it would be easier to scale up the number of GPUs. However, that'd be a waste of 3x+ Titan X's... for puzzle platformers.


----------



## Ganf

Quote:


> Originally Posted by *mouacyk*
> 
> That's exactly the ideal scenario for the combined VRAM usage. In all other scenarios, load-balancing of the work to render the current frame to multiple GPU's becomes the biggest issue, and heuristics is often the only generic solution. Is the cost (complexity and runtime) of the heuristics to split the work (allocate assets correctly) worth the gain... ? In general, first-person games become complex quickly when trying to split up work, so only a few GPUs would be feasible. Puzzle games, platformers, etc where assets tend to stay in the same horizontal or vertical regions or stay the same size and are static, it would be easier to scale up the number of GPUs. However, that'd be a waste of 3x+ Titan X's... for puzzle platformers.


Second ideal scenario, and the one they're pushing it for now, is VR. One GPU per eye. I'd love to see the day when a high-end PC can act as an entertainment hub for an entire household through similar methods, though. It would really put desktop PCs back on the map. Just a few more features and we'll be there.


----------



## Blameless

Quote:


> Originally Posted by *Forceman*
> 
> HBM outperforms GDDR5 in speed, obviously, but I'm still waiting for someone to explain how it outperforms in capacity.


It can't and it doesn't. Any statement to the contrary, or any statement to the equivalent of being able to fit more than 4,294,967,296 _actual_ bytes in 4GiB of HBM is an absurd impossibility.

A byte is a byte is a byte. Doesn't matter if it's on HBM, GDDR5, a 30-pin FPM SIMM, a cassette tape, a mercury delay line, or a friggin abacus.

Could AMD improve memory management? Sure.

Could there be some form of compression unique to Fiji? Sure.

Could AMD relegate any driver/memory management improvements to Fiji cards? Sure.

Could 4GiB of HBM hold more bits than 4GiB of GDDR5? No way in hell.
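
The byte math above is trivial to spot-check; 4 GiB is the same count on any medium:

```python
# "A byte is a byte is a byte": capacity is independent of the memory technology.
GIB = 1024 ** 3
print(4 * GIB)  # 4294967296 -- the figure quoted above
```
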
Quote:


> Originally Posted by *Redwoodz*
> 
> This shows the effect of more bandwidth. http://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/
> HD 7750 1GB GDDR5 vs HD 7750 2GB DDR3


Obviously if you only need 1GiB of memory 1GiB of fast memory is better than 2GiB of slow memory.

Almost never did I choose more slower memory over less faster memory in the past, unless I was sure I needed more memory...but in the latter case, the decision was all but made for me. Slow memory is bad, not enough is much worse.
Quote:


> Originally Posted by *Redwoodz*
> 
> Bandwidth does matter,as does bus width,to a certain point.


Bus width facilitates bandwidth.
Quote:


> Originally Posted by *Ganf*
> 
> Anybody know if DX12 will make micro stutter any better / get rid of it?


Multi-GPU microstutter largely exists because each card is typically rendering its own frame (AFR, alternate frame rendering). If these frames aren't drawn and delivered at the proper intervals, you either get perceptible jerkiness, or simply no perceptible benefit from the additional frames. Both AMD and NVIDIA have frame pacing mechanisms to mitigate this, but they don't always work well enough (and AMD lacks them entirely for D3D 9, and perhaps still OGL).

The DX12 rendering method that pools GPU resources won't use AFR and should have no more pacing issues than a single card config.
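
The AFR cadence problem described above can be sketched with a toy model (illustrative timings, not measurements from any real card):

```python
# Toy model of AFR micro-stutter: two GPUs each finish a frame every 20 ms,
# but GPU 1 runs 3 ms out of phase with GPU 0, so presented frames arrive
# 3 ms / 17 ms apart instead of at a smooth 10 ms cadence.
gpu0 = [t for t in range(0, 100, 20)]        # GPU 0 present times (ms)
gpu1 = [t + 3 for t in range(0, 100, 20)]    # GPU 1 present times (ms)

present = sorted(gpu0 + gpu1)
intervals = [b - a for a, b in zip(present, present[1:])]
print(intervals)  # [3, 17, 3, 17, 3, 17, 3, 17, 3] -- the uneven delivery is the stutter
```

Frame pacing is essentially the driver delaying the early frame so the intervals even out toward 10 ms.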


----------



## Asmodian

Quote:


> Originally Posted by *Ganf*
> 
> Second ideal scenario, and the one they're pushing it for now, is VR. One GPU per eye. I'd love to see the day when a high end PC can act as an entertainment hub for an entire household through similar methods though. It would really put desktop PC's back on the map. Just a few more features and we'll be there.


Is there a lot in the way of assets that can be split when rendering VR though? The frame buffers could be separate but all the textures would still need to be in memory on both GPUs.


----------



## prjindigo

HEY HEY HEY HEY HEY HEY WHOA

Hold on. I'm pretty sure:

#1 Even under Dx12, cards can't "pool" their memory without massive lost time on bus transfers; they're actually going to do sections of the work, and thus only load the assets they need for the process they're doing. Which means Card 1 will be doing the early stuff and chopping up the assets, and Card 2 will be doing the lighting etc. and then displaying. Trying to load textures from Card 1's memory to Card 2's GPU shaders is just effing pointless.

#2 Since Dx12 allows direct drive of the graphics cards the card doesn't need to be full of all sorts of extra BS that the companies have had to do to make them work. Games will no longer need to "unfold" like some crazy sci-fi virus into the graphics card in order to operate. So a Dx12 (or 11-under-12) game won't be using as much card memory to do the same thing as a Dx11 or earlier under windows8.1 or earlier. We've already seen the rather hefty increases in performance that 2.0 brings to graphics output.

#3 Almost none of you know the difference between a heat spreader and an encasement crown.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Blameless*
> 
> It can't and it doesn't. Any statement to the contrary, or any statement to the equivalent of being able to fit more than 4,294,967,296 _actual_ bytes in 4GiB of HBM is an absurd impossibility.
> 
> A byte is a byte is a byte. Doesn't matter if it's on HBM, GDDR5, a 30-pin FPM SIMM, a cassette tape, a mercury delay line, or a friggin abacus.
> 
> Could AMD improve memory management? Sure.
> 
> Could there be some form of compression unique to Fiji? Sure.
> 
> Could AMD relegate any driver/memory management improvements to Fiji cards? Sure.
> 
> Could 4GiB of HBM hold more bits than 4GiB of GDDR5? No way in hell.
> .


I have not seen one single person claim that 4GB of HBM can hold more physical memory than 4GB of GDDR5. That is a straw man. What many people are wondering is whether, since the memory bandwidth is so much faster, there will be a need to cache so much data on the memory chips at one time. Think of it like a reservoir in a cooling loop. If the pump is fast enough, it is absolutely possible to put much more water in the loop than the reservoir itself can hold. This is of course because the water is flowing into and out of the reservoir faster than it can fill up. I am just wondering if the same kind of thing could apply to HBM: saturating the GPU with data fast enough that it doesn't need to load 5, 6, 7GB of data in the cache to wait around for the memory transfer. Regardless of what you think (and even though I respect your knowledge immensely), if the card is able to play games without a hitch at settings that would require more than 4GB on a regular GDDR5 card, then the answer will be clear and everything you are saying about 4GB being 4GB will be wrong (or at least irrelevant)...


----------



## Blameless

Quote:


> Originally Posted by *prjindigo*
> 
> Even under Dx12 cards can't "pool" their memory without having massive lost time on the bus transfers, that they're actually going to do sections of the work and thus only load the assets that they need for the process they are doing. Which means that Card 1 will be doing the early stuff and chopping up the assets and card 2 will be doing the lighting and etc and then displaying. Trying to load textures from card1's memory to card 2's GPU shaders is just effing pointless.


Yes.

The memory won't be unified, so "pool" is a bit of a misnomer. Still, the total VRAM addressable to the graphics system will be cumulative, to a degree, rather than nearly perfectly mirrored as it is with most current multi-GPU rendering methods.


----------



## magnek

Quote:


> Originally Posted by *Wishmaker*
> 
> You make it sound like from the 29th of July when W10 is out, everything will be DX 12. Every new DX version, the transition took a while and I would not be surprised if DX 12 becomes more widespread when NVIDIA brings their own version of HBM.


Yeah, DX12 adoption will take a while, but the reduced CPU overhead when run under WDDM 2.0 in Win10 alone should net some nice gains, assuming the following is not just a one-off incident.
Quote:


> Originally Posted by *magnek*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Blameless*
> 
> DX11 overhead is improving in Windows 10. Freesync needs to work on more GPU/display configurations, which is something AMD can improve. The issues with overdrive are more in the hands of display manufacturers.
> 
> 
> 
> Indeed. Prime example right there: (guy used 3570K @ 4GHz)
> 
> 
> 
> 
> +30% performance under Win10. Wonder if this is a Win10 specific optimization, or whether it could trickle down to older OS's?


----------



## Blameless

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I have not seen one single person claim that 4GB of HBM can hold more physical memory than 4GB of GDDR5. That is a straw man.


It's not a strawman as a counter to the response you gave Forceman to his, "still waiting for someone to explain how it outperforms in capacity", statement. If it's not holding more, it's not outperforming it in capacity.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> What many people are wondering is if the fact that the memory bandwidth is so much faster, if there will be a need to cache so much memory on the memory chips at one time?


The faster the memory, the greater the relative penalty for not being able to use it.

If Fiji can make use of ~640GB/s of memory bandwidth, the last thing you want is to have to dip into main memory across a sub-16GB/s interface. PCI-E is only 20 times slower than the memory on a Hawaii card, but it's 40 times slower than the memory on Fiji. That PCI-E bus will be more of a bottleneck, not less of one.
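
The ratios quoted above follow directly from the numbers (PCI-E 3.0 x16 is roughly 16 GB/s; the VRAM figures are the rumored ones from this thread):

```python
# How much slower the PCI-E link is than each card's local VRAM.
PCIE_GBS = 16.0  # approximate PCI-E 3.0 x16 bandwidth, "sub-16GB/s" in the post

hawaii_ratio = 320.0 / PCIE_GBS   # Hawaii's GDDR5 vs PCI-E
fiji_ratio = 640.0 / PCIE_GBS     # Fiji's rumored HBM vs PCI-E

print(hawaii_ratio)  # 20.0 -- PCI-E is 20x slower than Hawaii's VRAM
print(fiji_ratio)    # 40.0 -- and 40x slower than Fiji's
```

In other words, the faster the local memory, the more painful every spill over the bus becomes.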
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Think of it like a reservoir in a cooling loop. If the pump is fast enough it is absolutely possible to put much more water in the loop than the reservoir itself can hold. This is of course because the water is flowing into and out of the reservoir faster than it can fill up.


This doesn't make any sense at all. The overwhelming majority of WC loops are closed circuits (one of the reasons it's called a loop). The internal volume of the loop can easily be larger than the capacity of the reservoir itself, irrespective of the pump, or even in the absence of a pump.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I am just wondering if the same kind of thing could apply to HBM, saturating the GPU with data fast enough that it doesn't need to load 5,6,7GB of data in the cache to wait around for the memory transfer?


This doesn't make any sense either.

If you have a game that needs five GiB of assets on hand to render a scene, no ability to manipulate 70-80% of those assets is going to make the remainder travel over PCI-E any faster. The GPU will just sit idle for a greater portion of time waiting for the data it needs.

I once replaced a 7900GS 512 with an 8800GTS 320. The 7900GS was twice as fast in "ultra" quality Doom 3 and LoTRO, but half as fast on the next setting down... because 320MiB wasn't enough for ultra textures in those games, but it was enough for "high". No amount of VRAM bandwidth would have significantly improved the 320's performance on ultra, because that wasn't the bottleneck.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Regardless of what you think (and even though I respect your knowledge immensely) if the card is able to play games without a hitch at settings that would require more than 4GB on a regular GDDR5 card, then the answer will be clear and everything you are saying about 4GB being 4GB will be wrong (or at least irrelevant)...


Unless they specifically fail to give the non-Fiji parts similar memory management optimizations, this is an unlikely scenario.

I'm not certain that 4GiB won't be enough, but if it ever falls significantly short, even a much slower card with more memory could provide a better experience.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Blameless*
> 
> It's not a strawman as a counter to the response you gave Forceman to his, "still waiting for someone to explain how it outperforms in capacity", statement. If it's not holding more, it's not outperforming it in capacity.
> 
> The faster the memory, the greater the relative penalty for not being able to use it.
> 
> If Fiji can make use of ~640GB/s of memory bandwidth, the last thing you want is to have to dip into main memory across a sub-16GB/s interface. PCI-E is only 20 times slower than the memory on a Hawaii card, but it's 40 times slower than the memory on Fiji. That PCI-E bus will be more of a bottleneck, not less of one.
> This doesn't make any sense at all. The overwhelming majority of WC loops are closed circuits (one of the reasons it's called a loop). The internal volume of the loop can easily be larger than the capacity of the reservoir itself, irrespective of the pump, or even in the absence of a pump.
> This doesn't make any sense either.
> 
> If you have a game that needs five GiB of assets on hand to render a scene, no ability to manipulate 70-80% of those assets is going to make the remainder travel over PCI-E any faster. The GPU will just sit idle for a greater portion of time waiting for the data it needs.
> 
> I once replaced a 7900GS 512 with a 8800GTS 320. The 7900GS was twice as fast in "ultra" quality Doom 3 and LoTRO, but half as fast on the next setting down...because 320MiB wasn't enough for ultra textures in those games, but it was enough for "high". No amount of VRAM bandwidth would have significantly improved the 320s performance on ultra, because that wasn't the bottleneck.
> Unless they specifically fail to give the non-Fiji parts similar memory management optimizations, this is an unlikely scenario.
> 
> I'm not certain that 4GiB won't be enough, but if ever falls significantly short, even a much slower card with more memory could provide a better experience.


What you are saying makes no sense to me. The GPU is not using every bit of data that is being held in memory at all times. The memory caches the assets in preparation for what the chip is going to need before it needs it, then feeds them to the GPU. Once it's been rendered, the data is dumped and no longer needed. This means that if the transfer rate is fast enough, the memory won't necessarily have to hold as much at any given time, because the GPU is constantly utilizing that data and dumping it as it's processed. Obviously if the GPU needed more than 4GB of data to render a single frame, then that would lead to what you are talking about with it having to go into system memory to finish the draw, but I have not seen any evidence that this amount of data is actually necessary to render every single frame (unless we are talking tons of texture mods, etc.). As was posted above, there is constantly a flow of data into memory that is waiting for the GPU to need it, not the other way around.

Of course I'm not an expert so maybe I'm totally wrong but my point is that when the card releases we will see if HBM bandwidth alleviates this need to cache so much data in memory when it either works or doesn't at high resolutions/settings...


----------



## Slomo4shO

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Regardless of what AMD comes out with in a couple weeks, it must be said that Nvidia has released a seriously good card at a decent price in the 980Ti. Its a helluva lot more interesting card than the *780* was...


FTFU

It's not the full chip, just like the GTX 780... It's also priced identically to the launch price of the GTX 780...

We will likely see a fully unlocked GM200 by the holidays... Just like the GTX 780 Ti.

Milking: it's what Nvidia does best.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Slomo4shO*
> 
> FTFU
> 
> It is not the full chip just like the GTX 780... It is also priced identical to the launch price of the GTX 780...
> 
> We will likely see a fully unlocked GM200 by the holidays... Just like the GTX 780 Ti
> 
> Milking, its what Nvidia does best


980Ti Black

Or could it be the rumored "metal" 980?

I stuffing hate 28nm..


----------



## magnek

Well, the 780 had 2304 cores vs. the 780 Ti's 2880. That's a difference of 576 cores, or 20% fewer than the 780 Ti. Plus the 780 Ti's memory was clocked 1GHz faster as well, and this resulted in the 780 Ti being almost 20% faster than the 780, stock for stock.

Here the 980 Ti only has 256, or about 8%, fewer cores than the Titan X, running the same 7GHz GDDR5. Even if we assume a full 980 Ti would use better-binned chips to stay within the TDP envelope and boost to higher clocks, it's unlikely to be 10% faster than the cut-down 980 Ti, if even that. So I can kind of see why a full 980 Ti wouldn't be very interesting.
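Those gaps are easy to check with the public core counts (spec numbers, not benchmarks):

```python
# Core counts from the public spec sheets for each card.
kepler = {"GTX 780": 2304, "GTX 780 Ti": 2880}
maxwell = {"GTX 980 Ti": 2816, "Titan X": 3072}

def deficit_pct(cut, full):
    """Percentage of cores the cut-down part is missing vs. the full chip."""
    return 100 * (full - cut) / full

print(deficit_pct(kepler["GTX 780"], kepler["GTX 780 Ti"]))   # 20.0
print(deficit_pct(maxwell["GTX 980 Ti"], maxwell["Titan X"])) # ~8.3
```

So Kepler's cut was more than twice as deep as Maxwell's, which is the whole argument in two numbers.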


----------



## KarathKasun

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> What you are saying makes no sense to me. The GPU is not using every bit of data that is being held in memory at all times. The memory caches the assets in preparation for what the chip is going to need before it needs it, then feeds it to the GPU. Once its been rendered it is dumped and no longer needed. This means that if the transfer rate is fast enough the memory won't necessarily have to hold as much at any given time because the GPU is constantly utilizing that data and dumping it as its processed. Obviously if the GPU needed more than 4GB of data to render a single frame then that would lead to what you are talking about with it having to go into system memory to finish the draw but I have not seen any evidence that this amount of data is actually necessary to render every single frame (unless we are talking maybe tons of texture mods etc. As was posted above, there is constantly a flow of data into memory that is waiting for the GPU to need it, not the other way around.
> 
> Of course I'm not an expert so maybe I'm totally wrong but my point is that when the card releases we will see if HBM bandwidth alleviates this need to cache so much data in memory when it either works or doesn't at high resolutions/settings...


The graphics pipeline is a loop: data goes into the GPU and comes back out to RAM as a finished frame. Data cannot just disappear. What they may have is better full-memory compression, with every asset committed to VRAM being highly compressed with a lossless algorithm. If you could compress 8GB of image data at 2:1, it would fit into 4GB.
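Whether a 2:1 lossless ratio is realistic depends entirely on the data, though. A quick sketch with zlib (just a stand-in; whatever algorithm the hardware might use is unknown) shows how wide the spread is:

```python
import os
import zlib

# Highly redundant data (think a flat, single-color texture).
redundant = b"\x00" * 1_000_000
# Noise-like data (think an already-compressed or detailed texture).
noisy = os.urandom(1_000_000)

# Redundant data shrinks enormously...
print(len(zlib.compress(redundant)))  # a tiny fraction of 1 MB
# ...but noise-like data doesn't compress at all.
print(len(zlib.compress(noisy)))      # roughly the original size
```

A guaranteed halving of arbitrary VRAM contents with a lossless algorithm is therefore a very optimistic assumption.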


----------



## Vesku

Quote:


> Originally Posted by *magnek*
> 
> Well 780 had 2304 cores, vs the 780 Ti which had 2880. That's a difference of 576 cores or 20% less than 780 Ti. Plus the 780 Ti's memory was clocked 1GHz faster as well, and this resulted in 780 Ti being almost 20% faster than 780 stock for stock.
> 
> Here the 980 Ti only has 256 or 9% less cores than the Titan X, running the same 7GHz GDDR5. Even if we assume the full 980 Ti uses better binned chips to keep within the TDP envelope and boost to higher clocks, it's unlikely to be 10% faster than cut down 980 Ti, if even that. So I can kind of see why a full 980 Ti wouldn't be very interesting.


The step up from the 980 Ti would be a 6GB Titan X; there really isn't much else they can squeeze out of GM200. They could follow AMD's lead and make the reference cooler a CLC, I guess.


----------



## HiTechPixel

Quote:


> Originally Posted by *Blameless*
> 
> Multi-GPU microstutter largely exists because each card is typically rendering it's own frame (AFR, alternate frame rendering). If these frames aren't drawn and delivered at the proper intervals, you either get perceptible jerkiness, or simply no perceptible benefit from the additional frames. Both AMD and NVIDIA have frame pacing mechanisms to mitigate this, but they don't always work well enough (and AMD lacks them entirely for D3D 9, and perhaps still OGL).


I think anything older than DX11 flat out doesn't work well enough in Crossfire, but someone with actual Crossfire cards may prove me wrong.


----------



## twitchyzero

Quote:


> Originally Posted by *Slomo4shO*
> 
> FTFU
> 
> It is not the full chip just like the GTX 780... It is also priced identical to the launch price of the GTX 780...
> 
> We will likely see a fully unlocked GM200 by the holidays... Just like the GTX 780 Ti
> 
> Milking, its what Nvidia does best


am i missing something here?

The Titan X is a fully unlocked GM200.

If you meant an unlocked GM200 with a 6GB frame buffer, then I doubt that's in the works. Aside from the low end to lower mid-range (i.e. GT 920 to GTX 950 Ti), I think Nvidia will be committed to Pascal/HBM going forward.


----------



## Myst-san

I was thinking: if DX12 allows the combined memory buffers to be seen as one, will it be possible to see two GPUs as one? Like having two GPUs with 1024 cores each that, combined, act like one GPU with 2048. The game engine would not distinguish between the two GPUs, so in theory it would get better scaling when adding more GPUs. The only thing is that x16 PCI-E 3.0 would be a bottleneck, since it only has about 16 GB/s of bandwidth per direction. Since the frame would be fragmented between all the GPUs, all of that communication would go over PCI-E until the frame is completed, whereas with AFR only the finished frame gets sent to the primary GPU.
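For reference, the PCI-E 3.0 x16 ceiling falls straight out of the spec's raw numbers (8 GT/s per lane, 128b/130b encoding, 16 lanes):

```python
# PCIe 3.0 x16 theoretical bandwidth per direction, from the spec figures.
gt_per_s = 8e9        # 8 GT/s per lane
encoding = 128 / 130  # 128b/130b line encoding overhead
lanes = 16

bytes_per_s = gt_per_s * encoding * lanes / 8  # bits -> bytes
print(bytes_per_s / 1e9)  # ~15.75 GB/s
```

That is tiny next to on-card VRAM bandwidth, which is exactly why shuttling per-frame data between GPUs over PCI-E is the weak link in this idea.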


----------



## epic1337

Quote:


> Originally Posted by *Myst-san*
> 
> I was thinking, If DX12 allows for combine memory buffer to be seen one, will it be possible to see 2 GPU' as one. Like having 2 GPU with 1024 cores and when combine will be like 1 GPU with 2048 . The game engine will not make a difference between the 2 GPUs, so in theory it will get better scaling with adding more GPUs. I only thing that the x16 PCI-E 3 will be a bottleneck since it has a 252 GB/s max bandwidth. Since the frame will be fragmented between all the GPUs, all of the communication will go through the PCI-ex until the frame is completed. Whereas with AFR only the finished get send to the primary GPU


nope, it's the software coding that matters.

DX12's combined GPU memory buffer is very similar to how CPU cores and DRAM are used.
just because it shares VRAM as one pool doesn't mean it'd see the GPUs as one,
similar to CPUs still being seen as multiple cores regardless of how many channels of RAM you have.


----------



## Myst-san

Quote:


> Originally Posted by *epic1337*
> 
> nope, its the software coding that matters.
> 
> DX12 GPU combined memory buffer is very similar to how CPU cores and DRAM are used.
> just because it shares VRAM as one doesn't mean it'd see GPUs as one.
> similar to CPUs still being seen as multiple cores regardless of how many channels of ram you have.


The GPU is already multi-core, and adding more cores in the form of a second GPU should not make a big difference.


----------



## Xuper

This Test is really nice :

https://jsfiddle.net/xhhqthrc/


----------



## harney

Quote:


> Originally Posted by *Xuper*
> 
> This Test is really nice :
> 
> https://jsfiddle.net/xhhqthrc/


I am not too sure this is correct


----------



## epic1337

Quote:


> Originally Posted by *Myst-san*
> 
> The GPU is already multi core and adding more cores in the form of a second GPU should not make a big difference.


yes, but each GPU is in itself a stand-alone unit.
you'd have to take out the host scheduler to make all the cores under one GPU slaves of the other.

so rather than making things more problematic, just assign each GPU its own work and let them peek at each other's work (VRAM).

think of it like two people solving different but related problems. instead of giving them two separate desks with two separate sheets plus copies of each other's, adjoin the desks and let them look at each other's sheets directly.
so instead of wasting desk space and sheets (VRAM) on duplicates, you get a larger total desk space with fewer sheets used on the tasks for two people.


----------



## Blameless

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Once its been rendered it is dumped and no longer needed.


Until you need to render the next frame.

Your GPU isn't going through data sequentially and never touching it again (at least not in the sort of apps most people will use them for, like games); the same assets will be used over and over and over to render innumerable frames.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> As was posted above, there is constantly a flow of data into memory that is waiting for the GPU to need it, not the other way around.


Where was this posted?

Most apps are going to try to prefetch what they can, but that doesn't mean frequent access over PCI-E is a desirable thing. The entire reason why assets are prefetched and retained is because the penalties for not having them on hand when you need them are so high.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Of course I'm not an expert so maybe I'm totally wrong but my point is that when the card releases we will see if HBM bandwidth alleviates this need to cache so much data in memory when it either works or doesn't at high resolutions/settings...


More memory bandwidth just cannot do this.

We will see if Fiji and its drivers have memory-management tweaks that make 4GiB enough in those few situations where it's currently not, but 4GiB will either be enough or it won't, almost completely irrespective of the memory bandwidth of that 4GiB.
Quote:


> Originally Posted by *Xuper*
> 
> This Test is really nice :
> 
> https://jsfiddle.net/xhhqthrc/


No, it's not. It's misleading in numerous ways.

If its intent was simply to show the memory bandwidth of a single GDDR5 IC (which has a 32-bit bus) vs. that of a single HBM stack (which is 1024-bit), then at the very least the capacities are mislabeled. There are no 4GB HBM stacks yet, nor is there ever likely to be a 4GB GDDR5 IC; using Gb would be plausible, though. Even assuming Gb, I'm not sure why anyone would bother comparing a 32-bit GDDR5 interface to an HBM stack for any practical purpose.

If you take it to show the difference in bandwidth between a GDDR5 memory system and an HBM system, then the assumption about relative memory bandwidth is uselessly lopsided and irrelevant to any comparison between Fiji and any other GPU. Fiji has roughly double the memory bandwidth of a Hawaii or GM200 part, not thirty-two times as much.

If you take it to show how faster memory could help render a scene faster, then it almost makes sense, except again for the absurd discrepancy in memory bandwidth.

If you take it to show how faster memory could somehow make you need less memory for a game or most other rendering jobs, then you have to make a series of absurd, borderline impossible assumptions about how memory will be used.

No one should mistake that animation as an accurate representation of anything relevant to any of the discussions going on in this thread about Fiji.
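The "roughly double, not thirty-two times" point can be checked against the commonly cited spec figures (per-pin rates below are assumptions from public spec sheets, not measurements):

```python
def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_bits * gbps_per_pin / 8

gddr5_ic  = bandwidth_gbs(32, 7.0)     # one 32-bit GDDR5 IC at 7 Gbps: 28 GB/s
hbm_stack = bandwidth_gbs(1024, 1.0)   # one HBM1 stack at 1 Gbps: 128 GB/s

fiji   = 4 * hbm_stack                 # 4 stacks: 512 GB/s
hawaii = bandwidth_gbs(512, 5.0)       # 290X: 320 GB/s
gm200  = bandwidth_gbs(384, 7.0)       # 980 Ti / Titan X: 336 GB/s

print(fiji / hawaii)  # ~1.6x, nowhere near 32x
```

Chip-to-chip the ratio really is large (128 vs. 28 GB/s), but card-to-card, which is what matters for a game, it's around 1.5-1.6x.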


----------



## Myst-san

Quote:


> Originally Posted by *epic1337*
> 
> yes, but GPUs are in itself, a stand-alone unit.
> you'd have to take out the host scheduler to make all the cores under the GPU as slaves.
> 
> so rather than making things more problematic, just assign their own work on each GPUs and let them peak at each other's work (VRAM).


I think I get what you mean.

So to be able to combine two GPUs, you would need to have only one scheduler for both, or at least have one of the schedulers act as the main one.


----------



## Ha-Nocri

Quote:


> Originally Posted by *harney*
> 
> I am not too sure this is correct


Well, the principle is correct. Wider bus means data can be pulled from VRAM faster, so more data is being deleted in the same period of time.


----------



## RagingCain

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Quote:
> 
> 
> 
> Originally Posted by *harney*
> 
> I am not too sure this is correct
> 
> 
> 
> Well, the principle is correct. Wider bus means data can be pulled from VRAM faster, so more data is being deleted in the same period of time.
Click to expand...

What that script is doing is 100% bologna.


----------



## Myst-san

Quote:


> Originally Posted by *RagingCain*
> 
> What that script is doing is 100% bologna.


If you change the script to gddrBandW = 128, there will be no difference from HBM.


----------



## harney

What we should start doing is placing bets (virtual, of course) as to where the Fury is going to stand in performance against the current lineup... at least give us something to do until the benchmarks arrive... ±2%

My bet is the top-end Fury X (or whatever they're going to call it) is 7-9% faster than the Titan on average


----------



## RagingCain

Quote:


> Originally Posted by *Myst-san*
> 
> Quote:
> 
> 
> 
> Originally Posted by *RagingCain*
> 
> What that script is doing is 100% bologna.
> 
> 
> 
> If you change the script with gddrBandW = 128 . There will be no difference with HBM.
Click to expand...

Well, they both can only get new data at the same rate: long-term storage -> system RAM -> VRAM.

They don't have to delete data; it can simply be overwritten...

There are so many things wrong with what this thing is doing. As mentioned previously, the data transfer rates are at best doubled when comparing two high-end cards. It is not orders of magnitude faster in these cases.

Secondly, if 2GB of VRAM is necessary, they both have 2GB of VRAM in use. The same video game, with the same settings, in the same scene at the same time, is going to use the exact same amount of memory on a GDDR5 card as on an HBM card (as it stands now with AMD's memory management). I don't get what people do not understand about this. Any capacity optimization, prioritization, or queueing can equally be applied to GDDR5 as to HBM. There isn't anything special about HBM's capacity.

Internal data transfers will be faster, as will the frame buffers. Getting new data is equally slow, while deleting data is optional when GC runs the show.


----------



## harney

Quote:


> Originally Posted by *RagingCain*
> 
> Well they both can only get new data at the same rate. The Long Term Storage -> System RAM -> VRAM
> 
> They don't have to delete data, it can simply be overwritten...
> 
> There are so many things wrong with what this thing is doing. As mentioned previously the data transfer rates at best are doubled when comparing two high end cards. There are no magnitudes of it being faster in these cases.
> 
> Secondly, if 2GB of VRAM is necessary, they both have 2GB of VRAM in use. The same video game, with the same settings, in the same scene at the same time is going to use the exact same amount of memory on GDDR5 as it will on an HBM card (as it stands now with AMDs memory management) . I don't get what people do not understand about this. Any capacity optimization, or prioritization, or queueing equally can be applied to GDDR5 as it can HBM. There isn't anything special about HBM's capacity.
> 
> Internal data transfers will be faster, as well as having very fast frame buffers. Getting new data is equally slow while deleting data is optional with GC runs the show.


Agree

Well, if AMD are trying to optimize for HBM and do manage to pull something off, then it's good news all round... as it should then apply to GDDR5 too... but wait, that will be a feature for HBM-only cards even though it could be applied to GDDR5... For all we know, the reason for the optimization of the Fury is that it's way too fast and they're doing some fancy gimping optimization to slow it down...


----------



## shadow85

Quote:


> Originally Posted by *harney*
> 
> My Bet is the top end fury x (or whatever there going to call it) is 7% 9% faster than the titan on average


It had better at least be upwards of the Titan, or all this hype about HBM and stuff is a waste if GDDR5 still beats it, not to mention AMD taking their sweet arse time to get it out.


----------



## harney

Quote:


> Originally Posted by *shadow85*
> 
> It better atleast be upwards of the Titan, or all this hype about HBM and stuff is a waste if GDDR5 still beats it, not to mention AMD taking their sweet arse time to get it out.


My crystal ball tells me it will be faster, but it's also saying 280-300 watts to do so. Only time will tell.


----------



## Blameless

I'm not super worried about power consumption. I can cool everything attached to the interposer with my old swiftech CPU block and some zip ties, then epoxy a pile of random heatsinks to the VRM and put a fan over it...if I decide to get one.


----------



## Redwoodz

Quote:


> Originally Posted by *harney*
> 
> My crystal ball tells me it will be faster but its all so saying 280 300 watt to do so only time will tell


I think it's almost impossible for it not to beat the Titan at high resolutions; after all, the 290X is not that far behind. The interesting thing will be how it stacks up at 1080p.


----------



## SoloCamo

Quote:


> Originally Posted by *Redwoodz*
> 
> Quote:
> 
> 
> 
> Originally Posted by *harney*
> 
> My crystal ball tells me it will be faster but its all so saying 280 300 watt to do so only time will tell
> 
> 
> 
> I think it's almost impossible for it not to beat Titan at high resolutions,after all 290X is not that far behind.The interesting thing will be how it stacks against 1080p performance.
Click to expand...

If you are buying a 980ti/Titan X or Fury for 1080p you are doing it wrong


----------



## 47 Knucklehead

Quote:


> Originally Posted by *SoloCamo*
> 
> If you are buying a 980ti/Titan X or Fury for 1080p you are doing it wrong


Maybe he wants to use 3 1080p panels?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *SoloCamo*
> 
> If you are buying a 980ti/Titan X or Fury for 1080p you are doing it wrong


Why's that? Can a single Ti/Titan X/Fury X drive 1080p @ 144 Hz?


----------



## 47 Knucklehead

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Why's that. Can a single Ti/Titan X/Fury X drive 1080p @ 144 Hz?


Sure ... Pong at 1920x1080, low settings.


----------



## flopper

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Why's that. Can a single Ti/Titan X/Fury X drive 1080p @ 144 Hz?


what settings?

Normally, no.
A Titan X can't sustain 60 fps in Witcher 3 at 1080p.
4K? Forget it.


----------



## xxdarkreap3rxx

Sounds like the 980 Ti for 144 Hz @ 1080p isn't that bad of a choice then


----------



## Woundingchaney

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Sounds like the 980 Ti for 144 Hz @ 1080p isn't that bad of a choice then


For a single card solution, right now it is most likely your best bet value wise (unless you want to wait on the Fury).


----------



## harney

Quote:


> Originally Posted by *flopper*
> 
> what settings?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Normally no.
> Titanx cant sustain 60fps witcher 3 at 1080p.
> 4k? Forget it.


Witcher 3 is not optimized very well... I'm still waiting for them to fix the horse stutter (which is animation-based and drives me mad) and the slow panning stutters before I play...
Patch 1.05 is on its way; the devs have said it should help sort a lot of issues out and hopefully be optimized and tweaked a little better.


----------



## prjindigo

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> What you are saying makes no sense to me. The GPU is not using every bit of data that is being held in memory at all times. The memory caches the assets in preparation for what the chip is going to need before it needs it, then feeds it to the GPU. Once its been rendered it is dumped and no longer needed. This means that if the transfer rate is fast enough the memory won't necessarily have to hold as much at any given time because the GPU is constantly utilizing that data and dumping it as its processed. Obviously if the GPU needed more than 4GB of data to render a single frame then that would lead to what you are talking about with it having to go into system memory to finish the draw but I have not seen any evidence that this amount of data is actually necessary to render every single frame (unless we are talking maybe tons of texture mods etc. As was posted above, there is constantly a flow of data into memory that is waiting for the GPU to need it, not the other way around.
> 
> Of course I'm not an expert so maybe I'm totally wrong but my point is that when the card releases we will see if HBM bandwidth alleviates this need to cache so much data in memory when it either works or doesn't at high resolutions/settings...


Bud... I've got a 295x2 that does just fine in Crossfire with an effective 4GB of RAM in 4K mode. It really handles it well. Arguing whether 4GB of VRAM is enough to handle high resolutions is a bit like arguing whether there's room for dust to settle on a shelf. While I don't get the framerate I'd like when running 4K on a single GPU, I've NEVER been bottlenecked by the card's memory.

So the issue here is that you don't even understand how DX9/11 work, and nobody understands how DX12 works yet. Still there's an argument.

WHY IS THERE AN ARGUMENT?!?


----------



## Ganf

Quote:


> Originally Posted by *flopper*
> 
> what settings?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Normally no.
> Titanx cant sustain 60fps witcher 3 at 1080p.
> 4k? Forget it.


If you aren't using your drivers to force 4k downscaling, 16xAA, post processing, edge filtering, deinterlacing and SSAO on your Atari emulator you're doing it wrong.


----------



## prjindigo

The 4GB Fiji is a major market trial of a new technology. If I can spare the money I will buy and use one without a doubt. We need to encourage out-of-the-box thinking because we're currently boxed in.
If AMD hadn't played around with Mantle we'd not have DX12 right now. I'm willing to chip in to get a high-quality card.

Hell, I'm willing to see what can happen with my 290X, 295x2, and Fury all in the same box!


----------



## harney

Quote:


> Originally Posted by *prjindigo*
> 
> Bud... I've got a 295x2 that does just fine in xfire with an effective 4gb ram in 4k mode. It really handles it well. Arguing whether 4GB of vRam is enough to handle high resolutions is a bit like arguing whether there's room for dust to settle on a shelf. While I don't get the framerate I'd like when running 4k on a single core I've NEVER been bottlenecked by the card's memory.
> 
> So the issue here is that you don't even understand how Dx9/11 work and nobody understands how Dx12 works yet. Still there's an argument.
> 
> WHY IS THERE AN ARGUMENT?!?


"bit like arguing whether there's room for dust to settle on a shelf."









Right there is the fine argument


----------



## harney

Quote:


> Originally Posted by *prjindigo*
> 
> The 4GB Fiji is a major market trial of a new technology. If I can spare the money I will buy and use one without a doubt. We need to encourage out-of-the-box thinking because we're currently boxed in.
> If AMD hadn't played around with Mantle we'd not have Dx12 right now. I'm willing to chip in to get a high quality card.
> 
> Hell, I'm willing to see what can happen with my 290X 295x2 and Fury all in the same box!


Agreed re AMD. I've not been liking Nvidia's tactics lately and would rather support AMD... I'm waiting on the Fury, and yes, I hope it succeeds for the sake of all of us, as I and many others would not like a world where we have just one GPU firm...

However, the 980 Ti is very tempting, and good on Nvidia for releasing early... but as I am struggling at 3440x1440 with my current 970 even with an extreme overclock, it's either Fury or Ti... so I have the cash in one hand and a massive bucket of popcorn in the other, ready for news on the Fury so this war can commence...


----------



## Redwoodz

Quote:


> Originally Posted by *harney*
> 
> Agree re AMD not been liking nvids tactics lately and would rather support amd ...i am awaiting on the flurry and yes i hope it succeeds for the sake of all of us as i and many others would not like i world where we just have one gpu firm .....
> 
> however the 980 ti is very tempting and good on nvid releasing early ...but as i am struggling at 3440x 1440p with my current 970 with an extreme over-clock ..its either flurry or ti ...so i have the cash in one hand and a massive bucket of popcorn in the other ready for news on the flurry so this war can commence ....


What if the Grenada 390X 8GB releases at $595? What would that tell you?


----------



## Nvidia Fanboy

Quote:


> Originally Posted by *Slomo4shO*
> 
> FTFU
> 
> It is not the full chip just like the GTX 780... It is also priced identical to the launch price of the GTX 780...
> 
> We will likely see a fully unlocked GM200 by the holidays... Just like the GTX 780 Ti
> 
> Milking, its what Nvidia does best


We already have a fully unlocked GM200. It was released a few months ago. It was sort of a big deal here on OCN.


----------



## Ganf

Quote:


> Originally Posted by *Redwoodz*
> 
> What if the Grenada 390X 8GB releases at $595. What would that tell you?


It'd tell me that AMD is out of their gourd because there is no way they'll get anything close to 980ti performance out of a rebranded 290x.


----------



## Redwoodz

It is not just a rebranded Hawaii. It will compete with the 980/980Ti. They have done a lot of work with "Tonga"-type color compression and power savings, and possibly some 28nm process improvements. $595 is an AIB's retail price, so a custom cooler as well. The only way it gets released at that price is if it is competitive with the 980Ti. And it's not even their 2nd-tier flagship.

http://www.sweclockers.com/nyhet/20623-amd-radeon-300-serien-dyker-upp-i-prislista

4,990kr equals $595.


----------



## Kane2207

Quote:


> Originally Posted by *Redwoodz*
> 
> It is not a just rebranded Hawaii. It will compete with 980/980Ti. They have done a lot of work with "Tonga" type color compression and power savings,also possibly some 28nm process improvements.$595 is an AIB's retail price,so custom cooler as well. The only way it is released at that price is it is competitive with 980Ti. And it's not even their 2nd tier flagship.


Whilst you're predicting the future, can I have the winning numbers for tonight's EuroMillions lottery, please?

I'll even buy you a Fury if you nail all 6 for the £70m


----------



## rt123

Maybe the 390X is Hawaii with 3072 cores.
There was a rumor last year of a Higher core count 290X.


----------



## Alatar

I'm still waiting for someone to actually show some proof that AMD's Hawaii replacements are anything other than higher-clocked Hawaii with more memory.

I mean, CodeXL doesn't have any new chips other than Fiji, the drivers don't have any new chips other than Fiji, and the 300-series chips that were listed had Hawaii device IDs.

Even the recent videocardz performance leaks would point at a direct 290X rebrand with slightly higher memory and core clocks, because that's what the performance matched.

Where's the proof?
Quote:


> Originally Posted by *Redwoodz*
> 
> It is not a just rebranded Hawaii. It will compete with 980/980Ti. They have done a lot of work with "Tonga" type color compression and power savings,also possibly some 28nm process improvements.$595 is an AIB's retail price,so custom cooler as well. The only way it is released at that price is it is competitive with 980Ti. And it's not even their 2nd tier flagship.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> ?
> 
> http://www.sweclockers.com/nyhet/20623-amd-radeon-300-serien-dyker-upp-i-prislista
> 
> 4,990kr equals $595.


Swedish prices almost always have VAT included.

For those Swedish prices the likely USD MSRP is $499, though personally I think it's going to be $450 or so.

Over here an average 290X is already close to $450 USD.
Quote:


> Originally Posted by *rt123*
> 
> Maybe the 390X is Hawaii with 3072 cores.
> There was a rumor last year of a Higher core count 290X.


Baseless rumor that was debunked by Dave Baumann.


----------



## Ganf

Subtract VAT, and take into consideration that some companies that put prices up early are gouging. $499 is the most they could sanely charge for the 390X, and they don't have enough tricks in their bag to make it compete with the 980Ti. Take it from someone who owns one: I can only get 10% above a stock 980 with a 20% overclock, and mine is one of the higher-overclocking cards, burning 450W just to do that. No amount of compression, architecture improvements, or GCN tweaks is going to get them the other 10% they're missing between my card and a 980Ti.

AMD knows how to keep their cards relevant, but they don't know how to break the laws of physics, put it in a box, and sell it for $599.


----------



## Redwoodz

Quote:


> Originally Posted by *Alatar*
> 
> I'm still waiting for someone to actually show some actual proof that AMD's hawaii replacements are anything other than higher clocked hawaii with more memory.
> 
> I mean codeXL doesn't have any new chips other than Fiji, drivers don't have any new chips other than Fiji and the 300 series chips that were listed had hawaii device IDs.
> 
> Even the recent videocardz performance leaks would point at a direct 290X rebrand with slightly higher mem and core clocks because that's what the performance matched.
> 
> Where's the proof?
> Swedish prices almost always have VAT included.
> 
> For those swedish prices the likely USD msrp is $499 however personally I think it's going to be $450 or so.
> 
> Over here an average 290X is already close to $450 USD.
> Baseless rumor that was debunked by Dave Baumann.


The article quotes 3,000kr for the 290X vs. 4,990kr for the 390X. That's a 40% increase. No way they would charge 40% more for the same performance.

Let's say they didn't include VAT and the extra 4GB in that comparison, and they add cost for new features like DX12, etc.; you could still easily expect at least a 15% increase in performance.


----------



## rt123

Quote:


> Originally Posted by *Alatar*
> 
> Baseless rumor that was debunked by Dave Baumann.


I see.


----------



## Alatar

40% increase over the current $300 MSRP for the 290X is $420.

And the current 290X MSRP is artificially low because AMD has desperately been trying to clear inventory for a while now.

Add 4-7% extra perf from some core and mem clocks, double the VRAM capacity and slap some new shiny coolers + 300 series branding on it. I can definitely see AMD going for slightly above $400 with the thing since they really do need higher margins right now.


----------



## TheRacker

Fiji just needs to come out so all this speculation and arguing about 4gb vram can stop lol. Speculate all you want how much it matters, it's not going to change the performance at launch 11 days from now. You can justify your feelings about 4gb then.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Alatar*
> 
> I'm still waiting for someone to actually show some actual proof that AMD's hawaii replacements are anything other than higher clocked hawaii with more memory.


Luckily you only have to wait about 11 days for your precious proof. I mean, they did change the name of the chip to Grenada so that likely means there is more going on than just a re-clock. But again, speculation on either side does nobody any real good at this point. All of your questions will be answered on the 16th...


----------



## Alatar

The things still had Hawaii device IDs in the drivers even though they were 300-series cards...

Not to mention that AMD has done this before: Pitcairn was turned into Curacao. Still did nothing. No FreeSync, no TrueAudio, no die-size or transistor-count changes...

The way I see it, the evidence points at a Hawaii rebrand. Of course anything is possible, but is anything else likely, or supported by proof, at this point?


----------



## sugarhell

Quote:


> Originally Posted by *Alatar*
> 
> The things still had hawaii device IDs in the drivers even though they were 300 series cards...
> 
> Not to mention that AMD has done this before. Pitcairn was turned into Curacao. Still did nothing. No freesync, no trueaudio, no die size or transistor count changes...
> 
> The way I see it proof is pointing at a hawaii rebrand. Of course anything is possible but is anything else likely or supported by proof at this point?


Don't rely on CodeXL so much; it's wrong most of the time. Just wait 10+ days.


----------



## SoloCamo

I want the 390X to be something else, and part of me thinks it *might* be, given how silent they've been on it... However, my gut tells me it's an OC'd 8GB 290X with aftermarket coolers at launch and slightly better power consumption. We would have seen something to the contrary this close to release... Yeah, most of these are rumors, but there are usually bits and pieces of truth in them, and I've yet to see anything from a remotely reliable source pointing to any changes besides the new name.


----------



## Ganf

Quote:


> Originally Posted by *Alatar*
> 
> The things still had hawaii device IDs in the drivers even though they were 300 series cards...
> 
> Not to mention that AMD has done this before. Pitcairn was turned into Curacao. Still did nothing. No freesync, no trueaudio, no die size or transistor count changes...
> 
> The way I see it proof is pointing at a hawaii rebrand. Of course anything is possible but is anything else likely or supported by proof at this point?


Quote:


> Originally Posted by *SoloCamo*
> 
> I want the 390x to be something else, and part me thinks it *might* due to them being so silent on it... However my gut tells it's an OC'ed 8gb 290x with aftermarket coolers on launch and slightly better power consumption. We would have seen something on the contrary this close to release... yea most are rumors but there are usually bits and pieces of truth and I've yet to see anything from a remotely reliable source as to any changes besides it being called something else


If AMD's rebrands had significant performance gains over the 2xx series, I seriously doubt they would have been so desperately clearing out stock ahead of this launch. It's highly likely they were worried about leftover 200 series stock causing the 300s to flop.


----------



## Redwoodz

Quote:


> Originally Posted by *Alatar*
> 
> The things still had hawaii device IDs in the drivers even though they were 300 series cards...
> 
> Not to mention that AMD has done this before. Pitcairn was turned into Curacao. Still did nothing. No freesync, no trueaudio, no die size or transistor count changes...
> 
> The way I see it proof is pointing at a hawaii rebrand. Of course anything is possible but is anything else likely or supported by proof at this point?


A Phenom II in C2 revision versus C3 was nominally the same part too, yet the C3 revision had vastly better overclocking and power consumption.


----------



## SoloCamo

Quote:


> Originally Posted by *Redwoodz*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Alatar*
> 
> The things still had hawaii device IDs in the drivers even though they were 300 series cards...
> 
> Not to mention that AMD has done this before. Pitcairn was turned into Curacao. Still did nothing. No freesync, no trueaudio, no die size or transistor count changes...
> 
> The way I see it proof is pointing at a hawaii rebrand. Of course anything is possible but is anything else likely or supported by proof at this point?
> 
> 
> 
> A Phenom II with revision C2 versus C3 revision was the same part too,yet the C3 rev. had vastly superior overclocking and power consumption.
Click to expand...

Which would coincide with what is widely believed at this point: a revised, more efficient Hawaii. If they can clock Hawaii to 1050-1100 MHz core and 1500 MHz memory, keep it from throttling, AND keep the price around $400, it would be a great buy IMO. At that point it's close enough to a stock 980 at 4K, without nearing the 980 Ti price range where it couldn't compete.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Alatar*
> 
> The things still had hawaii device IDs in the drivers even though they were 300 series cards...
> 
> Not to mention that AMD has done this before. Pitcairn was turned into Curacao. Still did nothing. No freesync, no trueaudio, no die size or transistor count changes...
> 
> The way I see it proof is pointing at a hawaii rebrand. Of course anything is possible but is anything else likely or supported by proof *at this point?*


As I said, it's irrelevant. You'll know in 10 flippin' days. Just settle down...


----------



## Alatar

Quote:


> Originally Posted by *SoloCamo*
> 
> Which would coincide with what is widely believed at this point, a revised and more efficient hawaii. If they can clock Hawaii to 1050-1100 and 1500 and keep it from throttling


This doesn't require a revised or more efficient Hawaii. What you're describing is pretty much what 3rd party Hawaii cards already are.

And the chances of AMD releasing any reference designs are extremely low to nonexistent.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> As I said, its irrelevant. You'll know in 10 flippin days. Just settle down...


This is the rumors and unconfirmed articles section. The whole point of it is not to wait, but to speculate while we don't know the exact facts.

And yes, that includes speculation and discussion that isn't all roses and fairytales.

Not to mention that we're not even talking about pure speculation anymore.


----------



## Ganf

Quote:


> Originally Posted by *SoloCamo*
> 
> Which would coincide with what is widely believed at this point, a revised and more efficient hawaii. If they can clock Hawaii to 1050-1100 and 1500 and keep it from throttling AND keep the price around $400 it would be a great buy IMO. At that point it is more than close enough to stock 980 at 4k, and not nearing the 980ti price range where it wouldn't compete.


It won't compete with the 980 because neither the 980 nor any iteration of the 290X is sufficient for 4K, so comparing them at 4K is irrelevant. Since it loses out in every other area, the improvements you suggest still result in a loss.

AMD's goal isn't to *match* the competition, they're supposed to *BEAT* them. We don't want Nvidia deciding the pace at which the market advances while AMD follows docilely in their steps; competition drives innovation. If it turns out AMD is nothing but a whipped dog that can no longer give Nvidia a run for their money, we all lose.


----------



## Ha-Nocri

I don't think AMD is trying to fight the Titan with anything. The Titan is not a gamer's card; 12GB is wasted on gaming alone. So big Fiji's target is the 980 Ti, and small Fiji will compete against the 980. I expect the 980 to stand no chance, of course, and NV will lower its price for sure. Not so sure though whether the 980 Ti will keep its crown.

All in all, this will be good for us consumers.


----------



## SoloCamo

Quote:


> Originally Posted by *Alatar*
> 
> Quote:
> 
> 
> 
> Originally Posted by *SoloCamo*
> 
> Which would coincide with what is widely believed at this point, a revised and more efficient hawaii. If they can clock Hawaii to 1050-1100 and 1500 and keep it from throttling
> 
> 
> 
> This doesn't require a revised or a more efficient hawaii. What you're describing is pretty much what 3rd party hawaii cards already are.
> 
> And the chances of AMD releasing any reference designs is extremely low to nonexistent.
Click to expand...

Fair enough. I'm able to hold 1050 MHz core without any throttling on the reference cooler on my 290X with only a 5% power limit increase, keeping the GPU fan at 60-65% instead of the factory 55%. I'm sure I could drop the power limit, but I'd rather let it stretch its legs if needed.
Quote:


> Originally Posted by *Ganf*
> 
> It won't compete with the 980 because neither the 980 nor any iteration of the 290x is sufficient for 4k, so their comparison at 4k is irrelevant. Since it loses out in every other area the improvements you suggest still result in a loss.


Completely subjective. I've been gaming at 4K on a reference 290X since Dec 2014. Sure, I can't max a lot of recent AAA games, but plenty of older, good-looking games still look great on it. 4K doesn't always have to mean maxing the newest games, which seems to be the only goal around here. I play GTA V on a single 290X at 4K with a mix of ultra/high/medium settings - sure, it's nowhere near a locked 60fps (I'd say around 45fps average), but that's still fine for this type of game.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Alatar*
> 
> This doesn't require a revised or a more efficient hawaii. What you're describing is pretty much what 3rd party hawaii cards already are.
> 
> And the chances of AMD releasing any reference designs is extremely low to nonexistent.
> *This is the rumors and unconfirmed articles section.* The whole point of the section is to not wait and speculate while we don't know the exact facts.
> 
> And yes, that includes speculation and discussion that isn't all roses and fairytales.
> 
> Not to mention that we're not even talking about pure speculation anymore.


Which makes it all the more curious that you are so fervently discounting any possibilities without "proof" in a supposed "rumors and unconfirmed reports" thread. But I think we all know the only rumors you care to report on are ones where AMD is made out to be a bunch of bumbling buffoons...


----------



## Ganf

Quote:


> Originally Posted by *SoloCamo*
> 
> Fair enough. I'm able to hold 1050 core without throttling at all on a reference cooler on my 290x with only 5% power limit increase and keeping the gpu fan at 60-65% instead of the factory 55%. I'm sure I can drop the power limit but I'd rather let it stretch it's legs more if needed.
> Completely subjective. I've been gaming on 4k with a reference 290x since Dec 2014.. Sure I can't max a lot of recent AAA games but a lot of good looking but older games still look great on it. 4k doesn't always have to mean maxing it the newest games at 4k, something that seems to be the only goal around here. I play GTA V on a single 290x at 4k with a mix of ultra/high/med - sure it's not near a locked 60fps (avg around 45fps I'd say) but it still is fine for that type of game.


You're already running into games you can't play at max detail, and anybody buying a GPU between two weeks from now and 2016, when the next generation hits, is going to expect their card to stay relevant for at least a year at the resolution they play. The 290X is not going to be 4K capable by the end of this year for the majority of games people will want to play at 4K.


----------



## HiTechPixel

Quote:


> Originally Posted by *Ha-Nocri*
> 
> I don't think AMD is trying to fight Titan with anything. Titan is not a gamer's card. 12GB is wasted for gaming only.


Is this a poor troll attempt, or what? Titan X, or Maxwell for that matter, has always been gaming oriented due to the gimped double-precision performance. And the 12GB is for high resolutions like 4K, 5K or surround 4K or surround 5K.


----------



## alexmaia_br

Quote:


> Originally Posted by *prjindigo*
> 
> The 4GB Fiji is a major market trial of a new technology. If I can spare the money I will buy and use one without a doubt. We need to encourage out-of-the-box thinking because we're currently boxed in.
> If AMD hadn't played around with Mantle we'd not have Dx12 right now. I'm willing to chip in to get a high quality card.
> 
> Hell, I'm willing to see what can happen with my 290X 295x2 and Fury all in the same box!


I agree 100%.


----------



## Ha-Nocri

Quote:


> Originally Posted by *HiTechPixel*
> 
> Is this a poor troll attempt, or what? Titan X, or Maxwell for that matter, has always been gaming oriented due to the gimped double-precision performance. And the 12GB is for high resolutions like 4K, 5K or surround 4K or surround 5K.


https://www.youtube.com/watch?v=ommjmc_mhs8

jump to 2:40 and hear NV rep


----------



## SoloCamo

Quote:


> Originally Posted by *Ganf*
> 
> The 290x is not going to be 4k capable by the end of this year for the majority of games people are going to want to play at 4k.


Yes it will - just not at high/max settings. I don't see any groundbreaking games coming as far as graphics go... The most hyped upcoming game so far seems to be Fallout 4, and assuming you've seen the trailer, 4K shouldn't be a problem.

I used to be a max settings or bust guy when I was at 1080p, but I will gladly take the clarity of 4k with lowered settings over it if that's what I need to do for decent frame rates.

And in all honesty, and somewhat OT, what are even the next big upcoming games (graphically)? Either I'm missing them or there isn't anything worthwhile. I can play UT in the UE4 demo above at around 30fps, max settings at 4K, on my 290X with a minor OC... and that game isn't even in what I'd call an alpha state yet, and IMO it looks better than anything upcoming that's been shown. I'm not too worried while these consoles are underpowered. Doom 4 is the only thing that comes to mind, but who knows when that's coming out...


----------



## Ganf

Quote:


> Originally Posted by *Ha-Nocri*
> 
> https://www.youtube.com/watch?v=ommjmc_mhs8
> 
> jump to 2:40 and hear NV rep


NV rep?

HA! That guy's a nobody, just some schmuck sliming along on their coattails for attention...

.....

Where'd I put that crash helmet.....

Quote:


> Originally Posted by *SoloCamo*
> 
> Yes it will - just not at high/max settings. I don't see any groundbreaking games coming as far as graphics go... The biggest hyped up game upcoming so far seems to be Fallout 4 assuming you've seen the trailer 4k shouldn't be a problem
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I used to be a max settings or bust guy when I was at 1080p, but I will gladly take the clarity of 4k with lowered settings over it if that's what I need to do for decent frame rates.
> 
> And in all honesty and somewhat OT, what are even the next big upcoming games (graphically)? Either I'm just missing them or there isn't anything worthwhile. I can play UT on the UE4 demo above and around 30fps max settings at 4k on my 290x with a minor OC... and that game isn't even in what I'd call an alpha state yet IMO looks better than anything coming up that's been shown at least. I'm not too worried when these consoles are under powered. Doom 4 is the only thing that comes to mind but who knows when that is coming out...


Meanwhile, the 980 Ti pulls over 50fps in UT at stock, while being just $150 more than a 980, which is not a wallet buster for someone who already bought a 4K monitor.

You get what I'm saying now? That goalpost has moved. The 980 Ti set the new standard for 4K performance and value.


----------



## DividebyZERO

Quote:


> Originally Posted by *Ha-Nocri*
> 
> https://www.youtube.com/watch?v=ommjmc_mhs8
> 
> jump to 2:40 and hear NV rep


I still don't understand, because Nvidia advertises it on their website as the ultimate gaming solution.

http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-titan

Says gaming, right?

http://www.nvidia.com/gtx-700-graphics-cards/gtx-titan-black/
Quote:


> THE NEXT EVOLUTION - Introducing GeForce® GTX™ TITAN Black.
> TITAN Black is a masterpiece of design and engineering. Evolved from the award-winning GTX TITAN, the Black edition gives you the added horsepower to drive your most graphics-intensive games while still maintaining whisper-quiet acoustics and cool thermals.


http://www.geforce.com/whats-new/articles/nvidia-geforce-gtx-titan-x

Quote:


> Now, there's a new TITAN, one built for Virtual Reality and 4K gaming.


So what is it?


----------



## Blameless

Quote:


> Originally Posted by *prjindigo*
> 
> The 4GB Fiji is a major market trial of a new technology. If I can spare the money I will buy and use one without a doubt. We need to encourage out-of-the-box thinking because we're currently boxed in.


I'm all for encouraging innovation, but I don't give charity. I buy the best product for my dollar within my intended budget.
Quote:


> Originally Posted by *Alatar*
> 
> For those swedish prices the likely USD msrp is $499 however personally I think it's going to be $450 or so.
> 
> Over here an average 290X is already close to $450 USD.


I wouldn't expect more than $400 in the states for even an improved Hawaii with 8GiB.
Quote:


> Originally Posted by *TheRacker*
> 
> Fiji just needs to come out so all this speculation and arguing about 4gb vram can stop lol. Speculate all you want how much it matters, it's not going to change the performance at launch 11 days from now. You can justify your feelings about 4gb then.


I'm already 99 percent certain Fiji's 4GiB will be fine in essentially all current games, minus one or two outliers I'm probably never going to play.

I'm more worried about how a 4GiB Fiji will stack up against 6 and 8GiB cards in 12-18 months, because that's the absolute soonest I'd consider upgrading a near-flagship part again.
Quote:


> Originally Posted by *SoloCamo*
> 
> Yes it will - just not at high/max settings.


I don't consider the 290X capable of 4k right now.


----------



## SoloCamo

Quote:


> Originally Posted by *Ganf*
> 
> Meanwhile, the 980ti pulls over 50fps in UT at stock, while being just $150 more than a 980 which is not a wallet buster for someone who already bought a 4k monitor.
> 
> You get what I'm saying now? That goalpost has moved. The 980ti set the new standard for 4k performance and value.


Not to me. At $650, the 980 Ti still falls short as a single GPU at max settings at 4K; you'll have to compromise much like I do with my 290X. Less of a compromise, sure, but I paid less for my 290X back in October 2013, now almost two years ago.

I'm not saying it wouldn't be the go-to card, but a card these days lasts two years or so, while a monitor lasts 5+ easily. Over the past decade I went from 1366x768 to 1080p to 4K, and I've gone through many more GPUs in that time. I'm more willing to spend money on a good monitor than on a GPU - take the 780 Ti, for example, which is now scoffed at.
Quote:


> Originally Posted by *Blameless*
> 
> I don't consider the 290X capable of 4k right now.


Again, subjective. I play at 4K daily on a reference 290X with a small OC.

____

Don't get me wrong, the 290X is clearly getting long in the tooth, but to say it's incapable of 4K is shortsighted IMO. Obviously the top AAA titles will struggle at higher settings, but there are plenty of games I play that aren't like that.

4K has certainly made Skyrim look nice and crisp, especially with 8x MSAA on top, and it still holds a locked 60fps with some mods.

To get to my main point: a true 4K 60fps card will not arrive this generation. We'll be looking at Pascal / Arctic Islands for that.


----------



## Slaughterem

Quote:


> Originally Posted by *harney*
> 
> Agree re AMD not liking nvid's tactics lately, and I'd rather support AMD... I am waiting on the Fury and yes, I hope it succeeds for the sake of all of us, as I and many others would not like a world with just one GPU firm...
> 
> However, the 980 Ti is very tempting, and good on nvid for releasing early... but as I am struggling at 3440x1440 with my current 970 even with an extreme overclock, it's either Fury or Ti... so I have the cash in one hand and a massive bucket of popcorn in the other, ready for news on the Fury so this war can commence...


I am with you on that - extra butter on the popcorn please; I took my pill so I should be good to go. My predictions for the cards:
R9 380 - Tonga 4GB, ~10% above the 960, $280
R9 380X - ???? (full Tonga), on par with the 780 Ti, $359
R9 390X - Grenada at higher frequencies, ~10% above the 980, $499
R9 390 - Grenada at higher frequencies, ~10% above the 970, $429
Fury - Fiji Pro, on par with the 980 Ti, $599
Fury X - Fiji XT, ~8% above the Titan X, $699
Fury X2 - 2x Fiji XT, nothing like it in the world, $1349


----------



## shadow85

With two Fiji XTs on one card, I reckon they will price it much higher than $1349, because that's barely 2x the price of one Fiji XT card. And we all know by now that dual cards are never released at just 2x the single-card price - remember when the Titan Z and 295X2 first came out? Both were priced a hell of a lot more than 2x the single-card price at release.


----------



## raghu78

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Which makes it all the more curious that you are so fervently discounting any possibilities without "proof" in a supposed "rumors and unconfirmed reports" thread. But I think we all know the only rumors you care to report on are ones where AMD is made out to be a bunch of bumbling buffoons...


Well said.

If there's something positive about AMD, we won't see him discussing it with the same fervour. It's the opportunity to spread FUD and bad-mouth AMD that has him salivating.

Let's see what AMD has cooked up for the R9 3xx. Something tells me it's going to be awesome.


----------



## Vesku

Quote:


> Originally Posted by *Redwoodz*
> 
> It is not a just rebranded Hawaii. It will compete with 980/980Ti. They have done a lot of work with "Tonga" type color compression and power savings,also possibly some 28nm process improvements.$595 is an AIB's retail price,so custom cooler as well. The only way it is released at that price is it is competitive with 980Ti. And it's not even their 2nd tier flagship.
> 
> ?
> 
> http://www.sweclockers.com/nyhet/20623-amd-radeon-300-serien-dyker-upp-i-prislista
> 
> 4,990kr equals $595.


Doesn't that include 20% VAT? If so, maybe $499-549 US before sales tax, assuming the EU is less affected by the arbitrary "in the EU" pricing of the past.
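The back-of-the-envelope conversion behind that guess, as a sketch (the 20% VAT figure and the ~$595 conversion of 4,990 kr are the assumptions from the posts above):

```python
# Back out VAT from the converted Swedish list price to estimate a
# pre-tax US price. Both inputs are assumptions from the thread:
# 4,990 kr converts to roughly $595, and VAT is taken as 20%.

USD_EQUIV = 595.0   # straight currency conversion of 4,990 kr
VAT_RATE = 0.20     # assumed VAT share baked into the EU price

usd_before_tax = USD_EQUIV / (1 + VAT_RATE)
print(f"Implied US price before sales tax: ${usd_before_tax:.0f}")  # ~$496
```

Which puts a $499-549 MSRP guess in the right neighborhood. Note that Sweden's standard VAT rate is actually 25%, which would imply roughly $476 instead.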


----------



## RagingCain

Quote:


> Originally Posted by *raghu78*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Majin SSJ Eric*
> 
> Which makes it all the more curious that you are so fervently discounting any possibilities without "proof" in a supposed "rumors and unconfirmed reports" thread. But I think we all know the only rumors you care to report on are ones where AMD is made out to be a bunch of bumbling buffoons...
> 
> 
> 
> well said.
> 
> 
> 
> 
> 
> 
> 
> If there is something positive about AMD we won't see him discussing with the same fervour. Its the opportunity to spread FUD and bad mouth AMD that has him salivating.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Lets see what AMD has cooked up for R9 3xx. Something tells me its going to be awesome.
Click to expand...

And what would you want him to say and think about AMD, Mr. Raghu78?

It sounds like anyone who disagrees with you, or more importantly, disagrees with AMD's actions, is entirely wrong, a shill, a fanboy, or a troll.

What I see most of the time, is, and forgive me for analogies, one of the largest celebrated football (European football) teams (AMD) now struggling to avoid relegation from the big league. It's just so hard to stay being a fan when you are let down, when your team is struggling to do the simplest of tasks... what once was easy has now become a struggle for survival. Then suddenly you sign a key player, get a new world class head coach, or at least a proven incredibly intelligent coach (Dr. Su)... hoping for a serious comeback... which as far as comebacks go, can only occur once you are down. Make no mistake about it, AMD is down right now. *The harshest critic though is usually someone who once was a beloved fan who has been let down for the last time... Damn you Newcastle United....*

Now, I don't know Alatar personally, or if he even has green underwear as claimed, however, if *you* only see life as black or white, or in this case Green or Red, you are blind to all the contributions that the other colors bring. Such as neutral gray, or light green, or light red. In fact, having an attitude quite often and *readily* found in your last 17 pages of post history would totally discourage me from offering my opinions, ideas, general thoughts, or feelings, on the subject. If your goal is that you would like to listen to basically the same yes-men who do like you do, say like you say, think like you think, and leave you having conversations that are empty/repetitive and unoriginal, then continue on your path. You will either get conversations like that or end up talking to yourself alone. Currently what you are getting now is the opinions and observations of some of the most experienced members of the enthusiast community that should not be readily dismissed as fanciful brand favoriteering.

Your aggressive posts undercut AMD's reputation as a Good Guy Greg of a company; you'd do better to engage in conversation that moves light-grey and light-green users toward neutral ground, or closer to AMD's side, if that's the goal you seek. Another analogy, apologies: it's like having someone like Jesus Christ perform miracles, then having a group of followers rub it in other people's faces, for years, about how awesome it is and how everyone else sucks/is going to hell. Team America, but for Jesus, if you will. They are definitely spreading a negative message with that one.

I personally want to see AMD succeed both for personal and fiscal reasons, but zealotry in a handful of users doesn't actually help them sell GPUs. It's great to have enthusiasm but go with the ebbs and flows. This message is aimed at you and any zealot, regardless, of which color is on the brand label. Criticizing the faults of one brand of hardware should be taken at face value: criticisms of one brand of hardware. Just because you are against something, does not mean you are for the other side.

Edited: Typo fairies and grammar goblins.


----------



## Majin SSJ Eric

I think it is certainly possible that Al is right about everything. Unfortunately here lately his anti-AMD predictions have been largely accurate. But there is still hope that they can pull out a winner this generation. All Raghu and I have been saying is let's wait a week and see what they do. If indeed they just release a bunch of rebrands and an underwhelming 4GB halo card then he and Knucklehead and all the rest will be free to throw the Green Party they've all been waiting for these past few months and I will be forced to agree that Nvidia has notched another devastating victory over them. But it's also possible that maybe they have something hidden up their sleeve and will surprise a lot of people. Point is, we don't have long to wait either way...


----------



## Forceman

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think it is certainly possible that Al is right about everything. Unfortunately here lately his anti-AMD predictions have been largely accurate. But there is still hope that they can pull out a winner this generation. All Raghu and I have been saying is let's wait a week and see what they do. If indeed they just release a bunch of rebrands and an underwhelming 4GB halo card then he and Knucklehead and all the rest will be free to throw the Green Party they've all been waiting for these past few months and I will be forced to agree that Nvidia has notched another devastating victory over them. But it's also possible that maybe they have something hidden up their sleeve and will surprise a lot of people. Point is, we don't have long to wait either way...


My fear is that this E3 launch is going to be a repeat of the Hawaii launch, where all you get are some marketing slides (if that) and an actual launch date. Then two weeks later, or whatever, you get actual reviews. So it's going to end up being another month of "just wait and see" on top of the month we just spent (partly waiting for the Computex "launch" some people were so confident about that never happened).

Edit: Had the conventions backwards.


----------



## RagingCain

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think it is certainly possible that Al is right about everything. Unfortunately here lately his anti-AMD predictions have been largely accurate. But there is still hope that they can pull out a winner this generation. All Raghu and I have been saying is let's wait a week and see what they do. If indeed they just release a bunch of rebrands and an underwhelming 4GB halo card then he and Knucklehead and all the rest will be free to throw the Green Party they've all been waiting for these past few months and I will be forced to agree that Nvidia has notched another devastating victory over them. But it's also possible that maybe they have something hidden up their sleeve and will surprise a lot of people. Point is, we don't have long to wait either way...


Raghu personally accused a moderator of spreading lies and fanboyism; I merely commented on his public post history, which shows blatant hypocrisy, and on the fact that negative behavior turns people away from your ideas.

So no, he wasn't just saying wait a week.

There is nothing wrong with a 4GB halo card; nearly every post has merely corrected the logical fallacy that it can somehow store more than 4GB through magic fairy dust. Speculating on how things work, taking our years of experience and applying it to pre-launch data, is part of the excitement. More experience means less room for imagination and more pragmatism.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Forceman*
> 
> My fear is that this E3 launch is going to be a repeat of the Hawaii launch, where all you get are some marketing slides (if that) and an actual launch date. Then two weeks later, or whatever, you get actual reviews. So it's going to end up being another month of "just wait and see" on top of the month we just spent (partly waiting for the Computex "launch" some people were so confident about that never happened).
> 
> Edit: Had the conventions backwards.


That would be a huge fail and I think they know it (and possibly the rush to release the 980 Ti at Computex means Nvidia knows it too). I'm OK with a paper launch at E3 as long as the NDA is at least lifted and we get real reviews...


----------



## szeged

Hoping the launch will drop more 290X prices below $300 for a friend's build I'm doing. Max budgets on GPUs suck.


----------



## raghu78

Quote:


> Originally Posted by *RagingCain*
> 
> *And what would you want him to say and think about AMD, Mr. Raghu78? It sounds like anyone who disagrees with you, or more importantly, disagrees with AMD's actions, is entirely wrong, a shill, a fanboy, or a troll.
> *
> What I see most of the time, is, and forgive me for analogies, one of the largest celebrated football (European football) teams (AMD) now struggling to avoid relegation from the big league. It's just so hard to stay being a fan when you are let down, when your team is struggling to do the simplest of tasks... what once was easy has now become a struggle for survival. Then suddenly you sign a key player, get a new world class head coach, or at least a proven incredibly intelligent coach (Dr. Su)... hoping for a serious comeback... which as far as comebacks go, can only occur once you are down. Make no mistake about it, AMD is down right now. *The harshest critic though is usually someone who once was a beloved fan who has been let down for the last time... Damn you Newcastle United....*


No. I don't expect everybody to agree with me. I am also one of the harshest critics of AMD. I have blasted them for Bulldozer which has nearly killed the company. But the difference is I am an underdog supporter. So I have always been pro-AMD against Intel/Nvidia. I am one of the strongest believers in healthy competition benefiting consumers. Thats why I root for AMD. Thats why I also want Zen to succeed. Can AMD pull it off ? Sure they can. Can they fail again. yeah thats also possible. But I am also sure another failure in the CPU segment will finish off the company. I also have faith in Jim Keller's technical experience and leadership.

What I hate is people who bash AMD at all times and only find negatives to talk about, no matter what. I am talking of alatar and knucklehead. There are people like majin or criminal who have shown themselves to be more balanced, as they buy Nvidia products and are open to AMD too. They criticize Nvidia and AMD equally. That's why I respect their opinions.

http://www.overclock.net/u/189301/majin-ssj-eric
http://www.overclock.net/u/58838/criminal

I don't claim to be as balanced as them, because I will support AMD over Nvidia until the day AMD overtakes Nvidia and becomes as arrogant as Nvidia.

I detest Nvidia for how they treat their customers and for the sheer arrogance that comes with their market leadership. Majin would agree with me that Nvidia does not give a damn and will go as far as lying and intentionally withholding information (yes, I am talking about the GTX 970) to get what they want.
Quote:


> Now, I don't know Alatar personally, or if he even has green underwear as claimed, however, if *you* only see life as black or white, or in this case Green or Red, you are blind to all the contributions that the other colors bring. Such as neutral gray, or light green, or light red. In fact, having an attitude quite often and *readily* found in your last 17 pages of post history would totally discourage me from offering my opinions, ideas, general thoughts, or feelings, on the subject. If your goal is that you would like to listen to basically the same yes-men who do like you do, say like you say, think like you think, and leave you having conversations that are empty/repetitive and unoriginal, then continue on your path. You will either get conversations like that or end up talking to yourself alone. Currently what you are getting now is the opinions and observations of some of the most experienced members of the enthusiast community that should not be readily dismissed as fanciful brand favoriteering.
> 
> *Your aggressive posts negate AMD's reputation as a Good Guy Greg the company and you do better to engage in conversation to move light grey users and light green users closer to the middle grounds of neutrality or close to the AMDs side if that is the goal you seek.* Another analogy, apologies, is having someone like Jesus Christ perform miracles, then having a group of followers rubbing it in other peoples' faces, for years, how awesome it is and how everyone else is the suckzor/going to hell. Team America, but for Jesus, if you will. They are definitely spreading a negative message with that one.
> 
> I personally want to see AMD succeed both for personal and fiscal reasons, but zealotry in a handful of users doesn't actually help them sell GPUs. It's great to have enthusiasm but go with the ebbs and flows. This message is aimed at you and any zealot, regardless, of which color is on the brand label. Criticizing the faults of one brand of hardware should be taken at face value: criticisms of one brand of hardware. Just because you are against something, does not mean you are for the other side.
> 
> Edited: Typo fairies and grammar goblins.


I don't expect to convert anybody to any team, as I don't work for any team. I do root for one company today, and that is AMD. If, in the future, AMD overtakes Nvidia and displays the same insensitive attitude and arrogance, then I will surely drop them without any hesitation.

I have nothing against Nvidia as such. In fact I appreciate their products when they are very good, like the 8800 GTX, 8800 GT, GTX 580, and GTX 680. But I don't agree with their business ethics (I would be surprised if they have any). Nvidia is the most unscrupulous company in the high-tech industry. From the days of bump-gate to the GTX 970, they have never accepted that they did anything wrong and apologized. Instead they denied bump-gate was a serious engineering snafu and settled out of court, while still insisting the settlement was not any acceptance of wrongdoing or liability.

http://www.canadiannvidiasettlement.com/Files/nvidia/NVIDIA%20Settlement%20Agreement%20-%20Fully%20Executed%20(June%2012,%202013).pdf

The day AMD gets too big and shows the same lack of business ethics and mind-numbing arrogance, I will surely drop them and their products.


----------



## harney

Quote:


> Originally Posted by *Redwoodz*
> 
> It is not a just rebranded Hawaii. It will compete with 980/980Ti. They have done a lot of work with "Tonga" type color compression and power savings,also possibly some 28nm process improvements.$595 is an AIB's retail price,so custom cooler as well. The only way it is released at that price is it is competitive with 980Ti. And it's not even their 2nd tier flagship.
> 
> ?
> 
> http://www.sweclockers.com/nyhet/20623-amd-radeon-300-serien-dyker-upp-i-prislista
> 
> 4,990kr equals $595.


I very much doubt that the 390X is going to be anywhere near the 980 Ti unless it comes equipped with its own nuclear substation.

I even have my doubts about the Fury, but I am hoping it will be at least 10% faster... I am no fan of either red or green, but like I said before, I do want AMD to succeed for all our sakes.


----------



## raghu78

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> That would be a huge fail and I think they know it (and possibly the rush to release 980Ti at Computex means Nvidia knows it too). I'm ok with a paper launch at E3 as long as the nda is at least lifted and we get real reviews...


Majin, I am sure AMD has been taking their time to get the launch right. Yeah, I am talking about the software. Performance in GameWorks titles needs to be as good as possible to reduce the distortion that titles like Project CARS can have on their average performance across a game suite. CrossFire FreeSync is something I hope they deliver for the Fiji Fury launch. It would be stupid to have such a powerful card, one that can run the latest games at 4K at max settings, especially in CrossFire, and not support CrossFire FreeSync and CrossFire FreeSync Eyefinity.

I am expecting a June 16th product stack and architectural reveal, with reviews going up on the 24th or before month end. I expect the cards to be available at retail when reviews go up. I don't think they are going to backtrack on that, as they did a hard launch with the R9 2xx. I hope AMD knocks this one out of the park: hardware, software, and availability. Anything less and they deserve all the criticism that will come their way.


----------



## Elmy

Exciting times are ahead....


----------



## DividebyZERO

What I would like to know is why people keep adding onto the expectations and speculation. It has to be the same as a 980 Ti or Titan X, it has to be at least 5% faster, at least 10% faster, at least 20% faster... it needs to be 30% faster, and 50% cheaper.

It needs to be 100% faster and 100% free. /sarcasm


----------



## harney

Quote:


> Originally Posted by *raghu78*
> 
> But I don't agree with their business ethics (I would be surprised if they have one). Nvidia is the most unscruplous tech company in the high tech industry. From the days of bump-gate and to the GTX 970 they have never accepted that they ever did anything wrong and apologized. Instead they denied bump-gate was a serious engineering snafu and settled out of court but still denied that it was any acceptance of wrongdoing or liability.
> 
> http://www.canadiannvidiasettlement.com/Files/nvidia/NVIDIA%20Settlement%20Agreement%20-%20Fully%20Executed%20(June%2012,%202013).pdf
> 
> The day AMD gets too big and shows such lack of business ethics and mind-numbing arrogance I would surely drop them and their products.


Yes, Nvidia's business ethics are disgusting, period... Bump-gate caused me nothing but endless headaches with a load of Dell/HP laptops at the time. It was over 200 units I had to deal with, an absolute nightmare; I will never forget that one...

And the recent 970 saga? Nothing. No support from Nvidia whatsoever, no help from the board manufacturers apart from a few, and it took over 3 weeks before even my distributor would acknowledge there was a problem. I had kids, teenagers, and their parents with pitchforks at my door wanting blood over the 970... again, I will never forget that one. Worse still, I am still out of pocket on some cards and still trying to resolve it... so yes, Nvidia is very, very dirty indeed...

And to add more: this whole Game(not)Works business just adds another no-no for me... What there needs to be is an unbiased third party that helps any company with open-source code, rather than these dirty tactics Nvidia has been pulling lately with GameWorks. And to hear the head of Nvidia state it's time to upgrade your 780 Ti is a joke; it was only a year ago at £599, absolutely absurd... If it carries on like this, we will be renting Nvidia cards for £400 for 6 months at a time, after which they stop working through the drivers, so guess what: re-rent, please, to use them for another 6 months...

So yes, COME ON AMD, let's have your best shot here, as this Nvidia bully needs putting back in its place...


----------



## magnek

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> That would be a huge fail and I think they know it (and possibly the rush to release 980Ti at Computex means Nvidia knows it too). I'm ok with a paper launch at E3 as long as the nda is at least lifted and we get real reviews...


I've said this in another thread but feel compelled to repeat it here.

Most people say AMD's silence is deafening. But what if AMD isn't saying anything simply because they don't want to give away _any_ detail to their competitor, especially when AMD is in a disadvantaged position? The other thing I hear often is: why not just release some paper specs? Well, what's the point? Paper specs without benchmarks don't tell you jack, and if anything they would make people more suspicious that you had a dud on your hands, because why else would you withhold something as important as performance metrics? In fact, the more awesome the paper specs, the worse the impression. And if you want AMD to release paper specs and performance teasers, well, then they might as well do a proper paper launch. inb4 just push up the paper launch 2 weeks, problem solved LOL

Anyway that's my rationale for why AMD's "deafening silence" is not really silence but simply them sticking to the best option. Maybe I'm rationalizing here but that's my take.


----------



## epic1337

Quote:


> Originally Posted by *DividebyZERO*
> 
> What i would like to know is why are people constantly adding onto the expectations and speculations. It has to be same as 980ti or titanX, it has to be at least 5% faster, it has to be at least 10% faster, it has to be at least 20% faster.... it needs to be 30% faster, and 50% cheaper.
> 
> It needs to be 100% faster and 100% free. /sarcasm


They're scared they'll be disappointed; pretty much, I am too.
disappointed = no upgrade = stuck with their current GPU = or upgrade to Nvidia

And on that note, where did all the furry comments go?


----------



## raghu78

Quote:


> Originally Posted by *magnek*
> 
> I've said this in another thread but feel compelled to repeat it here.
> 
> Most people say AMD's silence is deafening. But what if AMD isn't saying anything simply because they don't want to give away _any_ detail to their competitor, especially when AMD is in a disadvantaged position? The other thing I hear often is why not just release some paper specs. Well what's the point? If you release paper specs without benchmarks that doesn't tell you jack, and if anything would make people more suspicious you had a dud on your hands, because why else would you withhold something as important as performance metrics? In fact the more awesome the paper specs the worse the impression. And if you want AMD to release paper specs and performance teasers, well then they might as well do a proper paper launch. inb4 just push up the paper launch 2 weeks problem solved LOL
> 
> Anyway that's my rationale for why AMD's "deafening silence" is not really silence but simply them sticking to the best option. Maybe I'm rationalizing here but that's my take.


Very well said. It makes so much sense to just shut up, deliver an excellent R9 3xx series, and let the press and the consumers judge the product. As always, it's important that they put their best foot forward, as launch reviews go a long way in leaving lasting impressions. The hardware looks top-notch. The software needs to be equally good; that remains to be seen. I am sure AMD's R9 3xx is going to return the GPU market to competition. Let's see how well AMD has done in a few weeks.


----------



## Kane2207

Quote:


> Spoiler: Warning: Spoiler!
> 
> 
> 
> And what would you want him to say and think about AMD, Mr. Raghu78?
> 
> It sounds like anyone who disagrees with you, or more importantly, disagrees with AMD's actions, is entirely wrong, a shill, a fanboy, or a troll.
> 
> What I see most of the time, is, and forgive me for analogies, one of the largest celebrated football (European football) teams (AMD) now struggling to avoid relegation from the big league. It's just so hard to stay being a fan when you are let down, when your team is struggling to do the simplest of tasks... what once was easy has now become a struggle for survival. Then suddenly you sign a key player, get a new world class head coach, or at least a proven incredibly intelligent coach (Dr. Su)... hoping for a serious comeback... which as far as comebacks go, can only occur once you are down. Make no mistake about it, AMD is down right now. *The harshest critic though is usually someone who once was a beloved fan who has been let down for the last time... Damn you Newcastle United....*
> 
> Now, I don't know Alatar personally, or if he even has green underwear as claimed, however, if *you* only see life as black or white, or in this case Green or Red, you are blind to all the contributions that the other colors bring. Such as neutral gray, or light green, or light red. In fact, having an attitude quite often and *readily* found in your last 17 pages of post history would totally discourage me from offering my opinions, ideas, general thoughts, or feelings, on the subject. If your goal is that you would like to listen to basically the same yes-men who do like you do, say like you say, think like you think, and leave you having conversations that are empty/repetitive and unoriginal, then continue on your path. You will either get conversations like that or end up talking to yourself alone. Currently what you are getting now is the opinions and observations of some of the most experienced members of the enthusiast community that should not be readily dismissed as fanciful brand favoriteering.
> 
> Your aggressive posts negate AMD's reputation as a Good Guy Greg the company and you do better to engage in conversation to move light grey users and light green users closer to the middle grounds of neutrality or close to the AMDs side if that is the goal you seek. Another analogy, apologies, is having someone like Jesus Christ perform miracles, then having a group of followers rubbing it in other peoples' faces, for years, how awesome it is and how everyone else is the suckzor/going to hell. Team America, but for Jesus, if you will. They are definitely spreading a negative message with that one.
> 
> I personally want to see AMD succeed both for personal and fiscal reasons, but zealotry in a handful of users doesn't actually help them sell GPUs. It's great to have enthusiasm but go with the ebbs and flows. This message is aimed at you and any zealot, regardless, of which color is on the brand label. Criticizing the faults of one brand of hardware should be taken at face value: criticisms of one brand of hardware. Just because you are against something, does not mean you are for the other side.
> 
> Edited: Typo fairies and grammar goblins.


You're a Newcastle fan?

Wow, that's a tough gig mate, you have my deepest sympathy


----------



## magnek

Arsenal > everything

You gotta respect that 49-game unbeaten streak though, seriously impressive.


----------



## Chromatrope

I think it's pretty logical for Fiji to be marginally to significantly faster than a Titan X/980 Ti. A cut-down Tonga card with ~1700 shaders below 1GHz performs at slightly below half of a 980 Ti/Titan X, which means a full Tonga (2048 shaders) would perform at over half. Fiji, with double the shaders, clocked over 1GHz, and with efficiency improvements from the switch to GlobalFoundries' 28nm process, would then be pretty much guaranteed to beat the 980 Ti.
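The back-of-envelope estimate above can be written out explicitly. This is only a sketch of the poster's reasoning: the shader counts, the "slightly below half of a 980 Ti" baseline, and linear scaling with shader count are all assumptions from the post (plus the R9 285's actual 1792 shaders standing in for "~1700"), not measured data:

```python
# Rough linear-scaling sketch using the post's assumed numbers.
# All inputs are rumor/estimate, not benchmark data.

tonga_cut_shaders = 1792      # cut-down Tonga (R9 285), the post's "~1700"
fiji_shaders = 4096           # rumored Fiji shader count (2x full Tonga's 2048)

# Poster's baseline: cut Tonga performs "slightly below half" of a 980 Ti.
tonga_cut_vs_980ti = 0.45

# Naive assumption: performance scales linearly with shader count
# (ignores clocks, bandwidth, and geometry/ROP limits).
perf_per_shader = tonga_cut_vs_980ti / tonga_cut_shaders
fiji_vs_980ti = perf_per_shader * fiji_shaders

print(f"Projected Fiji vs 980 Ti: {fiji_vs_980ti:.2f}x")  # roughly 1.03x
```

Under these assumptions Fiji lands at roughly parity with a 980 Ti before any clock or efficiency advantage is counted; real GPU performance scales sublinearly with shader count, so the estimate is on the optimistic side.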


----------



## en9dmp

I've been following this thread since it started, as someone who's been waiting for this card for months and is very excited to see what it can do. As someone who has always water-cooled my systems, heat and power consumption have never been a consideration, so I've always tended towards AMD for their price/performance and overclockability.

As we move into the world of DX12, all these worries about 4GB not being enough should not really be valid, as 4GB right now is still JUST enough to max out AAA titles at 4K. I would expect any new game in development for release in Q3-Q4 2015 to take advantage of the key features of DX12, such as VRAM pooling and SFR, so all you need to do is drop in another card and boom, any potential memory bottleneck is gone...

I know Mantle has had this capability since the 200 series came out, but I can understand most devs not adding months to the development timeline just to improve performance for one brand. Proper DX12 development will significantly improve performance on both AMD and Nvidia cards, giving devs much more incentive to code for it. It also means people are much more likely to buy second (or third, or fourth) GPUs for their systems if they know the next generation of games will scale perfectly, with no microstutter, across all their GPUs. Hence I think both AMD and Nvidia will be encouraging devs to make full use of it, as it could trigger significantly more sales. Whether current AAA games get patched for DX12 is another question.

Personally, I'm building a new small-form-factor water-cooled HTPC for 4K gaming, so I'm planning on getting two Fury Xs (their tiny size also being quite important). But to be honest, I'm really interested in whether the AIO water cooler will let customers detach the closed-loop pump/rad and integrate the card into a pre-existing loop... (in all pictures so far they have deliberately not shown any details of the water block). The last thing I want to do is spend probably £1500 on two cards, have to remove two perfectly good water blocks, and fork out another £300 to EK just so I can use them in my existing loop...

I reckon that if they allow users to do this it would be a huge plus for them. I would expect a reasonable proportion of people looking at a flagship GPU to be enthusiasts running full water loops...

Anyway, sorry that was a huge post, and my first, so go easy on me


----------



## prjindigo

Quote:


> Originally Posted by *Ganf*
> 
> It'd tell me that AMD is out of their gourd because there is no way they'll get anything close to 980ti performance out of a rebranded 290x.


A great deal of the improvements that went into the nV 900s are also going into the R9 300s. The 28nm process improvements that enabled the 780-to-980 jump belong to the foundry, and it looks like AMD played a waiting game on nV to see what new tools came back. The 390X is NOT a re-labeled 290X.


----------



## prjindigo

Quote:


> Originally Posted by *Ganf*
> 
> You're already coming across games that you can't play at the best detail, and anybody buying a GPU between two weeks from now and 2016 when the next generation hits is going to expect their card to be relevant for a year at minimum at the resolution they play. The 290x is not going to be 4k capable by the end of this year for the majority of games people are going to want to play at 4k.


Er... read a little more. The 390X is supposed to be about 40% faster than the 290X based on improvements in the hardware design, and Win10's increase in DX11 performance is supposedly about 40% as well. If both of these hold true, then a 390X under Win10 is going to be about 196% of a 290X under Win7/8. So "faster than a 295X2 is right now" at 4K, AND the 390X will come with 8GB instead of 4...

I mean, yeah... AMD really bought the pooch and screwed the farm on the entire ****edozer series which thank gods they're leaving behind.

What *_I_* can barely wait to see is how badass the Fury will be running folding. My poor old Titan cards would push up to 1.24GHz stock when only doing calculations.

A water cooled Fury, if unlocked, will probably scream.


----------



## SpeedyVT

Quote:


> Originally Posted by *prjindigo*
> 
> er... read a little more. The 390X is supposed to be about 40% faster than the 290X based on improvements in hardware design and Win10's increases in Dx11 performance is supposedly about 40% as well. If both of these hold true then a 390X under 10 is gonna be about 196% of a 290X under win7/8. So "faster than a 295x2 is right now" on 4k AND the 390X will come in 8GB instead of 4...
> 
> I mean, yeah... AMD really bought the pooch and screwed the farm on the entire ****edozer series which thank gods they're leaving behind.
> 
> What *_I_* can barely wait to see is how badass the Fury will be running folding. My poor old Titan cards would push up to 1.24GHz stock when only doing calculations.
> 
> A water cooled Fury, if unlocked, will probably scream.


Imagine overclocking! It's already got a waterblock on it.


----------



## Forceman

Quote:


> Originally Posted by *prjindigo*
> 
> er... read a little more. The 390X is supposed to be about 40% faster than the 290X based on improvements in hardware design and Win10's increases in Dx11 performance is supposedly about 40% as well. If both of these hold true then a 390X under 10 is gonna be about 196% of a 290X under win7/8. So "faster than a 295x2 is right now" on 4k AND the 390X will come in 8GB instead of 4...


I wouldn't hold my breath waiting for a 390x that's faster than a 295X2. I wouldn't even hold it waiting for the Fury to be that fast.


----------



## shadow85

Quote:


> Originally Posted by *harney*
> 
> ....i am no fan of either red or green but like i said before i do want AMD to succeed for all are sakes


What are you a fan of then, black?


----------



## harney

Quote:


> Originally Posted by *shadow85*
> 
> What are you a fan of then, black?


Matrox, whichever colour they are

(come on Matrox, where are you?)


----------



## Blameless

Quote:


> Originally Posted by *shadow85*
> 
> What are you a fan of then, black?


Not everyone needs to be a fan.


----------



## Ganf

Quote:


> Originally Posted by *prjindigo*
> 
> er... read a little more. The 390X is supposed to be about 40% faster than the 290X based on improvements in hardware design and Win10's increases in Dx11 performance is supposedly about 40% as well. If both of these hold true then a 390X under 10 is gonna be about 196% of a 290X under win7/8. So "faster than a 295x2 is right now" on 4k AND the 390X will come in 8GB instead of 4...
> 
> I mean, yeah... AMD really bought the pooch and screwed the farm on the entire ****edozer series which thank gods they're leaving behind.
> 
> What *_I_* can barely wait to see is how badass the Fury will be running folding. My poor old Titan cards would push up to 1.24GHz stock when only doing calculations.
> 
> A water cooled Fury, if unlocked, will probably scream.


That's some incredibly optimistic thinking there, considering AMD's last line of rebrands only gained 5% over their predecessors.


----------



## epic1337

Quote:


> Originally Posted by *Ganf*
> 
> That's some incredibly optimistic thinking there, considering AMD's last line of rebrands only gained 5% over their predecessors.


you clearly haven't seen the Zen thread.


----------



## svenge

Quote:


> Originally Posted by *epic1337*
> 
> you clearly haven't seen the Zen thread.


You mean the one where Jesus Jim Keller is personally hand-placing each and every transistor on the die with atomic-scale tweezers and will have per-core scaling of 120%, right?


----------



## Ganf

Quote:


> Originally Posted by *epic1337*
> 
> you clearly haven't seen the Zen thread.


I saw where that one was going early on and slipped out the back door.


----------



## flopper

Quote:


> Originally Posted by *prjindigo*
> 
> er... read a little more. The 390X is supposed to be about 40% faster than the 290X based on improvements in hardware design and Win10's increases in Dx11 performance is supposedly about 40% as well. If both of these hold true then a 390X under 10 is gonna be about 196% of a 290X under win7/8. So "faster than a 295x2 is right now" on 4k AND the 390X will come in 8GB instead of 4...
> 
> What *_I_* can barely wait to see is how badass the Fury will be running folding. My poor old Titan cards would push up to 1.24GHz stock when only doing calculations.
> 
> A water cooled Fury, if unlocked, will probably scream.


The 390X, a 1070MHz 8GB Grenada, won't be 40% faster than the 290X it replaces.
That's 980 Ti territory; it's targeted against the 980.

Fury, however, is a different animal.


----------



## epic1337

Quote:


> Originally Posted by *svenge*
> 
> You mean the one where Jesus Jim Keller is personally hand-placing each and every transistor on the die with atomic-scale tweezers and will have per-core scaling of 120%, right?


Yes.

Quote:


> Originally Posted by *Ganf*
> 
> I saw where that one was going early on and slipped out the back door.


I see what you did there.

Quote:


> Originally Posted by *flopper*
> 
> 390x 1070mhz 8gb granada wont be 40% faster than 290x it replaces.
> Thats 980ti territory. its targeted vs the 980.
> 
> Fury however is a different animal.


A $500~$600 price tag coming up against a 980 equivalent? Not to mention it's basically a 290X rebadge? Overpriced much.

Just because it has a higher factory clock doesn't mean a +$200 price tag is justified; is that extra 4GB of VRAM THAT expensive?
I'd rather pick up a 290X 4GB for $300 and overclock it to stomp that 390X in a realistic workload, saving $200 while I'm at it.
Talk about the FX-9590 all over again.


----------



## flopper

Quote:


> Originally Posted by *epic1337*
> 
> $500~$600 price tag coming up against a 980 equivalent? not to mention its even a 290X rebadge? overpriced much.


$449/$499 is what I've seen so far.
Cheaper than a 980, for a 390X with 8GB of RAM; what's your complaint, really?
Can't afford the Fury?

The midrange is Tonga/Grenada, at superb prices and with plenty of RAM.
DX12 with AMD cards = win.


----------



## Ganf

Quote:


> Originally Posted by *flopper*
> 
> 449/499 is what I seen as far.
> cheaper than 980 for the 390x with 8gb ram what your complaint really?
> Cant afford the Fury?
> 
> The midrange is Tonga/grenada and at superb prices and ram.
> Dx12 with AMD cards=win.


My complaint is that AMD is playing catch-up, and they can't even catch up. At this rate Nvidia is going to have complete control over the pace of innovation in the market, and that means we all get screwed, just like we're getting screwed by Intel.


----------



## shadow85

Yeah, Nvidia seems to be setting the trends around here. They came up with a unique name for their highest-end product, i.e. Titan, and now AMD wants to follow suit and call theirs Fury.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Ganf*
> 
> My complaint is that AMD is playing catch-up, and they can't even catch up. At this rate nvidia is going to have complete control over the pace of innovation in the market and that means we all get screwed just like we're getting screwed by intel.


Sometimes it's Nvidia, sometimes it's AMD. If the GTX 980 Ti had come out at the same time as the GTX 980, then AMD would be in a bad spot, but even so, the GTX 970/980 has done the biggest damage to AMD that I can remember. I still don't understand why cards that don't represent a generational jump cause such a big market-share swing.


----------



## Ganf

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Sometimes it's Nvidia, sometimes it's AMD. If GTX980 Ti came out same time as GTX980 then AMD would be in bad stop but even so GTX970/980 has done the biggest damage to AMD since i can remember. I still dont understand why cards what dont have a generational jump cause such a big market share swing.


The back and forth has ended with this generation. AMD can no longer keep up. The 390X isn't going to compete with the 980, and the Fury X isn't going to be priced competitively against the 980 Ti. One card is played out without enough headroom to catch up, and the other is using new tech while just keeping pace, driving their costs through the roof.


----------



## erocker

Jumpin' the shark!


----------



## Forceman

Quote:


> Originally Posted by *Ganf*
> 
> The back and forth has ended with this generation. AMD can no longer keep up. The 390x isn't going to compete with the 980 and the Fury X isn't going to be priced competitively against the 980ti. One card is played out without enough headroom to catch up and the other one is using new tech while just keeping pace, driving their costs through the roof.


I imagine the 390X will compete pretty strongly against the 980, and the Fury will have Titan X/980 Ti-class performance. Their problem with the 390X is going to be marketing it against the 980: they can't really tout the 8GB as an advantage without undercutting whatever their marketing is on the 4GB Fury. Likewise, Fury pricing is going to be a problem. Nvidia priced the 980 Ti to kneecap them on pricing for what will most likely be a very expensive card to manufacture, and their margins are already getting crushed.

I think performance-wise AMD will be fine, but I'm not sure they can price low enough to win back any market share.


----------



## zealord

So I've read a few pages of the thread, and we can *SAFELY* assume now that the 390X is 40% faster than the 290X because of no idea, and another 40% faster on Windows 10 because of DX11 improvements? And then another 2500% according to DX12 draw-call benchmarks.

In my expert opinion I can validate that the 390X is going to be 15 times faster than the Titan X.

Looks like a no-brainer to me!


----------



## JackCY

One thing needs to be said about AMD and its GPUs and CPUs: these new-generation products should have been on the market a couple of years ago. What they are feeding the market with now is late.
CPUs are at 14nm and below, while GPUs are stuck on 28nm due to their high volumes and no one but Intel being able to produce lower nodes in high volume.

If these 3xx cards are yet more rebrands and tweaks of an older platform, they should have been on the market already.
Clearly it can be done to release new designs much faster, if only AMD didn't always screw it up somewhere and negate whatever edge they gained, like having too much stock or bad investments and finances getting sunk elsewhere.


----------



## zealord

Quote:


> Originally Posted by *JackCY*
> 
> One thing needs to be said about AMD and it's GPUs and CPUs, new generation products should have been on the market a couple years ago. It's late for what they feed the market with.
> CPUs are 14nm and lower, GPUs stuck on 28nm due to their high volumes and no one but Intel being able to produce high volume lower nodes.
> 
> If these 3xx are yet another rebrands and tweaks of older platform, they should have been on market already.
> Clearly it can be done to release new designs much faster. If only AMD didn't always screw it up somewhere and negate what ever edge they gained, like having too much stock or bad investment and finances getting sunk elsewhere.


I agree. AMD was pretty fast with the 7970 in Dec 2011/Jan 2012 on 28nm, but then the 290X came pretty late, and now the 300 series and Fury cards are a couple of months too late as well.


----------



## harney

Personally I do not care for the 300 bunch... the only flavours I would like to try are of the HBM Fury variety. Does anybody know how many versions of the Fury there are going to be? Looking around, it seems there may be 2, possibly 3, with HBM. You never know, AMD may surprise us all... with one on par with and at the same price as the 980 Ti, then one Fury at Titan price but 10-20% faster...


----------



## harney

Quote:


> Originally Posted by *JackCY*
> 
> One thing needs to be said about AMD and its GPUs and CPUs: these new-generation products should have been on the market a couple of years ago. What they're feeding the market with is late.
> CPUs are at 14nm and below, while GPUs are stuck on 28nm due to their high volumes, with no one but Intel able to produce lower nodes at high volume.
> 
> If these 3xx cards are yet more rebrands and tweaks of an older platform, they should have been on the market already.
> Clearly new designs can be released much faster, if only AMD didn't always screw it up somewhere and negate whatever edge they gained, like having too much stock, or bad investments and finances getting sunk elsewhere.


To be honest, I think the whole business of getting rid of old stock is the main reason for AMD's delay.


----------



## pengs

Quote:


> Originally Posted by *Ganf*
> 
> One card is played out without enough headroom to catch up and the other one is *using new tech while just keeping pace, driving their costs through the roof.*


You really believe that?

That's a "_premium_" tax - premium here in the sense of a general consensus. When you've got a larger market share and brand recognition, you're able to charge whatever you want. There are people who would continue to buy Pepsi tomorrow if it were priced twice over because... Pepsi.

And I don't believe AMD's architecture is tapped out; I think it's got more to do with the fact that it was aimed at, and revolves around, a sub-300W TDP with power delivery to match.


----------



## Blameless

Quote:


> Originally Posted by *zealord*
> 
> So I've read a few pages of the thread and we can *SAVELY* assume


I think we can safely assume the 390X will be at least 7% faster than the 290X...

I think there is a pretty good chance of the 390X matching the GTX 980 (GM204).

I think there is essentially zero chance of it being anywhere near 40% faster than a 290X, unless you are running out of VRAM on the latter.


----------



## epic1337

Quote:


> Originally Posted by *flopper*
> 
> 449/499 is what I seen as far.
> cheaper than 980 for the 390x with 8gb ram what your complaint really?
> Cant afford the Fury?
> 
> The midrange is Tonga/grenada and at superb prices and ram.
> Dx12 with AMD cards=win.


Not with aftermarket coolers - have you seen "overclock edition!" or "golden edition!" cards priced the same as reference cards?
My complaint? The 290X can be had for less than $300, and +4GB does not equate to +$200.

What's the point of buying Fury?

And lol, DX12 requires less VRAM in general, especially in Crossfire/SLI.


----------



## pengs

Quote:


> Originally Posted by *epic1337*
> 
> The 290X can be had for less than $300, and +4GB does not equate to +$200.


So a card which could match or slightly outperform a $550 GTX 980 while having twice the VRAM doesn't deserve to be priced at $450? The 4GB 290X can already be within 0 to 5% of the 980 at almost half the price; the core of the pricing problem comes from the trend-setter.
The 980 will most likely have to drop in price soon anyhow if the 390X competes, especially given that the 980 Ti is somewhat cannibalizing the 980.


----------



## BinaryDemon

Quote:


> Originally Posted by *zealord*
> 
> And then another 2500% according to DX12 draw-call benchmarks.


You missed an opportunity to utilize the "OVER 9000" meme.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *pengs*
> 
> So a card which could match or slightly outperform a $550 GTX 980 while having twice the VRAM doesn't deserve to be priced at $450? The 4GB 290X can already be within 0 to 5% of the 980 at almost half the price; the core of the pricing problem comes from the trend-setter.
> The 980 will most likely have to drop in price soon anyhow if the 390X competes, especially given that the 980 Ti is somewhat cannibalizing the 980.


Good point. People are too hung up on the hardware and aren't paying any attention to the relative performance. Who really cares if the 390X really is just a freshened 8GB Hawaii as long as it outperforms the competition? Why is AMD not allowed to charge more than $400 for a card that definitively beats the 980 (assuming that it does)?


----------



## rt123

Quote:


> Originally Posted by *Blameless*
> 
> I think we can safely assume the 390X will be at least 7% faster than the 290X...
> 
> I think there is a pretty good chance of the 390X matching the GTX 980 (GM204).
> 
> I think there is essentially zero chance of it being anywhere near 40% faster than a 290X, unless you are running out of VRAM on the latter.


Pretty sure he meant to say Fury. This Fury & 390X thing is confusing.
Personally, I think Fury will be the 390X.


----------



## SoloCamo

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Good point. People are too hung up on the hardware and aren't paying any attention to the relative performance. Who really cares if the 390X really is just a freshened 8GB Hawaii as long as it outperforms the competition? Why is AMD not allowed to charge more than $400 for a card that definitively beats the 980 (assuming that it does)?


The 390X with the proposed specs would be about equal to a 980 at 4K, assuming a non-Nvidia-biased game. Lower than that and it will still be the 980's win. The other issue is that Hawaii isn't the greatest OC'er. So let's say they launch at 1050/1500: you aren't going to have tremendous headroom unless they've been binning like crazy for a while at this point. Meanwhile the 980 still has a good deal of OC headroom left. The only things they can win with here are 8GB and price point - it won't win in power consumption or at 1080p, again, assuming the specs are true.

Knowing this, the only reason the 980 is still $500 is because NV knows this already and will drop the 980 once enough have sold at this price point, at which point they can cash in on taking any 390X sales. Then just drop the 970's pricing slightly too and bam, a lock on the market, unfortunately, as always. Then, assuming Fury competes with the 980 Ti, they can drop that down to $600 or so, etc. Nvidia has plenty of headroom on pricing, something AMD unfortunately cannot do with their margins.


----------



## iamhollywood5

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Good point. People are too hung up on the hardware and aren't paying any attention to the relative performance. Who really cares if the 390X really is just a freshened 8GB Hawaii as long as it outperforms the competition? Why is AMD not allowed to charge more than $400 for a card that definitively beats the 980 (assuming that it does)?


Lol what? The 290X simply does not compete with the 980. Not even close. What world do you live in? The 980 is an entire class above the 290X. The only thing a 290X is going to beat is maybe a 970, and it would hardly be definitive, probably losing in certain games.

And you ask who cares that the 390X is just a 290X rebrand with 8GB? The people who find the heat and noise production unacceptable.


----------



## epic1337

Quote:


> Originally Posted by *pengs*
> 
> So a card which could match or slightly outperform a $550 GTX 980 while having twice the VRAM doesn't deserve to be priced at $450? The 4GB 290X can already be within 0 to 5% of the 980 at almost half the price; the core of the pricing problem comes from the trend-setter.
> The 980 will most likely have to drop in price soon anyhow if the 390X competes, especially given that the 980 Ti is somewhat cannibalizing the 980.


If the 980 were to drop in price, that's all the more reason why +4GB isn't worth +$200.
The point is, if the 290X was already on par with the 980, why the hell isn't it priced at $450, hmm?

Not only that, but 4GB of VRAM is plenty even at 4K, so long as you don't push high AA.
And as a matter of fact, at such settings you wouldn't even be able to sustain above 30FPS with a 290X - all the more reason 8GB of VRAM is a waste.
To top it off, DX12 can share VRAM across multiple GPUs, making more VRAM not just redundant but an utter waste of money.

So put some logic in your thoughts: why would you pay +$200 for an unnecessary +4GB?


----------



## Ganf

Quote:


> Originally Posted by *pengs*
> 
> You really believe that?
> 
> That's a "_premium_" tax - premium here in the sense of a general consensus. When you've got a larger market share and brand recognition, you're able to charge whatever you want. There are people who would continue to buy Pepsi tomorrow if it were priced twice over because... Pepsi.
> 
> And I don't believe AMD's architecture is tapped out; I think it's got more to do with the fact that it was aimed at, and revolves around, a sub-300W TDP with power delivery to match.


It's going to be a premium tax until AMD realizes no one is buying their cards, and then that tax is going to evaporate like dew under the noonday sun, and we'll be left with a card that still isn't priced to compete with the 980ti.


----------



## SoloCamo

Quote:


> Originally Posted by *iamhollywood5*
> 
> Lol what? The 290X simply does not compete with the 980. Not even close. What world do you live in? The 980 is an entire class above the 290X. The only thing a 290X is going to beat is maybe a 970, and it would hardly be definitive, probably losing in certain games.
> 
> And you ask who cares that the 390X is just a 290X rebrand with 8GB? The people who find the heat and noise production unacceptable.


A) Yes, the 290x does compete with the 980 just fine at high res

B) Heat and noise are not an issue with the aftermarket coolers


----------



## Nvidia Fanboy

Quote:


> Originally Posted by *iamhollywood5*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Majin SSJ Eric*
> 
> Good point. People are too hung up on the hardware and aren't paying any attention to the relative performance. Who really cares if the 390X really is just a freshened 8GB Hawaii as long as it outperforms the competition? Why is AMD not allowed to charge more than $400 for a card that definitively beats the 980 (assuming that it does)?
> 
> 
> 
> Lol what? The 290X simply does not compete with the 980. Not even close. What world do you live in? The 980 is an entire class above the 290X. The only thing a 290X is going to beat is maybe a 970, and it would hardly be definitive, probably losing in certain games.
> 
> And you ask who cares that the 390X is just a 290X rebrand with 8GB? The people who find the heat and noise production unacceptable.

The 290X and 970 are pretty much neck and neck, with the 290X winning at higher resolutions. Last I checked, the 980 is barely 10 to 15% faster than the 970. In what world is a 15% improvement a different class? You fanboys are sad.


----------



## JackCY

Quote:


> Originally Posted by *harney*
> 
> To be honest, I think the whole business of getting rid of old stock is the main reason for AMD's delay.


Still?








It was an issue over half a year ago, after litecoin (and bitcoin) crashed and it became useless for miners to buy in the high volumes they used to. It was so bad it was raising the prices of the 280X and cards of similar consumption/performance, which were out of stock and more expensive than similar offerings from NV at the time.
Supply chain management isn't exactly rocket science.

I've had AMD CPUs, and I have a second-hand AMD GPU from when people started to flood the second-hand market with them after litecoin was gone. It's hard to find any NV cards second hand, but AMD cards were, and probably still are, easy to grab.

NV has already rolled out 3 new platforms on Maxwell.
Intel is rolling out Broadwell - late as hell and crap, but hey, at least Skylake is right behind.

AMD? 83x0? Still? GPUs based on the 7970, replaced by the 285, a weaker version than the 280X was; the 290 was new, but only for so long.


----------



## pengs

Quote:


> Originally Posted by *epic1337*
> 
> If the 980 were to drop in price, that's all the more reason why +4GB isn't worth +$200.
> The point is, if the 290X was already on par with the 980, why the hell isn't it priced at $450, hmm?


You need to take a look at some of the 980 Ti benchmarks and check where the performance lies on the current Catalyst 15.5 drivers. If the 290X is taken as a baseline for price/performance, then the 980 ends up looking like a complete rip-off, even given the overclockability. Price doesn't equal performance if both cards are in the same ballpark.
Quote:


> So put some logic in your thoughts: why would you pay +$200 for an unnecessary +4GB?


No doubt you would, but what I'm asking is this: if the baseline for said 980-equivalent performance is tagged at $550, and the 390X (assuming it matches or outperforms the 980) comes in at $450 instead of $350, whose fault is it? It's the price tag on the 980 which is forcing prices up, with AMD being quite generous to begin with. Why should they take such a potential loss when the product competes at the same level while adding 4GB of memory?


----------



## criminal

Quote:


> Originally Posted by *iamhollywood5*
> 
> Lol what? The 290X simply does not compete with the 980. Not even close. What world do you live in? *The 980 is an entire class above the 290X*. The only thing a 290X is going to beat is maybe a 970, and it would hardly be definitive, probably losing in certain games.
> 
> And you ask who cares that the 390X is just a 290X rebrand with 8GB? The people who find the heat and noise production unacceptable.


No it isn't.


----------



## KarathKasun

People obviously don't remember the days of the GF 8800 and HD 2900. The next step down from the GF 8800/HD 2900 in CLASS would have been the HD 2600 and GF 8600 series cards. They had ~1/3 of the hardware units.

8800 GTS/GTX/Ultra and HD 2900 were all in the same performance class.

970/980 and 290/290x are in the same class.


----------



## epic1337

Quote:


> Originally Posted by *pengs*
> 
> You need to take a look at some of the 980 Ti benchmarks and check where the performance lies on the current Catalyst 15.5 drivers. If the 290X is taken as a baseline for price/performance, then the 980 ends up looking like a complete rip-off, even given the overclockability. Price doesn't equal performance if both cards are in the same ballpark.
> No doubt you would, but what I'm asking is this: if the baseline for said 980-equivalent performance is tagged at $550, and the 390X (assuming it matches or outperforms the 980) comes in at $450 instead of $350, whose fault is it? It's the price tag on the 980 which is forcing prices up, with AMD being quite generous to begin with. Why should they take such a potential loss when the product competes at the same level while adding 4GB of memory?


That's why the 290X is a better deal than a 390X at $450 - if you price a rebadge $200 higher, then it's literally a rip-off, even with that +4GB in mind.
The 290X's current price at least, compared to the 980 being on the overly expensive side, is very much the ideal price point.
Put it this way: who in their right mind would buy a 390X over a 290X?

And also think of it from a perf/$ point of view: how far behind the 290X do you think the 980 is?
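The perf/$ framing above can be made concrete with a quick sketch. All prices and relative-performance numbers below are purely illustrative assumptions picked to match the thread's rough claims (290X near $300, 980 at $550, rumored $450 390X at roughly 980-level performance), not benchmark results:

```python
# Toy performance-per-dollar comparison; figures are assumptions
# for illustration only, not measured data.

def perf_per_dollar(relative_perf: float, price: float) -> float:
    """Relative performance delivered per dollar spent."""
    return relative_perf / price

# Assumed street prices (USD) and performance relative to a 290X baseline.
cards = {
    "290X 4GB (street)": (1.00, 300.0),
    "GTX 980":           (1.05, 550.0),
    "390X 8GB (rumor)":  (1.05, 450.0),
}

for name, (perf, price) in cards.items():
    print(f"{name}: {perf_per_dollar(perf, price) * 100:.3f} perf per $100")
```

Under these assumed numbers the cheap 290X wins perf/$ easily, which is exactly the argument being made: a rebadge priced $150 higher has to bring more than +4GB to justify itself.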


----------



## Astral Fly

I'm in the market for a new GPU within the next couple of months, and the only way I'll consider AMD is if they bring the power consumption down considerably. The 390X may give the same level of performance as the 980 at a lower price, but if it uses almost 100W more I'm simply not interested.


----------



## Nvidia Fanboy

Quote:


> Originally Posted by *KarathKasun*
> 
> People obviously don't remember the days of the GF 8800 and HD 2900. The next step down from the GF 8800/HD 2900 in CLASS would have been the HD 2600 and GF 8600 series cards. They had ~1/3 of the hardware units.
> 
> 8800 GTS/GTX/Ultra and HD 2900 were all in the same performance class.
> 
> 970/980 and 290/290x are in the same class.


Agreed. Arguing that a 15% performance difference is a different class is the dumbest thing I've ever heard. The 8800 GTX was a good 75% to 90% faster than the previous generation's single-card champ, the X1950 XTX. Now THAT is a different class.


----------



## epic1337

Quote:


> Originally Posted by *Astral Fly*
> 
> I'm in the market for a new GPU within the next couple of months, and the only way I'll consider AMD is if they bring the power consumption down considerably. The 390X may give the same level of performance as the 980 at a lower price, but if it uses almost 100W more I'm simply not interested.


It's a 290X rebadge, so it won't be only 100W more than the 980.


----------



## rt123

Quote:


> Originally Posted by *epic1337*
> 
> It's a 290X rebadge, so it won't be only 100W more than the 980.


Yes because you have the card in hand & know exactly how it will perform.
They could have made some changes.


----------



## magnek

Quote:


> Originally Posted by *epic1337*
> 
> That's why the 290X is a better deal than a 390X at $450 - if you price a rebadge $200 higher, then it's literally a rip-off, even with that +4GB in mind.
> The 290X's current price at least, compared to the 980 being on the overly expensive side, is very much the ideal price point.
> Put it this way: who in their right mind would buy a 390X over a 290X?
> 
> And also think of it from a perf/$ point of view: how far behind the 290X do you think the 980 is?
> -snip-


You're acting as if $450 is the confirmed price. Also, you should probably compare like with like: the cheapest 290X 8GB right now is $360, so $450 would be a premium of $90, not $200.

Now keep in mind the 980 is supposed to get a price cut to $500, so if the 390X doesn't at least match the 980 in most games, then even with 4GB of extra VRAM the $450 price will be very unappealing. So either the rumored price is off (most likely), or the 390X will bring more improvements than just increased speeds and more VRAM.


----------



## epic1337

Quote:


> Originally Posted by *magnek*
> 
> You're acting as if $450 is the confirmed price. Also you should probably compare like with like, the cheapest 290X 8GB right now is $360, so $450 would be a premium of $90 not $200.
> Now keep in mind the 980 is supposed to have a price cut to $500, so if the 390X doesn't at least match the 980 in most games, even with 4GB extra vram the $450 price will be very unappealing. So either the rumored price is off (most likely), or 390X will bring more improvements than just increased speeds and more vram.


Ah, I was comparing it to the 4GB version, since I've long viewed the 8GB version as unnecessarily expensive.
As to why, it's fairly simple: it takes 4K at more than 2xAA to push VRAM consumption over 4GB in even the most demanding games, and at such settings a single 290X couldn't cope anyway, so what's the point?

Most likely it's simply overclocked, which on its own is just similar to what they did with the FX-9590.


----------



## Blameless

Quote:


> Originally Posted by *SoloCamo*
> 
> So let's say they launch at 1050/1500: you aren't going to have tremendous headroom unless they've been binning like crazy for a while at this point.


Hawaii is quite temperature sensitive, and it wouldn't take too much better silicon than we see with earlier samples to give them a fair bit of headroom when combined with a solid cooler. Almost certainly not as much headroom as a 980, but OC headroom rarely drives sales, and I'm not personally considering a 390X as I have a 290X now.
Quote:


> Originally Posted by *iamhollywood5*
> 
> Lol what? The 290X simply does not compete with the 980. Not even close. What world do you live in? The 980 is an entire class above the 290X. The only thing a 290X is going to beat is maybe a 970, and it would hardly be definitive, probably losing in certain games.


The 980 isn't an entire class above the 970. A Hawaii-based part with a modest clock speed improvement, solid coolers (so they don't throttle), and revised drivers (which are looking promising) could well be on par with a 980, overall.
Quote:


> Originally Posted by *epic1337*
> 
> That's why the 290X is a better deal than a 390X at $450


The 290X won't be around very long after the 390X shows up, and if the 390X isn't competitive, it will see a price drop to make it so.


----------



## gamervivek

Quote:


> Originally Posted by *iamhollywood5*
> 
> Lol what? The 290X simply does not compete with the 980. Not even close. What world do you live in? The 980 is an entire class above the 290X. The only thing a 290X is going to beat is maybe a 970, and it would hardly be definitive, probably losing in certain games.
> 
> And you ask who cares that the 390X is just a 290X rebrand with 8GB? The people who find the heat and noise production unacceptable.


TIL that a single-digit percentage difference is an entire class difference. AMD needs more games like Hitman Absolution, and reviewers like TechSpot, to put delusions like those in their proper place.

http://www.techspot.com/articles-info/977/bench/Hitman.png


----------



## peateargryphon

Obviously a rumor site, but: http://wccftech.com/amd-fury-series-lineup-flavors-fury-nano-xt-pro/

It now sounds like there's gonna be a Fury Nano, Fury XT, and Fury Pro. They are still showing the old pricing, but it's getting super close to the 16th now - although that still seems like a painfully long time for someone who's been waiting to see how all the new cards compare...


----------



## Ganf

Quote:


> Originally Posted by *peateargryphon*
> 
> Obviously a rumor site, but: http://wccftech.com/amd-fury-series-lineup-flavors-fury-nano-xt-pro/
> 
> It now sounds like there's gonna be Fury Nano, Fury XT, and Fury Pro. They are still showing the old pricing but it's getting super close to the 16th now--although that still seems like a painfully long time from now for someone who's been waiting to see how all the new cards compare...


The closer we get, the farther away it seems. I've been waiting anxiously ever since I had to put my build on hold in the middle of the 3.5gb drama.


----------



## Forceman

Quote:


> Originally Posted by *peateargryphon*
> 
> Obviously a rumor site, but: http://wccftech.com/amd-fury-series-lineup-flavors-fury-nano-xt-pro/
> 
> It now sounds like there's gonna be Fury Nano, Fury XT, and Fury Pro. They are still showing the old pricing but it's getting super close to the 16th now--although that still seems like a painfully long time from now for someone who's been waiting to see how all the new cards compare...


Really? "Trades blows" with the 980 Ti and $899. Doubtful. One or the other maybe, but not both.


----------



## Majin SSJ Eric

I think it will definitely be faster than the 980Ti. If not, AMD has wasted all our time.


----------



## peateargryphon

Yeah, and as the site mentioned directly, it seems AMD is rethinking that pricing model. Presumably the leaked pricing was from before they knew how close to Titan X performance the 980 Ti would be?? It's all speculation though - not much merit to the details. I'm trying to avoid all these rumors, but I can't help myself...

My guess from reading all these rumors is that the flagship card releasing now will be within +/- 10% of Titan X performance, and that it will be marginally cheaper than the 980 Ti, since it likely uses 10-20% more power than the 980 Ti/Titan X. AMD will not want to play the drastically-undercut-on-price game, but they don't have the market share or marketing power to sell at price parity with Nvidia, so while it will be cheaper, I doubt it will be radically so.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think it will definitely be faster than the 980Ti. If not, AMD has wasted all our time.


Unless they miraculously price it at $550 again. I think that's the highest they could go if they can't beat a Ti. Tbh though, I just can't see their card being good price/performance this time around. The 980 Ti is just too good.


----------



## hollowtek

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Unless they miraculously price it at $550 again. I think that's the highest they could go if they can't beat a Ti. Tbh though, I just can't see their card being good price/performance this time around. The 980 Ti is just too good.


I'm pretty sure they'll price it competitively with the Ti. That is, $650 for the 980 Ti, $620-630 for Fury.


----------



## xxdarkreap3rxx

I can't get over the fact that it has HBM (+ R&D costs) *and* an AIO. Nvidia is pricing so aggressively at the low-ish end with the 970, and now at the high end with the 980 Ti.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think it will definitely be faster than the 980Ti. If not, AMD has wasted all our time.


The 290X was faster than the GTX Titan and the $650 GTX 780, and came in at $550. Fiji has to be like 20% faster to be sold at over $800.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> The 290X won't be around very long after the 390X shows up, and if the 390X isn't competitive, it will see a price drop to make it so.


290X production will stop, but there's an inventory overstock at the moment - it's also the reason the 290X is so cheap - so I doubt it'd disappear even after it's discontinued.
The 390X won't ever be competitive if they keep chasing the "same performance, same price or slightly cheaper" target; even with the 290X's cheap price, hardly anyone buys it.


----------



## harney

As I thought... it's good to hear the Fury comes in 3 flavours









At a guess it should be as follows: one below the 980 Ti costing less, one on par at the same cost as the Ti, and one to outperform the Ti/TX, priced the same as the Titan...


----------



## michaelius

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think it will definitely be faster than the 980Ti. If not, AMD has wasted all our time.


Wouldn't be first time.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *ZealotKi11er*
> 
> The 290X was faster than the GTX Titan and the $650 GTX 780, and came in at $550. Fiji has to be like 20% faster to be sold at over $800.



Yeah I remember the Anandtech review:

"Consequently against NVIDIA's pricing structure the 290X is by every definition a steal at $549. Even if it were merely equal to the GTX 780 it would still be $100 cheaper, but instead it's both [10%] faster and cheaper, something that has proven time and again to be a winning combination in this industry."

Source: http://www.anandtech.com/show/7457/the-radeon-r9-290x-review/20

Another repeat would be mind blowing


----------



## xxdarkreap3rxx

Nano $750
XT $650
Pro $500


----------



## Apexii22

I love how just a couple of months ago everyone was saying the 390X was going to be faster than a Titan X at half the price. Now we're worried it will be slower than the 980 Ti, haha.


----------



## shadow85

Yep, it's gonna feel like torture for another 9 days.

Just wish it would hurry up already; I'm tired of waiting to buy a new GPU. Well, it's not like I have much of a choice anyway - the 980 Ti Hybrid, or any other models for that matter, won't be in stock for Australians for at least 1-2 months, I fear.


----------



## harney

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Nano $750
> XT $650
> Pro $500


I think it will be

XT $799
Nano $699
Pro $549

So let me work out the UK gouging Prices

XT £699
Nano £599
Pro £449


----------



## en9dmp

Quote:


> Originally Posted by *Apexii22*
> 
> I love how just a couple of months ago everyone was saying the 390X was going to be faster than a Titan X at half the price. Now we're worried it will be slower than the 980 Ti, haha.


Wow, everyone was saying that a couple of months ago, were they? Nobody with any credibility has ever said that.

Does it not seem slightly strange to anyone that Nvidia released the 980 Ti out of the blue with virtually the same performance as the Titan X at about two thirds the price? Why would they do that if they weren't worried about Fury's performance? They easily could have sold it for more if they knew it was capable of outperforming Fury...


----------



## Alatar

I don't understand why people say that the 980Ti was released out of the blue and at an unexpected price.

The 980Ti is exactly where everyone should have expected it to be and priced exactly like everyone should have expected it to be. It even came out at the most obvious and expected time.

It's almost like people forget the previous generation after the new one starts...


----------



## Woundingchaney

Quote:


> Originally Posted by *Alatar*
> 
> I don't understand why people say that the 980Ti was released out of the blue and at an unexpected price.
> 
> The 980Ti is exactly where everyone should have expected it to be and priced exactly like everyone should have expected it to be. It even came out at the most obvious and expected time.
> 
> It's almost like people forget the previous generation after the new one starts...


You do realize that the 980ti launched at a lower price point than the 780ti, which would be the generation you are referring to.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Alatar*
> 
> I don't understand why people say that the 980Ti was released out of the blue and at an unexpected price.
> 
> The 980Ti is exactly where everyone should have expected it to be and priced exactly like everyone should have expected it to be. It even came out at the most obvious and expected time.
> 
> It's almost like people forget the previous generation after the new one starts...


No, it's almost like there were all kinds of rumors that said it would be coming out at the end of summer. Kind of like the rumors that Fiji will be slower than the 980Ti...


----------



## harney

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> No, it's almost like there were all kinds of rumors that said it would be coming out at the end of summer. Kind of like the rumors that Fiji will be slower than the 980Ti...


For sure it was an early release... you can tell from the stock being low. Usually distributors and shops have good stock, but not this time; very few, it seems...


----------



## michaelius

Quote:


> Originally Posted by *Woundingchaney*
> 
> You do realize that the 980ti launched at a lower price point than the 780ti, which would be the generation you are referring to.


The 980 Ti is the equivalent of the 780 release, since it's the cut-down chip, and that launched at $650.


----------



## jmcosta

Yeah, I remember paying around 630€ for the 780 and then 680€ for the 780 Ti (these cards are still overpriced though; I mean, any top-end card costs way too much to run unoptimized games).
I haven't been following Fiji - is there any bench yet?


----------



## Woundingchaney

Quote:


> Originally Posted by *michaelius*
> 
> 980ti is equivalent of 780 release since it cut down chip and that launched at 650$


You still have to look at the time frame of release, and while cut down, the 980 Ti has other advantages over the 780 Ti: more memory and a larger bus, advantages the 780 Ti didn't have over the 780.


----------



## Ganf

Quote:


> Originally Posted by *Alatar*
> 
> I don't understand why people say that the 980Ti was released out of the blue and at an unexpected price.
> 
> The 980Ti is exactly where everyone should have expected it to be and priced exactly like everyone should have expected it to be. It even came out at the most obvious and expected time.
> 
> It's almost like people forget the previous generation after the new one starts...


Sweclockers started that rumor that it wouldn't drop until after the summer, and just about every other tech news site picked it up and ran with it.


----------



## michaelius

Quote:


> Originally Posted by *Woundingchaney*
> 
> You still have to look at the time frame of release and while cut the 980ti has other advantages over the 780ti. More memory, larger bus that the 780ti didn't have over the 780.


I think you are concentrating too much on the naming scheme, when the name is merely a marketing tool.

Previous gen release schedule for big Kepler:

Titan, 1 block disabled, 6 GB of VRAM
780, 3 blocks disabled, 3 GB of VRAM, 3 months later, $650
780 Ti, full core, 3 GB, 6 months after the 780, $700
Titan Black, full core, 6 GB ...


----------



## Blameless

Quote:


> Originally Posted by *Alatar*
> 
> I don't understand why people say that the 980Ti was released out of the blue and at an unexpected price.
> 
> The 980Ti is exactly where everyone should have expected it to be and priced exactly like everyone should have expected it to be. It even came out at the most obvious and expected time.


Completely agree.

There was exactly nothing surprising about the 980Ti release.
Quote:


> Originally Posted by *Woundingchaney*
> 
> You do realize that the 980ti launched at a lower price point than the 780ti, which would be the generation you are referring to.


The 980Ti is the equivalent to the vanilla 780, not the 780Ti.
Quote:


> Originally Posted by *Woundingchaney*
> 
> You still have to look at the time frame of release and while cut the 980ti has other advantages over the 780ti. More memory, larger bus that the 780ti didn't have over the 780.


Irrelevant.

The 780 and 980Ti filled the same segment by being slightly cut down cards released slightly after the Titans that preceded them.


----------



## Woundingchaney

Quote:


> Originally Posted by *Blameless*
> 
> Completely agree.
> 
> There was exactly nothing surprising about the 980Ti release.
> The 980Ti is the equivalent to the vanilla 780, not the 780Ti.
> Irrelevant.
> 
> The 780 and 980Ti filled the same segment by being slightly cut down cards released slightly after the Titans that preceded them.


It's not irrelevant; the 980 Ti offers various advantages as an upgrade over the 780 Ti even while coming in at a lower price. Also, IIRC, the performance leap is higher with the 980 Ti, or perhaps nearly the same. 50% more memory and bus width are major advantages that the 780 Ti didn't have over the 780.

Bottom line is that performance-wise the 980 Ti mimicked the 780 Ti but came in at a lower price point. It's simply the price point that people weren't expecting; almost everyone was expecting $700-750.


----------



## Blameless

Quote:


> Originally Posted by *Woundingchaney*
> 
> Its not irrelevant the 980ti offer various advantages as far as upgrades from card to card over the 780ti even with coming in at a lower price. Also iirc the performance leap is higher with the 980ti or perhaps nearly the same. 50% more memory and bus width are major advantages that the 780ti didn't have over the 780.


Why are you talking about the 780Ti at all? It has nothing to do with anything.

There was a Kepler Titan for $1k; NVIDIA needed a card in the 600-700 segment, thus the 780.

There was a GM200 Titan X for $1k, NVIDIA needed a card in the 600-700 segment, thus the 980Ti. The only unknown about the 980Ti on the day the Titan X was released was exactly what they would disable to make the 600-700 part and what they would name it. Release date was predictable, rough performance was predictable, price was predictable.


----------



## Woundingchaney

Quote:


> Originally Posted by *Woundingchaney*
> 
> Its not irrelevant the 980ti offer various advantages as far as upgrades from card to card over the 780ti even with coming in at a lower price. Also iirc the performance leap is higher with the 980ti or perhaps nearly the same. 50% more memory and bus width are major advantages that the 780ti didn't have over the 780.


Quote:


> Originally Posted by *Blameless*
> 
> Why are you talking about the 780Ti at all? It has nothing to do with anything.
> 
> There was a Kepler Titan for $1k, NVIDIA needed a card in the 600-700 segment, thus the 780.
> 
> There was a GM200 Titan X for $1k, NVIDIA needed a card in the 600-700 segment, thus the 980Ti. The only unknown about the 980Ti on the day the Titan X was released was exactly what they would disable to make the 600-700 part and what they would name it. Release date was predictable, rough performance was predictable, price was predictable.


My posts directly link to the initial statement that was under discussion. Perhaps you shouldn't jump into a conversation before being aware of its origin.

Price was lower than expected. Performance was roughly right at, or a bit above, what was expected. The release date was a bit sooner than expected. The time frames are similar to the 700 series, but that was the first time Nvidia introduced this naming scheme and pricing structure; there is not much of an expectation with only one generation gone by.

That is why so many people felt as if the 980ti came out of nowhere. Which is the statement that started the posts.


----------



## epic1337

Quote:


> Originally Posted by *Woundingchaney*
> 
> Bottom line is that performance wise the 980ti mimicked the 780ti but came in at a lower price point. Its simply the price point that people weren't expecting. Most everyone was expecting 700-750 usd.


In actual practice, a newer generation has to be more cost-efficient than the previous generation.
So thinking "a 980 Ti performing identically to the 780 Ti must be priced identically too!" is hilarious.


----------



## youra6

Quote:


> Originally Posted by *Woundingchaney*
> 
> Its not irrelevant the 980ti offer various advantages as far as upgrades from card to card over the 780ti even with coming in at a lower price. Also iirc the performance leap is higher with the 980ti or perhaps nearly the same. 50% more memory and bus width are major advantages that the 780ti didn't have over the 780.
> 
> *Bottom line is that performance wise the 980ti mimicked the 780ti but came in at a lower price point. Its simply the price point that people weren't expecting. Most everyone was expecting 700-750 usd.*


980ti mimicked the 780 more in my opinion.

Quote:


> Originally Posted by *Blameless*
> 
> Why are you talking about the 780Ti at all? It has nothing to do with anything.
> 
> There was a Kepler Titan for $1k, NVIDIA needed a card in the 600-700 segment, thus the 780.
> 
> There was a GM200 Titan X for $1k, NVIDIA needed a card in the 600-700 segment, thus the 980Ti. The only unknown about the 980Ti on the day the Titan X was released was exactly what they would disable to make the 600-700 part and what they would name it. Release date was predictable, rough performance was predictable, price was predictable.


The price came as a shock to me, as I had figured anything in the $700-800 range would have been possible.


----------



## Woundingchaney

Quote:


> Originally Posted by *epic1337*
> 
> in actual practice, newer generation has to be more cost efficient than the previous generation.
> so thinking "980Ti performing identical to 780Ti must be priced identical too!" is hilarious.


I suppose I don't understand what you are saying here. The performance gap from the 780 to the 780 Ti is roughly equivalent to that from the 980 to the 980 Ti.

There is nothing to suggest cost efficiency is a set-in-stone metric. Much of this depends on market conditions at the time of release; generational differences play a large role in cost efficiencies, and there are also a plethora of outlying factors to consider.

Quote:


> 980ti mimicked the 780 more in my opinion


I can see how you feel that way. I think they made the right decision by offering a cut card with improvements to the bus and VRAM capacity, rather than a full card with the same memory scheme.


----------



## Blameless

Quote:


> Originally Posted by *Woundingchaney*
> 
> Price was lower than expected. Performance was roughly right at or a bit above what was expect. Release date was a bit sooner than expected.


Fifteen years of precedent states otherwise.

Since the GeForce 2 series I have been waiting 3-4 months after a flagship release to buy the slightly slower, significantly cheaper, part that came shortly after.

The price differential between top-of-the-line parts and the ones immediately below them has been nearly constant this entire time. The 980 Ti is no exception.

When a rumored $800 price point was posted, I was surprised by that, because I was expecting the same $650 the GTX 780 cost, as that part was the prior generation's direct equivalent of the 980 Ti.
Quote:


> Originally Posted by *Woundingchaney*
> 
> The performance gap from the 780 to the 780ti is roughly equivalent from the 980 to the 980ti.


The 780 Ti is closer in performance to the 780 than the 980 Ti is to the 980. Not that the 780 Ti or the 980 is relevant to this comparison.

No 780Ti equivalent has been released for the current generation of cards, and may well never be (if it is, it will likely be a faster clocked Titan X with 6GiB of memory, which would only be needed as a stopgap to counter Fury, if it's fast enough, until Pascal can be pushed out), while the 980Ti isn't the same chip as the 980.


----------



## carlhil2

The OG Titan is to the 780ti what the 980ti is to Titan X, Nvidia just did it backwards the first time, they learned THIS time, full chip FIRST....


----------



## zealord

Quote:


> Originally Posted by *carlhil2*
> 
> The OG Titan is to the 780ti what the 980ti is to Titan X, Nvidia just did it backwards the first time, they learned THIS time, full chip FIRST....


But what is left to do now that full-fat Maxwell is out and Pascal needs another year or even more?

GTX 990, but that is a dual-GPU card.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> No, it's almost like there were all kinds of rumors that said it would be coming out at the end of summer. Kind of like the rumors that Fiji will be slower than the 980Ti...


Yeah that's what I remember. Late summer @ $800 which is why I didn't sell off my 980s and then they dropped right at the start of June for $650 -.-

These Fiji rumors get better and better every week. Seriously, it's like:

Quote:


> Originally Posted by *John Madden*
> Well one thing's for certain, Bob - it'll either be faster or slower


----------



## Woundingchaney

Quote:


> Originally Posted by *Blameless*
> 
> Fifteen years of precedent states otherwise.
> 
> Since the GeForce 2 series I have been waiting 3-4 months after a flagship release to buy the slightly slower, significantly cheaper, part that came shortly after.
> 
> The price differential has been nearly constant between top of the line parts and the one's immediately below them this entire time. The 980Ti is no exception.
> 
> When a rumored $800 price point was posted, I was surprised by that because I was expecting the same $650 the GTX 780 cost, as that part was the prior generation's direct equivalent to the 980Ti..
> The 780ti is closer in performance to the 780 than the 980ti is to the 980. Not that the 780Ti or 980 are relevant to this comparison.
> 
> No 780Ti equivalent has been released for the current generation of cards, and may well never be (if it is, it will likely be a faster clocked Titan X with 6GiB of memory, which would only be needed as a stopgap to counter Fury, if it's fast enough, until Pascal can be pushed out), while the 980Ti isn't the same chip as the 980.


The 780 is not the equivalent of the 980ti. One simply cannot omit the existence of the original 980 and/or 970. Yes I agree that several factors do indeed correspond particularly given physical aspects of the cards. Though they launched into different market realities and under different circumstances.

I suppose the viable argument is whether one wants to define a chip by its physical characteristics or its market presence and relative performance within the existing market.

Yes, there has always been a similar item released about a quarter-year after the initial flagship, but to suggest that this pricing difference of $350-400, or a release at $100-150 above the current second-tier card, has been commonplace since the GeForce 2 series isn't something I can agree with. Realistically, the 980 Ti launched 20% above the 980 and at 65% of the Titan X's cost, with a performance metric that made it "seem" to be an amazing deal.


----------



## epic1337

Quote:


> Originally Posted by *Woundingchaney*
> 
> I suppose I don't understand what you are saying here. The performance gap from the 780 to the 780ti is roughly equivalent from the 980 to the 980ti.
> 
> There is nothing to suggest cost efficiencies as a set in stone metric. Much of this is dependent upon current market situations at time of release. Generational differences play a large role in cost efficiencies and there are also a plethora of outlying factors to consider.


Constant price, yes, but the performance? No.
Basically said: performance going up while price stays the same is the same as performance staying the same while price goes down. It's called higher cost efficiency, or performance per dollar.

edit: did you know that the release price of the 780 was $649? Wasn't the 980 priced at $549?


----------



## Woundingchaney

Quote:


> Originally Posted by *epic1337*
> 
> constant price yes, but the performance? no.
> basically said, performance going up while maintaining price, is the same as performance staying the same while price goes down, its called higher cost efficiency or performance per dollar.
> 
> edit: did you know that release price of 780 was $649? wasn't 980 priced at $549?


Yes, I knew the price of the 780 at launch, but I'm thinking the price of the 980 was $600.

I don't recall ever speaking against the notion of performance vs price as a constant, because I would generally agree with this. Are you talking about comparative performance from generation to generation?


----------



## specopsFI

Quote:


> Originally Posted by *Woundingchaney*
> 
> Yes I knew the price of the 780 at launch but Im thinking the price of the 980 was 600 usd.
> 
> I don't recall ever speaking against the notion of performance vs price as a constant, because I would generally agree with this. Are you talking about comparative performance from generation to generation?


You are talking about something else than what this 980Ti talk was about. In fact, the original reason why talk of the 980Ti launch was brought into a Fiji thread was to suggest that Nvidia rushed it to cash in before Fiji launch because they found out they're losing that battle. The only argument for this seems to be that why else would Nvidia undercut their own Titan flagship so drastically. It seems like a solid line of thinking... if you haven't lived through the same before with the OG Titan / 780. It was the exact same thing: Nvidia had the performance crown unchallenged for three months and would have continued to do so for several more months before Hawaii launch, but still they were willing to sell their flagship customers short and cut their Titan pricing premium by offering essentially the same performance for $350 less. It is indeed the exact same that they did with Titan X / 980Ti. You only need to consider the 980 as the 680 of its time (which it is) and not confuse the 980 with 980Ti. 680 was GK104, 780 was GK110, 980 is GM204 and 980Ti is GM200.

The difference is that this time we will see AMD's response within weeks instead of the long months between the 780 launch and the Hawaii launch.


----------



## Majin SSJ Eric

At least the original Titan still had DP, so they weren't shafting its owners quite as badly as Titan X owners are getting it with the 980 Ti. One way or the other, we don't have to wait very long to find out how Fiji stacks up...


----------



## specopsFI

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> At least the original Titan still had dp so they werent breaking off owners of it quite as bad as Titan X owners are getting it with 980Ti. One way or the other we dont have to wait very long to find out how Fiji stacks up...


I would have to disagree. And I'm not just saying it, I actually bought an OG Titan and couldn't believe the nerve on Nvidia when the truth about the 780 came to light. It just didn't make any sense to me, as their customer, the way they slapped my face for no external reason. That is the lesson I learned: there doesn't have to be any reason for Nvidia to undercut their flagship other than their willingness to get rid of their stock of salvage chips.

Of course there were a few rare OG Titan customers who really bought it for DP, but let's not kid ourselves: they were very, very few. You weren't one of them, AFAIK?


----------



## Orivaa

Quote:


> Originally Posted by *specopsFI*
> 
> You are talking about something else than what this 980Ti talk was about. In fact, the original reason why talk of the 980Ti launch was brought into a Fiji thread was to suggest that Nvidia rushed it to cash in before Fiji launch because they found out they're losing that battle. The only argument for this seems to be that why else would Nvidia undercut their own Titan flagship so drastically. It seems like a solid line of thinking... if you haven't lived through the same before with the OG Titan / 780. It was the exact same thing: Nvidia had the performance crown unchallenged for three months and would have continued to do so for several more months before Hawaii launch, but still they were willing to sell their flagship customers short and cut their Titan pricing premium by offering essentially the same performance for less. It is indeed the exact same that they did with Titan X / 980Ti. You only need to consider the 980 as the 680 of its time (which it is) and not confuse the 980 with 980Ti. 680 was GK104, 780 was GK110, 980 is GM204 and 980Ti is GM200.
> 
> The difference is that this time we will see AMD's response within weeks in stead of the long months between 780 launch and Hawaii launch.


The difference between now and then is that the previous Titans were also meant to be workstation cards. The Titan X is solely marketed as a gaming card, which makes it completely and utterly obsolete in the face of the 980 Ti, and they don't have their "workstation" excuse to fall back on.


----------



## specopsFI

Quote:


> Originally Posted by *Orivaa*
> 
> The difference between now and then is that the previous Titans were meant to also be a workstation. The Titan X is solely marketed as a gaming card, which makes it completely and utterly obsolete in face of the 980ti.


That is not true. The OG Titan was not any more of a workstation GPU than the Titan X. DP alone doesn't make a workstation card, and lack of DP doesn't mean a card can't be a workstation GPU. As for the Titan X being solely marketed as a gaming card: if you had watched JHH's official Titan X release speech at GTC, you wouldn't be saying that. It was all about deep learning and nothing about gaming. There was even a meme: "it will pay for itself in an afternoon". That doesn't mean the Titan X isn't a gaming GPU as far as the general audience is concerned, but DP or no DP, so was the OG Titan.


----------



## epic1337

Quote:


> Originally Posted by *Woundingchaney*
> 
> Yes I knew the price of the 780 at launch but Im thinking the price of the 980 was 600 usd.
> 
> I don't recall ever speaking against the notion of performance vs price as a constant, because I would generally agree with this. Are you talking about comparative performance from generation to generation?


A little of both: the performance from generation to generation should increase at least a bit, or if not, the release price would be lower instead.
The trend has always been like this, with only a few rare variants like the Titan breaking the perf/$ progression.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *specopsFI*
> 
> I would have to disagree. And I'm not just saying it, I actually bought an OG Titan and couldn't believe the nerve on Nvidia when the truth about the 780 came to light. It just didn't make any sense to me, as their customer, the way they slapped my face for no external reason. That is the lesson I learned: there doesn't have to be any reason for Nvidia to undercut their flagship other than their willingness to get rid of their stock of salvage chips.
> 
> Of course there were a few rare OG Titan customers who really bought it for DP, but let's not kid ourselves: they were very, very few. You weren't one of them, AFAIK?


No, I wasn't. But the 980 Ti seems closer to the Titan X to me than the 780 was to the original Titan. And of course we got full voltage control, which helped them OC like champs.


----------



## Blameless

Quote:


> Originally Posted by *Woundingchaney*
> 
> The 780 is not the equivalent of the 980ti.


Yes, it is. It's the high-end part made from die-harvested versions of the GPU used in the absolute top-end parts.
Quote:


> Originally Posted by *Woundingchaney*
> 
> One simply cannot omit the existence of the original 980 and/or 970.


The 980 was never priced as an extreme high end part. It was the fastest part NVIDIA had for a while, but the 980/970 were not occupying the same segments as the original Titan and GTX 780 were. Regardless, the price differential we see between the 980 and 970 is proportionally similar to what we see with the Titan/780 and the Titan X/980Ti. So even if you are using GM204 as a reference point, you still get a 600-700 dollar price point for the 980Ti, at the most.
Quote:


> Originally Posted by *Woundingchaney*
> 
> I suppose the viable argument is whether one wants to define a chip by its physical characteristics or its market presence and relative performance within the existing market.


Either path leads me to the same conclusion when it comes to these parts.
Quote:


> Originally Posted by *epic1337*
> 
> wasn't 980 priced at $549?


Yes, and the 970 launched at 330.

If you use the 980/970 ratio to predict 980Ti price, you come up with ~600 dollars for the 980Ti.

If you use the more directly comparable Titan/780 ratio, you get $650.
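To make the ratio arithmetic above explicit, here's a quick sketch (prices are the launch MSRPs quoted in this thread; the "prediction" is nothing more than proportional scaling, not any official pricing formula):

```python
# Predict the 980 Ti launch price by proportional scaling,
# using the launch MSRPs quoted in the thread.

def predict_price(flagship_price, prior_flagship, prior_second_tier):
    """Scale the new flagship's price by the prior generation's
    flagship : second-tier price ratio."""
    return flagship_price * prior_second_tier / prior_flagship

titan_x = 1000  # GM200 flagship (Titan X) launch price

# Titan ($1000) -> GTX 780 ($650) ratio applied to the Titan X:
print(round(predict_price(titan_x, 1000, 650)))  # 650

# GTX 980 ($549) -> GTX 970 ($330) ratio applied to the Titan X:
print(round(predict_price(titan_x, 549, 330)))   # 601
```

Either ratio lands in the $600-650 range the post argues for.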
Quote:


> Originally Posted by *Orivaa*
> 
> The difference between now and then is that the previous Titans were meant to also be a workstation. The Titan X is solely marketed as a gaming card, which makes it completely and utterly obsolete in face of the 980ti, and they don't have their "workstation" excuse to fall back on.


The workstation excuse was an illusion. Sure some people likely used them as budget Quadros/Tesla parts for DP work, but most people that bought Titans almost certainly never did anything that couldn't have been done just as well without full DP functionality.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> the 980ti seems closer to the titan x to me than the 780 was to the original titan.


It is, ever so slightly, but only because the Titan X has more SMXes total.

To make the GTX 780 they took a Titan, cut the VRAM in half, and disabled two SMXes.

To make a GTX 980Ti they took a Titan X, cut the VRAM in half, and disabled two SMXes.

The whole Titan full-DP thing never had any bearing on its positioning in the consumer space.


----------



## epic1337

Quote:


> Originally Posted by *Blameless*
> 
> Yes, and the 970 launched at 330.
> 
> If you use the 980/970 ratio to predict 980Ti price, you come up with ~600 dollars for the 980Ti.
> 
> If you use the more directly comparable Titan/780 ratio, you get $650.


but 980Ti isn't a Titan.
780 = $649
780Ti = $699
980 = $549
980Ti = $649

699 / 649 = 1.077
$549 * 1.077 = $591

Ratios don't really mean anything.
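For the record, the counter-calculation above checks out; spelled out with the launch prices as listed:

```python
# Apply the 780 -> 780 Ti price markup to the 980's launch price.
p_780, p_780ti, p_980 = 649, 699, 549

markup = p_780ti / p_780          # 699 / 649 ~= 1.077
predicted_980ti = p_980 * markup  # 549 * 1.077 ~= 591

print(round(markup, 3))           # 1.077
print(round(predicted_980ti))     # 591
```

That is, by this markup-based reading the 980 Ti "should" have been about $591, not $649.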


----------



## ZealotKi11er

Quote:


> Originally Posted by *epic1337*
> 
> but 980Ti isn't a Titan.
> 780 = $649
> 780Ti = $699
> 980 = $549
> 980Ti = $649
> 
> 699 / 649 = 1.077
> $549 * 1.077 = $591
> 
> ratios doesn't mean anything really.


Nope. The GTX 980 is not part of anything. Add the GTX 770 to the list; that's what the GTX 980 is.


----------



## epic1337

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Nope. GTX980 is not part of anything. Add GTX770 to the list. What's what GTX980 is.


As if; the 780 doesn't even deserve to be a 780.
The 780 Ti came out and roflstomped the 780, clearly showing the 780's die config is nowhere near "top-end".


----------



## Forceman

Quote:


> Originally Posted by *epic1337*
> 
> but 980Ti isn't a Titan.
> 780 = $649
> 780Ti = $699
> 980 = $549
> 980Ti = $649
> 
> 699 / 649 = 1.077
> $549 * 1.077 = $591
> 
> ratios doesn't mean anything really.


Why are you comparing the 780 and 780Ti? The naming doesn't matter - the comparable cards are the 780/680 and 980Ti/980. Cut down Gx200 and full Gx204.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> but 980Ti isn't a Titan.


I didn't compare the 980Ti to the Titan, I compared it to the 780...because it's directly comparable to the 780 by virtue of filling the exact same slot in the line up that the 780 did.
Quote:


> Originally Posted by *epic1337*
> 
> ratios doesn't mean anything really.


If you compare the right things they do.


----------



## epic1337

Well, suit yourselves; the 980 Ti is already $649 and it'll stay as is.


----------



## prjindigo

Quote:


> Originally Posted by *Forceman*
> 
> I wouldn't hold my breath waiting for a 390x that's faster than a 295X2. I wouldn't even hold it waiting for the Fury to be that fast.


...aaaaaand you ignorantly miss the point of the paragraph. The 390X is a Dx12 card and the total result WILL be a good deal faster. The new cards are ALL going to be somewhat faster or much faster under Dx12. I was comparing the expected Dx12 performance of the 390X to the current Dx11 performance of the 295x2.

In that we have NO news yet on how the dual chip cards perform under Dx12, the large number of Dx9/11 games that refuse to successfully crossfire must be taken into account as well. I can guarantee that the 390X will outperform a 295x2 in win7 on a game that won't crossfire...

It will be nice if DX12/Vulkan automatically uses them correctly in spite of the games... but that's a pretty big wish.


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> well suit yourselves, 980Ti is already $649 and it'd stay as is.


Yes, and that's the point.

The argument was over whether the 980Ti price was predictable or not. Clearly it was.
Quote:


> Originally Posted by *prjindigo*
> 
> The 390X is a Dx12 card and the total result WILL be a good deal faster. The new cards are ALL going to be somewhat faster or much faster under Dx12.


Most sources are indicating that the 390X is either not a new chip, or is mild revision of Hawaii.


----------



## prjindigo

Quote:


> Originally Posted by *Ganf*
> 
> That's some incredibly optimistic thinking there, considering AMD's last line of rebrands only gained 5% over their predecessors.


Actually, it was a 15-22% gain after the drivers settled.

You're clearly not reading the news or this thread. Most of the speed improvements that nVidia put into their cards were made available to AMD by the printer. They're already seeing the same jump in performance compared to the previous hardware. There were reviews and leaked benchmark info and everything.


----------



## prjindigo

Quote:


> Originally Posted by *flopper*
> 
> 390x 1070mhz 8gb granada wont be 40% faster than 290x it replaces.
> Thats 980ti territory. its targeted vs the 980.
> 
> Fury however is a different animal.


You're just not conversant with what's been done inside the GPU architecture. The chip has a different name, yes? So it's not the same chip... the transistor count doesn't match either.
Go read up a little... AND you just said the 390X is to compete with a card that's 40% faster than a 290X, while saying the 390X won't be anywhere near 40% faster than a 290X.

Come on, if you're gonna troll be more careful... if you're not then PLEASE read what you type before you post.


----------



## prjindigo

Quote:


> Originally Posted by *Ganf*
> 
> My complaint is that AMD is playing catch-up, and they can't even catch up. At this rate nvidia is going to have complete control over the pace of innovation in the market and that means we all get screwed just like we're getting screwed by intel.


Almost all the substrate improvements that went into the nV cards also just went into the AMD cards, plus a few things AMD did themselves on top.


----------



## Forceman

You are aware that the 390X and Fury are two different cards, right? The 390X is a rebranded/upgraded Hawaii and has no chance of being 40% faster than a 290X. And for your DX12 argument, the 295X2 is also a DX12 card, so any benefits that brings will also accrue to it.

Plus, the gains Nvidia got with Maxwell are largely because of the architecture changes, and not because of any significant process improvements.


----------



## prjindigo

Quote:


> Originally Posted by *Ganf*
> 
> The back and forth has ended with this generation. AMD can no longer keep up. The 390x isn't going to compete with the 980 and the Fury X isn't going to be priced competitively against the 980ti. One card is played out without enough headroom to catch up and the other one is using new tech while just keeping pace, driving their costs through the roof.


Yeah, ganf is trolling. Nobody can be that ignorant.

The 980ti came out so nV could get it sold before the 390X came out. No other reason: nV is driven by money alone.


----------



## michaelius

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Sometimes it's Nvidia, sometimes it's AMD. If the GTX 980 Ti came out at the same time as the GTX 980 then AMD would be in a bad spot, but even so the GTX 970/980 has done the biggest damage to AMD that I can remember. *I still don't understand why cards that don't have a generational jump cause such a big market share swing.*


Because the 970 was a $330 GPU that outperformed the $550 290X and the $700 780 Ti. So it was the first reasonable upgrade for the millions who had been sitting on an older GPU for a long time.

Also, something most enthusiast boards can't understand: most people aren't upgrading every generation; instead they skip one or two.


----------



## sugalumps

Quote:


> Originally Posted by *michaelius*
> 
> Because 970 was 330$ gpu that outperformed $550 290X and $700 780ti. So it was first resonable upgrade for milions who were sitting with older gpu after long time.
> 
> Also something most enthusiast boards can't understand - most people aren't upgrading every generation insted they skip one or two.


Also, the 970 used very little power, so a lot of people sitting on the sidelines with low-wattage PSUs/systems were able to add a cheap 970 to bring an aging cheap system to life. Couple that with the fact that they were getting top-of-the-line performance for very cheap, and it made it one of the best-selling cards to date.


----------



## cowie

I am waiting till after the release so you guys can work out the numbers. I am in no way going to believe the reviews, no matter where or who does them.
You have to know that AMD is even worse than NV when it comes to release reviews...

Well, unless they totally kill on Futuremark benches.


----------



## prjindigo

Quote:


> Originally Posted by *cowie*
> 
> I am waiting till after the release so you guys can work out the numbers .i am in no way going to believe the reviews no matter where or who does them.
> You have to know that amd is even more worse then nv on release reviews
> 
> well unless they totally kill on futuremark benches


Lol.

Yeah, we'll have to wait and see. Tho it was fun seeing just how far this thread could be beaten along.


----------



## ZealotKi11er

Quote:


> Originally Posted by *sugalumps*
> 
> Also the 970 used very little power, so alot of people sitting on the side lines with low W psu's/systems were able to add a cheap 970 to bring to life an aging cheap system. Couple that with the fact they were getting top of the line performance for very cheap it made it one of the best selling cards to date.


The 290X wasn't that expensive either. After the mining crash you could get an R9 290X for less than $400, and the R9 290 was already under $300. Yes, it was a good price compared to the GTX 780 Ti.


----------



## prjindigo

I think that, with the announcement that there are a Fury Nano, a Fury Pro, a Fury X, etc., the headline of this whole thread is officially DEBUNKED. They said the X was the 4 GB card, so God only knows what they were claiming, if there was any fact in it at all.


----------



## schmotty

Quote:


> Originally Posted by *cowie*
> 
> I am waiting till after the release so you guys can work out the numbers. I am in no way going to believe the reviews, no matter where or who does them.
> You have to know that AMD is even worse than NVIDIA on release reviews...
> 
> Well, unless they totally kill on the Futuremark benches.


I don't trust benches anymore. My overclocked GTX 460 (875 MHz) beats my stock R9 285 (954 MHz) in Cinebench, FurMark, and 3DMark, but it can't average more than 45 fps in BF4 on low settings, while the 285 averages 60+ fps on ultra.

The GTX 460 got me (if I was lucky) 30k folding points a day. The R9 285 is getting me 120k+.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *schmotty*
> 
> I don't trust benches anymore. My overclocked GTX 460 (875 MHz) beats my stock R9 285 (954 MHz) in Cinebench, FurMark, and 3DMark, but it can't average more than 45 fps in BF4 on low settings, while the 285 averages 60+ fps on ultra.
> 
> The GTX 460 got me (if I was lucky) 30k folding points a day. The R9 285 is getting me 120k+.


Could be Mantle vs DX11 though


----------



## schmotty

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Could be Mantle vs DX11 though


No.

I had to install the latest Catalyst beta to even get Mantle to show up, and while it does increase the peak fps, the minimums make it completely unplayable for me. Everything will be fine, 100 fps or so, until I get into a vehicle, and then it dips to 15-20 fps. So I use DX11 in BF4. I also play Far Cry 3, Tomb Raider, and Arkham City, and the 285 has much higher frame rates at better settings.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *schmotty*
> 
> No.
> 
> I had to install the latest Catalyst beta to even get Mantle to show up, and while it does increase the peak fps, the minimum makes it completely unplayable for me. Everything will be fine, 100fps or so, until I get into a vehicle and then it dips to 15-20 fps. So I use DX11 in BF4. I also play FarCry 3, Tomb Raider, and Arkham City and the 285 has much higher frame rates with better settings.


Interesting. Here's an R9 280 vs. GTX 480 (the closest comparison with the cards they have available): http://www.anandtech.com/bench/product/1332?vs=1135 I'm surprised that a 460 beats a 285 in 3DMark.


----------



## sonoma

I used to be a big fan of AMD CPUs and GPUs; now I'm just waiting for Samsung or some other company to buy them out, because their stuff has sucked for the last 3 years or so.


----------



## Serandur

I think I found some Fiji compute benchmarks and info. Could be fake, but it seems to match up with what we "know" of Fiji. It says it's got 64 compute units (meaning 4096 GCN shaders are true), a 1000 MHz max clock, and specifically calls it "Fiji" (that's all under the info section):

https://compubench.com/device.jsp?D=AMD+Radeon+Graphics+Processor&testgroup=overall

Side by side comparison with the Titan X:

https://compubench.com/compare.jsp?benchmark=compu20&did1=25622690&os1=Windows&api1=cl&hwtype1=dGPU&hwname1=AMD+Radeon+Graphics+Processor&D2=NVIDIA+GeForce+GTX+TITAN+X

And side by side comparison with the 290X:

https://compubench.com/compare.jsp?benchmark=compu20&did1=25622690&os1=Windows&api1=cl&hwtype1=dGPU&hwname1=AMD+Radeon+Graphics+Processor&D2=AMD+Radeon+R9+290X


----------



## HMBR

Quote:


> Originally Posted by *schmotty*
> 
> I don't trust benches anymore. My overclocked GTX 460 (875 MHz) beats my stock R9 285 (954 MHz) in Cinebench, FurMark, and 3DMark, but it can't average more than 45 fps in BF4 on low settings, while the 285 averages 60+ fps on ultra.
> 
> The GTX 460 got me (if I was lucky) 30k folding points a day. The R9 285 is getting me 120k+.


3DMark? The 285 should destroy the 460 in 3DMark 2013 and 3DMark 11.

As for the others, they are not good benchmarks for comparing VGAs.


----------



## harney

Quote:


> Originally Posted by *Serandur*
> 
> I think I found some Fiji compute benchmarks and info. Could be fake, but it seems to match up with what we "know" of Fiji. It says it's got 64 compute units (meaning 4096 GCN shaders are true), a 1000 MHz max clock, and specifically calls it "Fiji" (that's all under the info section):
> 
> https://compubench.com/device.jsp?D=AMD+Radeon+Graphics+Processor&testgroup=overall
> 
> Side by side comparison with the Titan X:
> 
> https://compubench.com/compare.jsp?benchmark=compu20&did1=25622690&os1=Windows&api1=cl&hwtype1=dGPU&hwname1=AMD+Radeon+Graphics+Processor&D2=NVIDIA+GeForce+GTX+TITAN+X
> 
> And side by side comparison with the 290X:
> 
> https://compubench.com/compare.jsp?benchmark=compu20&did1=25622690&os1=Windows&api1=cl&hwtype1=dGPU&hwname1=AMD+Radeon+Graphics+Processor&D2=AMD+Radeon+R9+290X


If that is true, then ouch for the Fury.







I suppose it's all about the price.


----------



## magnek

Quote:


> Originally Posted by *Serandur*
> 
> I think I found some Fiji compute benchmarks and info. Could be fake, but it seems to match up with what we "know" of Fiji. It says it's got 64 compute units (meaning 4096 GCN shaders are true), a 1000 MHz max clock, and specifically calls it "Fiji" (that's all under the info section):
> 
> https://compubench.com/device.jsp?D=AMD+Radeon+Graphics+Processor&testgroup=overall
> 
> Side by side comparison with the Titan X:
> 
> https://compubench.com/compare.jsp?benchmark=compu20&did1=25622690&os1=Windows&api1=cl&hwtype1=dGPU&hwname1=AMD+Radeon+Graphics+Processor&D2=NVIDIA+GeForce+GTX+TITAN+X
> 
> And side by side comparison with the 290X:
> 
> https://compubench.com/compare.jsp?benchmark=compu20&did1=25622690&os1=Windows&api1=cl&hwtype1=dGPU&hwname1=AMD+Radeon+Graphics+Processor&D2=AMD+Radeon+R9+290X


Huh, so that SiSoft Sandra leak from more than 7 months ago was spot on then. Notice also that the core is 1 GHz with 4096 shaders.


----------



## prjindigo

So, to add a tidbit here: AMD's support of DX12 features is more whole-hog than NVIDIA's. WCCF released a price list putting the 390X "Hawaii boogaloo" 8GB at $390, and it looks like the DX12-compliant NVIDIA cards support feature level 12_0 or 12_1 _*but not both*_.

That price list WCCF ran slits NVIDIA's throat from the 970 all the way down.

Who the hell would want a 3.5GB 970 when for sixty more bucks you can have an 8GB 390X?

To AMD I say: "BRING THE FIJI DOWN ON THEIR NECK!"


----------



## schmotty

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Interesting. Here's an R9 280 vs GTX 480 (closest comparison with the cards they have available): http://www.anandtech.com/bench/product/1332?vs=1135 Surprised that a 460 beats a 285 in 3DMark.


Maybe I should rerun 3DMark. I may have mixed up the results, but I'm certain it was close either way.


----------



## criznit

The latest from wccf

http://wccftech.com/amd-radeon-fury-x-specs-fiji/


----------



## 47 Knucklehead

Quote:


> Originally Posted by *ZealotKi11er*
> 
> The 290X wasn't that expensive either. After the mining crash you could get an R9 290X for less than $400, and the R9 290 was already under $300. Yes, it was a good price compared to the GTX 780 Ti.


You can get a triple-fan R9 290X with 8GB of GDDR5 for $359.99 now.

So if WCCF's numbers are right and the R9 390X with 8GB is basically an upclocked 8GB 290X for $389, you are essentially paying AMD $30 to "factory overclock" your video card, just like EVGA charges $20 extra for their pre-overclocked "Superclocked" cards.


----------



## peateargryphon

The Fury cards are much more interesting than the 3xx cards... I think this matches what has been rumored for the shader count for a while, but WCCF has posted a "confirmation" of the Fiji card specs:
http://wccftech.com/amd-radeon-fury-x-specs-fiji/

*edit: removed spaces above table


Spoiler: Fiji / Fury Specs




| WCCFTech | Fury X (Water Cooled) | Fury X (Air Cooled) | Fury (Air Cooled) | R9 290X |
| --- | --- | --- | --- | --- |
| GPU | Fiji XT | Fiji XT | Fiji Pro | Hawaii XT |
| Stream Processors | 4096 | 4096 | 3584 | 2816 |
| GCN Compute Units | 64 | 64 | 56 | 44 |
| Render Output Units | 128 | 128 | 128 | 64 |
| Texture Mapping Units | 256 | 256 | 224 | 176 |
| GPU Frequency | ≥ 1050 MHz | 1050 MHz | 1000 MHz | 1000 MHz |
| Memory | 4GB HBM | 4GB HBM | 4GB HBM | 4GB GDDR5 |
| Memory Interface | 4096-bit | 4096-bit | 4096-bit | 512-bit |
| Memory Frequency | 500 MHz | 500 MHz | 500 MHz | 1250 MHz |
| Effective Memory Speed | 1 Gbps | 1 Gbps | 1 Gbps | 5 Gbps |
| Memory Bandwidth | 512 GB/s | 512 GB/s | 512 GB/s | 320 GB/s |
| Cooling | Liquid, 120mm Radiator | Air, 3 Axial Fans | Air, 3 Axial Fans | Air, Single Blower Fan |
| Performance (SPFP) | ≥ 8.6 TFLOPS | 8.6 TFLOPS | 7.2 TFLOPS | 5.6 TFLOPS |
| TDP | 300W | 300W | 275W | 290W |
| GFLOPS/Watt | ≥ 28.7 | 28.7 | 26.2 | 19.4 |
| Launch Price | TBA | TBA | TBA | $549 |
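For what it's worth, the headline figures in that table are internally consistent with each other. A quick sketch checking the arithmetic (all numbers here are the rumored values from the table, not confirmed specs):

```python
# Sanity-check the rumored Fury X (water-cooled) figures from the WCCFTech table.
shaders = 4096   # stream processors
core_mhz = 1050  # rumored clock
tdp_w = 300

# Each GCN shader does one FMA (2 FLOPs) per cycle.
tflops = shaders * 2 * core_mhz / 1e6
print(f"{tflops:.1f} TFLOPS")  # 8.6 TFLOPS

# HBM1: 4096-bit bus at 500 MHz, double data rate -> 1 Gbps effective per pin.
bandwidth_gb_s = (4096 / 8) * (500 * 2) / 1000
print(f"{bandwidth_gb_s:.0f} GB/s")  # 512 GB/s

print(f"{tflops * 1000 / tdp_w:.1f} GFLOPS/W")  # 28.7 GFLOPS/W
```

So the TFLOPS, bandwidth, and efficiency rows all follow mechanically from the shader count, clocks, and bus width; the table could still be wrong, but at least it isn't self-contradictory.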


----------



## Jedi Mind Trick

Quote:


> Originally Posted by *peateargryphon*
> 
> The Fury cards are much more interesting than the 3xx cards... I think this is pretty much what has been rumored for the shader count, for a while but WCCF has posted a "confirmation" of the Fiji card specs:
> http://wccftech.com/amd-radeon-fury-x-specs-fiji/
> 
> *edit: removed spaces above table
> 
> 
> Spoiler: Fiji / Fury Specs
> 
> 
> 
> 
> | WCCFTech | Fury X (Water Cooled) | Fury X (Air Cooled) | Fury (Air Cooled) | R9 290X |
> | --- | --- | --- | --- | --- |
> | GPU | Fiji XT | Fiji XT | Fiji Pro | Hawaii XT |
> | Stream Processors | 4096 | 4096 | 3584 | 2816 |
> | GCN Compute Units | 64 | 64 | 56 | 44 |
> | Render Output Units | 128 | 128 | 128 | 64 |
> | Texture Mapping Units | 256 | 256 | 224 | 176 |
> | GPU Frequency | ≥ 1050 MHz | 1050 MHz | 1000 MHz | 1000 MHz |
> | Memory | 4GB HBM | 4GB HBM | 4GB HBM | 4GB GDDR5 |
> | Memory Interface | 4096-bit | 4096-bit | 4096-bit | 512-bit |
> | Memory Frequency | 500 MHz | 500 MHz | 500 MHz | 1250 MHz |
> | Effective Memory Speed | 1 Gbps | 1 Gbps | 1 Gbps | 5 Gbps |
> | Memory Bandwidth | 512 GB/s | 512 GB/s | 512 GB/s | 320 GB/s |
> | Cooling | Liquid, 120mm Radiator | Air, 3 Axial Fans | Air, 3 Axial Fans | Air, Single Blower Fan |
> | Performance (SPFP) | ≥ 8.6 TFLOPS | 8.6 TFLOPS | 7.2 TFLOPS | 5.6 TFLOPS |
> | TDP | 300W | 300W | 275W | 290W |
> | GFLOPS/Watt | ≥ 28.7 | 28.7 | 26.2 | 19.4 |
> | Launch Price | TBA | TBA | TBA | $549 |


If true, I'm liking that Fiji Pro/XT as a way to go back to single cards (I really want to go MITX, but CFX 290s won't let that happen).


----------



## magnek

Quote:


> Originally Posted by *criznit*
> 
> The latest from wccf
> 
> http://wccftech.com/amd-radeon-fury-x-specs-fiji/


Quick someone make a thread!
Quote:


> Fiji, and the Fury line of cards which are based on it, feature notable improvements across the board. Performance is obviously significantly improved. *Fury X is faster than the R9 290X by a minimum of 54%.* Which brings us to the second major improvement. Fury X achieves this performance improvement with a TDP that's a meager 10W higher. *Which makes Fury X 48% more power efficient than AMD's previous single GPU flagship the R9 290X, which is quite remarkable.*


*Minimum* 54% faster? I'm a bit skeptical, but it's obviously great news if true. Coupled with a TDP that puts Fiji only slightly behind Maxwell in perf/watt, a tremendous improvement indeed.
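The quoted 48% efficiency claim does follow from the rumored spec-table numbers, at least; a minimal cross-check (inputs are the rumored TFLOPS and TDP figures, not measurements):

```python
# Cross-check WCCF's "48% more power efficient" claim using the rumored specs.
r9_290x = {"tflops": 5.6, "tdp_w": 290}
fury_x = {"tflops": 8.6, "tdp_w": 300}

def gflops_per_watt(card):
    # Peak single-precision throughput per watt of board TDP.
    return card["tflops"] * 1000 / card["tdp_w"]

gain = gflops_per_watt(fury_x) / gflops_per_watt(r9_290x) - 1
print(f"{gain:.0%}")  # 48%
```

Note this is peak-FLOPS-per-TDP-watt, not measured gaming performance per measured power draw, so the real-world figure could land anywhere near it.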


----------



## gamervivek

Even AMD's 'leaked' slide had it at a minimum of 50% faster in one game. It's going to be fast, that's a given; the two open questions are whether it'll get 8GB or not, and whether the 390X and below will be rebrands or not. Since the former is going to happen eventually, Captain Jack is what I'm looking forward to.


----------



## Ganf

Gonna have so much fun overclocking the HBM...


----------



## Forceman

Quote:


> Originally Posted by *Ganf*
> 
> Gonna have so much fun overclocking the HBM...


Assuming you can.

I'm a bit skeptical of the claim that the reference cooler will be a 3 fan model. That's a significant break from past cards.


----------



## Ganf

Quote:


> Originally Posted by *Forceman*
> 
> Assuming you can.
> 
> I'm a bit skeptical of the claim that the reference cooler will be a 3 fan model. That's a significant break from past cards.


They've already thrown out another 8 second teaser showing the fan spinning up with the AMD logo on it, what more do you want?

Can't has never been an excuse, just "how much work do I have to do?"


----------



## magnek

Quote:


> Originally Posted by *Forceman*
> 
> Assuming you can.
> 
> I'm a bit skeptical of the claim that the reference cooler will be a 3 fan model. That's a significant break from past cards.


Maybe they reused the 7990 cooler.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Forceman*
> 
> Assuming you can.
> 
> I'm a bit skeptical of the claim that the reference cooler will be a 3 fan model. That's a significant break from past cards.


I thought I heard the Nano/WCE is the only one with a reference version, and that the XT/Pro have no reference versions; it's up to the AIBs to create them.


----------



## Ganf

So it's raining today, that's nice.

This was a wasteful post. I've edited it appropriately.


----------



## tpi2007

Quote:


> Originally Posted by *Forceman*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Ganf*
> 
> Gonna have so much fun overclocking the HBM...
> 
> 
> 
> Assuming you can.
> 
> I'm a bit skeptical of the claim that the reference cooler will be a 3 fan model. That's a significant break from past cards.
Click to expand...

They've done it before and recently:


----------



## criminal

Quote:


> Originally Posted by *criznit*
> 
> The latest from wccf
> 
> http://wccftech.com/amd-radeon-fury-x-specs-fiji/


If specs are true (please let them be) Fiji Pro is all mine.


----------



## Ganf

MSI you better have the Lightnings ready to go....


----------



## zealord

Quote:


> Originally Posted by *Ganf*
> 
> MSI you better have the Lightnings ready to go....


Ready to go? Probably not, but maybe in 5-6 months. They always take their sweet time.


----------



## Forceman

Quote:


> Originally Posted by *Ganf*
> 
> They've already thrown out another 8 second teaser showing the fan spinning up with the AMD logo on it, what more do you want?


Must have missed that one.
Quote:


> Originally Posted by *tpi2007*
> 
> They've done it before and recently:


Yeah, I was thinking of single-GPU cards; I don't usually pay attention to the duals. Not great if it needs the same cooler as a dual-Tahiti card, but I guess they don't want a repeat of the 290X problems. It'll be weird seeing a giant cooler like that on a half-length card.


----------



## zealord

Interestingly, AMD_UK tweeted a picture of the three-fan 7990 cooler today. Probably just a coincidence.


----------



## magnek

Eh, a ridiculous-looking overhanging cooler still beats the leaf blower on the 290X by miles. Plus it fits the image of a beastly card.








Quote:


> Originally Posted by *zealord*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Ganf*
> 
> MSI you better have the Lightnings ready to go....
> 
> 
> 
> ready to go probably not, but maybe in like 5-6 months. They always take their sweet time
Click to expand...

If the 8GB Fiji XT is a thing (big IF), I wouldn't mind waiting a couple months for an 8GB Lightning. That would truly be one hell of a card.


----------



## tpi2007

Those specs, if true, do indeed look good. It looks like they managed to save quite a bit of power transitioning from GDDR5 to HBM and put it to good use with more cores. The Fury Pro should be faster than the 980, with the Fury XT competing against the 980 Ti and Titan X. Should be interesting.

Quote:


> Originally Posted by *Forceman*
> 
> Yeah, I was thinking single GPU cards, don't usually pay attention to the duals. Not great if it needs the same cooler as a dual Tahiti card, but I guess they don't want a repeat of the 290X problems. It'll be weird seeing a giant cooler like that on a half-length card.


The rumour is that the air-cooled versions will have a longer PCB just so that the cooler can be properly secured to it. You know, just like the GTX 670 and 970 don't really need a long PCB but long versions are made anyway; same thing here. There will be lots of free space on that PCB, since no memory chips will be on it, but that's how it goes. The PCB of my MSI GTX 750 Ti Gaming is almost the same length as the cooler purely to match it, with a lot of free space left over. It happens all the time with many cards, and it will be the same in this case.


----------



## Blameless

Extra PCB space isn't necessarily a bad thing... the PCB is where the bulk of the heat from the VRM goes first, so it should make the card easier to cool.

Anyway, if this part really has 128 ROPs, it's going to be a 4K monster, as long as you don't run out of VRAM.


----------



## Majin SSJ Eric

I just can't wait to see this thing in a shootout with the Titan X and 980Ti! Gonna be great fun!


----------



## criznit

Quote:


> Originally Posted by *criminal*
> 
> If specs are true (please let them be) Fiji Pro is all mine.


I hope they are because im ready to do some upgrading


----------



## TheBlindDeafMute

Question: if the default 290X ran hot, won't the 390X run hotter, since it has been overclocked? Assuming it is just a rebranded 290X, that seems like a silly strategy unless they use a 2- or 3-fan setup. But then you increase the price.


----------



## harney

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I just can't wait to see this thing in a shootout with the Titan X and 980Ti! Gonna be great fun!


Yep got my 10 buckets of popcorn ready


----------



## Ultracarpet

Quote:


> Originally Posted by *TheBlindDeafMute*
> 
> Question: if the default 290X ran hot, won't the 390X run hotter, since it has been overclocked? Assuming it is just a rebranded 290X, that seems like a silly strategy unless they use a 2- or 3-fan setup. But then you increase the price.


It pretty much only ran hot because of the stock cooler. Most aftermarket designs were decent, AFAIK, but the reputation remained. I'm guessing there probably won't be a reference cooler this time around.


----------



## Blackops_2

I hope it lives up to speculated hype. Might consider one or two.


----------



## harney

I wonder if it's worth buying some AMD shares. Any thoughts or advice, anyone? I am seriously thinking about it.


----------



## Woundingchaney

Quote:


> Originally Posted by *harney*
> 
> I wonder if it's worth buying some AMD shares. Any thoughts or advice, anyone? I am seriously thinking about it.


I would hold off until you see what is going on with their cpu division over the next year or a bit more.


----------



## decimator

Quote:


> Originally Posted by *harney*
> 
> I wonder if it's worth buying some AMD shares. Any thoughts or advice, anyone? I am seriously thinking about it.


Yeah, uh, don't take random advice from people on a tech forum if you actually get any. You have to do your own due diligence because, after all, it's your money on the line. Also, don't just rely on the company outlook -- new product releases, announcements, etc. You need to look at the numbers -- P/E multiple, EV/EBITDA, that sort of thing, maybe even build a model out. Good luck.


----------



## Creator

Quote:


> Originally Posted by *decimator*
> 
> Yeah, uh, don't take random advice from people on a tech forum if you actually get any. You have to do your own due diligence because, after all, it's your money on the line. Also, don't just rely on the company outlook -- new product releases, announcements, etc. You need to look at the numbers -- P/E multiple, EV/EBITDA, that sort of thing, maybe even build a model out. Good luck.


Long term, yes. But short term you need to know when there's going to be hype, so you can buy the rumor and sell the news. I don't think the GPU division reaches a big enough market to generate the hype needed. But Zen might.


----------



## harney

Thanks for the advice, opinions, etc. And yes, I am careful where and how I take my advice; I was mostly interested in other people's opinions. I just have a good feeling about AMD at the moment, with Fury and Zen and the low share price...


----------



## harney

http://wccftech.com/amd-radeon-fury-exclusive-pics/

http://www.overclock3d.net/articles/gpu_displays/amd_fury_x_3dmark_performance_leak/1


----------



## Kuivamaa

Quote:


> Originally Posted by *schmotty*
> 
> No.
> 
> I had to install the latest Catalyst beta to even get Mantle to show up, and while it does increase the peak fps, the minimum makes it completely unplayable for me. Everything will be fine, 100fps or so, until I get into a vehicle and then it dips to 15-20 fps. So I use DX11 in BF4. I also play FarCry 3, Tomb Raider, and Arkham City and the 285 has much higher frame rates with better settings.


You hit the VRAM wall. Roughly medium settings hit 1900+ MB of VRAM usage with Mantle in BF4. Then the stutter begins.


----------



## prjindigo

"EHTS NAUGHT UN X" Come on, people... the 4GB Fury is JUST the Fury. It's the "R9 390" of the Fury lineup; it's NOT the "R9 390X" of it.

Get with the goddamned program!


----------



## schmotty

Quote:


> Originally Posted by *Kuivamaa*
> 
> You hit the VRAM wall. Roughly medium settings hit 1900+ VRAM usage with mantle in BF4. Then stutter begins.


I'll have to check that out. On DX11 I don't even hit 1.5GB using mostly ultra settings.

Edit: Confirmed. Mantle maxed out my VRAM. Reducing AA and textures helps keep it below 1500 MB, but I still get a frozen screen for 2-3 seconds occasionally, and only on Mantle.


----------



## tsm106

Be like water...


----------



## Blackops_2

Love that fan-made video of his.

So what's the consensus from the rumor mill so far? Do we think 50-55% faster than Hawaii is guaranteed? Likely to sit between the 980 Ti and Titan X? Fury could sell like crazy if priced right, much like the 290/290X situation. If I had gone Hawaii, it would have been two 290s.


----------



## Majin SSJ Eric

If it were 55% faster than the 290X, wouldn't that make it faster than the Titan X? Serious question; I don't have the relative performance charts memorized...


----------



## white owl

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> If it were 55% faster than the 290X wouldn't that make it faster than the Titan X? Serious question, I don't have the relative performance chart memorized....


It adds up...
At least @ 1080p.


----------



## shadow85

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I just can't wait to see this thing in a shootout with the Titan X and 980Ti! Gonna be great fun!


It will be nice to see, but from the early leaks it still looks as though the Titan X is faster, and it has a lot of OC headroom. We'll have to see if the Fury X even OCs at all if they have similar stock performance; otherwise NVIDIA still stands as king. And that can't be good for anyone in the future.


----------



## Blackops_2

Quote:


> Originally Posted by *shadow85*
> 
> It will be nice to see, but from the early leaks it still look as though the Titan X is faster, and has alot of OC head room. Have to see if the Fury X even OCs at all if they have similar performance, otherwise Nvidia still stands king. And that cant be good for everyone in the future.


The Fury X being on an AIO also makes you wonder how much headroom it has and what kind of heat it's generating.

While I would like AMD to take the performance crown, I'm more concerned with whatever gains them market share. Whether that means taking the crown, or sitting between the 980, 980 Ti, and Titan X while being aggressively priced, doesn't overly concern me.


----------



## epic1337

Quote:


> Originally Posted by *Blackops_2*
> 
> Fury X being on an AIO also makes you wonder how much overhead it has and what kind of heat it's generating too.
> 
> While i would like AMD to take the performance crown i'm more concerned with what gains them market share. Whether it be taking the performance crown or lying around and inbetween the 980-980Ti-Titan X while being aggressively priced i'm not overly concerned with.


A GPU AIO is an issue in itself, though. Placement would be a problem: you can mount the radiator in four possible locations (rear exhaust, front intake, bottom intake, or side-panel exhaust; the side panel is doubtful at best), yet those locations may or may not exist in a given case.

So in all likelihood, not every case can accommodate a GPU with an AIO.


----------



## Ganf

Quote:


> Originally Posted by *epic1337*
> 
> GPU AIO is an issue in itself though.
> the placement would be an issue, you can mount it in 4 possible locations, yet these locations may or may not exist.
> rear exhaust, front intake, bottom intake, side panel exhaust (side panel is doubtfully possible).
> 
> so in all likelihood, not all cases can accommodate a GPU with AIO.


And for those cases, people with non-standard builds already expect to need non-standard solutions to get the job done. No big deal. AIOs, be they CPU or GPU, work with almost every average case out there, so compatibility issues aren't really limiting their customer base in any way.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *magnek*
> 
> *Minimum* 54% faster? I'm a bit skeptical, but it's obviously great news if true. Coupled with a TDP that puts Fiji only slightly behind Maxwell in perf/watt, a tremendous improvement indeed.


So, when a news site says something that isn't favorable to AMD, you dismiss it as "a silly rumor," but when a site posts something even you say you don't really believe, instead of "stupid rumors are false, stop spreading these rumors" you say "obviously great news if true"?

Interesting.


----------



## Wishmaker

Quote:


> Originally Posted by *47 Knucklehead*
> 
> So, when a news site says something that isn't favorable to AMD, you dismiss it as "a silly rumor" and when a site does something that even you say you really don't believe, you don't say "Stupid rumors are false, stop spreading these rumors, you say "obviously great news if true."?
> 
> Interesting.


This and sliced bread are the only true constants in the universe.


----------



## harney

Well, I do wish AMD the best with the Fury products. However, the long wait hasn't helped; I'm still sitting on this damn fence waiting for the results, the release, and most importantly the price.

For me it's either the 980 Ti or Fury, but it also depends on the gouging gods that live above the UK...

I'm hoping for some price movement on the 980 Ti here in the UK once Fury bites. If they both have similar performance, then for me it's all about price. But until release, I need a comfy cushion for this damn fence...


----------



## GorillaSceptre

Quote:


> Originally Posted by *harney*
> 
> Well, I do wish AMD the best with the Fury products. However, the long wait hasn't helped; I'm still sitting on this damn fence waiting for the results, the release, and most importantly the price.
> 
> For me it's either the 980 Ti or Fury, but it also depends on the gouging gods that live above the UK...
> 
> I'm hoping for some price movement on the 980 Ti here in the UK once Fury bites. If they both have similar performance, then for me it's all about price. But until release, I need a comfy cushion for this damn fence...


Welcome to the club.

Fury or Ti for me too. Only 5 days left to wait.


----------



## Noobism

Quote:


> Originally Posted by *harney*
> 
> Well, I do wish AMD the best with the Fury products. However, the long wait hasn't helped; I'm still sitting on this damn fence waiting for the results, the release, and most importantly the price.
> 
> For me it's either the 980 Ti or Fury, but it also depends on the gouging gods that live above the UK...
> 
> I'm hoping for some price movement on the 980 Ti here in the UK once Fury bites. If they both have similar performance, then for me it's all about price. But until release, I need a comfy cushion for this damn fence...


People knew when these cards were going to be announced and released; I'm not sure why everyone gets hung up on that.


----------



## Kuivamaa

Presentation timetables and review NDAs are already in place, with dates set and all. It won't take long.


----------



## magicc8ball

Now for something to pass the time... hmm, maybe some last-minute memories with my 7970? Yes, I think that will do.


----------



## ebduncan

I just wanted to note that the 295X2 stays under 65°C with a single 120mm radiator, granted you have reasonable ambient temps.

The Fury X WCE will have plenty of thermal room for overclocking. The real question is whether it is capable of overclocking well. I would also assume there is no reason to increase the memory speed; the focus will be on the core clock. Based on Hawaii, I would say it will likely have decent OC room. The bigger question is what that many shaders will do to power consumption.

I doubt I will be upgrading this generation. I could see myself buying a 980 Ti or a Fury Pro, but I will likely just wait for the 14nm GPUs and HBM2. It's not like my current 290 (1300/1650) is holding me back at 1080p.

For those talking about the AIO liquid cooler: most cases have at least one 120mm fan mount, and if for some reason you don't have an available 120mm opening, you can go with an air-cooled card.
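On the power-consumption question, the usual first-order rule of thumb is that dynamic power scales with clock and with voltage squared. A rough sketch; the clocks and voltages below are purely hypothetical examples, not Fiji figures:

```python
# First-order dynamic power scaling: P ~ C * V^2 * f.
def scaled_power(base_w, base_mhz, base_v, new_mhz, new_v):
    """Estimate board power after a clock/voltage change (rough model only:
    ignores static leakage, VRM losses, and memory power)."""
    return base_w * (new_mhz / base_mhz) * (new_v / base_v) ** 2

# Hypothetical example: a 300 W card pushed +10% on the core with +5% voltage.
print(round(scaled_power(300, 1050, 1.20, 1155, 1.26), 1))  # ~363.8 W
```

In other words, even a modest voltage bump compounds quickly on a card that already starts near 300 W, which is presumably why the water-cooled version exists.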


----------



## JackCY

Quote:


> Originally Posted by *Astral Fly*
> 
> I'm in the market for a new GPU within the next couple of months, and the only way I'll consider AMD is if they bring power consumption down considerably. The 390X may give the same level of performance as the 980 at a lower price, but if it uses almost 100W more, I'm simply not interested.


It's better to have options other than just Intel and NVIDIA, better bang-per-buck options. The moment AMD is gone, with no other competition and no breakthrough from a disruptive product, Intel and NVIDIA will stall and keep prices where they are; look at what happened to CPUs due to the lack of real competition.

The 290/290X has been available second-hand for over a year; only a madman would buy one new at full price, especially after the currency rates have gone haywire. Second-hand, it's one of the best cards to buy, along with a used 280X.


----------



## maltamonk

If someone doesn't have a 120mm fan mount and is spending this kind of dough on a GPU, they would be well advised to look at different case options first.


----------



## harney

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Welcome to the club
> 
> 
> 
> 
> 
> 
> 
> Fury or Ti for me too. Only 5 days left to wait


If you need one, I've got a spare comfy cushion for that fence of yours... for Club Fury-or-Ti members only.
Quote:


> Originally Posted by *Noobism*
> 
> People knew when these cards were to be announced and released, not sure why everyone gets hung up on that.


Not hung up; I was just uncomfortable with sitting on the fence too long. Not long to wait now, and this comfy cushion helps...

So all is good.


----------



## Majin SSJ Eric

I wonder how hot these stacked HBM memory chips are going to get sitting right next to the GPU die. I suppose any cooler design would have to account for them, right? For that matter, will you have to put TIM on the memory chips when installing a block?


----------



## magicc8ball

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I wonder how hot these stacked hbm memory chips are going to get sitting right next to the gpu die? I suppose any cooler design would have to account for them, right? For that matter, will you have to put TIM on the memory chips when installing a block?


I am very curious about this as well. It is going to be interesting to see how the full-coverage blocks look and function.


----------



## Forceman

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I wonder how hot these stacked hbm memory chips are going to get sitting right next to the gpu die? I suppose any cooler design would have to account for them, right? For that matter, will you have to put TIM on the memory chips when installing a block?


If they are as low-power as they are supposed to be, they shouldn't get very hot by themselves (although they are smaller as well, so the thermal density may be similar), but you do have to wonder how much the GPU's heat will affect them. It must not be too big a problem, though, considering the layers are stacked, so the middle ones must trap a lot of heat; if they are fine, you'd expect the package as a whole to be fine. It will concentrate all the GPU and VRAM heat into a smaller area on the block, though, which may have a negative effect on GPU temps.


----------



## tsm106

Quote:


> Originally Posted by *magicc8ball*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Majin SSJ Eric*
> 
> I wonder how hot these stacked hbm memory chips are going to get sitting right next to the gpu die? I suppose any cooler design would have to account for them, right? For that matter, will you have to put TIM on the memory chips when installing a block?
> 
> 
> 
> I am very curious about this as well. It is going to be interesting seeing how the full coverage blocks look and function.

I would think the HBM/interposer is a non-issue heat-wise since it puts out less heat and is, voila, actively cooled by your GPU cooler anyway, so no need for more heatsinks etc.


----------



## Ganf

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I wonder how hot these stacked hbm memory chips are going to get sitting right next to the gpu die? I suppose any cooler design would have to account for them, right? For that matter, will you have to put TIM on the memory chips when installing a block?


The pic of the die with TIM still on it showed that the HBM chips are level with the top of the die, get TIM and are covered by the waterblock.


----------



## harney

Is all the voltage regulation on that interposer?


----------



## Ganf

Quote:


> Originally Posted by *harney*
> 
> is all the volt reg all on that interposer


Some, not all.


----------



## Forceman

Quote:


> Originally Posted by *harney*
> 
> is all the volt reg all on that interposer


No, should still be on the PCB. Just the GPU and HBM stacks on the interposer.
Quote:


> Originally Posted by *Ganf*
> 
> Some, not all.


Pretty sure those are just surface mount capacitors, like you see on delidded Intel CPUs.


----------



## Ganf

Quote:


> Originally Posted by *Forceman*
> 
> No, should still be on the PCB. Just the GPU and HBM stacks on the interposer.
> Pretty sure those are just surface mount capacitors, like you see on delidded Intel CPUs.


You're right, I was thinking power management in general, not VRMs.


----------



## harney

OK, so the VRM is on the PCB. How about the memory controller? The reason I'm asking is heat ... as I recall, wasn't Haswell hot because the memory controller was on the chip die?

just a thought


----------



## tsm106

Quote:


> Originally Posted by *harney*
> 
> Ok so the VRM is on the pcb how about the memory controller reason why i am asking is heat ...as i recall wasn't the haswell hot because the memory controller was on chip die
> 
> just a thought


AFAIK there isn't a traditional memory controller; it goes through the interposer.


----------



## magnek

Quote:


> Originally Posted by *47 Knucklehead*
> 
> So, when a news site says something that isn't favorable to AMD, you dismiss it as "a silly rumor" and when a site does something that even you say you really don't believe, you don't say "Stupid rumors are false, stop spreading these rumors, you say "obviously great news if true."?
> 
> Interesting.


I'm going to go ahead and assume you're making yet another reference to that Fury X limited to 30K thread. Alright so here's another one of your great examples of false equivalency.

First of all I only "dismissed the rumor" because Lisa Su denied production limitations herself. I wasn't the first or only person saying such a thing, heck I wasn't even the most vocal. Literally all I did was quote your post where you said supply would be limited due to poor HBM yields and said "Lisa Su debunked this herself". And of course then you started with the intentionally limiting supply nonsense. Even then, nowhere did I say anything resembling what you quote me as saying. That's right, I did not say "stupid rumors are false, stop spreading these rumors" even to your own rumor you made up yourself that no other "news sites" ever reported upon.

As far as the rumor in question goes, I expressed my skepticism towards the 54% figure, called it great news if true, and moved on. Given the lack of contradicting rumors and the unfalsifiability of the rumor at the time, I don't see how my response was inappropriate.

Finally and no offense, but I find it extremely ironic you're making these accusations. Pot calling the kettle black much?

Edited to maintain civility


----------



## Gilles3000

Quote:


> Originally Posted by *harney*
> 
> Ok so the VRM is on the pcb how about the memory controller reason why i am asking is heat ...as i recall wasn't the haswell hot because the memory controller was on chip die
> 
> just a thought


The main reason for Haswell running hot was bad TIM between the die and IHS.


----------



## harney

Quote:


> Originally Posted by *Gilles3000*
> 
> The main reason for Haswell running hot was bad TIM between the die and IHS.


Yes, I was aware of the poor thermal job they did, but I'm more than sure I read somewhere that there was a slight increase in heat due to the memory controller being in the chip.


----------



## Ganf

Quote:


> Originally Posted by *Gilles3000*
> 
> The main reason for Haswell running hot was bad TIM between the die and IHS.


Haswell-E's are still hot with solder, so don't be too quick to dismiss.


----------



## lucasj1974

AMD needs to get it together....they really have been making poor business decisions....Instead of letting rumors fly around for months they ought to keep the tech community supplied with information instead of "leaking it out". At this point, even if the FuryX stomps the competition I don't think I would risk buying one because of the kind of company AMD is showing itself to be. Sad but true.


----------



## Ganf

Quote:


> Originally Posted by *lucasj1974*
> 
> AMD needs to get it together....they really have been making poor business decisions....Instead of letting rumors fly around for months they ought to keep the tech community supplied with information instead of "leaking it out". At this point, even if the FuryX stomps the competition I don't think I would risk buying one because of the kind of company AMD is showing itself to be. Sad but true.


Ahh, right, and Nvidia doesn't wait until a week before they release a card to start dribbling out information....


----------



## lucasj1974

Quote:


> Originally Posted by *Ganf*
> 
> Ahh, right, and Nvidia doesn't wait until a week before they release a card to start dribbling out information....


Well the way I feel about AMD involves a lot more than just the leaks.......that's just the icing on the crap cake. The leaks have been going on for a lot more than a week, BTW.


----------



## Ganf

Quote:


> Originally Posted by *lucasj1974*
> 
> Well the way I feel about AMD involves a lot more than just the leaks.......that's just the icing on the crap cake. The leaks have been going on for a lot more than a week, BTW.


The information that's actually coming from AMD? A couple weeks. The rest of it has been frantic speculation and maddened cyberspying by literally everyone else on the planet.

It's been incredible to watch what lengths people will go through to claim to have exclusive information on these cards.


----------



## lucasj1974

Quote:


> Originally Posted by *Ganf*
> 
> The information that's actually coming from AMD? A couple weeks. The rest of it has been frantic speculation and maddened cyberspying by literally everyone else on the planet.
> 
> It's been incredible to watch what lengths people will go through to claim to have exclusive information on these cards.


I won't dispute that point.


----------



## Forceman

Quote:


> Originally Posted by *Ganf*
> 
> It's been incredible to watch what lengths people will go through to claim to have exclusive information on these cards.


That's always the case around new GPU release time. Everyone has a "friend" that knows "things".


----------



## magnek

Quote:


> Originally Posted by *lucasj1974*
> 
> AMD needs to get it together....they really have been making poor business decisions....Instead of letting rumors fly around for months they ought to keep the tech community supplied with information instead of "leaking it out". At this point, even if the FuryX stomps the competition I don't think I would risk buying one because of the kind of company AMD is showing itself to be. Sad but true.


What do you propose AMD should've done then? To keep the tech community supplied with info but not officially announce anything would, well, require them to leak information.

If you're complaining about the lack of "official leaks", well Roy has been unusually quiet this year. And after the Bulldozer disaster and Freesync fiasco, I wonder if they've decided it's best to just shut their mouth for the near term. Or maybe it's just how Lisa Su likes to do things.


----------



## SlackerITGuy

Sorry if already posted:









Source: PCPer.


----------



## Ghoxt

OH they have one. Tasty

Will be interesting to see which SKU this is. Are all Fury WC? or no... No telling as it's the best kept secret and all.


----------



## Redwoodz

Quote:


> Originally Posted by *harney*
> 
> I very much doubt that the 390x is going to be anywhere near the 980 ti unless it comes equipped with its own nuclear sub station
> 
> 
> 
> 
> 
> 
> 
> 
> 
> i even have my doubts on the flurry but i am hoping that will be at least 10% faster ....i am no fan of either red or green but like i said before i do want AMD to succeed for all i sakes


Probably closer to the 980, true, but they really are not that far apart at high resolutions. The 290X is approx. 3100 kr while the 980 is 5100 kr, without VAT.
Quote:


> Originally Posted by *SlackerITGuy*
> 
> Sorry if already posted:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Source: PCPer.










Quality looks very good!


----------



## Ganf

Rep, we haven't seen that one yet, and that's the last pic I need to see of a GT on the radiator to squash any doubt.


----------



## magnek

Well these appear to be the high speed typhoons though judging by the circular ring joining the fan blades, and they can get a bit loud. I hope for AMD's sake they didn't put the 5400 rpm one on there.









CoolingTechnique tested the AP-29 (3000 rpm) and AP-31 (5400 rpm) versions for those interested.


----------



## joeh4384

Are the VRMs factory water cooled as well as the GPU/VRAM?


----------



## Ganf

Mmmmmaybe. Nidec does build fans to custom specs for the right order volumes. AMD could have requested the blade type for a higher RPM fan just because it's black and holds a sticker. Hard to say until we hear a reviewer say whether it shrieks or not.

It's unlikely that it's anything above 3000 rpm. As another poster mentioned, the 4000 rpm fans will blow a standard fan header just starting up, and even if it were a 3000 rpm fan, it'll be PWM and you only need 2000 rpm out of it to keep temps down.


----------



## DarkLiberator

That's crazy how small it looks. My GTX 770 is like twice as long as that Fury X. Might even fit in an htpc easily.


----------



## Asmodian

Quote:


> Originally Posted by *harney*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Gilles3000*
> 
> The main reason for Haswell running hot was bad TIM between the die and IHS.
> 
> 
> 
> yes i was aware of the poor thermal job they did but i am more than sure i read some where that explained there was a slight increase in heat due to the mem controller being in the chip

It isn't the memory controller heating up Haswell; it's the integrated voltage regulator (FIVR). The memory controllers have been on die ever since Nehalem.

There is a memory controller on the Fiji die but it is smaller than Hawaii's because for some reason HBM doesn't need as many transistors in its memory controller.


----------



## Majin SSJ Eric

Still kinda surprised more people aren't making a big deal about AMD packaging GT's with this card, especially around here where that fan has an almost cult status. Also, that rad looks mighty thick...


----------



## magnek

It looks thicker than it really is because the shroud extends over the fan as well. It's hard to judge from those angles but the rad itself looks to be a bit thicker than 2 PCI brackets.


----------



## Majin SSJ Eric

I thought that bit with the grooves in it is the reservoir? Sorry, I've never been too knowledgeable about these AIO's...


----------



## magnek

No you're right it's the reservoir, not sure why I said shroud


----------



## lucasj1974

Quote:


> Originally Posted by *magnek*
> 
> What do you propose AMD should've done then? To keep the tech community supplied with info but not officially announce anything would, well, require them to leak information.
> 
> If you're complaining about the lack of "official leaks", well Roy has been unusually quiet this year. And after the Bulldozer disaster and Freesync fiasco, I wonder if they've decided it's best to just shut their mouth for the near term. Or maybe it's just how Lisa Su likes to do things.


I want them to stop dragging everyone around waiting for something that's supposed to be great but isn't. Stop badmouthing the competition and release something better. I'm over them... and I was one of the first AMD guys around. I had an AMD 486 133 MHz processor that competed with Intel's 80386 back when Windows 95 was hitting big (just in case you think I'm an Nvidia fanboy). They are really letting me down. Their company sucks now.

(Sniffle) I'm sick of articles....I want benchmarks! (Sniffle)


----------



## magnek

Umm did you miss the thread about the leaked benchmark? And to be fair AMD's "official promotion" so far has consisted of a couple absolutely forgettable garbage 8 second teasers, and an official page that's well hidden, so I really wouldn't say they're "dragging everyone around".

Badmouthing the competition - now there you have a point. Even if every word you say is true, you come off as unprofessional at the least. And Roy's deliberately provocative rhetoric doesn't help either.


----------



## shadow85

I'll admit, that is a good looking card. I just hope it performs as well as it is hyped up to be.


----------



## Blameless

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I wonder how hot these stacked hbm memory chips are going to get sitting right next to the gpu die?


Less hot than they get sitting next to the VRM on some GDDR5 boards.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> For that matter, will you have to put TIM on the memory chips when installing a block?


Yes, that would be wise.
Quote:


> Originally Posted by *harney*
> 
> is all the volt reg all on that interposer


Very little voltage regulation on the interposer...no more than is on a traditional GPU package really. Just some small capacitors and resistors for final filtering.
Quote:


> Originally Posted by *harney*
> 
> Ok so the VRM is on the pcb how about the memory controller reason why i am asking is heat ...as i recall wasn't the haswell hot because the memory controller was on chip die


Essentially all GPUs, going back before the term GPU was coined, have the memory controller on-die. I think I'd have to dig some multi-chip parts from the early 90s out to find an exception.
Quote:


> Originally Posted by *tsm106*
> 
> Afaik there isn't a traditional memory controller, it goes thru the interposer.


Interposer connects the memory to the GPU, but the memory controller has to be on the GPU unless they want HBM to be slower than what it's replacing.

Off-die memory controllers are not an option if memory performance is a real concern.
Quote:


> Originally Posted by *harney*
> 
> yes i was aware of the poor thermal job they did but i am more than sure i read some where that explained there was a slight increase in heat due to the mem controller being in the chip


Intel has been using on-die memory controllers for their CPUs since Nehalem in late 2008.

You are thinking of the FIVR, not the IMC.
Quote:


> Originally Posted by *Asmodian*
> 
> There is a memory controller on the Fiji die but it is smaller than Hawaii's because for some reason HBM doesn't need as many transistors in its memory controller.


Lower clocked memory controllers can be smaller. HBM runs at very low clock speeds.


----------



## lucasj1974

Quote:


> Originally Posted by *magnek*
> 
> Umm did you miss the thread about the leaked benchmark? And to be fair AMD's "official promotion" so far has consisted of a couple absolutely forgettable garbage 8 second teasers, and an official page that's well hidden, so I really wouldn't say they're "dragging everyone around".
> 
> Badmouthing the competition now ok you have a point there. Even if every word you say is true you do come off as unprofessional at least. And Roy's deliberately provocative rhetoric doesn't help either.


I really wasn't trying to be professional. I don't claim to be a professional.


----------



## magnek

lol the "you" in my sentence was referring to AMD, not "you the poster"


----------



## ALT F4

Quote:


> Originally Posted by *shadow85*
> 
> Ill admit, that is a good looking card. I just hope it performs as well as it is hyped up to be.


I think we all do.


----------



## harney

Quote:


> Originally Posted by *lucasj1974*
> 
> I want them to stop dragging everyone around waiting for something that's supposed to be great but isn't. Stop badmouthing the competition and release something better. I'm over them......and I was one of the first AMD guys around. I had an AMD 486 133mhz processor that competed with intel's 80386 back when Windows 95 was hitting big (just in case you think I'm an Nvidia fanboy). They are really letting me down. Their company sucks now.
> 
> (Sniffle) I'm sick of articles....I want benchmarks! (Sniffle)


Best I can do is this for you... saw this fly poster while going to the store... AMD definitely mean business with a poster like this


----------



## 47 Knucklehead

Quote:


> Originally Posted by *ALT F4*
> 
> I think we all do.


For the sake of AMD staying in business as their own entity.


----------



## Exilon

LegitReviews, PCPer, and KitGuru are mad that AMD social media shills (in the most literal sense) are receiving samples before them. Guru3d doesn't have samples either.

So what's it going to be? Check all that apply

[ ] New marketing paradigm - social media!
[ ] No decisive advantage vs GTX 980 Ti, have to guarantee favorable launch reviews.
[ ] Hard launch w/o reviews
[ ] Paper launch
[ ] Other


----------



## lucasj1974

Quote:


> Originally Posted by *harney*
> 
> Best i can do is this for you.... saw this fly poster while going to store ...AMD definitely mean business with a poster like this


LOL That's great!


----------



## lucasj1974

Quote:


> Originally Posted by *Exilon*
> 
> LegitReviews, PCPer, and KitGuru are mad that AMD social media shills (in the most literal sense) are receiving samples before them. Guru3d doesn't have samples either.
> 
> So what's it going to be? Check all that apply
> 
> [ ] New marketing paradigm - social media!
> [ ] No decisive advantage vs GTX 980 Ti, have to guarantee favorable launch reviews.
> [ ] Hard launch w/o reviews
> [ ] Paper launch
> [ ] Other


Check #2


----------



## Redwoodz

Quote:


> Originally Posted by *Exilon*
> 
> LegitReviews, PCPer, and KitGuru are mad that AMD social media shills (in the most literal sense) are receiving samples before them. Guru3d doesn't have samples either.
> 
> So what's it going to be? Check all that apply
> 
> [ ] New marketing paradigm - social media!
> [ ] No decisive advantage vs GTX 980 Ti, have to guarantee favorable launch reviews.
> [ ] Hard launch w/o reviews
> [ ] Paper launch
> [ ] Other


I believe they already stated it would be a paper launch on the 16th with cards following in 2 weeks. If you don't want leaks, you don't give it out.


----------



## thwl

Going to say .. #4 and unsub immediately.


----------



## Pantsu

I think it's pretty obvious by now that the launch on 16th is hard only for the 300 series, and they'll only reveal the Fury, and reviews will come out in a week or two with availability maybe even later.


----------



## specopsFI

Yep, 300 series is a hard launch since they are already in shops. And it doesn't look good that AMD doesn't want tech sites publishing reviews until after the launch.

But the Fury launch will most likely follow the usual path. There's probably a good couple of weeks until retail availability and by that time we will see reviews.


----------



## michaelius

Weird - our Polish reviewers strongly suggested they received tons of new GPUs to test over the last week. Some of them were obviously non-reference 980 Tis, but not all, if I'm reading between the lines correctly.


----------



## Sheyster

Quote:


> Originally Posted by *Exilon*
> 
> LegitReviews, PCPer, and KitGuru are mad that AMD social media shills (in the most literal sense) are receiving samples before them. Guru3d doesn't have samples either.
> 
> So what's it going to be? Check all that apply
> 
> [ ] New marketing paradigm - social media!
> [ ] No decisive advantage vs GTX 980 Ti, have to guarantee favorable launch reviews.
> [ ] Hard launch w/o reviews
> [ ] Paper launch
> [ ] Other


Looks like AMD is learning from the big movie studios. If the finished movie is crappy, skip the legit critical reviews and leak a few copies to the shills. UGH!


----------



## 331149

Card not out yet, drivers still being optimized, nvidia fanboys going crazy. Yep looks like any other day on the interweb. Let's not forget the fact that the card is not out yet and drivers are still being optimized.


----------



## tsm106

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tsm106*
> 
> Afaik there isn't a traditional memory controller, it goes thru the interposer.
> 
> 
> 
> Interposer connects the memory to the GPU, but the memory controller has to be on the GPU unless they want HBM to be slower than what it's replacing.
> 
> Off-die memory controllers are not an option if memory performance is a real concern.

What the hell do you mean off-die? Do you reply just to reply or something? The whole memory system on HBM is off die, that's the point. And if you bothered to read any of the HBM explanation articles, none refer to a memory controller directly. And digging into the HBM setup, you'd find that the HBM stack of four sits on top of a logic die which then stacks onto the interposer. There is no traditional memory controller from what I have seen.


----------



## provost

Quote:


> Originally Posted by *Sheyster*
> 
> Looks like AMD is learning from the big movie studies. If the finished movie is crappy, skip the legit critical reviews and leak a few copies to the shills. UGH!


What use would it be to leak the cards to the reviewers; has any reviewer even bothered to test the GK110 cards with new drivers against the Maxwell, and has any reviewer bothered to test the sli frametimes with the new Nvidia drivers?

My Titans keep crashing, and the fix of the AB polling rate and PhysX to CPU is no longer working. Never mind the latest driver, which crashes every 5 minutes in W3, but the previous version is also unstable.

Do reviewers only care about "reviewing" the latest hardware that's for sale, or do these guys add any value to their subscribers by actually doing some decent journalism and "reviewing" what is actually important to their subscriber base and pc gamers in general (most of these forums have multiple threads on crashing and performance issues with Nvidia's latest drivers) rather than advertising new hardware?

I don't need another useless canned review masquerading as an advertisement for new hardware. I can get that from the marketing literature.

On a separate note, it looks like there is a big disconnect between the poor driver guys and the glorified Gameworks folks at NV, based on how these drivers are performing. Either people aren't talking to each other or there is some cultural friction going on.

Nvidia is failing (in my experience and for numerous others) one metric that is the most critical to a consumer discretionary company, The Customer Experience.

No matter what people say about Apple, if Steve Jobs were in charge of NV, he would have been mortified.


----------



## Blameless

Quote:


> Originally Posted by *tsm106*
> 
> What the hell you do you mean off-die? Do you reply just to reply or something?


Anything not on the Fiji die proper is off die in this context.
Quote:


> Originally Posted by *tsm106*
> 
> The whole memory system on HBM is off die, that's the point.


This is not true.

Indeed, quite the opposite is the case. The memory has been moved on package, which is a step closer to the die than it was before.

The memory controller itself is still going to be on the Fiji die proper... there is really no other viable place to put it.
Quote:


> Originally Posted by *tsm106*
> 
> And if you bothered to read any of the HBM explanation articles none refer to a memory controller directly.


Because HBM is the memory, not the controller. Some sort of memory controller is a given, but HBM doesn't dictate what that memory controller must be any more than GDDR5, DDR3, or FPM asynchronous DRAM does.
Quote:


> Originally Posted by *tsm106*
> 
> And digging into the HBM setup, you'd find that the HBM stack of four are on top of a logic die which then stacks onto the interposer.


And? The logic layer doesn't negate the need for a memory controller on Fiji any more than the buffer on an FB-DIMM eliminates the need for the MCH on an old Xeon setup.
Quote:


> Originally Posted by *tsm106*
> 
> There is no traditional memory controller from what I have seen.


If it didn't have a memory controller it would not be able to access memory. If its memory controller were not on the Fiji GPU die, there would be an enormous latency penalty that would more than negate any possible benefit HBM could provide.


----------



## tsm106

You like reading your own posts.


----------



## Blameless

Quote:


> Originally Posted by *tsm106*
> 
> You like reading your own posts.


Is that a rebuttal to something?

You seem to be trying to imply that Fiji won't have transistors dedicated to accessing its HBM stacks. I do not find this remotely likely.


----------



## DarkIdeals

I kept telling this to people but the AMD fanboys wouldn't listen...









When info came out saying the Fury X would be 60% faster than the 290X, I did the math in my head: the 290X is between a 780 and 780 Ti in overall performance. This puts the GTX 980 at around 15%, perhaps 20%, faster than the 290X. Then the GTX TITAN X is ~50% faster than the GTX 980. So this puts the TITAN X at 65% faster than the 290X, so at least a few percent over the Fury X. BUT, we also have to consider that the 60% number is AMD's own "bragging", and regardless of whether it's Nvidia, Intel, AMD etc., bragging is NEVER 100% correct. So a better estimate would be 50-52% faster than the 290X, perhaps 55% at best. So that puts the TITAN X at 13-15% faster than the Fury X, and the 980 Ti around 8-10% faster than the Fury X.

Then there's also another consideration: the fact that the card is still limited to 4GB of VRAM. No matter how fast your VRAM is, if you don't have enough of it performance will TANK! (Even the 640GB/s revealed isn't "that" much faster than the 336GB/s on the GM200 chips, and the 640GB/s has actually been proven false; reportedly the card will have somewhere between 500GB/s and 600GB/s, can't remember the exact number they listed.) So at 4K, 5K, 4K surround, 1440p surround (what cards like the Fury X and TITAN X are MADE for!), or even 144Hz 1440p and 3440x1440 to a slightly smaller degree, the Fury X will lose quite a bit of performance. Even with 4GB running at ~550GB/s, if you are forced to spill 1-2GB into ~30GB/s system RAM, your overall performance will plummet while waiting for data to move between the GPU and RAM at that slower ~30GB/s DDR3 speed (a bit higher with DDR4, maybe 45-50GB/s, but still the same result). There are games like Dying Light, Shadow of Mordor, heavily texture-modded Skyrim (or Fallout 4 when it's out) etc. that will use over 4GB of VRAM even at 3440x1440 or triple 1080p, and possibly even regular 1440p in some cases. And the list gets MUCH larger at 4K resolution.

It's not that the Fury X is a "bad" card or anything; it's still much better than what AMD had been doing in the past, with the exception of the 295X2. It's just that it isn't the amazing "TITAN X KILLER" that everyone was making it out to be, especially since it doesn't even beat the 980 Ti. I wouldn't really hold out hope for drivers to fix it either, as we all know how AMD drivers work... *shudder*
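As a quick sanity check on the chained estimates above (the uplift figures are the poster's assumptions, not benchmark data), note that percentage uplifts compound multiplicatively rather than adding:

```python
# Chained GPU performance estimates from the post above.
# All uplift figures are assumptions quoted from the post, not measurements.
r9_290x = 1.00
gtx_980 = r9_290x * 1.15   # "~15% faster than the 290X"
titan_x = gtx_980 * 1.50   # "~50% faster than the GTX 980"
fury_x  = r9_290x * 1.52   # the post's "50-52% faster than 290X" estimate

# 15% then 50% compounds to ~72.5% over the 290X, not 65%:
print(f"TITAN X vs 290X:  +{(titan_x - 1) * 100:.1f}%")
print(f"TITAN X vs Fury X: +{(titan_x / fury_x - 1) * 100:.1f}%")
```

Under these assumptions the TITAN X lands about 13-14% ahead of the Fury X, consistent with the post's 13-15% conclusion, though the intermediate 65% figure slightly understates the compounding.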


----------



## sugarhell

404: Math not found ^^


----------



## Sheyster

Quote:


> Originally Posted by *provost*
> 
> Nvidia is failing (in my experience and for numerous others) one metric that is the most critical to a consumer discretionary company, The Customer Experience.


My experiences with the T-X (so far), the 780 Ti before it, and the 780 before that have all been outstanding. I skipped the 980 since I had a great 1300 MHz 780 Ti that was more or less equivalent.


----------



## provost

Quote:


> Originally Posted by *Sheyster*
> 
> My experiences with the T-X (so far), the 780 Ti before it, and the 780 before that have all been outstanding. I skipped the 980 since I had a great 1300 MHz 780 Ti that was more or less equivalent.


Well, sure as long as you keep buying the latest sku Nvidia has out in a single card configuration, you may not have issues with Nvidia drivers.
But, if Nvidia is only interested in supporting the latest sku (because of whatever godforsaken corporate priorities it deems important), I have no interest in their cards as I don't have the time to keep switching out my cards that I have to pay a premium for every few months. Not to mention the accelerated depreciation in the value of any of these cards which Nvidia refuses to support. However, I will be interested in renting a premium card for $10/$15 a month, as long as the lessor keeps sending me an upgrade for a nominal cost with adequate driver support.


----------



## svenge

Quote:


> Originally Posted by *Exilon*
> 
> LegitReviews, PCPer, and KitGuru are mad that AMD social media shills (in the most literal sense) are receiving samples before them. Guru3d doesn't have samples either.
> 
> So what's it going to be? Check all that apply
> 
> [ ] New marketing paradigm - social media!
> *[X] No decisive advantage vs GTX 980 Ti, have to guarantee favorable launch reviews.
> [X] Hard launch w/o reviews
> *[ ] Paper launch
> [ ] Other


Between AMD snubbing the established review sites in favor of using their "Red Team Plus" lapdogs for 100% uncritical praise/promotion and the blatant re-badging of not only model numbers but chip code names (Hawaii -> Grenada, Tonga -> Antigua, and the re-re-branding of Pitcairn -> Curacao -> Trinidad), it seems that AMD has an awful lot to hide in terms of how underwhelming their "new" line of *R*(e-br)*a**deon* 300 series GPUs will be...


----------



## iLeakStuff

So what is the point of giving samples only to members of the AMD fan club? Will they publish reviews? Does AMD expect people to put as much weight into the words of the fan club as we would with respectable reviewers like PCPer etc.?

Or am I reading this wrong?


----------



## rt123

Quote:


> Originally Posted by *svenge*
> 
> Between AMD snubbing the established review sites in favor of using their "Red Team Plus" lapdogs for 100% uncritical praise/promotion and the blatant re-badging of not only model numbers but chip code names (Hawaii -> Grenada, Tonga -> Antigua, and the re-re-branding of Pitcairn -> Curacao -> Trinidad), it seems that AMD has an awful lot to hide in terms of how underwhelming their "new" line of *R*(e-br)*a**deon* 300 series GPUs will be...


Do you have a source for this alleged snub..?


----------



## Noobism

Quote:


> Originally Posted by *DarkIdeals*
> 
> I kept telling this to people but the AMD fanboys wouldn't listen...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> When info came out and said the Fury X would be 60% faster than the 290X, I did the math in my head: the 290X is between a 780 and 780 Ti in overall performance. This puts the GTX 980 at around 15%, perhaps 20%, faster than the 290X. Then the GTX Titan X is ~50% faster than the GTX 980. So this puts the Titan X at 65% faster than the 290X, so at least a few percent over Fury X. BUT, we also have to consider that the 60% number is AMD's own "bragging", and regardless of whether it's Nvidia, Intel, AMD etc., bragging is NEVER 100% correct. So a better estimate would be 50-52% faster than the 290X, perhaps 55% at best. That puts the Titan X at 13-15% faster than Fury X, and the 980 Ti around 8-10% faster than Fury X.
> 
> Then there's also another consideration: the fact that the card is still limited to 4GB of VRAM. No matter how fast your VRAM is, if you don't have enough of it, performance will TANK! (Even the 640GB/s revealed isn't "that" much faster than the 336GB/s on the GM200 chips, and the 640GB/s has actually been claimed false, with the card said to have somewhere between 500GB/s and 600GB/s; can't remember the exact number they listed.) So at 4K, 5K, 4K surround, 1440p surround (what cards like the Fury X and Titan X are MADE for!), or even 144Hz 1440p and 3440x1440 to a slightly smaller degree, the Fury X will lose quite a bit of performance. Even with 4GB running at ~550GB/s, if you are forced to use 1-2GB of ~30GB/s system RAM, your overall performance will plummet while waiting for the information to move between the GPU and RAM, as well as taking time to be accessed at that slower ~30GB/s DDR3 speed (a bit higher with DDR4, maybe 45-50GB/s, but still the same result). There are games like Dying Light, Shadow of Mordor, heavily texture-modded Skyrim (or Fallout 4 when it's out) etc. that will use over 4GB of VRAM even at 3440x1440 or triple 1080p, and possibly even regular 1440p in some cases. And the list gets MUCH larger at 4K resolution.
> 
> It's not that the Fury X is a "bad" card or anything; it's still much better than what AMD had been doing in the past, with the exception of the 295X2. It's just that it isn't the amazing "TITAN X KILLER" everyone was making it out to be, especially since it doesn't even beat the 980 Ti. I wouldn't really hold out hope for drivers to fix it either, as we all know how AMD drivers work... *shudder*


Really quite fitting.


----------



## svenge

Quote:


> Originally Posted by *rt123*
> 
> Do you have a source for this alleged snub..?


/r/hardware shows PcPer, Linus, and Guru3D not getting review units yet while AMD just held a 2-day confab with their viral marketing crew. And those are just the sites that have spoken about the situation so far.


----------



## magnek

Quote:


> Originally Posted by *iLeakStuff*
> 
> So what is the point of giving samples only to members of the AMD fan club? Will they publish reviews? Does AMD expect people to put as much weight into the words of the fan club as we would with respectable reviewers like PCPer etc.?
> 
> Or am I reading this wrong?


These are all great questions, so now my question is: is AMD really that dumb and conceited to think they could pull something like this off without severe backlash, or is this one big misunderstanding that's been twisted and blown out of proportion?

A follow-up question would be: is this the first time AMD has done something like this with their Red Team Plus members, or is this merely the continuation of a tradition?

And my final question: have any of these RTP members ever reviewed any of AMD's hardware, showered it with praise, and then tried to pass it off as a legit review from a neutral third party?


----------



## Noobism

Quote:


> Originally Posted by *svenge*
> 
> /r/hardware shows PcPer, Linus, and Guru3D not getting review units *YET* while AMD just held a 2-day confab with their viral marketing crew. And those are just the sites that have spoken about the situation so far.


So because they've yet to get a review sample, it's considered a snub? Not enough time in the day, I tell ya.


----------



## iLeakStuff

Quote:


> Originally Posted by *magnek*
> 
> These are all great questions, so now my question is: is AMD really that dumb and conceited to think they could pull something like this off without severe backlash, or is this one big misunderstanding that's been twisted and blown out of proportion?
> 
> A follow-up question would be: is this the first time AMD has done something like this with their Red Team Plus members, or is this merely the continuation of a tradition?
> 
> And my final question: have any of these RTP members ever reviewed any of AMD's hardware, showered it with praise, and then tried to pass it off as a legit review from a neutral third party?


You manage to raise even more interesting questions, wow.

I don't know the answer to any of them, except I don't think AMD has skipped any reviewers before. But like svenge says, reviewers might still get theirs before any official reviews and the AMD fan club just got a taste before them; but why would PCPer etc. complain so openly on Twitter if that were the case?

So many questions


----------



## magnek

I'm just saying if AMD was intentionally trying to bypass reviewers then they did a terrible job, and it's already backfired. Plus why is it that other major sites like AnandTech, TechPowerUp, TechReport, and Tom's haven't said a thing (yet anyway)? And of the sites that have spoken out, why are only PCPer and LegitReviews complaining while Linus and Guru3D make a simple "we don't have these cards yet" statement and move on?

Plus if AMD really was trying to pull a fast one, why would they leave all the evidence out in public for everyone to see, and provide at least a partial listing of its RTP members?

Or maybe they're skipping certain sites on purpose because of reasons. *shrug* /tinfoil hat


----------



## gamervivek

Quote:


> Originally Posted by *DarkIdeals*
> 
> I kept telling this to people but the AMD fanboys wouldn't listen...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> When info came out and said the Fury X would be 60% faster than the 290X, I did the math in my head: the 290X is between a 780 and 780 Ti in overall performance. This puts the GTX 980 at around 15%, perhaps 20%, faster than the 290X. Then the GTX Titan X is ~50% faster than the GTX 980. So this puts the Titan X at 65% faster than the 290X, so at least a few percent over Fury X. BUT, we also have to consider that the 60% number is AMD's own "bragging", and regardless of whether it's Nvidia, Intel, AMD etc., bragging is NEVER 100% correct. So a better estimate would be 50-52% faster than the 290X, perhaps 55% at best. That puts the Titan X at 13-15% faster than Fury X, and the 980 Ti around 8-10% faster than Fury X.


They might listen if you got your percentages right and used them correctly.
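The objection here is that chained "X% faster" figures compound multiplicatively rather than add; a quick sketch using the quoted post's own (approximate, unverified) numbers:

```python
# Chained "X% faster" claims multiply, they don't add.
def chain(*speedups):
    """Combine relative speedups, e.g. chain(0.17, 0.50) means
    'B is 17% faster than A, and C is 50% faster than B'."""
    total = 1.0
    for s in speedups:
        total *= 1.0 + s
    return total - 1.0

# If the 980 is ~17% faster than the 290X and the Titan X is
# ~50% faster than the 980 (the post's rough figures):
print(f"Titan X vs 290X: {chain(0.17, 0.50):.1%}")  # ~75.5%, not 15% + 50% = 65%
```

Adding the percentages understates the gap; whether the inputs themselves are right is a separate question.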


----------



## Orivaa

Quote:


> Originally Posted by *DarkIdeals*
> 
> I kept telling this to people but the AMD fanboys wouldn't listen...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> When info came out and said the Fury X would be 60% faster than the 290X, I did the math in my head: the 290X is between a 780 and 780 Ti in overall performance. This puts the GTX 980 at around 15%, perhaps 20%, faster than the 290X. Then the GTX Titan X is ~50% faster than the GTX 980. So this puts the Titan X at 65% faster than the 290X, so at least a few percent over Fury X. BUT, we also have to consider that the 60% number is AMD's own "bragging", and regardless of whether it's Nvidia, Intel, AMD etc., bragging is NEVER 100% correct. So a better estimate would be 50-52% faster than the 290X, perhaps 55% at best. That puts the Titan X at 13-15% faster than Fury X, and the 980 Ti around 8-10% faster than Fury X.


Maybe if you had not done the calculations in your head, they wouldn't be wrong.


----------



## Boomstick727

Quote:


> Originally Posted by *svenge*
> 
> /r/hardware shows PcPer, Linus, and Guru3D not getting review units yet while AMD just held a 2-day confab with their viral marketing crew. And those are just the sites that have spoken about the situation so far.


AMD and PC Gamer have gone to the trouble of putting on the first ever E3 PC event. AMD wants to reveal their hardware at the event; press will get samples after that. Why people would have a problem with that is beyond me...

It's almost like people are just sitting around waiting to get mad at something.

Let AMD reveal their hardware at their event. Reviews can come later..


----------



## awdrifter

This card probably won't live up to the hype. This is the exact kind of thing AMD did with the HD 2900 XT: there were numerous leaks that the card wasn't as powerful as the 8800 GTX, and they kept every reviewer under NDA until the day of release. When they had a superior product in the HD 4870 series, they didn't do any of that; reviews were out early and everyone was able to publish them.


----------



## magnek

The complaints are related to the 300 series cards (which we pretty much know to be rebrands anyway), not the Fury GPU with HBM. There's the leaked benchmark (and an entire OCN thread about it) showing Fury X to be on par with Titan X, and people seemed to think it was credible. So really it all comes down to price at this point.


----------



## zealord

Quote:


> Originally Posted by *magnek*
> 
> The complaints are related to the 300 series cards (which we pretty much know to be rebrands anyway), not the Fury GPU with HBM. There's the leaked benchmark (and an entire OCN thread about it) showing Fury X to be on par with Titan X, and people seemed to think it was credible. So really it all comes down to price at this point.


What do you think would be an appropriate price for the Fury X 4GB HBM water-cooled card if it performs within ±5% of the Titan X?


----------



## Kinaesthetic

Quote:


> Originally Posted by *zealord*
> 
> What do you think would be an appropriate price for the Fury X 4GB HBM water-cooled card if it performs within ±5% of the Titan X?


I'd say $599-649 personally. Because HBM or not, that doesn't change the fact that there is only 4GB of physically addressable space, regardless of how fast access to those bits is. Also, they'll have to price it at the 980 Ti level, purely because if it is within ±5% of the Titan X, that puts it squarely in the 980 Ti performance bracket, which is itself within ±5% of the Titan X.
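The capacity point can be made concrete with a toy effective-bandwidth model (all numbers hypothetical, not from any spec sheet): once any fraction of accesses spills out of VRAM into system RAM, the slow path dominates no matter how fast the HBM is.

```python
# Toy model: effective bandwidth when part of the working set spills
# from VRAM into much slower system memory (weighted harmonic mean,
# since times per byte add while bandwidths don't).
def effective_bw(fast_gbs, slow_gbs, slow_fraction):
    """Blend fast-path and slow-path access times into one bandwidth."""
    time_per_byte = (1.0 - slow_fraction) / fast_gbs + slow_fraction / slow_gbs
    return 1.0 / time_per_byte

# Assumed figures: 512 GB/s HBM with 20% of accesses hitting ~30 GB/s
# system memory; the headline bandwidth collapses.
print(f"{effective_bw(512, 30, 0.20):.0f} GB/s")
```

So even a modest spill past the 4GB limit erases most of HBM's headline advantage, which is the worry being voiced here.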


----------



## criminal

Quote:


> Originally Posted by *zealord*
> 
> What do you think would be an appropriate price for the Fury X 4GB HBM water-cooled card if it performs within ±5% of the Titan X?


$699 tops


----------



## zealord

Quote:


> Originally Posted by *Kinaesthetic*
> 
> I'd say $599-649 personally. Because HBM or not, that doesn't change the fact that there is only 4GB of physically addressable space, regardless of how fast access to those bits is. Also, they'll have to price it at the 980 Ti level, purely because if it is within ±5% of the Titan X, that puts it squarely in the 980 Ti performance bracket, which is itself within ±5% of the Titan X.


Exactly my thoughts. I could see $599 happening for the air-cooled Fury X. It also depends on how good the card is at 4K. If it is a behemoth at 4K and outclasses the Titan X, then I could see it being $800-900.
Quote:


> Originally Posted by *criminal*
> 
> $699 tops


Could be a reasonable price for the water-cooled Fury X from AMD's standpoint, yeah.


----------



## magnek

I'd say $699 for the AIO version, definitely <$649 for air cooled version. (preferably $599, but realistically $629)


----------



## zealord

We've had rumours of the Fury X being $849 or more for a long time (one even from a very reliable source), but I bet they readjusted after the 980 Ti released at $649.


----------



## magnek

Yes and so everybody took that $849 price as gospel. Well the same source also said 980 Ti would arrive after summer and was dead wrong about that. Just goes to show everything can (and does) change.

Oh and remember the Titan X @ $1349 rumors, and how when it was released at $999 people thought it was "a good deal"? Could very well be the same kind of thing going on here.


----------



## iLeakStuff

Maybe we'll get a scenario where you can get a 980 Ti Hybrid and a Fury X for more or less the same price? Performance should be close too, I think.


----------



## zealord

Quote:


> Originally Posted by *magnek*
> 
> Yes and so everybody took that $849 price as gospel. Well the same source also said 980 Ti would arrive after summer and was dead wrong about that. Just goes to show everything can (and does) change.
> 
> Oh and remember the Titan X @ $1349 rumors, and how when it was released at $999 people thought it was "a good deal"? Could very well be the same kind of thing going on here.


Nah, I mean the German source heise.de (they are very reliable; they almost never post rumours). They got a sheet while attending CeBIT where the price for the flagship card, at that time still called the 390X, was listed at $700+.

They were the very first to say anything about the price of Fiji, and back in March they already said that the flagship was very expensive and that there was a huge gap to the next non-Fiji card. Now we get that exact same ratio with prices (the 390X will be ~$400 and we all expect the Fury to be quite expensive).

For clarification, because my English can't quite put it together well:

Heise.de said in March:

390X: $700+
390: $700
380X: $400

and so on.

Now we have, based on rumors and some confirmation:

Fury X: price unknown, but expected to be expensive
Fury: price unknown
390X: $400

and so on.

I know that this is still a rumor and nothing is confirmed, but I actually believe they were right about it, at least at that time. Maybe they have changed it now, because the 980 Ti released at $649.


----------



## Ganf

Quote:


> Originally Posted by *zealord*
> 
> We've had rumours of the Fury X being $849 or more for a long time (one even from a very reliable source), but I bet they readjusted after the 980 Ti released at $649.


Yeah, the 980ti blindsided them, but the pricing scheme is obvious given what we know about the lineup.

AIO Fury priced against the 980ti Hybrid
Fury X against 980ti
Fury in between the 980 and 980ti
390x against the 980
etc...

You can tell they wanted to launch the AIO Fury against the Titan X, but the Price to Performance crown would've gone to the 980ti in that scenario and that would've been game over. AMD wouldn't even be a consideration for enthusiasts after that. Nvidia scooped the Fury's performance and left the 980ti's performance so close to the Titan X at such a low price that there was no way AMD could claim the ground in between the two. There is no ground to claim now, and the 980ti's price lowballs both AMD and Nvidia.

There was a lot of tinfoil hat talk about duopoly collusion while we were waiting for these cards to release. Cooperative Duopolies don't get this cutthroat, Nvidia wants AMD out of the market.


----------



## zealord

Quote:


> Originally Posted by *Ganf*
> 
> Yeah, the 980ti blindsided them, but the pricing scheme is obvious given what we know about the lineup.
> 
> *AIO Fury priced against the 980ti Hybrid
> Fury X against 980ti
> Fury in between the 980 and 980ti
> 390x against the 980
> etc...*
> 
> You can tell they wanted to launch the AIO Fury against the Titan X, but the Price to Performance crown would've gone to the 980ti in that scenario and that would've been game over. AMD wouldn't even be a consideration for enthusiasts after that. Nvidia scooped the Fury's performance and left the 980ti's performance so close to the Titan X at such a low price that there was no way AMD could claim the ground in between the two. There is no ground to claim now, and the 980ti's price lowballs both AMD and Nvidia.
> 
> There was a lot of tinfoil hat talk about duopoly collusion while we were waiting for these cards to release. Cooperative Duopolies don't get this cutthroat, *Nvidia wants AMD out of the market*.


That sounds reasonable, yeah. AMD prices should be $30-40 cheaper for directly competing cards.

Wouldn't AMD being out of the market be bad for Nvidia?


----------



## magnek

Quote:


> Originally Posted by *zealord*
> 
> Nah, I mean the German source heise.de (they are very reliable; they almost never post rumours). They got a sheet while attending CeBIT where the price for the flagship card, at that time still called the 390X, was listed at $700+.
> 
> They were the very first to say anything about the price of Fiji, and back in March they already said that the flagship was very expensive and that there was a huge gap to the next non-Fiji card. Now we get that exact same ratio with prices (the 390X will be ~$400 and we all expect the Fury to be quite expensive).
> 
> For clarification, because my English can't quite put it together well:
> 
> Heise.de said in March:
> 
> 390X: $700+
> 390: $700
> 380X: $400
> 
> and so on.
> 
> Now we have, based on rumors and some confirmation:
> 
> Fury X: price unknown, but expected to be expensive
> Fury: price unknown
> 390X: $400
> 
> and so on.
> 
> I know that this is still a rumor and nothing is confirmed, but I actually believe they were right about it, at least at that time. Maybe they have changed it now, because the 980 Ti released at $649.


Considering the 390X is no longer the GPU with HBM, I think it's safe to say a lot has changed, and those prices should all be disregarded.


----------



## Ganf

Quote:


> Originally Posted by *zealord*
> 
> That sounds reasonable yeah. AMD prices should be 30-40$ cheaper for direct equivalent competing cards.
> 
> Wouldn't AMD be out of the market be bad for nvidia?


Not in the least. They've already got silly profits in the consumer GPU arena; no competition means they could run wild with it. You only get treated as a monopoly if you engage in anti-competitive practices, not if there's simply nobody left to compete with you.

And to be honest, they should be throwing curve balls like this. Nvidia has 75% of the PC GPU market, and that's great. AMD has the rest of the gaming world except for handhelds and mobile, which Nvidia got chased out of by ARM. They have the consoles, they're creeping Mantle into the consoles, and they have tight cooperation with Microsoft on DX12.

The hardware isn't worth jack without software support, and AMD is rebuilding their software support from the ground up right now. If AMD pulls it off and stays competitive until HBM2, 2016 is going to be even crazier than 2015 has been. "Mud-slinging" won't even describe it, as AMD goes after GameWorks and Nvidia tries to pretend that GameWorks isn't their ace in the hole to prevent AMD from playing a royal flush on driver support from the API up, no matter what platform the game originates from.


----------



## magnek

Eh I'd say public perception is nVidia's ace in the hole. They could comfortably coast on that for the next 5 years at least. Whatever AMD does it will unfortunately always be "not good enough", whether deserved or not.


----------



## zealord

Quote:


> Originally Posted by *Ganf*
> 
> Not in the least. They've already got silly profits in the consumer GPU arena, no competition means they could run wild with it. You only become a monopoly if you show non-competitive practices, not if there's nobody to compete with you.
> 
> And to be honest, they should be throwing curve balls like this. Nvidia has 75% of the PC GPU market and that's great. AMD has the rest of the gaming world except for handhelds and mobile, which Nvidia got chased out of by ARM. They have the consoles, they're creeping Mantle into the consoles, and they have tight cooperation with Microsoft on DX12.
> 
> The hardware isn't jack without software support, AMD is rebuilding their software support from the ground up right now. If AMD pulls it off and stays competitive until HBM2 2016 is going to be even crazier than 2015 has been. Mud-slinging won't even describe it as AMD goes after Gameworks and Nvidia tries to pretend that Gameworks isn't their ace in the hole to prevent AMD from playing a royal flush on driver support from the API up, no matter what platform the game originates from.


I don't know why, but your post made me hyped for 2016. Sounds like a movie description.


----------



## Ganf

Quote:


> Originally Posted by *magnek*
> 
> Eh I'd say public perception is nVidia's ace in the hole. They could comfortably coast on that for the next 5 years at least. Whatever AMD does it will unfortunately always be "not good enough", whether deserved or not.


And after coasting along for 5 years they'll have dug the same hole that AMD has spent the last few years trying to crawl out of. I'm pretty sure they'd rather not be in that position.


----------



## magnek

By "coast" I meant do what they normally do without going the extra mile or doing anything out of the ordinary.


----------



## lucasj1974

Quote:


> Originally Posted by *zealord*
> 
> What do you think would be an appropriate price for the Fury X 4GB HBM water-cooled card if it performs within ±5% of the Titan X?


If they can swing it, I think $549 or $599 at most, but even then Nvidia can lower the price of the 980 Ti. Should be very interesting.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *svenge*
> 
> /r/hardware shows PcPer, Linus, and Guru3D not getting review units yet while AMD just held a 2-day confab with their viral marketing crew. And those are just the sites that have spoken about the situation so far.


TBH if I were in charge of AMD I'd never give PCPer another review product ever again...


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *magnek*
> 
> Yes and so everybody took that $849 price as gospel. Well the same source also said 980 Ti would arrive after summer and was dead wrong about that. Just goes to show everything can (and does) change.
> 
> Oh and remember the Titan X @ $1349 rumors, and how when it was released at $999 people thought it was "a good deal"? Could very well be the same kind of thing going on here.


I think they were likely shooting for the $850 price point with Fury originally which is why Nvidia swooped in with a $650 980Ti. Most reports had the 980Ti coming out at the end of summer but I guess they got wind of Fury and its proximity to Titan X and decided to take the initiative with an early release (before Alatar gets his panties in a wad, I admit this is pure conjecture on my part). Now with 980Ti on the market I don't see how AMD can get away with charging any more than $750 for Fury X. I'd love to see it more aggressively aimed at the 980Ti price point but with the expensive HBM and water cooler (with an AP-29) I expect the price to be $50-$100 more than the reference 980Ti. Of course this will be OK for AMD if they manage to beat Titan X outright in performance...


----------



## lucasj1974

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> TBH if I were in charge of AMD I'd never give PCPer another review product ever again...


Well, I don't think they would have any trouble buying one of their own... I know you probably think they are hard on AMD, but I think their reviews are pretty much spot on. I'm not going to argue the details BTW.


----------



## Majin SSJ Eric

Certainly they can buy one off the shelf for review. I just wouldn't ever do anything to help PCPer considering they are about as anti-AMD as any review site I have ever seen. Of course people disagree with my opinion but it is just that, my opinion and I have seen enough blatantly negative AMD reviews from them (while lauding pretty much anything NVidia ever does) to convince me that they are in no way impartial or fair...


----------



## magnek

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think they were likely shooting for the $850 price point with Fury originally which is why Nvidia swooped in with a $650 980Ti. Most reports had the 980Ti coming out at the end of summer but I guess they got wind of Fury and its proximity to Titan X and decided to take the initiative with an early release (before Alatar gets his panties in a wad, I admit this is pure conjecture on my part). Now with 980Ti on the market I don't see how AMD can get away with charging any more than $750 for Fury X. I'd love to see it more aggressively aimed at the 980Ti price point but with the expensive HBM and water cooler (with an AP-29) I expect the price to be $50-$100 more than the reference 980Ti. Of course this will be OK for AMD if they manage to beat Titan X outright in performance...


No, your thought process is logical, although I'd just like to correct myself and say that the $849 figure actually originated from Fudzilla, not SweClockers, so now I have doubts whether that price was even a serious contender to begin with.

I personally predict the following:

Fiji XT AIO: $699 (980 Ti hybrid is $749)
Fiji XT air-cooled: $599
Fiji Pro: $499 (would compete very well with 980)

Lisa Su said there is no HBM shortage, plus we don't really know if they are THAT expensive. I mean AMD did co-develop HBM with Hynix so they could get them at discounted prices. But yeah I know I'm being very hopeful with the prices, and most will probably think I'm dreaming. But except for Fiji Pro which might be "too low", I think these prices are realistic enough to compete, yet also high enough that AMD would still turn a profit. We'll see how wrong I am in a few days.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *magnek*
> 
> No, your thought process is logical, although I'd just like to correct myself and say that the $849 figure actually originated from Fudzilla, not SweClockers, so now I have doubts whether that price was even a serious contender to begin with.
> 
> I personally predict the following:
> 
> Fiji XT AIO: $699 (980 Ti hybrid is $749)
> Fiji XT air-cooled: $599
> Fiji Pro: $499 (would compete very well with 980)
> 
> Lisa Su said there is no HBM shortage, plus we don't really know if they are THAT expensive. I mean AMD did co-develop HBM with Hynix so they could get them at discounted prices. But yeah I know I'm being very hopeful with the prices, and most will probably think I'm dreaming. But except for Fiji Pro which might be "too low", I think these prices are realistic enough to compete, yet also high enough that AMD would still turn a profit. We'll see how wrong I am in a few days.


If Fury released with those prices and at least matched the 980Ti I think that would be a huge win for AMD. Much more so if it actually matched Titan X at those prices. Not too much longer of a wait!


----------



## SpeedyVT

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> If Fury released with those prices and at least matched the 980Ti I think that would be a huge win for AMD. Much more so if it actually matched Titan X at those prices. Not too much longer of a wait!


This article is flamebait. Between HBM and DX12 coming around the corner, I highly doubt the Fury card is weaker than the 980 Ti. AMD's awful driver overhead is responsible for all of their major performance issues. They could easily market it more or less; it really comes down to fanboys, not ingenuity.







NVidia has a huge fanboy population.


----------



## Noufel

Quote:


> Originally Posted by *SpeedyVT*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Majin SSJ Eric*
> 
> If Fury released with those prices and at least matched the 980Ti I think that would be a huge win for AMD. Much more so if it actually matched Titan X at those prices. Not too much longer of a wait!
> 
> 
> 
> This article is flamebait. Between HBM and DX12 coming around the corner, I highly doubt the Fury card is weaker than the 980 Ti. AMD's awful driver overhead is responsible for all of their major performance issues. They could easily market it more or less; it really comes down to fanboys, not ingenuity.
> 
> 
> 
> 
> 
> 
> 
> NVidia has a huge fanboy population.

It's called relativity, my good man: Nvidia has 75% of the market share, so the fanboy community follows. If it were AMD who had those shares, you would be saying "AMD has a huge fanboy population". But you are still right, Nvidia fanboys are many, especially here.


----------



## SpeedyVT

Quote:


> Originally Posted by *Noufel*
> 
> It's called relativity my good man, nvidia has 75% of market shares so the fanboys community if it was AMD who had those shares you would be saying " AMD has a huge fanboy population " but you are still right nvidia fanboys are many especialy here


I've got no problem with Nvidia; I'm just not cool with them locking stuff down as far as source code is concerned. However, it's good to keep the record straight: all benchmarks are invalid until AMD says so.


----------



## DarkIdeals

Welp, the red and black-white knights are out in force today it seems....























"Meanwhile, in AM-or-Dor.....Lisa Su-rons all seing eye in secret crafts an ultimate, inneficient, cache sharing processor of evil....yes....one 32nm ancient architecture to rule them all!! Using an army of 5.5Ghz (6ghz turbo!) FX 9999 569w TDP 32 modul....er..core...CPUs to sweep down upon the kingdoms of Mentel and the Elven fortress of Nvideandell." -_____- smh


----------



## harney

Quote:


> Originally Posted by *magnek*
> 
> Yes and so everybody took that $849 price as gospel. Well the same source also said 980 Ti would arrive after summer and was dead wrong about that. Just goes to show everything can (and does) change.
> 
> Oh and remember the Titan X @ $1349 rumors, and how when it was released at $999 people thought it was "a good deal"? Could very well be the same kind of thing going on here.


Yep, it's a simple business tactic: get the hype out about a high price ($1349), then once people have gotten used to that, sell lower and people think they have a deal, but it was going to be $999 all along...

I think they have done the same with the Ti against the TX, with the TX being the golden carrot in terms of price; then people see the Ti offering similar performance at a lower cost than the TX and again think they have a great deal. Personally I still think these are outrageous prices for these cards; in the end this is just one component of many, no matter how they try to sell it to you. It's not so bad for you guys in the US, but here in price-gouging UK land it's crazy; I am now seeing reference 980 Tis selling for over £649 ($1000)...

I am hoping this Fury comes in and kicks up some dust to rearrange Nvidia's prices. We'll see.


----------



## rt123

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> TBH if I were in charge of AMD I'd never give PCPer another review product ever again...


I would add Tom's Hardware to that list.


----------



## ToTheSun!

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> TBH if I were in charge of AMD I'd never give PCPer another review product ever again...


The first review i've ever watched from PCPer was of a monitor wherein they said something like "This monitor is good because it looks good and we like it."

I don't even know or care if they're Nvidia shills. I don't take them seriously for that kind of subjective crap.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *magnek*
> 
> These are all great questions, so now my question is is AMD really that dumb and conceited to think they could pull something like this off without severe backlash, or is this one big misunderstanding that's been twisted and blown out of proportion?


Magic 8-ball says ... yes.


----------



## Casey Ryback

Quote:


> Originally Posted by *The Stilt*
> 
> 4GB of VRAM and heads will roll at AMD once again.
> Using HBM when limited to just 4GB is a huge mistake especially when it brings no additional performance. It will certainly reduce the power consumption and make the PCB design more simple and cheaper to produce but still.
> 
> They should have stayed with 512-bit GDDR5 interface.


Hasn't the latest compression techniques in the hardware made high memory interfaces redundant?

ie Nvidia 9 series?


----------



## epic1337

Quote:


> Originally Posted by *Casey Ryback*
> 
> Hasn't the latest compression techniques in the hardware made high memory interfaces redundant?
> 
> ie Nvidia 9 series?


Exactly, all the more reason HBM isn't a mandatory choice right now; Nvidia didn't even bat an eyelid when AMD went with it.


----------



## Ganf

Quote:


> Originally Posted by *Casey Ryback*
> 
> Hasn't the latest compression techniques in the hardware made high memory interfaces redundant?
> 
> ie Nvidia 9 series?


Not for VR. HBM and/or dual GPUs are what AMD says is required for sub-10ms response times, which are crucial for eliminating motion sickness.

We'll see though, could just be marketing.


----------



## ToTheSun!

Quote:


> Originally Posted by *epic1337*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Casey Ryback*
> 
> Hasn't the latest compression techniques in the hardware made high memory interfaces redundant?
> 
> ie Nvidia 9 series?
> 
> 
> 
> exactly, all the more reason HBM isn't a mandatory choice right now, Nvidia didn't even bat an eyelid when AMD went with it.
Click to expand...

Data compression has absolutely nothing to do with the improvements HBM brings.

And Nvidia couldn't do much about it because AMD had a deal with Hynix for 2015 exclusivity, AFAIK.


----------



## Casey Ryback

Quote:


> Originally Posted by *ToTheSun!*
> 
> Data compression has absolutely nothing to do with the improvements HBM brings.


Exactly. I was talking about the way the data is transferred in smaller compressed packages, so in theory a 384/512-bit memory bus could be reduced to 256-bit, etc.

If we had talked two years ago about cards handling high resolutions with a 256-bit memory bus, people would've said it wasn't possible.

Compression has changed this. The GTX 960 uses a 128-bit memory interface and performs admirably.
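As a rough illustration of that point: effective bandwidth scales with both bus width and the fraction of traffic that compression removes. The compression savings below is an assumed figure for illustration, not a vendor spec.

```python
# Raw bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def raw_bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

# A 128-bit bus at 7 Gbps (GTX 960 class):
narrow = raw_bandwidth_gbs(128, 7.0)   # 112 GB/s raw
# A 256-bit bus at 7 Gbps (previous-generation class):
wide = raw_bandwidth_gbs(256, 7.0)     # 224 GB/s raw

# If delta color compression saves, say, ~28% of traffic on average
# (assumed figure), the narrow bus behaves like a wider uncompressed one:
effective = narrow / (1 - 0.28)        # ~155.6 GB/s effective
print(narrow, wide, round(effective, 1))
```

Under that assumed savings, a 128-bit bus moves data like a roughly 160-bit uncompressed one, which is how a card with a narrow interface can stay competitive.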


----------



## epic1337

Quote:


> Originally Posted by *ToTheSun!*
> 
> Data compression has absolutely nothing to do with the improvements HBM brings.
> 
> And Nvidia couldn't do much about it because AMD had a deal with Hynix for 2015 exclusivity, AFAIK.


i think you missed my point?

Data compression made the bandwidth constraints of GDDR5 less of an issue; HBM's primary purpose is to lift that issue entirely.
With HBM's current capacity restriction, and a price per GB higher than GDDR5's, how would it be superior to a wider-bus GDDR5?


----------



## Ganf

Quote:


> Originally Posted by *epic1337*
> 
> i think you missed my point?
> 
> data compression made bandwidth constrictions of GDDR5 less of an issue, HBM's primary purpose was to lift the issue entirely.
> with HBM's current restriction to capacity, and price per GB is higher than GDDR5, how would it be superior to a wider buswidth GDDR5?


Compression adds latency. We didn't see a bump in latency with Maxwell because it was compensated for elsewhere, but I honestly don't see compression working well with all of the API tricks that've been developed for making VR work. Most of it involves having data available to process instantly.

Edit: But who knows, Nvidia's higher clock speeds may make that irrelevant, I'm certainly no expert.


----------



## epic1337

Quote:


> Originally Posted by *Ganf*
> 
> Compression adds latency. We didn't see a bump in latency with Maxwell because it was compensated for elsewhere, but I honestly don't see compression working well with all of the API tricks that've been developed for making VR work. Most of it involves having data available to process instantly.


Well yes, but it doesn't justify using a crippled solution. 4GB might not be too small in general, but we're talking about top-end chips limited to only 4GB of VRAM.

On that note, these cards with their 4GB limit would be restricted to low resolutions, or to high resolutions with lower texture and AA settings.


----------



## Ganf

Quote:


> Originally Posted by *epic1337*
> 
> well yes, but it doesn't justify using a crippled solution, 4GB might not be too small, but we're talking about top-end chips limited to only 4GB VRAM.
> 
> on that note, these cards with their 4GB limit would've been restricted to low resolutions, or high resolutions with lower texture and AA settings.


How many times does it have to be shown that 4GB isn't a restriction except in very rare circumstances?

3.5GB was only a restriction for the 970 because the other 0.5GB cripples the card's performance and should never have been left enabled in the first place. The only games that don't run well on 4GB at 4K are AC: Unity and Watch_Dogs, and maybe I'm forgetting one, so you're talking about alienating the 1% of people playing those games (which have no replayability) who're buying cards in the top 1% of the GPU market.

Doesn't sound like a huge demographic to turn away to me. Devs of new games seem to be staying close to the 3GB mark because of the popularity of the 780 Ti coupled with the 970, both of which will probably be out in the wild for a long time to come. 4GB is a footnote, not a keystone, in the conversation about what Fiji is capable of.


----------



## 47 Knucklehead

I love how all the talk is suddenly that 4GB isn't a problem, now that nVidia has 6GB or 12GB on their upper cards and AMD only has 4GB on their Fury.

It's a total LIE that AMD is trying to push. I mean, come on, if 4GB was TRULY enough, then why on earth have they been putting 8GB on their upper-end cards BEFORE hitting the 4GB limit that HBM gen 1 has?

Anyone care to bet that once AMD starts putting HBM gen 2 cards out, "suddenly" 4GB will ONCE AGAIN no longer be enough?


----------



## michaelius

Quote:


> Originally Posted by *Ganf*
> 
> Not for VR, HBM and/or dual GPUs are what AMD is saying is required for sub-10ms response times, which are cruicial for eliminating motion sickness.
> 
> We'll see though, could just be marketing.


Sounds like pure PR speak about magic memory. Last time I checked, VR was in the 1080p or 1440p area pixel-wise, which means you need GPU power first, and since you need 100+ fps all the time, we won't be pushing Crysis 4 levels of graphical detail there.

Then there's the question of multi-GPU VR: both companies plan to use one GPU per eye, which means each of them will be rendering half the pixels, say 1280x1440. Do we seriously expect to need 500+ GB/s, with compression adding another 30%, to push a sub-FHD resolution per GPU?

IMHO there's only one reason AMD went with HBM1: it let them put another 90 watts' worth of transistors into the GPU without having to invest in the efficiency of GCN, and everything else is AMD marketing running with a new buzzword in the hope that people choose them to be "future proof" for those incoming VR headsets.

Same thing they did with HSA to show 12 "cores" in an APU, or the "8 cores is the future" slides with Bulldozer.
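The per-eye arithmetic above can be sanity-checked with a quick pixel-throughput estimate. The 90 Hz refresh rate is an assumption for illustration; the headset resolution is the half-panel figure mentioned in the post.

```python
# Pixel throughput = width * height * refresh rate, in pixels per second.
def pixels_per_second(width, height, fps):
    return width * height * fps

# One eye at the half-panel resolution mentioned above, at an assumed 90 Hz:
per_eye = pixels_per_second(1280, 1440, 90)   # ~166 Mpix/s
# A 4K monitor at 60 Hz, for comparison:
uhd_60 = pixels_per_second(3840, 2160, 60)    # ~498 Mpix/s

print(per_eye, uhd_60, round(uhd_60 / per_eye, 1))
```

By raw pixel count, each eye is only about a third of a 4K/60 workload, which is the poster's point: the per-GPU resolution itself is modest, even if the latency requirements are not.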


----------



## cstkl1

HBM would give a big boost in synthetic benchmarks.


----------



## Ha-Nocri

Still arguing about whether 4GB is enough or not?! AMD seems serious about their drivers, so we can only wait and see how (if at all) they'll manage Fury's VRAM.


----------



## Ganf

Quote:


> Originally Posted by *47 Knucklehead*
> 
> I love how all the talk about 4GB suddenly isn't a problem now that nVidia has 6GB or 12GB on their upper cards, and AMD only has 4GB on their Fury.
> 
> It's a total LIE that AMD is trying to push. I mean come on, if 4GB was TRULY enough, then why on earth have they been putting 8GB on their upper end cards BEFORE the 4GB limit that HBM GEN1 has?
> 
> Anyone care to bet that once AMD starts putting out HBM GEN2 cards out, that "suddenly" 4GB will ONCE AGAIN no longer be enough?


There was nothing sudden about it. You should know better than anyone. People citing that 4gb wasn't enough were using bad examples, and they weren't testing the performance of 4gb cards in the games they were referencing and showing that those cards had problems due to low memory.

When you DO play those games, and you DO push the memory beyond what would be >4GB on other cards, nothing happens. The game performs as expected, your VRAM is maxed out, but there is no detriment to gameplay or rendering performance beyond the usual loss of framerate from having your settings maxed out.

AMD is putting 8GB on the 390's because they have nothing else they can do to keep those cards relevant. It's completely redundant to use them as an example because they don't even have the power to run systems that would need 8GB. Rebranding them with 8GB is one of the dumbest things AMD has done, it's a card with no place in the market because anything it does other cards do better, without the extra and useless 4gb of GDDR5.


----------



## cstkl1

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Still about whether 4GB is enough or not?! AMD seems serious about their drivers, so we can only wait to see how they'll manage (if at all) Fury's VRAM


It also depends on the next game releases, since people will always dismiss previous games as "unoptimized" "console ports" etc. Most VRAM-intensive games are open world, and high VRAM use there is associated with pop-ins.

GTA V has pop-ins.


----------



## Ha-Nocri

Quote:


> Originally Posted by *cstkl1*
> 
> Also depends on the next game releases. Since ppl will always equate the previous games as " unoptimized" " console ports" etc. Most vram intensive games are open world and its high use is equated with popins.
> 
> Gta v has popins.


GTA V pop-ins are the same whether you have 12GB or 4GB; it depends on in-game settings. Set distance scaling and advanced distance scaling to max and you should see no difference.


----------



## harney

Quote:


> Originally Posted by *Ganf*
> 
> There was nothing sudden about it. You should know better than anyone. People citing that 4gb wasn't enough were using bad examples, and they weren't testing the performance of 4gb cards in the games they were referencing and showing that those cards had problems due to low memory.
> 
> When you DO play those games, and you DO push the memory beyond what would be >4GB on other cards, nothing happens. The game performs as expected, your VRAM is maxed out, but there is no detriment to gameplay or rendering performance beyond the usual loss of framerate from having your settings maxed out.
> 
> AMD is putting 8GB on the 390's because they have nothing else they can do to keep those cards relevant. It's completely redundant to use them as an example because they don't even have the power to run systems that would need 8GB. Rebranding them with 8GB is one of the dumbest things AMD has done, it's a card with no place in the market because anything it does other cards do better, without the extra and useless 4gb of GDDR5.


Yep, it's all about the consumer being led down the garden path, so to speak..... Many tests have been done comparing 2GB vs 4GB vs 8GB on the same card, and there is no real gain unless the GPU has the grunt to use it with AA etc., but most cards run out of steam first.

This one was an interesting test:

http://alienbabeltech.com/main/gtx-770-4gb-vs-2gb-tested/3/


----------



## cstkl1

Quote:


> Originally Posted by *Ganf*
> 
> There was nothing sudden about it. You should know better than anyone. People citing that 4gb wasn't enough were using bad examples, and they weren't testing the performance of 4gb cards in the games they were referencing and showing that those cards had problems due to low memory.
> 
> When you DO play those games, and you DO push the memory beyond what would be >4GB on other cards, nothing happens. The game performs as expected, your VRAM is maxed out, but there is no detriment to gameplay or rendering performance beyond the usual loss of framerate from having your settings maxed out.
> 
> AMD is putting 8GB on the 390's because they have nothing else they can do to keep those cards relevant. It's completely redundant to use them as an example because they don't even have the power to run systems that would need 8GB. Rebranding them with 8GB is one of the dumbest things AMD has done, it's a card with no place in the market because anything it does other cards do better, without the extra and useless 4gb of GDDR5.


Well, true. Nvidia did a whole array of GPUs in their lineup; AMD had two cards.

But the 7970 GHz was underestimated before launch and turned out to be a stellar card, and so did the 290X. So I think Fury should be good; the question is at what price point.
EVGA prices their hybrid cooler at USD 99. I am assuming the Fury X is USD 150 more than the 390X on tech and early-adopter cost of HBM, and with the cooler USD 250 more than the 390X. So USD 599? 619?


----------



## Woundingchaney

Quote:


> Originally Posted by *harney*
> 
> Yep its all about the consumer being led down the garden path so to speak .....many tests have been done regarding 2gb vs 4gb vs 8gb on the same card and there is no real gain unless there is the gpu grunt to do it with AA ect but most cards run out of steam ..
> 
> This one was an interesting test
> 
> http://alienbabeltech.com/main/gtx-770-4gb-vs-2gb-tested/3/


The issue of running out of VRAM is better detected by frame times than by frame rates.
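A minimal sketch of why frame times catch this while average FPS doesn't. The numbers are made up for illustration: a second of gameplay where a few frames spike while memory is swapped in.

```python
# Frame times in ms for one second of gameplay; a few frames spike
# (e.g. textures being shuffled over PCIe when VRAM overflows).
frame_times_ms = [16.7] * 55 + [60.0] * 5   # 55 smooth frames + 5 spikes

# Average FPS over the run:
avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
# Worst single frame and the 99th-percentile frame time:
worst_ms = max(frame_times_ms)
p99_ms = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]

# Average FPS still looks close to 50, but the 60 ms spikes
# (about 16 FPS for those frames) are what you actually feel as stutter.
print(round(avg_fps, 1), worst_ms, p99_ms)
```

An FPS counter averages the spikes away; a frame-time plot (or a 99th-percentile figure) makes them obvious.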


----------



## harney

Quote:


> Originally Posted by *Woundingchaney*
> 
> The issue of running out of vram is better detected by frame times over frame rates.


True, but surely it would still show up in frame rates if there were any real benefit in doubling the RAM... I am no expert on this, but I assume when they first make the card they know how much VRAM and bus width will suit the GPU, so when they release the double-the-RAM versions it's pointless and more of a selling trick... I suppose SLI systems would be different, as there is more grunt in the GPUs, but like I said I am no expert in these matters.

Also, I have been hearing people complain they're running out of VRAM and need more etc.... but surely this depends on how the game is coded; some games seem to fill the VRAM for no reason, while others seem to manage it a lot better.


----------



## ToTheSun!

Quote:


> Originally Posted by *epic1337*
> 
> i think you missed my point?
> 
> data compression made bandwidth constrictions of GDDR5 less of an issue, HBM's primary purpose was to lift the issue entirely.
> with HBM's current restriction to capacity, and price per GB is higher than GDDR5, how would it be superior to a wider buswidth GDDR5?


Data compression is a non-optimal, though elegant, solution to a problem that doesn't exist for HBM. Also, the advantages HBM brings are many; bandwidth is hardly the fundamental one (latency, PCB size reduction, efficiency). The post you quoted mentioned VRAM usage, not bandwidth, and usage has nothing to do with HBM.
Quote:


> Originally Posted by *47 Knucklehead*
> 
> I love how all the talk about 4GB suddenly isn't a problem now that nVidia has 6GB or 12GB on their upper cards, and AMD only has 4GB on their Fury.


We understand that you find it funny that shills are shills. We heard you the first time. The last 9001 posts were unnecessary.


----------



## DividebyZERO

Quote:


> Originally Posted by *ToTheSun!*
> 
> Data compression is a non-optimal, though elegant, solution to a problem that doesn't exist for HBM. Also, the advantages it brings are many. Bandwidth is, hardly, the fundamental one (latency, pcb size reduction, efficiency). The post you quoted mentioned VRAM usage, not bandwidth, and usage has nothing to do with HBM.
> We understand that you find funny that shills are shills. We heard you the first time. The last 9001 posts were unnecessary.


I guess he forgot about the GTX 970 scenario before the Titan X or 980 Ti showed up. I guess those people, along with GTX 980 owners, are also just horrible because of their lack of VRAM. He conveniently ignores the fact that even though the 980 Ti and Titan X have 6GB and 12GB, they are in fact only faster because they are newer, stronger Maxwell. His elitist fanboyism is basically downing everyone with less than 6GB of VRAM, even though he intends only to insult AMD, per his usual tactics.


----------



## michaelius

Quote:


> Originally Posted by *harney*
> 
> Yep its all about the consumer being led down the garden path so to speak .....many tests have been done regarding 2gb vs 4gb vs 8gb on the same card and there is no real gain unless there is the gpu grunt to do it with AA ect but most cards run out of steam ..
> 
> This one was an interesting test
> 
> http://alienbabeltech.com/main/gtx-770-4gb-vs-2gb-tested/3/


Old games made in the era of 512 MB consoles don't show this.

If a game runs out of VRAM, you will see the impact in minimum frame rates, as in the test above.


----------



## tsm106

Quote:


> Originally Posted by *epic1337*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Casey Ryback*
> 
> Hasn't the latest compression techniques in the hardware made high memory interfaces redundant?
> 
> ie Nvidia 9 series?
> 
> 
> 
> exactly, all the more reason HBM isn't a mandatory choice right now, Nvidia didn't even bat an eyelid when AMD went with it.
Click to expand...

Nvidia had to sit on the sidelines because their HMC proposal lost out to HBM, and thus they cannot jump in until HBM2. That totally sounds like a choice.


----------



## gonfreecs

Quote:


> Originally Posted by *Ganf*
> 
> they don't even have the power to run systems that would need 8GB.


I've never understood why this gets repeated. High res textures have almost no effect on performance as long as you have enough vram.
Doesn't matter whether you are using a Titan X or a R9 270X, if you have the vram you will be able to enjoy those high res textures.


----------



## epic1337

Quote:


> Originally Posted by *DividebyZERO*
> 
> I guess he forgot about the 970gtx scenario before TX or 980TI showed up. I guess those people along with the 980gtx are also just horrible because of the lack of VRAM. He conveniently ignores the fact even though the 980ti and TX are 6gb and 12gb, that they are in fact only faster because they are newer stronger Maxwell. His elitist faboyism is basically downing everyone with less than 6gb Vram, even though he intends to only to insult AMD per his usual tactics.


Oh? And what does my HD 7950 say about that?

To point out: cards on par with the Titan X and 980 Ti can sustain 4K by themselves as single GPUs, which means you can ramp the settings up higher, which leads to higher VRAM usage.
Neither the 970 nor the 980 is capable of this feat, and while exceeding VRAM capacity doesn't have major implications for performance, it still has a bottlenecking effect.

So compare the VRAM bottleneck from two aspects: which is worse, insufficient VRAM or slightly higher latency? Assume both cards perform the same and are priced the same.


----------



## Dart06

Quote:


> Originally Posted by *Menta*
> 
> what is AMD doing might as well hand over the market on a silver platter


You have to already have the market to be able to hand it over.


----------



## Woundingchaney

Quote:


> Originally Posted by *harney*
> 
> True but surly it would still show up in frame rates if there was any real benefit in doubling the ram ...i am no expert on this but i assume when they 1st make the card they know how much vram bus width the gpu will suit then when they release the double the ram versions its pointless but more of a selling trick ...i suppose sli systems would be different as there is more grunt in the gpu but like i said i am no expert in there matters.
> 
> All so i have been hearing peeps complain there running out of vram and need more ect ...but surly this depends on how the game is coded some games seem to fill it the vram for no reason. while others seem to manage the vram a lot better


How a game is coded of course comes into play, and the settings a user runs also come into play. Those running 4K resolutions will usually tell you that currently 4GB of VRAM is the absolute minimum they would even consider. Right now there are a handful of games that do benefit from more than 4GB of VRAM, but these are also AAA titles.

I was running 4K with SLI 970s, and when the slower portion of memory was being used there were obvious frame-time issues and the lower end of my fps dropped.

Now I'm not sure how this will or will not hamper the Fiji cards. For all I know, AMD may not be hampered in coming titles. I'm very interested in seeing how the extra bandwidth impacts higher resolutions. Though to say that capacity isn't going to be an issue for a certain portion of consumers looking at this card probably isn't realistic.
Quote:


> to point out, cards on par with Titan X and 980Ti can sustain 4K by itself as a single GPU, which means you can ramp up the settings higher, which leads to a higher VRAM usage.
> neither 970 nor 980 is capable of this feat, while exceeding VRAM usage doesn't have much implications to performance, it still has a bottlenecking effect.


Don't forget about users running Xfire and SLI configurations.


----------



## Casey Ryback

Quote:


> Originally Posted by *47 Knucklehead*
> 
> It's a total LIE that AMD is trying to push. I mean come on, if 4GB was TRULY enough, then why on earth have they been putting 8GB on their upper end cards BEFORE the 4GB limit that HBM GEN1 has?


Because people buy 3 x 1440p and cry MOAR VRAM!

99% of users won't need more than 4GB for probably a few years.


----------



## DividebyZERO

Quote:


> Originally Posted by *epic1337*
> 
> oh? and what does my HD7950 says on that?
> 
> to point out, cards on par with Titan X and 980Ti can sustain 4K by itself as a single GPU, which means you can ramp up the settings higher, which leads to a higher VRAM usage.
> neither 970 nor 980 is capable of this feat, while exceeding VRAM usage doesn't have much implications to performance, it still has a bottlenecking effect.
> 
> so compare VRAM bottleneck from two aspects, which is worse, insufficient VRAM or slightly higher latency? both cards are assumed to perform the same and priced the same.


We still don't know officially what Fiji's VRAM size is.

Whether the Titan X and 980 Ti pushing 4K at higher settings counts as acceptable FPS on a single GPU is very subjective. I am guessing people who buy $650-$1000+ GPUs are probably more likely to use more than one for 4K to get acceptable FPS. The 7950 wasn't a top-end card even when it was released; the 7970 was.

The VRAM arguments about Fiji are not really worth having yet, simply because we don't know officially what it is.


----------



## tsm106

Quote:


> Originally Posted by *DividebyZERO*
> 
> The VRAM arguments for Fiji are not really worth it yet, simply because we don't know officially what it is.


We kind of do know what they said though. It remains to be seen whether it's true or not. And what did they say? It won't hurt the card because...

From TR
Quote:


> This first-gen HBM stack will impose at least one limitation of note: its total capacity will only be 4GB. At first blush, that sounds like a limited capacity for a high-end video card. After all, the Titan X packs a ridiculous 12GB, and the prior-gen R9 290X has the same 4GB amount. Now that GPU makers are selling high-end cards on the strength of their performance at 4K resolutions, one might expect more capacity from a brand-new flagship graphics card.
> 
> When I asked Macri about this issue, he expressed confidence in AMD's ability to work around this capacity constraint. In fact, he said that current GPUs aren't terribly efficient with their memory capacity simply because GDDR5's architecture required ever-larger memory capacities in order to extract more bandwidth. As a result, AMD "never bothered to put a single engineer on using frame buffer memory better," because memory capacities kept growing. Essentially, that capacity was free, while engineers were not. Macri classified the utilization of memory capacity in current Radeon operation as "exceedingly poor" and said the "amount of data that gets touched sitting in there is embarrassing."
> 
> Strong words, indeed.
> 
> With HBM, he said, "we threw a couple of engineers at that problem," which will be addressed solely via the operating system and Radeon driver software. "We're not asking anybody to change their games."
> 
> The conversation around this issue should be interesting to watch. Much of what Macri said about poor use of the data in GPU memory echoes what Nvidia said in the wake of the revelations about the GeForce GTX 970's funky 3.5GB/0.5GB memory split. If Nvidia makes an issue of memory capacity at the time of the new Radeons' launch, it will be treading into dangerous waters. Of course, the final evaluation will be up to reviewers and end-users. We'll surely push these cards to see where they start to struggle.


----------



## Woundingchaney

Quote:


> Originally Posted by *Casey Ryback*
> 
> Because people buy 3 x 1440p and cry MOAR VRAM!
> 
> 99% of user won't need more than 4GB for probably a few years.


That is very true, but these same users would be the logical consumer base for their new high-end SKUs.


----------



## Ganf

Quote:


> Originally Posted by *gonfreecs*
> 
> I've never understood why this gets repeated. High res textures have almost no effect on performance as long as you have enough vram.
> Doesn't matter whether you are using a Titan X or a R9 270X, if you have the vram you will be able to enjoy those high res textures.


I don't get why people think textures are the only thing that require VRAM.

Running multi-monitor setups? More VRAM.

Mismatched monitors? More VRAM

Downscaling a game? Need more VRAM.

If textures are your only concern because you're only playing modded Skyrim, sure. 8GB 390x is your jam, get on it. Every other situation which requires more than 4GB? You're going to need crossfire, and by the time you buy two 390x's you could've had a 980ti for less or thrown in a couple hundred more for a Titan X, and avoided all possible problems with a multi-GPU setup.

The card has no place.


----------



## rizla1

Why does nobody think AMD will refresh Fiji later this year with 8GB of HBM? For the under-4K club, Fiji or Nvidia's offerings will do just fine.

When I get a 4K monitor later this year I am getting a second R9 290. I know, crazy, right? Trying to run 4K on ONLY 4GB of RAM?

So far it looks like 290X 4GB CrossFire holds up very well at 4K..... am I missing something?


----------



## epic1337

Quote:


> Originally Posted by *rizla1*
> 
> Why does nobody think that AMD will decide against refreshing Fuji later this year with 8GB HBM? For the Under 4K Club Fuji or Nvidia offerings will do just fine.


A Fiji refresh with 8GB of HBM will undoubtedly be more expensive, unless they suddenly decide to cannibalize their entire lineup.

Fiji 4GB has a certain spot at 2560x1440 120Hz; that's the current niche popping up around FreeSync and such.
But neither the 980 Ti nor the Titan X can sustain 2560x1440 at 120Hz, so presumably neither will Fiji, which comes a bit short of the "ideal single GPU solution".


----------



## Ganf

Quote:


> Originally Posted by *rizla1*
> 
> Why does nobody think that AMD will decide against refreshing Fuji later this year with 8GB HBM? For the Under 4K Club Fuji or Nvidia offerings will do just fine.
> 
> When I get a 4k monitor later this year I am getting a 2nd R9-290 , I know crazy rite I am trying to run 4k on ONLY 4GB of ram ?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So far it looks like 290X 4GB XFIRE holds up very well on 4k..... am I missing something?


Nope. There is supposedly an 8GB Fiji coming in August; people conveniently forget that when the VRAM subject comes up. And 4GB won't be outdated until we hit 14nm next year, which comes with HBM2 and enough of a performance increase that everyone will be looking to upgrade, so future-proofing isn't an excuse either.

"4GB isn't enough" is just the justification Titan X users ran with when that card was released, and people are still carrying on about it.


----------



## Woundingchaney

Quote:


> Originally Posted by *Ganf*
> 
> Nope. There is supposedly an 8gb Fiji coming in August, people conveniently forget that when the VRAM subject comes up, and 4gb won't be outdated until we hit 14nm next year, which comes with HBM2, and enough of a performance increase that everyone will be looking to upgrade, so future proofing isn't an excuse either.
> 
> 4GB isn't enough is just the justification that Titan X users ran with when that was released, and people are still carrying on about it.


Do you spend much time gaming at 4k or higher resolutions?


----------



## mfknjadagr8

So I need to read up on HBM, but from what I'm seeing, the wider bus negates the need for high clock speeds on the memory? It would be interesting to see if it can be overclocked to, say, 1100.


----------



## umeng2002

Quote:


> Originally Posted by *mfknjadagr8*
> 
> So I need to read up on hbm but from what I'm seeing the higher bandwidth negates the need for high clock speeds on memory? Would be interesting if it can be overclocked to say 1100


Go wide or go fast, not both. HBM has a massively wide bus.
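The "wide vs fast" trade-off is just the usual peak-bandwidth arithmetic. The GDDR5 figures below assume a 384-bit bus at 7 Gbps per pin (980 Ti class); the HBM figures are first-gen HBM's 4096-bit bus at 1 Gbps per pin.

```python
# Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

# GDDR5, "fast and narrow": 384-bit bus at 7 Gbps per pin
gddr5 = bandwidth_gbs(384, 7.0)    # 336 GB/s
# First-gen HBM, "wide and slow": 4096-bit bus at only 1 Gbps per pin
hbm1 = bandwidth_gbs(4096, 1.0)    # 512 GB/s
print(gddr5, hbm1)
```

Despite running its pins seven times slower, the HBM stack comes out well ahead on total bandwidth purely on bus width, which is also why it can run at lower voltage and power.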


----------



## Ganf

Quote:


> Originally Posted by *Woundingchaney*
> 
> Do you spend much time gaming at 4k or higher resolutions?


Yep, though just via downscaling so far; the 4K monitor I want hasn't been released yet.

I was doing it a fair bit on one of the titles that supposedly proves 4GB isn't enough, too, until I got tired of the swings in framerate. Shadow of Mordor runs just fine on 4GB.


----------



## tsm106

Quote:


> Originally Posted by *mfknjadagr8*
> 
> So I need to read up on hbm but from what I'm seeing the higher bandwidth negates the need for high clock speeds on memory? Would be interesting if it can be overclocked to say 1100


Because of the bus width and because of its stacking, the memory itself doesn't need to run at high speeds (like GDDR5) thus lowering consumption whilst improving latency by 50% from what I've read. You don't need to oc this from what they are saying. It's already going to be stupid fast, thus we can concentrate on other things like the core and cooling.


----------



## epic1337

Quote:


> Originally Posted by *umeng2002*
> 
> Go wide or go fast, not both. HBM has a massively wide bus.


It's possible to do both, though not currently with HBM.
With GDDR5 it's possible to reach 768-bit (2 x 384-bit) without issues; 1024-bit should also be possible.


----------



## Blameless

Quote:


> Originally Posted by *Casey Ryback*
> 
> Hasn't the latest compression techniques in the hardware made high memory interfaces redundant?
> 
> ie Nvidia 9 series?


Memory compression to improve bandwidth utilization is not new and AMD and NVIDIA should be at rough parity in these features with GCN 1.2 and Maxwell 2.

Still, GPUs keep getting more powerful and GDDR5 is nearing its limit. A new memory standard, even if not absolutely necessary this generation, would still be needed sooner rather than later.
Quote:


> Originally Posted by *ToTheSun!*
> 
> Data compression has absolutely nothing to do with the improvements HBM brings.


It does mean you need less bandwidth to accomplish the same tasks, and despite all the advantages of HBM, the key advantage for GPUs is the superior bandwidth it's capable of.
Quote:


> Originally Posted by *47 Knucklehead*
> 
> I love how all the talk about 4GB suddenly isn't a problem now that nVidia has 6GB or 12GB on their upper cards, and AMD only has 4GB on their Fury.
> 
> It's a total LIE that AMD is trying to push. I mean come on, if 4GB was TRULY enough, then why on earth have they been putting 8GB on their upper end cards BEFORE the 4GB limit that HBM GEN1 has?
> 
> Anyone care to bet that once AMD starts putting out HBM GEN2 cards out, that "suddenly" 4GB will ONCE AGAIN no longer be enough?


Standard marketing spin that both sides habitually engage in.

Whatever advantage either has over the other becomes a big deal, while their respective disadvantages are downplayed.
Quote:


> Originally Posted by *gonfreecs*
> 
> I've never understood why this gets repeated. High res textures have almost no effect on performance as long as you have enough vram.


Largely true.
Quote:


> Originally Posted by *Woundingchaney*
> 
> Do you spend much time gaming at 4k or higher resolutions?


Had I a faster GPU, I'd be using 4k internal resolution via super-sampling on many games.
Quote:


> Originally Posted by *mfknjadagr8*
> 
> So I need to read up on hbm but from what I'm seeing the higher bandwidth negates the need for high clock speeds on memory? Would be interesting if it can be overclocked to say 1100


As stated, HBM is very wide so doesn't need to be fast to deliver huge bandwidth.

That said, Fiji will probably have a large surplus of memory bandwidth, so even if it does have good OCing headroom, it probably won't help performance much and you'd likely be better off using that power and heat budget to get more out of the core.


----------



## tsm106

Quote:


> Originally Posted by *epic1337*
> 
> Quote:
> 
> 
> 
> Originally Posted by *umeng2002*
> 
> Go wide or go fast, not both. HBM has a massively wide bus.
> 
> 
> 
> its possible to do both, though not currently with HBM, in GDDR5 its possible to reach 512bit without issues, 1024bit should also be possible.

The bus width is 1024bit times 4. That means each stack is 1024bit.
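Per stack, that breaks down roughly like this (a sketch assuming the rumored 500MHz DDR clock, i.e. 1 Gbps effective per pin; none of these are confirmed specs):

```python
# Each HBM1 stack exposes a 1024-bit interface; Fiji is rumored to carry four stacks.
BITS_PER_STACK = 1024
STACKS = 4
GBPS_PER_PIN = 1.0  # assumed effective data rate (500 MHz, double data rate)

per_stack_gbs = BITS_PER_STACK / 8 * GBPS_PER_PIN   # 128.0 GB/s per stack
total_gbs = per_stack_gbs * STACKS                  # 512.0 GB/s aggregate
total_bus_bits = BITS_PER_STACK * STACKS            # 4096-bit combined bus
```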


----------



## Ganf

Quote:


> Originally Posted by *tsm106*
> 
> Because of the bus width and because of its stacking, the memory itself doesn't need to run at high speeds (like GDDR5) thus lowering consumption whilst improving latency by 50% from what I've read. *You don't need to oc this from what they are saying.* It's already going to be stupid fast, thus we can concentrate on other things like the core and cooling.


First thing on the to-do list if I buy a Fury: Overclock the HBM...


----------



## Woundingchaney

Quote:


> Originally Posted by *Ganf*
> 
> Yep, just downscaling so far though. 4k monitor I want hasn't released yet.
> 
> Was doing it a fair bit on one of the titles that supposedly proves that 4GB wasn't enough too, until I got tired of the swings in framerate. Shadows of Mordor runs just fine on 4gb.


On a single 290X? I can't imagine that you are running any AAA game well rendering at 4k.


----------



## epic1337

Quote:


> Originally Posted by *tsm106*
> 
> The bus width is 1024bit times 4. That means each stack is 1024bit.


No, I meant it's possible to do "fast" with HBM, just not with current technology; HBM is currently clocked at 400MHz.

GDDR5, on the other hand, is innately fast (1500MHz+) and can scale to 1024-bit with current technology.
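The "fast vs wide" trade-off in numbers: GDDR5 transfers four bits per pin per command-clock cycle (quad data rate), HBM two (double data rate). A rough sketch using the clocks quoted in this thread (rumored, not confirmed):

```python
def effective_gbps(clock_mhz: float, transfers_per_cycle: int) -> float:
    """Per-pin data rate in Gbps derived from the memory command clock."""
    return clock_mhz * transfers_per_cycle / 1000

gddr5_pin = effective_gbps(1750, 4)  # 7.0 Gbps per pin (quad data rate)
hbm_pin = effective_gbps(500, 2)     # 1.0 Gbps per pin (double data rate)

# Width makes up the difference: bandwidth = bus bits / 8 * per-pin Gbps.
gddr5_bw = 512 / 8 * gddr5_pin   # 448.0 GB/s on a 512-bit bus
hbm_bw = 4096 / 8 * hbm_pin      # 512.0 GB/s on a 4096-bit bus
```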


----------



## Ganf

Quote:


> Originally Posted by *Woundingchaney*
> 
> On a single 290X? I can't imagine that you are running anything well rendering at 4k.


Only thing that runs well at 4k is Elite: Dangerous. Everything else is mostly just for the sake of determining what it takes for 4k and seeing how driver improvements are working in Windows 10. I've got a good clocking card, but it still doesn't cut it.

So maybe I should rephrase that. I mostly test in 4k for hours at a time, there are only a couple games I PLAY. And it's mostly just because seeing how much this Lightning can grunt and squeal under max load compared to your regular 290x is vaguely erotic....


----------



## tsm106

Quote:


> Originally Posted by *epic1337*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tsm106*
> 
> The bus width is 1024bit times 4. That means each stack is 1024bit.
> 
> 
> 
> no, i meant its possible to do "fast" in HBM but not with current fabrications, HBM is currently clocked 400Mhz.
> 
> while GDDR5 is innately fast (1500Mhz+) but can scale to 1024bit.

How can GDDR5 scale to 1024bit wide? Innately faster means what? Stop making stuff up lol.


----------



## Woundingchaney

Quote:


> Originally Posted by *Ganf*
> 
> Only thing that runs well at 4k is Elite: Dangerous. Everything else is mostly just for the sake of determining what it takes for 4k and seeing how driver improvements are working in Windows 10. I've got a good clocking card, but it still doesn't cut it.
> 
> So maybe I should rephrase that. I mostly test in 4k for hours at a time, there are only a couple games I PLAY. And it's mostly just because seeing how much this Lightning can grunt and squeal under max load compared to your regular 290x is vaguely erotic....


I should probably apologize for that statement sounding abrasive. I think a single card running games at approx. 35 fps would be considered running well at 4k. I didn't mean to suggest that every game has to run near-flawlessly for it to be a viable solution.

This is why I'm wanting to see what HBM brings to the table, because high bandwidth pays dividends at high resolutions.


----------



## sugarhell

Quote:


> Originally Posted by *epic1337*
> 
> no, i meant its possible to do "fast" in HBM but not with current technology, HBM is currently clocked 400Mhz.
> 
> while GDDR5 is innately fast (1500Mhz+) and can scale to 1024bit with current technology.


I would like to see 1024-bit PCB traces. Possible, but stupid


----------



## Ganf

Quote:


> Originally Posted by *Woundingchaney*
> 
> I should probably apologize for that statement sounding abrasive. I think a single card running games at approx. 35 fps would be considered running well at 4k. I didn't mean to suggest that every game has to run near-flawlessly for it to be a viable solution.
> 
> This is why I'm wanting to see what HBM brings to the table, because high bandwidth pays dividends at high resolutions.


No issue, I'm the cantankerous one. Bandwidth and DX12 should change the 4k scene from the ground up, but it's hard to see just how much yet. If I were certain about any of it, I'd have already made my decision on cards by now.


----------



## mfknjadagr8

Quote:


> Originally Posted by *tsm106*
> 
> Because of the bus width and because of its stacking, the memory itself doesn't need to run at high speeds (like GDDR5) thus lowering consumption whilst improving latency by 50% from what I've read. You don't need to oc this from what they are saying. It's already going to be stupid fast, thus we can concentrate on other things like the core and cooling.


Yeah, I did a little reading... looks like this really is some groundbreaking stuff... hopefully it's implemented properly... last thing we need is an awesome memory interface bottlenecked by the core, or cooling not being able to feed it... I'm actually wondering if we will see APUs or even a CPU taking this approach in the future...


----------



## tsm106

Quote:


> Originally Posted by *mfknjadagr8*
> 
> Quote:
> 
> 
> 
> Originally Posted by *tsm106*
> 
> Because of the bus width and because of its stacking, the memory itself doesn't need to run at high speeds (like GDDR5) thus lowering consumption whilst improving latency by 50% from what I've read. You don't need to oc this from what they are saying. It's already going to be stupid fast, thus we can concentrate on other things like the core and cooling.
> 
> 
> 
> yeah I did a little reading...looks like this really is some ground breaking stuff...hopefully it's implemented properly...last thing we need is an awesome memory interface and it to be bottlenecked by core or cooling not being able to feed it...I'm actually wondering if we will see the apus or even a cpu taking this approach in the future...

Well the core count on Fury is roughly 45% higher, so on paper it looks like they thought about balancing the core count vs the improved memory system.

Yeah, they are going to apply this to their APUs in the future. It's going to be some crazy future, with CPUs with HBM and/or APUs with HBM.


----------



## epic1337

Quote:


> Originally Posted by *tsm106*
> 
> How can GDDR5 scale to 1024bit wide? Innately faster means what? Stop making stuff up lol.


I don't know why you can't see the numbers already written there.
Bus width and clock speed are different things, just like "fast" and "wide" are different words.

Quote:


> Originally Posted by *sugarhell*
> 
> I would like to see 1024-bit PCB traces. Possible, but stupid


We already saw 2x384-bit done, so the PCB can handle that much. As for the traces going into the GPU itself, it could be done if there are enough PCB layers.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Woundingchaney*
> 
> That is very true, but these same users would be the logical consumer base for their new high end skus.


EXACTLY!

That is why 4GB makes sense on low to mid-range cards and 6, 8, or 12 GB makes more sense on the upper-end cards.

AMD has done exactly the opposite, which makes no sense (well, other than the technological limitation of HBM GEN1 with a standard interposer, which obviously means they haven't figured out a way yet to do a double stack). They have 4GB on their low end cards, 8GB on their middle to upper end card that is a 3 year old rebrand, and their "top of the line" is only 4GB.

People who are going to drop major bucks on 3 monitors aren't going to run them on 4GB rebrands, they want 6+ GB ultra high end cards.


----------



## iLeakStuff

If anyone wonders, AMD have just shipped a ton of Fiji XT (Fury X) to reviewers on June 8th.
Well they probably went to AMD first, but they are not keeping 24 cards.

They have also changed from Cooler Master to Asetek for the cards


----------



## Blameless

Quote:


> Originally Posted by *epic1337*
> 
> we already saw 2x384bit done, so the PCB can handle that much, as for the traces going into the GPU itself, it could be done if theres enough PCB layers.


There are 2x512-bit cards.

However, dual GPU doesn't really count as you don't need many more layers to route traces to separate GPUs, you just need a wider PCB.

Routing 1024 bits of GDDR5 traces to a single GPU package would mean a lot more complexity, much greater trace lengths, and many extra layers. High-end GPUs already have around a dozen PCB layers.

Could 1024-bit be done with GDDR5? Sure. However, its cost advantage over HBM would become negligible, achieving peak GDDR5 speeds would be very difficult, and it would also inflate GPU die sizes relative to HBM.


----------



## raghu78

Quote:


> Originally Posted by *iLeakStuff*
> 
> If anyone wonders, AMD have just shipped a ton of Fiji XT (Fury X) to reviewers on June 8th.
> Well they probably went to AMD first, but they are not keeping 24 cards.
> 
> They have also changed from Cooler Master to Asetek for the cards


It's well known since last year that Asetek will supply the cooler for AMD's next-gen flagship:


http://asetek.com/press-room/news/2014/asetek-announces-largest-ever-design-win.aspx

BTW, Asetek also supplied the R9 295X2 hybrid cooler:

http://www.techpowerup.com/199681/amd-selects-asetek-to-liquid-cool-the-worlds-fastest-graphics-card.html


----------



## ToTheSun!

Quote:


> Originally Posted by *Blameless*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ToTheSun!*
> 
> Data compression has absolutely nothing to do with the improvements HBM brings.
> 
> 
> 
> It does mean you need less bandwidth to accomplish the same tasks and despite all the advantages of HBM, the key advantage for GPU's is superior bandwidth it's capable of.

You missed the post he quoted, I'm assuming.


----------



## SpeedyVT

Quote:


> Originally Posted by *epic1337*
> 
> ok... i take it that you're too inept to understand it so let me explain in an utterly simple sentence.
> 
> HBM can clock to around 400Mhz for the mean time, which is slow, yet has a wide bus to compensate.
> while GDDR5 can clock to as high as 1800Mhz which is ridiculously fast in all of GDDR lineage, yet is limited by buswidth.
> that's why I said "possible", not "practical".
> in a sense, GDDR5 512-bit at 1750MHz would already have around 450GB/s of bandwidth, while HBM 4096-bit at 400MHz presumably has 512GB/s of bandwidth.


You probably already knew this, but to end the "my GDDR5 is faster" debate: the 4096 bits are the highways, the individual lanes things can be accessed over. HBM is data-wide, not data-fast. So long as HBM's timings are tighter than GDDR5's, its response rate will be significantly faster.

GDDR5 is based off of DDR3, widened for more bandwidth. But today's GPUs have outgrown its width, so while you might get more performance overclocking the RAM, it still hits a wall where it can only pull 512 simultaneous bits at a time.

HBM's timings, bandwidth and speed are better utilized.

Technically you could say each core has a bit dedicated to access!!! HA HA!


----------



## epic1337

Quote:


> Originally Posted by *Ganf*
> 
> Err... No...
> 
> Try 500mhz, because on all of the benchmarks we've seen on Fiji it is clocked at 500mhz.


Ahh, you meant 500MHz at 512GB/s. Sorry.

Quote:


> Originally Posted by *SpeedyVT*
> 
> You probably already known this but to end the my GDDR5 is faster debate, the 4096 bit is the highways or the individual rows for things to be access. HBM is data wide not data fast. This is better because it's ability to retrieve information is exponentially closer to the rate. So long as the timings in the HBM are less than GDDR5 the response rate will be significantly faster.
> 
> GDDR5 is based off of GDDR to widen it's bandwidth for data wide. However it's too fast today to match it'd width. So while you might get more performance overclocking the ram, it's still hitting a wall where it can only pull 512 simutaneous bits at a time.


Mhmmm, but the current flaw of HBM is its limited capacity _and_ higher cost per GB.

Which brings me back to my point: their decision to go for HBM is a bad one.
They could've experimented with it on a different card, but this is their top cards they're risking.


----------



## iLeakStuff

Quote:


> Originally Posted by *raghu78*
> 
> its well known from last year that asetek will supply the cooler for AMD's next gen flagship
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://asetek.com/press-room/news/2014/asetek-announces-largest-ever-design-win.aspx
> 
> btw Asetek also supplied he R9 295 x2 hybrid cooler
> 
> http://www.techpowerup.com/199681/amd-selects-asetek-to-liquid-cool-the-worlds-fastest-graphics-card.html


Must be the prototypes they built with Cooler Master, then


----------



## Majin SSJ Eric

Thing is, AMD's official marketing is trumpeting 4k and beyond resolutions for Fury. They have to be confident in whatever capacity solution they have come up with in order to make hi-res gaming the cornerstone of their marketing platform for the card...


----------



## SpeedyVT

The second problem with making a 1024-bit GDDR5 GPU is die space: GDDR5 requires a lot of on-die interconnects, where HBM needs only a tiny area on the die. HBM is better, period. You'd sacrifice a lot of cores just to fit a 1024-bit GDDR5 interface, and on top of that its heat generation would be insane.


----------



## tsm106

^^Yeap. It would be so far off the path that the diminishing returns would be ridiculous. I'd imagine memory consumption alone costing over 30% of card power within a 300W budget. That'd be crazy.


----------



## mfknjadagr8

Quote:


> Originally Posted by *SpeedyVT*
> 
> The second problem with making a 1024 bit GDDR5 GPU is the space DDR requires a lot of on die interconnects where HBM requires a tiny little space on die. HBM is better period. You sacrifice a lot of cores just to put 1024 bit GDDR5, not only that but it's heat generation would be insane.


Yeah, it's kinda like reinventing the wheel, but you added flat spots that make it annoying and less efficient lol


----------



## iLeakStuff

OMG OMG OMG

It's happening.
R9 Fury and R9 Fury X listed (sort of) on the Sapphire homepage


----------



## tsm106

^^What a tease!

3 days till the end of the rumor mill...


----------



## harney

Quote:


> Originally Posted by *iLeakStuff*
> 
> OMG OMG OMG
> 
> Its happening.
> R9 Fury and R9 Fury X listed (sort of) on Sapphire homepage


Wow, the cards are so fast they've already left the Sapphire web page


----------



## Ganf

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Thing is, AMD's official marketing is trumpeting 4k and beyond resolutions for Fury. They have to be confident in whatever capacity solution they have come up with in order to make hi-res gaming the cornerstone of their marketing platform for the card...


This, and if AMD has done one thing for the GPU industry it's innovate in memory architecture and performance.

They may not know jack about marketing, product placement in the market, or how to play the blame game, but they do know how to handle VRAM.


----------



## iLeakStuff

Quote:


> Originally Posted by *harney*
> 
> Wow the cards are that fast they have left the sapphire web page


Placeholder until they upload the info


There is actually a chance we might get Fury (Fiji Pro) at launch too now, for a cheaper price than the water-cooled Fury X.
Who knows


----------



## Ganf

Quote:


> Originally Posted by *iLeakStuff*
> 
> Placeholder until they upload the info
> 
> 
> 
> 
> 
> 
> 
> 
> 
> There is actually a chance we might get Fury (Fiji Pro) at launch too now. For cheaper price than Water Cooled Fury X.
> Who knows


Won't be satisfied unless there is a Fury X Toxic announced immediately.


----------



## gamervivek

If AMD had tight cooperation with M$ they wouldn't have been blindsided by the 12_1 feature level.
Quote:


> Originally Posted by *iLeakStuff*
> 
> If anyone wonders, AMD have just shipped a ton of Fiji XT (Fury X) to reviewers on June 8th.
> Well they probably went to AMD first, but they are not keeping 24 cards.
> 
> They have also changed from Cooler Master to Asetek for the cards


Do they bless the rains down in India?


----------



## iLeakStuff

Confirmed 4GB


----------



## Pantsu

At least Sapphire lists the Fury with the Fury X. There's been some talk that the cut version would release later, but this might indicate otherwise.


----------



## iLeakStuff

Quote:


> Originally Posted by *Pantsu*
> 
> At least Sapphire lists the Fury with the Fury X. There's been some talk that the cut version would release later, but this might indicate otherwise.


----------



## rt123

Quote:


> Originally Posted by *gamervivek*
> 
> If AMD had tight cooperation with M$ they wouldn't have been blindsided by the 12_1 feature level.


What do you mean, blindsided?


----------



## magnek

Quote:


> Originally Posted by *iLeakStuff*
> 
> Confirmed 4GB


R9 Fury 4G *D5*

Fury Pro uses GDDR5 instead of HBM confirmed!


Or maybe it comes with a D5 pump


----------



## ToTheSun!

Quote:


> Originally Posted by *magnek*
> 
> Fury Pro uses GDDR5 instead of HBM confirmed!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Or maybe it comes with a D5 pump


Good rad, good pump...

Best AIO 2015?


----------



## magnek

You bet


----------



## zealord

Does anyone even still believe that there will be an 8GB HBM Fury card this month?


Specs are a safe bet by now, I guess. Yeah, no confirmation, I know, but we all know what the specs are.

Performance at 4K and the price point are the big questions now


----------



## magnek

Quote:


> Originally Posted by *iLeakStuff*
> 
> If anyone wonders, AMD have just shipped a ton of Fiji XT (Fury X) to reviewers on June 8th.
> Well they probably went to AMD first, but they are not keeping 24 cards.
> 
> They have also changed from Cooler Master to Asetek for the cards


So does this mean the HardwareLuxx rumor in the OP is full of bovine excrement then? Because
Quote:


> But let's start with the card itself. The water cooler is supplied by CoolIT.


Unless there's supposed to be a water-cooled Fury Pro and that uses a different CLC. Or they had an early ES prototype.


----------



## rizla1

Another thing: the only 4k tests I have seen that cripple 4GB cards have also left the Titan X at unbearable framerates. This leads to the conclusion that by the time >8GB HBM arrives, 14nm GPUs from both sides will be able to make 8GB usable at 4k. GTA V at 4k max settings, btw. And please remember this is maybe 0.001% of the PC population. I have 1440p and an R9 290; I will get 4k but cannot afford £1000 worth of GPUs to run a few games at 4k. My GPU was free due to mining proceeds.


----------



## DSgamer64

Quote:


> Originally Posted by *zealord*
> 
> does anyone even still believe that there will be a 8GB HBM Fury card this month?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Specs are a safe bet by now I guess. Yeah no confirmation I know, but we all know what the specs are.
> 
> Performance at 4K and price point is the big question now


I never like assuming anything until I hear it from the horse's mouth, or at least until there are cards out there for reviewers to actually run and test. An 8GB HBM card would be nice, but it is doubtful. However, AMD could surprise us!


----------



## zealord

Quote:


> Originally Posted by *DSgamer64*
> 
> I never like assuming anything until I hear it from the horses mouth, or at least there are cards out there for reviewers to actually run and test. An 8 GB HBM card would be nice, but it is doubtful. However, AMD could surprise us!


Yeah, but you know what I mean. The chance of the Fury X being anything but a 4096-shader, 4GB HBM card is very, very small. I'd be delighted if we got a surprise with 8GB HBM, but I doubt it


----------



## iLeakStuff

Quote:


> Originally Posted by *magnek*
> 
> So does this mean the HardwareLuxx rumor in the OP is full of bovine excrement then? Because
> Unless there's supposed to be a watecooled Fury Pro and that uses a different CLC. Or they had an early ES prototype.


I dunno.

My question is whether they changed to Asetek because they learned it's better during testing.
Another reason could be that the CoolIT/Cooler Master unit was indeed used on earlier prototypes, which HWLuxx saw


----------



## Ganf

Quote:


> Originally Posted by *iLeakStuff*
> 
> I dunno.
> 
> My question is if they changed to Asetek because they learned its better during testing?
> Another reason can be that the CoolIT/Cooler Master was indeed used on earlier prototypes which HWLuxx saw


Or they've already got a working relationship with Asetek, so they might as well continue with that.

You wouldn't believe how fast management costs can add up when switching suppliers like that, the second run is almost always the easiest and most efficient. Everyone has learned what the other person expects and needs, they know what's required for their products to work together, and you've learned which mistakes to anticipate.

It's not always about the lowest bidder or the best product.


----------



## magnek

Quote:


> Originally Posted by *iLeakStuff*
> 
> I dunno.
> 
> My question is if they changed to Asetek because they learned its better during testing?
> Another reason can be that the CoolIT/Cooler Master was indeed used on earlier prototypes which HWLuxx saw


They used Asetek for 295X2, so unless they were grossly unsatisfied with Asetek for whatever reason, it just seemed weird to switch to CoolIT anyhow.


----------



## VSG

There are still multiple different AIOs going around. Different rads and also different fans.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *zealord*
> 
> does anyone even still believe that there will be a 8GB HBM Fury card this month?


This has been pretty well known for a month now.

Pretty much anyone with an IQ above their shoe size knew that they weren't going to be 8GB when the CTO of AMD went on record as to say that "We do not see 4GB as a limitation that would cause performance bottlenecks. ... 4GB is more than sufficient. We've had to go do a little bit of investment in order to better utilize the frame buffer, but we're not really seeing a frame buffer capacity [problem]. You'll be blown away by how much [capacity] is wasted."

I give it a 50/50 chance that AMD will go and make a special interposer within 4-6 months from now and come out with a stupidly expensive 8GB version, and people will gobble it up, just in time for HBM GEN2 to come out and make it a moot point.


----------



## peateargryphon

Quote:


> Originally Posted by *47 Knucklehead*
> 
> I give it a 50/50 chance that AMD will go and make a special interposer within 4-6 months from now, and come out with a stupid expensive 8GB version, and people will gobble it up, just in time for HBM GEN2 to come out and make it a moot point.


You have an extremely optimistic estimate for the HBM v2 timeframe. A little more accurate estimate would put HBM v2 closer to Q2 2016 (at the earliest).


----------



## 47 Knucklehead

Quote:


> Originally Posted by *peateargryphon*
> 
> You have an extremely optimistic estimate for the HBM v2 timeframe. A little more accurate estimate would put HBM v2 closer to Q2 2016 (at the earliest).


Yeah, that's 12 months from now. An 8GB HBM GEN1 card in 4-6 months would still sell to some: people who just bought the 4GB card and are loyalists, or those who are holding out for an 8GB card, 6 months before nVidia drops Pascal with HBM GEN2 memory, a GPU die shrink, and NVLink, and totally steals the show.

That's about the best way I see AMD making money off this and having much hope of recouping the massive amount of money they put into it.

And for the record, no, I don't think it will be 12 months (Q2 2016) for HBM GEN2. SK Hynix has been showing HBM GEN2 chips for some months now, and nVidia has been showing both Pascal and NVLink as well. I think sometime in Q1 2016 is a more realistic date.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Yeah, that's 12 months from now, and AMD coming out with an 8GB HBM GEN1 card in 4-6 months would still be a seller for some, and AMD can turn around and sell that card to people who either just bought the 4GB and are loyalists, or those who are holding out for an 8GB card, 6 months before nVidia drops Pascal with their HBM GEN2 memory and GPU die shrink and NVLink and totally steal the show.
> 
> That's about the best way I see AMD making money off this and have much hope for recouping the massive amount of money they put out for this.
> 
> And for the record, no, I don't think it will be 12 months (Q2 2016) for HBM GEN2. SK Hynix has been showing HBM GEN2 chips for some months now. Also nVidia has been showing both Pascal and NVLink as well. I think sometime in Q1 2016 is a more realistic date.


Not happening. Nvidia is going to have to go through the same teething issues with HBM that AMD has, compounded by the task of getting 14/16nm FinFET up and running. I'd say 12 months from now is very optimistic for Pascal with HBM2. Possible, but optimistic...


----------



## magnek

I think he might be referring to the big Pascal tape-out news.

Anyway, I think nVidia is in a hurry to get Pascal out because they need to combat Intel's Knights Landing, which will launch in 2H 2015, is based on Intel's most advanced 14nm process, and looks to be a total compute monster, offering over 3 teraflops of DP performance and ~6 teraflops SP. Maxwell is sufficiently competitive at SP (Titan X has 7.1 teraflops SP) but cannot do DP to save its life (1/32 rate, even worse than non-Titan Kepler's 1/24), so unless nVidia wants to keep pushing Kepler-based Tesla cards (possibly a viable strategy too?), they need something out.

Right now nVidia's most powerful Tesla, the K80, uses *two* GK210 chips and has a peak DP of 2.91 teraflops and peak SP of 8.74 teraflops, so it's sufficiently competitive to hold down the fort for the meantime, I suppose.
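For reference, those peak-flops figures follow from the standard 2-ops-per-cycle FMA formula. A quick sketch; the 1.156 GHz boost clock below is an assumption chosen to reproduce the quoted 7.1 TFLOPS figure, not an official number:

```python
def peak_tflops(cores: int, clock_ghz: float, ops_per_cycle: int = 2) -> float:
    """Peak throughput in teraflops: cores * clock * FMA ops per cycle (2: multiply + add)."""
    return cores * clock_ghz * ops_per_cycle / 1000

titan_x_sp = peak_tflops(3072, 1.156)   # ~7.1 TFLOPS single precision (assumed boost clock)
titan_x_dp = titan_x_sp / 32            # Maxwell's 1/32 DP rate -> ~0.22 TFLOPS double precision
```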


----------



## raghu78

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Not happening. Nvidia is going to have to go through the same teething issues that AMD has done with HBM, compounded by the task of getting 14/16nm finfet up and running. I'd say 12 months from now is very optimistic for Pascal with HBM2. Possible, but optimistic...


12 months from now is extremely optimistic. HBM2 is slated for Q2 2016 production.



http://www.kitguru.net/components/graphic-cards/anton-shilov/sk-hynix-adds-hbm-dram-into-catalogue-set-to-start-mass-production-in-q1-2015/



Look at Hynix's timelines. HBM1 was slated for Q3 2014 production, but only qualification samples were achieved by late Q3 2014. We all know that HBM1 started production in early 2015 (January), and we are seeing Fury almost 6 months later. That's because it takes time to ramp a new technology, and the 2.5D manufacturing process has far more complicated steps, all of which leads to longer lead times for 2.5D HBM products. So even Sep 2016 is optimistic for HBM2. But we all know what Knucklehead wants to believe: anything Nvidia does is right and anything AMD does is wrong.


----------



## Casey Ryback

Quote:


> Originally Posted by *raghu78*
> 
> 12 months from now is extremely optimistic. HBM2 is slated for Q2 2016 production.


So cards might be ready for a Q3, but more likely late Q4 release.


----------



## SpeedyVT

Quote:


> Originally Posted by *Casey Ryback*
> 
> So cards might be ready for a Q3, but more likely late Q4 release.


I think any HBM2 cards will be late 2016 to early 2017.


----------



## Majin SSJ Eric

That was my point. Knuckle seems to think we'll have HBM2 Pascal cards in Q1 2016 but I find that impossible to believe. Titan X and Fury X are going to be the flagship cards for at least 12-16 months...


----------



## Randomdude

8GB HBM Fury in 4-6 months, just in time for a 12GB 3072 core 980Ti Ultra for 799USD, which the hordes of fanboys will gobble up just like the 980Ti. It'll be a steal considering it's not 2 months after the Titan X, but 6! Pretending to be concerned and hoping that AMD makes a profit, but also getting in remarks similar to this one on every occasion defeats the purpose of pretending to be concerned...

AMD releasing an 8GB Fury X in that time frame would be more acceptable (by which I mean less of a low blow to early adopters) than what nVidia did, but you don't see people making a big deal of its _possible_ release and of people buying it. This goes both ways, obviously. It's kinda getting a bit old, and redundant, because it's a business. Some of the users here, honestly.


----------



## SpeedyVT

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> That was my point. Knuckle seems to think we'll have HBM2 Pascal cards in Q1 2016 but I find that impossible to believe. Titan X and Fury X are going to be the flagship cards for at least 12-16 months...


People don't realize you can't produce a GPU without the memory. It takes about a year for memory production to reach the yields needed for GPU production.


----------



## white owl

Quote:


> Originally Posted by *Randomdude*
> 
> 8GB HBM Fury in 4-6 months, *just in time for a 12GB 3072 core 980Ti Ultra* for 799USD, which the hordes of fanboys will gobble up just like the 980Ti. It'll be a steal considering it's not 2 months after the Titan X, but 6! Pretending to be concerned and hoping that AMD makes a profit, but also getting in remarks similar to this one on every occasion defeats the purpose of pretending to be concerned...
> 
> AMD releasing an 8GB Fury X in that time frame would be more acceptable (by more acceptable I mean less of a low blow to early adopters) than what nVidia did, but you don't see people making a big deal of its _possible_ release and people buying it. This goes both ways, obviously. That's getting a bit old, and redundant, because it's a business. Some of the users here, honestly.


Isn't that what a Titan X is?


----------



## Randomdude

Quote:


> Originally Posted by *white owl*
> 
> Isn't that what a Titan X is?


Bar would-be non-reference models of it, yes.


----------



## trodas

Quote:


> 12 months from now is extremely optimistic. HBM2 is slated for Q2 2016 production.


And unlike nVidia, AMD's chips already have internal support for HBM2; it's just that HBM2 memory isn't available yet...

*Casey Ryback* -
Quote:


> 12 months from now is extremely optimistic. HBM2 is slated for Q2 2016 production.


Quote:


> So cards might be ready for a Q3, but more likely late Q4 release.


This is IMHO the most realistic assessment. Not to mention that these HBM2 plans are based on just predictions - educated speculation at best. How it all turns out is another question.

Quote:


> 8GB HBM Fury in 4-6 months, just in time for a 12GB 3072 core 980Ti Ultra


Since there is plenty of room for driver-side video RAM optimization, as an AMD engineer clearly stated, I see little reason for 12 GB on a video card. Even 6 GB is overkill these days. 4 GB (real 4 GB, not 3.5 GB like the GTX 970) is enough in all cases. So what would a 12 GB card bring us? I see little reason for 12 GB of *slow* RAM. I'd much rather have 4 GB of super-fast RAM on a 4096-bit wide memory bus.


----------



## Randomdude

Quote:


> Originally Posted by *trodas*
> 
> And unlike nVidia, AMD's chips already have internal support for HBM2; it's just that HBM2 memory isn't available yet...
> 
> *Casey Ryback* -
> 
> This is IMHO the most realistic assessment. Not to mention that these HBM2 plans are based on just predictions - educated speculation at best. How it all turns out is another question.
> Since there is plenty of room for driver-side video RAM optimization, as an AMD engineer clearly stated, I see little reason for 12 GB on a video card. Even 6 GB is overkill these days. 4 GB (real 4 GB, not 3.5 GB like the GTX 970) is enough in all cases. So what would a 12 GB card bring us? I see little reason for 12 GB of *slow* RAM. I'd much rather have 4 GB of super-fast RAM on a 4096-bit wide memory bus.


Better have more and not need it than not have enough and need it.


----------



## Casey Ryback

Quote:


> Originally Posted by *Randomdude*
> 
> Better have more and not need it than not have enough and need it.


That wasn't my comment you quoted lol.


----------



## Randomdude

Quote:


> Originally Posted by *Casey Ryback*
> 
> That wasn't my comment you quoted lol.


I wasn't replying to you, sorry if I caused some misunderstanding.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *magnek*
> 
> I think he might be referring to the big Pascal taping out news.
> 
> Well so there, I think nVidia is in a hurry to get Pascal out because they need to combat Intel's Knights Landing, which will launch in 2H 2015, is based on Intel's most advanced 14nm process, and looks to be a total compute monster, offering >3 teraflops of DP performance and ~6 teraflops SP. Maxwell is sufficiently competitive at SP (Titan X has 7.1 teraflops SP), but cannot do DP to save its life (1/32, even worse than non-Titan Kepler's 1/24), so unless nVidia wants to keep pushing Kepler-based Tesla cards (possibly a viable strategy too?), they need something out.
> 
> Right now nVidia's most powerful Tesla K80 uses *two* GK210 chips, and has a peak DP performance of 2.91 teraflops, and peak SP of 8.74 teraflops, so sufficiently competitive to hold down the fort for the meantime I suppose.


Yes, I am.

Both nVidia and AMD typically have a 10-14 month lag between their past tapeouts and their releases. That is what I am basing my 12-month (splitting the difference) estimate on.

For those that don't know what a "tapeout" is ... it is basically the last step before a limited run of chips (in this case, cards) is made to begin testing and then optimization of a chip. While this doesn't mean that Pascal is ready for production, it does mean that their electronic designs are done enough that they feel confident to go to this step and have actual chips made and put on boards to begin testing. Invariably, there will be issues found and resolved, but if the past is any indication of what will happen, this means approximately 12 months.

Given how hard nVidia wants to push this through and totally destroy AMD (because the initial rumors are that Pascal will be 10 times more powerful than Titan X), this will be the nail in AMD's coffin because they have nothing to compete with that. All the bickering about whether the Fury will be 15-20% faster than a Titan X will be totally moot at that point, and given AMD's limited R&D budget and the fact that the CEO basically said that their new CPU line is their main focus, even if AMD dropped everything and poured 100% of their R&D money into beating Pascal, it would be years before they could compete again, even if Pascal is ONLY 3 times as fast as Titan X. So while Q2 2016 is maybe the most realistic time for Pascal to hit, I'm optimistic that nVidia will bust their butts to get a late Q1/early Q2 launch just to end this "15% back and forth" once and for all ... not to mention, as was mentioned above, to give Intel a bit of competition, especially since nVidia is partnering with Elon Musk and Tesla on automated cars.


----------



## Casey Ryback

Quote:


> Originally Posted by *47 Knucklehead*
> 
> So while Q2 2016 is maybe the most realistic time for Pascal to hit, I'm optimistic that nVidia will bust their butts to get a late Q1/Early Q2 just to end this "15% back and forth" once and for all


Even if pascal was 10x faster, they would sell cards cut down and clocked lower, until the time came to actually release it.

They'd need plans the following cycle to be able to increase the power above pascal again by 20%, or they shoot themselves in the foot as a business.

Companies love the whole 15% thing, it allows the same customers to return to them cycle after cycle. Why on earth would they stop that kind of successful business practice? intel does the same with cpus.

They won't bust their butts to do anything, simply because they don't have to.

10x the performance is probably exaggerated crap anyway, honestly, and even if it isn't, like I said, we won't see it; it'll be shut inside a vault somewhere lol.


----------



## Ganf

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Given how hard nVidia wants to push this through and totally destroy AMD (because the initial rumors are that Pascal will be 10 times more powerful than Titan X)


10 times more powerful than MAXWELL (the compute card lineup) in 8x SLI on a 2p workstation...

I'm starting to get why you say the things you do. Nvidia's marketing is perfect for you. Put the tasty bits at the end of a long sentence that contains all of the caveats, and they'll forget the caveats.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Casey Ryback*
> 
> Even if pascal was 10x faster, they would sell cards cut down and clocked lower, until the time came to actually release it.
> 
> They'd need plans the following cycle to be able to increase the power above pascal again by 20%, or they shoot themselves in the foot as a business.
> 
> Companies love the whole 15% thing, it allows the same customers to return to them cycle after cycle. Why on earth would they stop that kind of successful business practice? intel does the same with cpus.
> 
> They won't bust their butts to do anything, simply because they don't have to.
> 
> 10x the performance is probably exaggerated crap anyway, honestly, and even if it isn't, like I said, we won't see it; it'll be shut inside a vault somewhere lol.


Well, the 10x number is for compute power (i.e. CUDA), which doesn't mean 10x gaming FPS. Regardless, it will be a massive increase, and not the 15% number we've seen.

We've been coasting along on 15% increases for a long time now, mainly because the architectural changes haven't been as revolutionary as some in the past. We are well overdue for another such jump.


----------



## raghu78

Quote:


> Originally Posted by *Ganf*
> 
> 
> 10 times more powerful than MAXWELL (the compute card lineup) in 8x SLI on a 2p workstation...
> 
> I'm starting to get why you say the things you do. Nvidia's marketing is perfect for you. Put the tasty bits at the end of a long sentence that contains all of the caveats, and they'll forget the caveats.


http://techreport.com/news/27978/nvidia-pascal-to-feature-mixed-precision-mode-up-to-32gb-of-ram

well said


We can expect GP204 to be a 4096 CC, HBM2 chip with roughly 40% higher performance than GM200. Btw, I don't expect the first-gen FinFET chips to be overclocking monsters like Maxwell. The foundries are going to have a tough time with yields when even the mighty Intel is having so-called yield issues on a 2nd-gen 14nm FinFET process with 3+ years of 22nm FinFET high-volume production experience. Heck, the foundries can't even yield a <100 sq mm chip running at a 5W TDP at >80% yields. What's the hope for 300 sq mm chips at 180-200W TDP?
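
The worry about big-die yields can be made concrete with the classic Poisson defect-density yield model; the defect-density value below is purely illustrative, not a real foundry figure:

```python
import math

# Poisson defect-density yield model: yield = exp(-D * A),
# with D in defects/cm^2 and A the die area converted from mm^2 to cm^2.
def die_yield(area_mm2: float, defects_per_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.5  # assumed defect density for an immature FinFET process

print(round(die_yield(100, D), 2))  # ~0.61 for a <100 sq mm mobile-class die
print(round(die_yield(300, D), 2))  # ~0.22 for a 300 sq mm GPU-class die
```

Under this simple model, yield falls exponentially with die area, which is the crux of the argument above: a process that struggles with small dies will struggle far harder with 300 sq mm GPUs.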


----------



## Ganf

Quote:


> Originally Posted by *raghu78*
> 
> http://techreport.com/news/27978/nvidia-pascal-to-feature-mixed-precision-mode-up-to-32gb-of-ram
> 
> well said
> 
> We can expect GP204 to be a 4096 CC, HBM2 chip with roughly 40% higher performance than GM200. Btw, I don't expect the first-gen FinFET chips to be overclocking monsters like Maxwell. The foundries are going to have a tough time with yields when even the mighty Intel is having so-called yield issues on a 2nd-gen 14nm FinFET process with 3+ years of 22nm FinFET high-volume production experience. Heck, the foundries can't even yield a <100 sq mm chip running at a 5W TDP at >80% yields. What's the hope for 300 sq mm chips at 180-200W TDP?


It'll be rough, we can only wait and see, as usual.

The most telling part of that presentation, though, was how all of the industry professionals were just sitting in their seats fuming after he went through that convoluted explanation about how Pascal is going to be 10 times more compute than Maxwell. Jen was basically telling them to shut up about the DP performance on Maxwell and buy more cards, because it isn't going to get significantly better any time soon, and they were not happy about that at all. The previous sections of the presentation had polite applause at the appropriate moments; 10x faster than Maxwell was greeted with dead silence.


----------



## cstkl1

Quote:


> Originally Posted by *Ganf*
> 
> It'll be rough, we can only wait and see, as usual.
> 
> The most telling part of that presentation, though, was how all of the industry professionals were just sitting in their seats fuming after he went through that convoluted explanation about how Pascal is going to be 10 times more compute than Maxwell. Jen was basically telling them to shut up about the DP performance on Maxwell and buy more cards, because it isn't going to get significantly better any time soon, and they were not happy about that at all. The previous sections of the presentation had polite applause at the appropriate moments; 10x faster than Maxwell was greeted with dead silence.


Agreed. A lot of people are citing sites etc. that posted shots of the slides out of context, having never viewed the feed/video.

AFAIK it was deep-learning compute performance of 4-way Maxwell vs 8-way Pascal with NVLink, as per the slide show.

Nowhere did he ever state that a single Titan X vs a Pascal Titan xXx had a 10x performance gain.


----------



## Boomstick727

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Well, the 10x number is for compute power (i.e. CUDA), which doesn't mean 10x gaming FPS. Regardless, it will be a massive increase, and not the 15% number we've seen.
> 
> We've been coasting along on 15% increases for a long time now, mainly because the architectural changes haven't been as revolutionary as some in the past. We are well overdue for another such jump.


Exactly, and for all we know they might mean vs. Titan X (GM200), which as we know is compute-gimped, so a 10x improvement over next to nothing isn't that impressive.

Still, I am expecting a big improvement all around for the next die-shrunk GPUs from both camps. God knows it's needed; we've been on 28nm for way too long and been drip-fed small upgrades. So a Titan X felt like a big upgrade and we paid extra, but really these past few gens have given very little performance improvement.

Can't wait for the die shrink and new architecture; it will be like a reset in the GPU space, with real progression over the next few years.


----------



## flopper

Quote:


> Originally Posted by *47 Knucklehead*
> 
> (because the initial rumors are that Pascal will be 10 times more powerful than Titan X),


I don't believe in fairies.
Pure silliness.


----------



## Creator

I expect 100% gains over Maxwell with Pascal. People forget that GTX 580 to Titan was 100%. But since Kepler, the release cycle has been stretched out. However, I don't expect Pascal until later in 2016. Because when does anything ever come out on time anymore? NV has been good with being on time, unlike AMD, and even now Intel, but HBM2 production is something that is out of their control. I just hope we get big Pascal right away, and not this stupid mid-range first release cycle of recent.

I have enough compute and gaming power to hold me over until then. But I think I'll still get a 390X just to play around with and benchmark in my other workstation. Maybe I'll try to learn some Direct Compute programming and see how the compute performance of AMD cards, as well as ease of application, compares to team green.


----------



## epic1337

Titan Black to Titan X is also around 50%, which is not a small thing; progress hasn't slowed down that much.
The 900 series isn't even complete yet, and Pascal is still more than half a year away.
Quote:


> Originally Posted by *Creator*
> 
> I just hope we get big Pascal right away, and not this stupid mid-range first release cycle of recent.


That's because of yield issues: the initial batches of GPU dies have defects and are salvaged to be sold as slower chips.
Though that's on the 204 chips; the 200 chips get delayed because it's logically harder to get good yields right off the bat on higher-end chips.


----------



## gamervivek

Considering the Titan X's DP compute capability is lower than the 4870's (a card from 2008), the 10x improvement could be very real.
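
That comparison can be sanity-checked with back-of-the-envelope peak-FLOPS math; the shader counts, clocks, and FP64:FP32 ratios below are the commonly cited ballpark figures for each card, so treat the results as approximate:

```python
# Peak throughput in TFLOPS: 2 FLOPs (one fused multiply-add) per shader per clock.
def peak_tflops(shaders: int, clock_ghz: float, fp64_ratio: float = 1.0) -> float:
    return 2 * shaders * clock_ghz * fp64_ratio / 1000.0

titan_x_dp = peak_tflops(3072, 1.0, 1 / 32)  # GM200: ~0.19 TFLOPS FP64
hd4870_dp = peak_tflops(800, 0.75, 1 / 5)    # RV770: ~0.24 TFLOPS FP64

print(titan_x_dp < hd4870_dp)  # True: the 2008 card wins on paper
```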

As for AMD lagging, they have jumped to new nodes faster than Nvidia for quite some time now; I doubt that's going to change. And they also have experience with bigger GPU dies, with Hawaii, and now Fiji smashing their record.


----------



## magnek

Good lord not the 10x thing again. First the 10x is really 5x since NVLink allows scaling to 8 way GPUs, and then that is in the context of deep learning, so nothing to do with gaming. Heck Jen-Hsun himself said the 10x figure was derived using "CEO math", clearly signalling it was intended to be humorous and not taken at face value. Also at least first gen NVLink appears to be a mezzanine card, so I doubt it's a consumer oriented feature.

But yes, gaming-wise I expect the jump from Maxwell to Pascal to at least be on par with 580 to Titan, and it will likely exceed that. So basically over 2x the performance.


----------



## Randomdude

Quote:


> Originally Posted by *magnek*
> 
> Good lord not the 10x thing again.


Funny how you see people with close to 500 rep who claim to be fighting misconceptions do exactly that: spread misconceptions, when it's about nVidia.
Quote:


> Originally Posted by *47 Knucklehead*
> 
> Just trying to educate the masses. Even on a board that is advanced as OCN, some of the misconceptions that I see about water cooling and thermodynamics is amazing. That's all.


Or do you only go out of your way fighting the good fight when it's strictly about water cooling?
Quote:


> Originally Posted by *47 Knucklehead*
> 
> If it was GEN 1 HBM, then I'd still avoid it for the reasons I gave above.
> 
> Me, I'm willing to wait _9 months_ for twice the speed and _16 times the capacity_.


32/4 is 8 times the capacity, FYI. And that's a best-case scenario; the dies are actually 4 times as dense.



http://www.kitguru.net/components/graphic-cards/anton-shilov/sk-hynix-demos-hbm2-memory-chips-opens-way-for-32gb-graphics-cards/
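
The capacity math works out like this; the per-die and per-stack figures below are the maximums announced at the time (per the SK Hynix demo linked above), not shipping specs:

```python
# Per-stack HBM capacity in GB: dies per stack x gigabits per die / 8.
def stack_gb(dies: int, gbit_per_die: int) -> float:
    return dies * gbit_per_die / 8.0

hbm1 = stack_gb(4, 2)  # 4-Hi stack of 2 Gb dies -> 1 GB/stack (Fiji: 4 stacks = 4 GB)
hbm2 = stack_gb(8, 8)  # 8-Hi stack of 8 Gb dies -> 8 GB/stack (4 stacks = 32 GB)

print(hbm2 / hbm1)  # 8.0 -> "8 times the capacity", not 16
print(8 / 2)        # 4.0 -> the per-die density increase is only 4x
```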

Also, you say 9 months, then suddenly it's 12 months... most of the things you say seem to shift according to what fits your needs at the time. How do people find you credible?
Quote:


> Originally Posted by *47 Knucklehead*
> 
> Both nVidia and AMD typically have a 10-14 month time lag between their past tapeouts and their releases. That is where I am basing my _12 month_ (splitting the difference) estimate on.


The more I look into this, the uglier it gets. Just why?


----------



## Serandur

Quote:


> Originally Posted by *epic1337*
> 
> Titan Black to Titan X is also around 50% which is not a small thing, the progress hadn't slowed down that much.
> 900 series isn't even complete yet, Pascal will still be more than half a year away.
> thats because of yield issues, the initial batch of GPU dies have defects, salvaged to be sold as slower chips.
> though thats on the 204 chips, 200 chips gets delayed because logically its harder to get good yields right off the bat on higher end chips.


It depends on which company we're looking at, I suppose. Judging by Nvidia's development, progress hasn't slowed down too much. 20nm was a dud process node, but Nvidia are still pushing full-steam ahead with full-scale architectural overhauls and big-die chips pushing TSMC's manufacturing limits. The problem is on AMD's end. They rely way too much on process nodes alone, it seems, and don't have the funding to genuinely beat Nvidia; therefore Nvidia have little reason to sell us shiny new high-end stuff when the mid-range stuff is easier for them to initially manufacture and is enough to outpace or outright crush the competition.

Progress has slowed down significantly on AMD's end, however. The 7970 was rather underwhelming for a gaming chip considering the 28nm shrink, the brand new architecture, and the $550 price-tag. There's little doubt they botched the launch with immature drivers, as well. It then took them nearly two years to get around to releasing Hawaii in any product while Nvidia had GK110 out in Teslas in late 2012. Then Nvidia had GM200 pretty much ready for release by the end of 2014 (not too long after GM204; the chips did tape out close to each other and are clearly designed as part of the same series), they just had no incentive to release it any earlier or cheaper. They didn't need HBM or anything and their whole lineup is new Maxwell stuff, not just the high-end. People are getting confused by the naming schemes and don't always seem to understand that GM204 in the 980 isn't the successor to GK110 in the 780 which isn't the successor (but rather the much bigger brother) to GK104 in the 680.

Anyone else remember Volta and how it was supposed to be the 2016 release until Pascal randomly showed up on the roadmap and Volta got delayed to 2017? I think the guess people have about deciding to make Pascal a "Maxwell 3.0 with HBM on 16nm" makes sense; given AMD's lack of GCN development, that alone would be enough for Nvidia to continue winning. I wonder what AMD will do when Volta rolls around; I hope Zen will be a success and they'll have the R&D to make a true "GCN 2.0" or post-GCN architecture, but I fear they'll just repeat this whole "being late and mostly just a rebrand" mess and roll over dead.


----------



## Baulten

Quote:


> Originally Posted by *Randomdude*
> 
> Funny how you see people with close to 500 rep who claim to be fighting misconceptions do exactly just that, spread misconceptions when it's nVidia.
> Or do you only go out of your way fighting the good fight when it's strictly about water cooling?
> 32/4 is 8 times the capacity, fyi. That's also a best case scenario, actually it's 4 times as dense.
> 
> 
> 
> http://www.kitguru.net/components/graphic-cards/anton-shilov/sk-hynix-demos-hbm2-memory-chips-opens-way-for-32gb-graphics-cards/
> 
> Also, you say 9 months, then suddenly it's 12 months... most the things you say you say according to what fits your needs at the time it seems. How do people find you credible?
> The more I look into this, the uglier it gets. Just why?


He just ignores you when you point this out. Hopefully some other folks see it so they don't get misled by him.

It's laughable to think Nvidia will suddenly beat AMD to a new node, with a new architecture, using a new memory style. It's possible, but given history, doubtful.


----------



## provost

After a multitude of crashes and no remedy in sight from Nvidia for the drivers, I have decided to ice my Titans and their pathetic drivers.
Is this thing coming out anytime soon, or are we just being teased for another few months?


----------



## magnek

^paper launch tomorrow, reviews to follow in a week or two seems to be the consensus right now
Quote:


> Originally Posted by *Baulten*
> 
> He just ignores you when you point this out. Hopefully some other folks see it so they don't get misled by him.
> 
> It's laughable to think Nvidia will suddenly beat AMD to a new node, with a new architecture, using a new memory style. It's possible, but given history, doubtful.


Well times have changed, and the nVidia today is very different from the nVidia in 2010. With that said, I will say this: the last time ATi jumped ahead of the memory game we had 2900 XT. But the last time nVidia did a memory change, arch update, and process shrink all in one go, we got the GTX 480.


----------



## tpi2007

Quote:


> Originally Posted by *magnek*
> 
> Well times have changed, and the nVidia today is very different from the nVidia in 2010. With that said, I will say this: the last time ATi jumped ahead of the memory game we had 2900 XT. But the last time nVidia did a memory change, arch update, and process shrink all in one go, we got the GTX 480.


That is not correct. The last time was two series later, with the HD 4000 in 2008, with AMD's HD 4870 512 MB / 1 GB GDDR5. That series, especially the HD 4870 and HD 4850 forced Nvidia to lower the price of the GTX 280 in a matter of weeks, introduce a faster GTX 260 (260 Core 216) and release a factory overclocked 9800 GTX (9800 GTX+).


----------



## Vesku

Quote:


> Originally Posted by *iLeakStuff*
> 
> I dunno.
> 
> My question is if they changed to Asetek because they learned its better during testing?
> Another reason can be that the CoolIT/Cooler Master was indeed used on earlier prototypes which HWLuxx saw


http://asetek.com/press-room/news/2014/asetek-announces-largest-ever-design-win.aspx

Interestingly ~10-20 thousand Asetek GPU AIO units were ordered for delivery in first half of 2015. That was August of last year. Possibly AMD has sourced from multiple companies.


----------



## extracrunchy

Quote:


> Originally Posted by *Creator*
> 
> I expect 100% gains over Maxwell with Pascal. People forget that GTX 580 to Titan was 100%. But since Kepler, the release cycle has been stretched out. However, I don't expect Pascal until later in 2016. Because when does anything ever come out on time anymore? NV has been good with being on time, unlike AMD, and even now Intel, but HBM2 production is something that is out of their control. I just hope we get big Pascal right away, and not this stupid mid-range first release cycle of recent.
> 
> I have enough compute and gaming power to hold me over until then. But I think I'll still get a 390X just to play around with and benchmark in my other workstation. Maybe I'll try to learn some Direct Compute programming and see how the compute performance of AMD cards, as well as ease of application, compares to team green.


I bet Pascal is at most 50% over Maxwell. I just don't see TSMC's 16nm process giving twice the power efficiency of the 28nm process. I hope I am wrong.


----------



## magnek

Quote:


> Originally Posted by *tpi2007*
> 
> That is not correct. The last time was two series later, with the HD 4000 in 2008, with AMD's HD 4870 512 MB / 1 GB GDDR5. That series, especially the HD 4870 and HD 4850 forced Nvidia to lower the price of the GTX 280 in a matter of weeks, introduce a faster GTX 260 (260 Core 216) and release a factory overclocked 9800 GTX (9800 GTX+).


You're right. When I said "last time" I think I really meant "jumped ahead of the memory curve when the need wasn't really there".


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Serandur*
> 
> It depends on which company we're looking at, I suppose. Judging by Nvidia's development, progress hasn't slowed down too much. 20nm was a dud process node, but Nvidia are still pushing full-steam ahead in full-scale architectural overhauls and big-die chips pushing TSMC's manufacturing limits. The problem is on AMD's end. They rely way too much on process nodes alone it seems and don't have the funding to genuinely beat Nvidia therefore Nvidia have little reason to sell us shiny new high-end stuff when the mid-range stuff is easier on them to initially manufacture and is enough to outpace or outright crush the competition.
> 
> Progress has slowed down significantly on AMD's end, however. The 7970 was rather underwhelming for a gaming chip considering the 28nm shrink, the brand new architecture, and the $550 price-tag. There's little doubt they botched the launch with immature drivers, as well. It then took them nearly two years to get around to releasing Hawaii in any product while Nvidia had GK110 out in Teslas in late 2012. Then Nvidia had GM200 pretty much ready for release by the end of 2014 (not too long after GM204; the chips did tape out close to each other and are clearly designed as part of the same series), they just had no incentive to release it any earlier or cheaper. They didn't need HBM or anything and their whole lineup is new Maxwell stuff, not just the high-end. People are getting confused by the naming schemes and don't always seem to understand that GM204 in the 980 isn't the successor to GK110 in the 780 which isn't the successor (but rather the much bigger brother) to GK104 in the 680.
> 
> Anyone else remember Volta and how it was supposed to be the 2016 release until Pascal randomly showed up on the roadmap and Volta got delayed to 2017? I think the guess people have about deciding to make Pascal a "Maxwell 3.0 with HBM on 16nm" makes sense; given AMD's lack of GCN development, that alone would be enough for Nvidia to continue winning. I wonder what AMD will do when Volta rolls around; I hope Zen will be a success and they'll have the R&D to make a true "GCN 2.0" or post-GCN architecture, but I fear they'll just repeat this whole "being late and mostly just a rebrand" mess and roll over dead.


You're writing AMD's epitaph before we even get to see how Fiji performs? And for that matter, Hawaii was and still is a pretty decent chip, pretty close to on par with the much larger GK110. GCN has been a pretty successful architecture in my view and has proven that AMD is willing and able to make larger chips that compete with Nvidia's behemoths. I have a feeling Fiji will match GM200 every step of the way, and considering it's only a couple of months later, I'd say we have pretty good parity at the top end right now (assuming Fiji performs the way I imagine it will). Next year both teams will release FinFET with HBM2 and I see no reason why this parity shouldn't continue. One thing is for sure: at least performance-wise, AMD is matching Nvidia blow for blow right now. That said, they really do need to do something for the mid range of their lineup to try and recapture some market share. Rebrands are not going to get it done, even if the performance is on par with small Maxwell...


----------



## Serandur

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> You're writing AMD's epitaph before we even get to see how Fiji performs? And for that matter, Hawaii was and still is a pretty decent chip, pretty close to on par with the much larger GK110. GCN has been a pretty successful architecture in my view and has proven that AMD is willing and able to make larger chips that compete with Nvidia's behemoths. I have a feeling Fiji will match GM200 every step of the way and considering its only a couple months later, I'd say we have pretty good parity at the top end right now (assuming Fiji performs the way I imagine it will). Next year both teams will release FINFET with HBM2 and I see no reason why this parity shouldn't continue. One thing is for sure, at least performance-wise AMD is matching Nvidia blow for blow right now. That said, they really do need to do something for the mid range of their lineup to try and recapture some market share. Rebrands are not going to get it done, even if the performance is on par with small Maxwell...


Where did I write AMD's epitaph? I summarized their lagging development and R&D for the last few years and brought up GCN's current inferiority to Maxwell for consumer products. I then concluded with my concern that if they keep going like this, Volta will wreck them a couple years down the line and I hope Zen succeeds and gives them the financial push they need to compete. That's not an epitaph, but the company is faltering and Fiji alone isn't going to do too much.

Regardless of where Fiji ends up, there are a few things we "know". One, that the rest of the lineup are rebrands (indicating AMD simply don't have an architecture to compete with Maxwell in the consumer space); two, that it taped out at the same time as Tonga last year; and three, that AMD's R&D budget is at a relative low and Nvidia's at a record high. It's not a bad guess to say Fiji will also be GCN 1.2 as a result which means still not on par with Maxwell (from an architectural efficiency viewpoint). It has HBM, yes, but so will Nvidia next year. If AMD's goal is, as my fears are based on (note: "fears", not "facts") simply to shrink GCN with some minor tweaks then even an untouched Maxwell 2.0 on 16nm with HBM will still win, let alone the question of when or will Nvidia pull off another efficient jump forward like Maxwell itself is. AMD are demonstrating little interest in doing a similar architectural re-imagining anytime soon themselves.

Hawaii is a fine chip, I didn't say otherwise. But it was late, a year later than the first GK110 products. And more specifically, Tahiti was tiny and underwhelming. I give Nvidia plenty of flak for sending GK104 into the battlefield under the banner of "GTX 680" because of price-doubling, but so too does it seem AMD tried the same thing with Tahiti. They didn't demonstrate they were willing to take a risk with a large die; they demonstrated (as is now clear comparing Tahiti, Hawaii, and Fiji with minuscule architectural developments the whole way) that they were planning to space out progressively larger GCN dies over several years (Fiji probably planned for 20nm though; Hawaii not being very large) rather than go all-out. They needed bigger dies, otherwise Nvidia would slaughter them, and I'd guess it's simply because that's cheaper than recreating the architecture.

*Edit: I get a bit long-winded sometimes, but the main point I want to get across is architectural development is an iterative process that builds upon what came before and falling behind makes future prospects progressively bleaker. Also, that what we as consumers get isn't necessarily indicative of what's possible. I believe Nvidia are holding back and that GK110's release date, price-doubling, and GM200's 2014 manufacturing dates (as per examined die shots) are evidence of that. I would say that AMD have therefore already failed to compete and are going to have more trouble once the playing field is leveled with HBM as a standard. That's not to say Tahiti, Hawaii, and Fiji aren't/weren't fine products but I think they were either too expensive, too late, or too late/limited respectively to really put Nvidia under any pressure.*


----------



## harney

http://www.legitreviews.com/amd-radeon-r9-fury-x-4gb-video-card-has-512gbs-memory-bandwidth_166150


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Baulten*
> 
> He just ignores you when you point this out. Hopefully some other folks see it so they don't get misled by him.
> 
> It's laughable to think Nvidia will suddenly beat AMD to a new node, with a new architecture, using a new memory style. It's possible, but given history, doubtful.


You are just ticked that every post you've ever made here except one, has been from the POV of an AMD fanboy and someone is saying something you don't like.









Fact of the matter is, come today, you will see just how right I have been about AMD and this generation. A total rebrand of everything, except one card. That card will be expensive as sin (most likely as fast as a Titan X ... before overclocking), but otherwise a pretty "ho hum" launch of rebrands.

Most likely the stock Fury will beat the Titan X by about 10%, but it will have lower overclocking headroom, so in the end, a Titan X will beat it.

As far as my crystal ball predictions ... I don't see you or the other guy making them. How is the view from the cheap seats? A 9-12 month (aka a quarter) prediction is pretty standard, and I'll be here for a long time to stand behind my predictions, and support my "nearly 500 reputation".

We shall see.

As for ignoring you, no, I was just doing something else last night. My daughter was in town and I spent time with her, and then with my wife. I do have a life outside of posting.

Quote:


> Originally Posted by *harney*
> 
> http://www.legitreviews.com/amd-radeon-r9-fury-x-4gb-video-card-has-512gbs-memory-bandwidth_166150


And as I predicted, while all the AMD fanboys said "No no no, it will be 8GB!" ... it's a 4GB card. So say the people that are the major driving force behind inventing HBM ... but what do they know?


----------



## Ganf

Quote:


> Originally Posted by *47 Knucklehead*
> 
> You are just ticked that every post you've ever made here except one, has been from the POV of an AMD fanboy and someone is saying something you don't like.
> 
> 
> 
> 
> 
> 
> 
> 
> And as predicted, but all the AMD fanboys said "No no no, it will be 8GB!" ... it's a 4GB card. So says the people that are the major driving force behind inventing HBM ... but what do they know?


I'm pretty sure the only one that's still hung up on whether the launch card will have 8GB or not is... Well... You... That link only states that they will be launching a 4GB card today, nothing about the coming months.

But I don't care, not my fight, I personally think it would be stupid for AMD to release an 8gb card months later after the enthusiast market is completely saturated, but that's just me.


----------



## Wishmaker

Quote:


> Originally Posted by *47 Knucklehead*
> 
> You are just ticked that every post you've ever made here except one, has been from the POV of an AMD fanboy and someone is saying something you don't like.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Fact of the matter is, come today, you will see just how right I have been about AMD and this generation. A total rebrand of everything, except one card. That card will be expensive as sin (most likely as fast as a Titan X ... before overclocking), but otherwise a pretty "ho hum" launch of rebrands.
> 
> Most likely the stock Fury will beat the Titan X by about 10%, but it will have lower overclocking headroom, so in the end, a Titan X will beat it.
> 
> As far as my crystal ball predictions ... I don't see you or the other guy making them. How is the view from the cheap seats? A 9-12 month (aka a quarter) prediction is pretty standard, and I'll be here for a long time to stand behind my predictions, and support my "nearly 500 reputation".
> 
> We shall see.
> 
> As for ignoring you, no, I just was doing something else last night. My daughter was in town and I spend time with her, and then with my wife. I do have a life outside of posting.
> And as predicted, but all the AMD fanboys said "No no no, it will be 8GB!" ... it's a 4GB card. So says the people that are the major driving force behind inventing HBM ... but what do they know?


You are scaring people with your posts!!!


----------



## Baulten

Quote:


> Originally Posted by *47 Knucklehead*
> 
> You are just ticked that every post you've ever made here except one, has been from the POV of an AMD fanboy and someone is saying something you don't like.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Fact of the matter is, come today, you will see just how right I have been about AMD and this generation. A total rebrand of everything, except one card. That card will be expensive as sin (most likely as fast as a Titan X ... before overclocking), but otherwise a pretty "ho hum" launch of rebrands.
> 
> Most likely the stock Fury will beat the Titan X by about 10%, but it will have lower overclocking headroom, so in the end, a Titan X will beat it.
> 
> As far as my crystal ball predictions ... I don't see you or the other guy making them. How is the view from the cheap seats? A 9-12 month (aka a quarter) prediction is pretty standard, and I'll be here for a long time to stand
> behind my predictions, and support my "nearly 500 reputation".
> 
> We shall see.
> 
> As for ignoring you, no, I just was doing something else last night. My daughter was in town and I spend time with her, and then with my wife. I do have a life outside of posting.
> And as predicted, but all the AMD fanboys said "No no no, it will be 8GB!" ... it's a 4GB card. So says the people that are the major driving force behind inventing HBM ... but what do they know?


I forgot disagreeing with you made me an AMD fanboy. Oh, right, it doesn't. AMD has made some really bad decisions the past four years, but almost all of them have been on the CPU side, not GPU. Hawaii was late to the game, but arguably has performed remarkably well for how old it is now.

That being said, I don't actually disagree with anything in your post (aside from your faulty assertion of where my "loyalties" lie). While I was hopeful for 8GB after that first rumor, I knew it was unlikely. I also doubt that Fiji will have the overclocking headroom necessary to be a clear-cut Titan killer by any means, though I also expect it to perform decently for the price point it launches at (as AMD cards usually do). I am excited for the launch because I want to see if HBM brings any unexpected improvements, and I'm hopeful that there will be a new driver launch as well.

You want my prediction for 2016? Both camps late to the game. AMD releases a node shrink, mid tier card with HBM2 before Nvidia. Nvidia releases a solid card a little later, but it has some severe drawbacks (think the 480). AMD follows up with either a winner or their death certificate. I do not expect Pascal to be some monster that wins in everything. If Nvidia wins the next tier it will be because AMD couldn't pull their heads out of their rear ends and release something decent.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Baulten*
> 
> I forgot disagreeing with you made me an AMD fanboy. Oh, right, it doesn't.


No, it doesn't. Sorry, it's not Friday, I'm not in the mood for eating Red Herring.

Your post history does. As I said, 13 of your 14 posts were on AMD topics, and the other was on cell phone data caps.

Please try reading what I said again.

But back to the topic at hand, if "That being said, I don't actually disagree with anything in your post (aside from your faulty assertion of where my "loyalties" lie)." then why are you trying to bust my chops?









In case you missed the chain of events, I didn't say a word to you or about you until AFTER you started in on me, THEN I went to your post history and pointed out 13 of your 14 posts were about AMD. So to borrow a line from a movie ... "He drew first blood."


----------



## decimator

Quote:


> Originally Posted by *47 Knucklehead*
> 
> A 9-12 month (aka a quarter) prediction


The only thing I got out of this post was this...I had no idea a quarter was 9-12 months.







Thanks!


----------



## 47 Knucklehead

Quote:


> Originally Posted by *decimator*
> 
> The only thing I got out of this post was this...I had no idea a quarter was 9-12 months. Thanks!


I'll take "What is another term for a 3 months range called?" for $1000 Alex.


----------



## harney

AMD at E3 2015: 16 June, 5pm BST / 9am PST, etc.

Watch the AMD E3 2015 conference live here.

http://www.pcadvisor.co.uk/how-to/game/amd-e3-2015-press-conference-live-video-stream-fiji-radeon-300-gpu-launch-3615542/


----------



## peateargryphon

Quote:


> Originally Posted by *harney*
> 
> AMD at E3 2015: 16 June, 5pm BST / 9am PST, etc.
> 
> Watch the AMD E3 2015 conference live here.
> 
> http://www.pcadvisor.co.uk/how-to/game/amd-e3-2015-press-conference-live-video-stream-fiji-radeon-300-gpu-launch-3615542/


Hurry, not much time left for wild speculation!


----------



## BigMack70

My final speculation:

Beats Titan X in some games. Loses in some others. Will show up as 2% slower than Titan X overall in TPU's performance summary - so just above 980 ti and just below Titan X. Launch price $699 for water cooled and $599 for air cooled. 4GB HBM.


----------



## harney

Quote:


> Originally Posted by *peateargryphon*
> 
> Hurry, not much time left for wild speculation!


well i wish them luck ...come on


----------



## epic1337

Quote:


> Originally Posted by *BigMack70*
> 
> My final speculation:
> 
> Beats Titan X in some games. Loses in some others. Will show up as 2% slower than Titan X overall in TPU's performance summary - so just above 980 ti and just below Titan X. Launch price $699 for water cooled and $599 for air cooled. 4GB HBM.


can't say $599 air cooled is worth it, 980Ti still offers some niche features that AMD doesn't have and it overclocks like a beast.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *epic1337*
> 
> can't say $599 is worth it, 980Ti still offers some niche features that AMD doesn't have and it overclocks like a beast.


I'd call HBM a "feature" Nvidia will not have...


----------



## epic1337

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'd call HBM a "feature" Nvidia will not have...


what would that give AMD that Nvidia couldn't match?
they perform almost identically, so what's the difference?

i'm being serious, is the HBM hype justified or not?


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *epic1337*
> 
> what would that give that Nvidia wouldn't be able to compare?
> they perform almost identically, so whats the difference?
> 
> i'm being serious, is the HBM hype justified or not?


No one will know that until we actually see how it performs. Everything people are saying until then is just wind...


----------



## epic1337

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> No one will know that until we actually see how it performs. Everything people are saying until then is just wind...


thats true, but if the card does perform identically to a 980Ti in all aspects, would having HBM give any merit then? i think not.

the problem i see with fiji is that it requires liquid cooling to some extent, which implies that its not an efficient GPU, possibly 300W or so, which means an air cooled version would have thermal constraints on how far it can overclock.
in such a case, at $599 ( or even $649 due to aftermarket coolers from other manufacturers ) wouldn't be worth it compared to a $699 aftermarket 980Ti, 980Ti are known to overclock very well and still remain cool.

tl;dr - if fiji can assure an overclock of at least 40% on an air cooler without overheating at $599, then maybe it's worth it, Nvidia's niche features notwithstanding.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *epic1337*
> 
> thats true, but if the card does perform identically to a 980Ti in all aspects, would having HBM give any merit then? i think not.
> 
> the problem i see with fiji is that it requires liquid cooling to some extent, which implies that its not an efficient GPU, possibly 300W or so, which means an air cooled version would have thermal constraints on how far it can overclock.
> in such a case, at $599 ( or even $649 due to aftermarket coolers from other manufacturers ) wouldn't be worth it compared to a $699 aftermarket 980Ti, 980Ti are known to overclock very well and still remain cool.
> 
> tl;dr - if fiji can assure an overclock of at least 40% on air cooler without overheating at $599, then maybe its worth it, while disregarding Nvidia niche features.


Well, obviously it wouldn't be considered a feature if it performed the same as GDDR5. They also wouldn't have bothered developing it for the past 3 years if it performed the same as GDDR5...


----------



## epic1337

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Well, obviously it wouldn't be considered a feature if it performed the same as GDDR5. They also wouldn't have bothered developing it for the past 3 years if it performed the same as GDDR5...


i'm not saying it would perform like GDDR5, i'm asking whether the GPU can even utilize the advantage of HBM, similar to how DDR4 hardly matters over DDR3 in terms of real-world performance.

to begin with, were GPUs even bandwidth bottlenecked?
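For reference, the theoretical peak numbers behind that question are easy to work out from bus width and per-pin data rate. A rough sketch using the commonly reported reference specs (real-world throughput will be lower):

```python
# Theoretical peak memory bandwidth in GB/s:
# bus width (bits) / 8 bits-per-byte * effective per-pin data rate (Gbps)
def peak_bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

# Fury X HBM1: 4 stacks x 1024-bit = 4096-bit bus at 1 Gbps/pin
print(peak_bandwidth_gbs(4096, 1.0))  # 512.0
# GTX 980 Ti GDDR5: 384-bit bus at 7 Gbps/pin
print(peak_bandwidth_gbs(384, 7.0))   # 336.0
# R9 290X GDDR5: 512-bit bus at 5 Gbps/pin
print(peak_bandwidth_gbs(512, 5.0))   # 320.0
```

So HBM1 buys roughly 50% more peak bandwidth than the 980 Ti; whether the GPU can actually use it is exactly the open question.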


----------



## magicc8ball

Quote:


> Originally Posted by *epic1337*
> 
> thats true, but if the card does perform identically to a 980Ti in all aspects, would having HBM give any merit then? i think not.
> 
> the problem i see with fiji is that it requires liquid cooling to some extent, which implies that its not an efficient GPU, possibly 300W or so, which means an air cooled version would have thermal constraints on how far it can overclock.
> in such a case, at $599 ( or even $649 due to aftermarket coolers from other manufacturers ) wouldn't be worth it compared to a $699 aftermarket 980Ti, 980Ti are known to overclock very well and still remain cool.
> 
> tl;dr - if fiji can assure an overclock of at least 30% on air cooler without overheating at $599, then maybe its worth it, while disregarding Nvidia niche features.


So I thought the only reason they needed to use an AIO cooler is due to the size of the PCB. I believe they are coming out with a traditional fan-cooled version as well. So while it is not efficient enough for a fan-cooled version on the smaller PCB, it should be efficient enough on the more traditional-sized PCB.

I would think Nvidia would run into the same issue if they had an enthusiast-level GPU on a PCB the same size as we have seen on the AIO Fiji XT.


----------



## harney

http://wccftech.com/amd-radeon-300-series-e3-launch-presentation-leaked/


----------



## epic1337

Quote:


> Originally Posted by *magicc8ball*
> 
> So I thought the only reason they needed to use an AIO cooler is due to the size of the PCB. I believe they are coming out with a traditional fan-cooled version as well. So while it is not efficient enough for a fan-cooled version on the smaller PCB, it should be efficient enough on the more traditional-sized PCB.
> 
> I would think Nvidia would run into the same issue if they had a enthusiast level GPU on a PCB the same size as we have seen the AIO fiji xt.


maybe, an oversized cooler works as well; the protruding part of the HSF gets free airflow, exhausting hot air on the far side of the PCB.
still, the 290X had issues cooling itself, and imagine a GPU with 50% more SPs than that, the heat would be unbearable for most aftermarket air coolers.

thats why Nvidia has been adamant about making their GPUs more efficient.
if you've observed the GTX980, even though it can spike upwards of 300W, the reference card stayed below 200W in average power consumption.
peak (spike) power consumption of 980 cards can reach as high as 350W when overclocked, averaging around 280W.

and on a side note, GTX980Ti has good efficiency.
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-ti,4164-7.html
TDP 250W, stock clock, stress test average power consumption 254W, peak (spike) power consumption 297W.

compared to 290X, which is incredibly scary.
http://www.tomshardware.com/reviews/r9-290x-lightning-performance-review,3782-6.html
TDP 290W, stock clock, stress test average power consumption 259W, peak (spike) power consumption 425W!!
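Taking the stress-test figures quoted here, the peak-to-average ratio (what matters for PSU headroom) works out as a quick sanity check:

```python
# Peak-to-average power ratio from the stress-test figures cited above;
# transient spikes like these are what PSU headroom has to absorb.
def spike_ratio(avg_watts, peak_watts):
    return peak_watts / avg_watts

print(round(spike_ratio(254, 297), 2))  # 980 Ti reference: 1.17
print(round(spike_ratio(259, 425), 2))  # 290X Lightning: 1.64
```

In other words, the average draw of the two cards is nearly identical; the difference is almost entirely in the transients.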


----------



## Serandur

Oh god, "long in the tooth", "talk about memory", "pushing to the next level"... Their architecture is long in the tooth, it's only 4 GB, and Nvidia beat them to this performance tier. Marketing is so lame... all marketing.

I want reviews, dang it.


----------



## peateargryphon

"This is an overclockers dream" - straight from AMD


----------



## peateargryphon

Fury X $649 Can't wait to see proof in the benchmarks.


----------



## harney

mmmmm 275 watt $649 and $549

just been reading this

here we go again, I thought a GB of RAM is a GB... no more, no less

http://uk.ign.com/articles/2015/06/16/amd-announces-r9-fury-fury-x-and-nano-cards

> The new cards all run with AMD's high-bandwidth memory technology, which gives them comparable power to competing cards with more memory. For example, a HBM card with 4GB of VRAM would be roughly equivalent to a card running 6GB without the tech.


----------



## harney




----------



## Ganf

Quote:


> Originally Posted by *epic1337*
> 
> what would that give AMD that Nvidia couldn't match?
> they perform almost identically, so what's the difference?
> 
> i'm being serious, is the HBM hype justified or not?


HBM hype is justified for VR. It'll push the already ultra-low latency of async shaders even lower.

These cards need to hurry the hell up, along with the Vive. I've been putting off some of my favorite games waiting for this magical combination for months already.


----------



## Attomsk

Quote:


> Originally Posted by *harney*
> 
> The new cards all run with AMD's high-bandwidth memory technology, which gives them comparable power to competing cards with more memory. For example, a HBM card with 4GB of VRAM would be roughly equivalent to a card running 6GB without the tech.


That really just sounds like a load of BS. HBM provides speed, it isn't some magical memory Tardis


----------



## BigMack70

Quote:


> Originally Posted by *Attomsk*
> 
> That really just sounds like a load of BS. HBM provides speed, it isn't some magical memory Tardis


Of course it's BS. It's IGN.


----------



## gamervivek

Quote:


> Originally Posted by *Serandur*
> 
> Where did I write AMD's epitaph? I summarized their lagging development and R&D for the last few years and brought up GCN's current inferiority to Maxwell for consumer products. I then concluded with my concern that if they keep going like this, Volta will wreck them a couple years down the line and I hope Zen succeeds and gives them the financial push they need to compete. That's not an epitaph, but the company is faltering and Fiji alone isn't going to do too much.
> 
> Regardless of where Fiji ends up, there are a few things we "know". One, that the rest of the lineup are rebrands (indicating AMD simply don't have an architecture to compete with Maxwell in the consumer space); two, that it taped out at the same time as Tonga last year; and three, that AMD's R&D budget is at a relative low and Nvidia's at a record high. It's not a bad guess to say Fiji will also be GCN 1.2 as a result, which means still not on par with Maxwell (from an architectural efficiency viewpoint). It has HBM, yes, but so will Nvidia next year. If AMD's goal, as my fears suggest (note: "fears", not "facts"), is simply to shrink GCN with some minor tweaks, then even an untouched Maxwell 2.0 on 16nm with HBM will still win, to say nothing of the question of when, or whether, Nvidia will pull off another efficiency jump like Maxwell itself was. AMD are demonstrating little interest in doing a similar architectural re-imagining anytime soon themselves.
> 
> Hawaii is a fine chip, I didn't say otherwise. But it was late, a year later than the first GK110 products. And more specifically, Tahiti was tiny and underwhelming. I give Nvidia plenty of flak for sending GK104 into the battlefield under the banner of "GTX 680" because of price-doubling, but so too does it seem AMD tried the same thing with Tahiti. They didn't demonstrate they were willing to take a risk with a large die; they demonstrated (as is now clear comparing Tahiti, Hawaii, and Fiji with minuscule architectural developments the whole way) that they were planning to space out progressively larger GCN dies over several years (Fiji probably planned to be on 20nm though; Hawaii not being very large) rather than go all-out. They needed bigger dies, otherwise Nvidia would slaughter them, and I'd guess it's simply because that's cheaper than recreating the architecture.
> 
> *Edit: I get a bit long-winded sometimes, but the main point I want to get across is architectural development is an iterative process that builds upon what came before and falling behind makes future prospects progressively bleaker. Also, that what we as consumers get isn't necessarily indicative of what's possible. I believe Nvidia are holding back and that GK110's release date, price-doubling, and GM200's 2014 manufacturing dates (as per examined die shots) are evidence of that. I would say that AMD have therefore already failed to compete and are going to have more trouble once the playing field is leveled with HBM as a standard. That's not to say Tahiti, Hawaii, and Fiji aren't/weren't fine products but I think they were either too expensive, too late, or too late/limited respectively to really put Nvidia under any pressure.*


You are long-winded, but you're also wrong. GCN is quite good and would require just a few tweaks for better efficiency, rather than a Kepler-to-Maxwell-style transition. Hopefully your views have changed a bit after the Fury unveiling.


----------



## Serandur

Quote:


> Originally Posted by *gamervivek*
> 
> You are long winded but you're also wrong. And GCN is quite good and would require just a few tweaks rather than the Kepler to Maxwell transition for better efficiency. Hopefully your views have changed a bit after the fury unveiling.


I'm skeptical about the claim because of R&D funding and the 300 series being rebrands, but hoping to be surprised by some progress with Fiji and Arctic Islands. It's been quite a while since ATi/AMD sat out most of a GPU generation with only one new chip (not since the 2900XT). Hopefully they rebound like they did with the 4000 series.

My views after the Fury presentation are that I'm very intrigued by what changes Fiji brings, but they didn't give out those numbers/info yet. Next week it is.


----------



## TheBlindDeafMute

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Well, obviously it wouldn't be considered a feature if it performed the same as GDDR5. They also wouldn't have bothered developing it for the past 3 years if it performed the same as GDDR5...


Yes, but it's worth asking: is memory speed the largest bottleneck when searching for more fps? If this is being marketed as a gaming card, that will be all that matters. If HBM only lets it keep up with, or slightly exceed, current cards, then I would say it is not worth it. And what if there are issues, since it's brand-new tech, given AMD's track record of slow driver releases? ...idk. I'll just wait for the reviews. It's all talk until then.


----------



## gamervivek

Tahiti was a big chip for AMD, though I do think they missed a beat by not putting 48 ROPs on it, and of course the driver improvements for their new architecture (especially in BF3) came a bit late. It also has better DirectX features despite being out earlier than Kepler. The problem for AMD has been that Nvidia's top chips can match their smaller dies' clockspeeds, and now outstrip them rather easily. The FLOPs-versus-real-game-performance discrepancy hasn't changed from the pre-GCN days, nor has the small number of games that take advantage of AMD's architecture. Having more benchmarks like this one would help AMD far more in the overall picture.

http://www.techspot.com/articles-info/977/bench/Hitman.png

Not that I'm not disappointed by how late this launch has been, the lack of an 8GB version, and that Fury seems to be only a smidgen faster than Titan X, if that. But I do expect AMD to get to 14/16nm faster than Nvidia, as has been their wont. How much of a difference that would make is of course guesswork.
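That FLOPs-versus-game-performance gap is easy to put numbers on with peak single-precision throughput (a rough sketch using commonly reported shader counts and approximate reference clocks):

```python
# Peak FP32 throughput in TFLOPs: shaders * 2 ops per clock (FMA) * clock (MHz)
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(round(peak_tflops(4096, 1050), 1))  # Fiji / Fury X: ~8.6
print(round(peak_tflops(3072, 1000), 1))  # GM200 / Titan X (base): ~6.1
print(round(peak_tflops(2816, 1000), 1))  # 980 Ti (base): ~5.6
```

On paper Fiji has roughly a 40% throughput lead over the Titan X; that it only translates into "a smidgen faster" in games is exactly the discrepancy described above.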


----------



## Casey Ryback

Quote:


> Originally Posted by *Serandur*
> 
> but as a GPU fan, I really wish they finally had enough R&D to stick it to Nvidia hard.


I think they are right now, ok so they have a lot of rebrands, but the new tech is young, but it's here.

http://www.overclock3d.net/articles/gpu_displays/amd_fury_nano_announced/1

The next series is going to be interesting, and I'm predicting AMD will be close, if not at the top of the GPU game.

I'm not saying they'll have 50%+ of the market share though. But from a product/performance perspective.

It will be a good day for consumers when both teams are pumping out beastly 8GB+ HBM2 cards and competing hard with prices. Probably perfect timing for 4K to become a bit more mainstream.


----------



## pengs

Quote:


> Originally Posted by *epic1337*
> 
> maybe, an oversized cooler works as well, the protruding part of the HSF gets free-flow directly exhausting hot air on the opposite side of the PCB.
> still, 290X had issues with cooling itself, imagine a GPU with 50% more SP than that, the heat would be unbearable for most aftermarket air coolers.


The reference cooler had the cooling issues; the AIB partners took care of it. That was AMD trying to knock the wind out of the OG Titan's sails, tweaking the clocks/voltage to run it on a cooler which hadn't been designed to handle a part with a 290W TDP. With a slightly increased fan profile I can keep my 290X <70c all day long at 45% fan speed at a stock 1.12v, and <75c with +100mV (1.22v) and fans at 50%, and if you look at the Vapor-X or the Lightning it's even easier to cool. The hot-and-loud stigma originates from the reference cooler.
Quote:


> thats why Nvidia had been adamant in making their GPUs more efficient.
> if you've observed GTX980, even though they can consume upwards of 300W, the reference card stayed below 200W on averaged power consumption.
> peak power (spikes) consumption of 980 cards can reach as high as 350W when overclocked, average around 280W.
> 
> and on a side note, GTX980Ti has good efficiency.
> http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-ti,4164-7.html
> TDP 250W, stock clock, stress test average power consumption 254W, peak (spike) power consumption 297W.
> 
> compared to 290X, which is incredibly scary.
> http://www.tomshardware.com/reviews/r9-290x-lightning-performance-review,3782-6.html
> TDP 290W, stock clock, stress test average power consumption 259W, peak (spike) power consumption 425W!!


That's the MSI Lightning; it's a non-reference card with beefed-up power delivery that runs at a higher voltage out of the box, and most likely a higher power target also.



This may not capture spikes, but you can see the overall delta from a Furmark run.
Quote:


> Idle power consumption of the MSI R9 290X Lightning is a bit increased over the AMD reference design. It seems as though the additional fans consume quite a lot of power, and it doesn't help that the GPU runs at 1.02 V instead of the ~0.96 V of the reference design in idle.
> 
> Normal gaming power consumption is only slightly increased, which is a good thing, but the MSI R9 290X Lightning is, even so, still a bit less power efficient than the AMD reference board, although only by a small margin.


----------



## Ganf

Quote:


> Originally Posted by *epic1337*
> 
> maybe, an oversized cooler works as well, the protruding part of the HSF gets free-flow directly exhausting hot air on the opposite side of the PCB.
> still, 290X had issues with cooling itself, imagine a GPU with 50% more SP than that, the heat would be unbearable for most aftermarket air coolers.
> 
> thats why Nvidia had been adamant in making their GPUs more efficient.
> if you've observed GTX980, even though they can consume upwards of 300W, the reference card stayed below 200W on averaged power consumption.
> peak power (spikes) consumption of 980 cards can reach as high as 350W when overclocked, average around 280W.
> 
> and on a side note, GTX980Ti has good efficiency.
> http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-ti,4164-7.html
> TDP 250W, stock clock, stress test average power consumption 254W, peak (spike) power consumption 297W.
> 
> compared to 290X, which is incredibly scary.
> http://www.tomshardware.com/reviews/r9-290x-lightning-performance-review,3782-6.html
> TDP 290W, stock clock, stress test average power consumption 259W, peak (spike) power consumption 425W!!


Did you really just use a Lightning in an efficiency comparison? That's like pitting a dragster against a Ford Focus in a mileage comparison.









There's a reason that card has 8+8+6 pins, and it isn't to win any eco-friendly awards. It's nothing but raw performance with no compromises.


----------



## Woundingchaney

Quote:


> Originally Posted by *Casey Ryback*
> 
> I think they are right now, ok so they have a lot of rebrands, but the new tech is young, but it's here.
> 
> http://www.overclock3d.net/articles/gpu_displays/amd_fury_nano_announced/1
> 
> The next series is going to be interesting, and I'm predicting AMD will be close, if not at the top of the GPU game.
> 
> I'm not saying they'll have 50%+ of the market share though. But from a product/performance perspective.
> 
> It will be a good day for consumers when both teams are pumping out beastly 8GB+ HBM2 cards and competing hard with prices. Probably perfect timing for 4K to become a bit more mainstream.


I think this is the aspect that many aren't looking at currently. Realistically, Fury should be very capable of competing with the Titan X and the 980 Ti. The new Fury isn't going to break performance barriers against Nvidia's current high-end offerings, but the next generation coming is going to be an excellent striking point for AMD. They will have already utilized the new memory architecture, have existing business arrangements with suppliers, and have laid down a working design that they can refine based upon performance metrics.

If AMD can focus more on their Xfire driver team I would be more than happy to move back to the Red team for 2 of their higher end cards.


----------



## criminal

Quote:


> Originally Posted by *Ganf*
> 
> Did you really just use a Lightning in an efficiency comparison? That's like pitting a dragster against a Ford Focus in a mileage comparison.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> There's a reason that card has 8+8+6 pins, and it isn't to win any eco-friendly awards. It's nothing but raw performance with no compromises.


Do you think there will be a Lightning version of the Fury X? Seems like AMD is going to take the "Titan" approach with the Fury X and have it only available in vanilla form. (Which is really great in that form, IMHO.) To me it seems we might only see a Lightning based on the regular Fury model. Interesting things to come.


----------



## Noobism

Quote:


> Originally Posted by *criminal*
> 
> Do you think there will be a Lightning version of the Fury X? Seems like AMD is going to take the "Titan" approach with the Fury X and have it only available in vanilla form. (Which is really great in that form, IMHO.) To me it seems we might only see a Lightning based on the regular Fury model. Interesting things to come.


Yeah, I like the look of them personally; they don't look cheap compared to past cards. Glad that they took a different approach with the styling, rather than play it safe.


----------



## Ganf

Quote:


> Originally Posted by *criminal*
> 
> Do you think there will be a Lightning version of the Fury X? Seems like AMD is going to take the "Titan" approach with the Fury X and have it only available in vanilla form. (Which is really great in that form, IMHO.) To me it seems we might only see a Lightning based on the regular Fury model. Interesting things to come.


Harassing them on Twitter. If they don't make a Fury Lightning they will have earned a mortal enemy they cannot cow.


----------



## raghu78

Quote:


> Originally Posted by *criminal*
> 
> Do you think there will be a Lightning version of the Fury X? Seems like AMD is going to take the "Titan" approach with the Fury X and have it only available in vanilla form. (Which is really great in that form, IMHO.) To me it seems we might only see a Lightning based on the regular Fury model. Interesting things to come.


Yeah, I think partners will only be allowed to come out with air-cooled versions of Fury. Fury X will remain a halo product and AMD will try to make some decent margins on that product. It all boils down to performance and OC headroom. If the AMD Fury X can beat the EVGA GTX 980 Ti Hybrid, Classified, and other such beefed-up designs when both are overclocked to the max, then AMD will be able to say they have the fastest GPU in the world. The fact that AMD is selling such a product with a warranty at USD 649 will be icing on the cake. June 24th cannot get here soon enough.


----------



## criminal

Quote:


> Originally Posted by *raghu78*
> 
> Yeah, I think partners will only be allowed to come out with air-cooled versions of Fury. Fury X will remain a halo product and AMD will try to make some decent margins on that product. It all boils down to performance and OC headroom. If the AMD Fury X can beat the EVGA GTX 980 Ti Hybrid when both are overclocked to the max, then AMD will be able to say they have the fastest GPU in the world. The fact that AMD is selling such a product with a warranty at USD 649 will be icing on the cake. June 24th cannot get here soon enough.


No doubt. I really need to sell this 980 ASAP, but I don't have a spare card to hold me over until Fury is in my hands. Can't wait.


----------



## RSharpe

I've not had an ATI/AMD GPU since the 5970. Fury is piquing my interest. I'm not interested in an AIO water cooling solution; I'd rather run blocks in a custom loop. What would be the best solution? Buy a Fury X and rip off the AIO and replace it with a custom block? Do AMD's partners ever offer cards with waterblocks the way EVGA does?


----------



## Majin SSJ Eric

I would be shocked if EK doesn't make a block for Fury X (unless AMD has disallowed that for some reason).


----------



## RSharpe

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I would be shocked if EK doesn't make a block for Fury X (unless AMD has disallowed that for some reason).


But from what I've read, the problem is that you would still have to buy a Fury X, take off the AIO cooler, and replace it with an aftermarket block. My point is it would be nice not to have to pay for a cooler I don't want, if manufacturers sold blocks pre-installed. I don't think I've ever seen any AMD partner do this.


----------



## Ganf

Quote:


> Originally Posted by *RSharpe*
> 
> But from what I've read, the problem is that you would still have to buy a Fury X, take off the AIO cooler, and replace it with an aftermarket block. My point is it would be nice to not have to pay for a cooler that I don't want if manufacturers sell blocks pre-installed. I don't think I've ever seen any AMD partner do this.


Err... you pay the same whether you buy the card and block separately or buy one pre-installed, like EVGA's Hydro Coppers. The only things you get with a waterblock from the AIB are the warranty and not having to install it yourself.


----------



## WorldExclusive

This thread wins the Rumor Mill.

Congratulations.


----------



## Alatar

http://www.overclock.net/t/1561860/various-amd-radeon-r9-fury-x-reviews


----------

