# [Anand] Ashes of the Singularity Revisited: A Beta Look at DirectX 12 & Asynchronous Shading



## CasualCat

Interesting. On the Fury X they've gotten 1440P to within a few frames of 1080P while boosting both.


----------



## Forceman

How does the Fury have negative improvement with async at 1080p?


----------



## Catscratch

I knew I should've bought a 290 instead of a 280X :/ I completely ignored the GCN version and the infamous ACEs


----------



## czin125

He said it was CPU limited, so is he going to test with a 5960X at 4.7GHz / 3200CL12 or a 6700K at 4.7GHz / 3200CL12?


----------



## Robenger

It looks like doing Async compute on the driver isn't working out very well. Nvidia has been very good with their drivers, but it seems even they can't overcome the hardware advantage AMD has right now.


----------



## PontiacGTX

Quote:


> Originally Posted by *czin125*
> 
> He said it was cpu limited, so is he going to test with a 5960X at 4.7ghz / 3200CL12 or a 6700K at 4.7ghz / 3200CL12 ?


the test was done with an
Intel Core i7-4960X @ 4.2GHz

but some benchmarks show that the game couldn't scale better with more than 4 cores


----------



## specopsFI

I must say that so far I haven't really bought into the claims that Maxwell V2 can't do async _at all_. I really thought Nvidia had some sort of a plan to get it to produce at least a little bit of a gain. But seriously I'm not buying into that "it's just disabled in the driver" thing anymore. This async disadvantage talk has been going on for so long now and the first actual async titles are so close to publishing that I'm sure they would have enabled it with this Ashes beta if it was to be. So yea, I'm beginning to lean towards Maxwell not liking the DX12 era so much. Maybe there's even some truth in the talk about Nvidia trying to delay certain DX12 implementations, but I still think they do have async figured out with Pascal, more or less.


----------



## Forceman

Quote:


> Originally Posted by *specopsFI*
> 
> I must say that so far I haven't really bought into the claims that Maxwell V2 can't do async _at all_. I really thought Nvidia had some sort of a plan to get it to produce at least a little bit of a gain. But seriously I'm not buying into that "it's just disabled in the driver" thing anymore. This async disadvantage talk has been going on for so long now and the first actual async titles are so close to publishing that I'm sure they would have enabled it with this Ashes beta if it was to be. So yea, I'm beginning to lean towards Maxwell not liking the DX12 era so much. Maybe there's even some truth in the talk about Nvidia trying to delay certain DX12 implementations, but I still think they do have async figured out with Pascal, more or less.


But then you get a comment like this from a guy who would presumably know. Of course it could all hinge on your definition of support.
Quote:


> Originally Posted by *Kollock*
> 
> I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.


----------



## philhalo66

interesting. I wonder why AMD has such a big lead on the async bench


----------



## Forceman

Quote:


> Originally Posted by *philhalo66*
> 
> interesting. I wonder why AMD has such a big lead on the async bench


That's why I think it hinges on the meaning of support. If it means supports it by not choking and dying when it is enabled, but at the same time not providing any performance improvement, then it kind of makes sense.


----------



## Assirra

That is quite a boost in DX12 for AMD cards.
Curious how it is going to be in the rest of the games.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> But then you get a comment like this from a guy who would presumably know. Of course it could all hinge on your definition of support.


It has been made clear by members like Mahigan that Nvidia support their own made-up version of async; they cannot do the "real" DX12 async, which is graphics + compute.


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> But then you get a comment like this from a guy who would presumably know. Of course it could all hinge on your definition of support.


Then again, if you evaluate the developer's comment in light of the whole post, async compute was said to be forcibly disabled in the public build at the time.
Quote:


> Originally Posted by *Kollock*
> 
> Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.
> I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.


----------



## Glottis

Quote:


> Originally Posted by *philhalo66*
> 
> interesting. I wonder why AMD has such a big lead on the async bench


Partly it has to do with that big AMD logo on the official game website. Don't worry much about it; we'll only get the full picture of DX12 performance when there are multiple games running multiple engines benchmarked.

I'm bored of seeing this ugly and dull game paraded on all the tech websites and am ready to see more of what DX12 has to offer in other games and engines from other developers.


----------



## infranoia

Quote:


> Originally Posted by *hrockh*
> 
> For example, have you tried the game? how is it?


Along these lines, I can say that this game is actually pretty damned fun.

I settled down for a long session against the AI the other night instead of just benching it and quitting, and had an absolute blast.


----------



## keikei

Quote:


> NVIDIA sent a note over this afternoon letting us know that asynchronous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled.


----------



## Desolutional

Quote:


> Originally Posted by *keikei*


Async is not hardware based for the nVidia chips. Emulation will just result in even lower performance; unless it's pushed onto spare CPU headroom which kind of defeats the purpose of DX 12.


----------



## PontiacGTX

Quote:


> Originally Posted by *philhalo66*
> 
> interesting. I wonder why AMD has such a big lead on the async bench


better architecture
http://www.overclock.net/t/1572716/directx-12-asynchronous-compute-an-exercise-in-crowd-sourcing
http://www.overclock.net/t/1590939/wccf-hitman-to-feature-best-implementation-of-dx12-async-compute-yet-says-amd/770#post_24929199


----------



## Assirra

Quote:


> Originally Posted by *PontiacGTX*
> 
> better architecture


Well their "better architecture" made them lose hard in the past tough. They were so focused on the future they forgot the present.
Lets hope they kick some ass so it more balanced and as such, better for the consumers.

edit: and this thread is already stained, time to make my leave.


----------



## keikei

Quote:


> Originally Posted by *Desolutional*
> 
> Async is not hardware based for the nVidia chips. Emulation will just result in even lower performance; unless it's pushed onto spare CPU headroom which kind of defeats the purpose of DX 12.


Rather than driver updates, you think nvidia will introduce asynchronous shading with pascal?


----------



## PontiacGTX

Quote:


> Originally Posted by *Assirra*
> 
> Well their "better architecture" made them lose hard in the past tough. They were so focused on the future they forgot the present.
> Lets hope they kick some ass so it more balanced and as such, better for the consumers.


that present was running on low resolutions?


----------



## Desolutional

Quote:


> Originally Posted by *keikei*
> 
> Rather than driver updates, you think nvidia will introduce asynchronous shading with pascal?


Perhaps. They're getting it handed to them at the moment, with the stock Fury X destroying OCed 980 Tis. Of course, I *hate* Ashes of the Singularity - but I also *hate* RTS lol. I'm looking forward to seeing "action" gameplay performance with titles such as the new Hitman (yuck) or Deus Ex (yum).

The important thing is not to worry about resale value for green teamers: *DX11 or lower dominates the modern AAA market by 99%+. It will be a year or two before DX12 starts becoming mainstream, and even more years before it becomes the de facto standard.*


----------



## Assirra

Quote:


> Originally Posted by *PontiacGTX*
> 
> that present was running on low resolutions?


Even so, 80% vs 20% is not a nice battle.
Like I said, I WANT AMD to kick some butt so we, the consumers, win.


----------



## keikei

Quote:


> Originally Posted by *Desolutional*
> 
> Perhaps. They're getting it handed to them at the moment, with the stock Fury X destroying OCed 980 Tis. Of course, I *hate* Ashes of the Singularity - but I also *hate* RTS lol. I'm looking forward to seeing "action" gameplay performance with titles such as the new Hitman (yuck) or Deus Ex (yum).


DICE was talking about dx12 right before the fury x launch so BF5 may just have asynchronous shading. I really hope so.


----------



## Desolutional

Quote:


> Originally Posted by *keikei*
> 
> DICE was talking about dx12 right before the fury x launch so BF5 may just have asynchronous shading. I really hope so.


I know the new NFS definitely doesn't have it, I do have hopes for BF5, DICE are alright when it comes to gamer appreciation. (totally avoiding Hardline)


----------



## GorillaSceptre

Quote:


> Originally Posted by *Assirra*
> 
> Well their "better architecture" made them lose hard in the past tough. They were so focused on the future they forgot the present.
> Lets hope they kick some ass so it more balanced and as such, better for the consumers.
> 
> edit: and this thread is already stained, time to make my leave.


Quote:


> Originally Posted by *Assirra*
> 
> Even so, 80% vs 20% is not a nice battle.
> Like I said, I WANT AMD to kick some butt so we, the consumers, win.


Yup, people always say that until AMD actually starts "winning", then they start to focus on the past. Kind of like what you just did.


----------



## Assirra

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Yup, people always say that until AMD actually starts "winning", then they start to focus on the past. Kind of like what you just did.


Sigh, I knew I should have left the thread.
Didn't take half a page for someone to call me out as a liar.


----------



## dragneel

I might be misunderstanding but in the first graph the 980 ti dx 11(1080p) FPS is very close to the FuryX DX12 (1080p) FPS. Is that not impressive?


----------



## GorillaSceptre

Quote:


> Originally Posted by *Assirra*
> 
> Sigh, I knew I should have left the thread.
> Didn't take half a page for someone to call me out as a liar.


I didn't mean to call you a liar. It was meant to be light hearted.









Quote:


> Originally Posted by *dragneel*
> 
> I might be misunderstanding but in the first graph the 980 ti dx 11(1080p) FPS is very close to the FuryX DX12 (1080p) FPS. Is that not impressive?


All this is showing is that AMD has a big disadvantage under DX11; Nvidia do not. What that means is things are going to be a lot closer under DX12, and that's without the whole async debacle. Async is merely the cherry on top for AMD.


----------



## coelacanth

I would have liked to see a video for the Fury X in addition to the 980 Ti video posted in order to see if the effects are substantially the same.


----------



## PontiacGTX

On CB (ComputerBase), they show that some Maxwell/Kepler GPUs see negative scaling with async shaders/compute



Quote:


> The more draw calls, the greater the advantage
> 
> Taken together, the results show that asynchronous compute brings a somewhat bigger boost with many draw calls than with few. In Ultra HD the AMD flagship gains seven percent with few draw calls and 15 percent with many; the Radeon R9 390 behaves similarly. For Nvidia graphics cards, however, no clear pattern is discernible.
> 
> Asked by ComputerBase why async compute is used on Nvidia graphics cards despite a slightly negative effect, Oxide replied that they currently don't want to use any proprietary render paths - only one that applies to all hardware. However, with release less than a month away, that decision is not yet set in stone, so they could still give each player the fastest rendering method available for their hardware.


----------



## dragneel

Quote:


> Originally Posted by *mtcn77*
> 
> Have fun with 1080p.


Really? I'm not here to argue with anyone, just wanted to say that seemed pretty neat to me. I mean, I am only using a 280(non-X) with a 1080p screen and I still manage to have fun but meh. These numbers here are ultimately meaningless to me until I upgrade to polaris/pascal and get a 1440p monitor.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> All this is showing is that AMD has a big disadvantage under DX11, Nvidia do not. What that means is things are going to be a lot closer under DX12, that's without the whole Async debacle. Async is merely the cherry on top for AMD.


Oh cool. Thanks for clearing that up.


----------



## keikei

Quote:


> Originally Posted by *dragneel*
> 
> I might be misunderstanding but in the first graph the 980 ti dx 11(1080p) FPS is very close to the FuryX DX12 (1080p) FPS. Is that not impressive?


Well, the main point of the graph is to compare performance across DX versions. The Fury X gains 1/3 more frames, a 33% improvement over DX11 @ 1080p. That is huge.
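For anyone double-checking that math: the relative gain is just (DX12 - DX11) / DX11. A quick sketch with hypothetical numbers (45 and 60 fps are made up for illustration, not the review's figures):

```python
# Quick sanity check of the relative-gain arithmetic, using
# hypothetical FPS numbers (not the article's exact figures).
def relative_gain(dx11_fps, dx12_fps):
    """Fractional improvement of DX12 over the DX11 baseline."""
    return (dx12_fps - dx11_fps) / dx11_fps

# e.g. 45 fps under DX11 rising to 60 fps under DX12 is 1/3 more frames:
print(f"{relative_gain(45, 60):.0%}")  # prints "33%"
```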


----------



## zealord

Quote:


> Originally Posted by *PontiacGTX*
> 
> on CB
> 
> they show that some Maxwell/Kepler GPUs have negative performance with async shaders/compute


CB performance graphs are perfect. I love them!

Also, there is a new Radeon driver with improved performance for Ashes of the Singularity


----------



## sages

I vote for everyone uniting in team orange


----------



## infranoia

Quote:


> Originally Posted by *sages*
> 
> I vote for everyone uniting in team orange


Uhh... Is that BitBoys GPU finally coming out then?


----------



## dragneel

Quote:


> Originally Posted by *sages*
> 
> I vote for everyone uniting in team orange


I say we join team Sweden and become completely neutral instead


----------



## KyadCK

Quote:


> Originally Posted by *Forceman*
> 
> Quote:
> 
> 
> 
> Originally Posted by *specopsFI*
> 
> I must say that so far I haven't really bought into the claims that Maxwell V2 can't do async _at all_. I really thought Nvidia had some sort of a plan to get it to produce at least a little bit of a gain. But seriously I'm not buying into that "it's just disabled in the driver" thing anymore. This async disadvantage talk has been going on for so long now and the first actual async titles are so close to publishing that I'm sure they would have enabled it with this Ashes beta if it was to be. So yea, I'm beginning to lean towards Maxwell not liking the DX12 era so much. Maybe there's even some truth in the talk about Nvidia trying to delay certain DX12 implementations, but I still think they do have async figured out with Pascal, more or less.
> 
> 
> 
> But then you get a comment like this from a guy who would presumably know. Of course it could all hinge on your definition of support.
> Quote:
> 
> 
> 
> Originally Posted by *Kollock*
> 
> I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.
> 

"Support" means "Accepts the command and will process it" rather than denying it. The game will not crash because you force enable ASync. It just wont run better.
Quote:


> Originally Posted by *philhalo66*
> 
> interesting. I wonder why AMD has such a big lead on the async bench


Because they have actual hardware to support it.

EDIT: Did not mean to post yet; I hadn't even read the whole thread and was still "building" my post, but OCN forced my hand.







I stand by it.


----------



## NightAntilli

Quote:


> Originally Posted by *Forceman*
> 
> How does the Fury have negative improvement with async at 1080p?


CPU limitation.
Quote:


> Originally Posted by *Forceman*
> 
> But then you get a comment like this from a guy who would presumably know. Of course it could all hinge on your definition of support.


nVidia cards can do async on the compute queue. They can't do async on compute AND graphics at the same time. We call that concurrent async. All GCN cards can do concurrent async. None of nVidia's cards can, and probably neither will Pascal.
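A toy way to see why that concurrency matters (an illustrative model with made-up millisecond numbers, not real GPU code): if the graphics and compute queues genuinely overlap, frame time is bounded by the longer of the two workloads instead of their sum.

```python
# Illustrative model only: the workload numbers are hypothetical.

def frame_time_serialized(graphics_ms, compute_ms):
    # Hardware that accepts compute commands but runs them after the
    # graphics work pays for both, back to back.
    return graphics_ms + compute_ms

def frame_time_concurrent(graphics_ms, compute_ms):
    # Hardware that truly runs both queues at once (e.g. GCN's ACEs
    # feeding otherwise-idle shaders) hides the shorter workload
    # inside the longer one.
    return max(graphics_ms, compute_ms)

graphics, compute = 12.0, 4.0  # per-frame work, in milliseconds
print(frame_time_serialized(graphics, compute))  # 16.0 ms per frame
print(frame_time_concurrent(graphics, compute))  # 12.0 ms per frame
```

The gap between the two numbers is roughly the "free" performance people report from async on GCN; on hardware that serializes the queues, the gap is zero or negative.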


----------



## Omega X

Quote:


> Originally Posted by *dragneel*
> 
> Really? I'm not here to argue with anyone, just wanted to say that seemed pretty neat to me. I mean, I am only using a 280(non-X) with a 1080p screen and I still manage to have fun but meh.


I'm with you on that. I have no intention to upgrade to 4K at the moment. It's still early days as far as I'm concerned. 1080p is good enough for right now and the near future.


----------



## Desolutional

Quote:


> Originally Posted by *Omega X*
> 
> I'm with you on that. I have no intention to upgrade to 4K at the moment. Its still early days as far as I'm concerned. 1080p is good enough for right now and the near future.


Hmm... well... mmm...


----------



## CrazyElf

From a technical standpoint, the most interesting thing is the gain at 4K - AMD's advantage increases at 4K rather than decreasing.

As the AnandTech article hints, this may be because of unused shader capacity. It's a sign that the Fury X is being bottlenecked somewhere - probably the geometry front-end. It looks like AMD has work to do on Polaris if they want this addressed.

Quote:


> Originally Posted by *philhalo66*
> 
> interesting. I wonder why AMD has such a big lead on the async bench


So far, the evidence heavily suggests that Mahigan's hypothesis is correct: Nvidia GPUs are not capable of simultaneous async compute and async graphics. By contrast, the front ends of AMD GPUs, particularly Hawaii and Fiji, appear capable of doing so (due to the ACEs), which is why they see gains in DX12 while Nvidia GPUs don't. Meanwhile, Nvidia's better DX11 drivers do lead to better performance there.

Nobody has been able to provide a competing hypothesis, nor the data to back it up. Mahigan may very well be right that Pascal too will not support Async. It may very well have to wait for Volta.

This quote is interesting:
Quote:


> Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchronous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled.


Why wouldn't they enable it? My gut feeling is that this is a face-saving attempt and that they cannot for technical reasons. Hint: it will probably be never, because it is not a driver issue - it is a hardware issue.

The real question is, will they have cards out that can do it (Volta?) before DX12 becomes more commonplace? I suspect the answer might be yes, but there have been several high profile announcements of games with DX12. Strictly from a consumer standpoint, I am hoping the answer is no and that the current 80-20 marketshare becomes 50-50. A monopoly is a loss no matter how you slice it.

Quote:


> Originally Posted by *PontiacGTX*
> 
> the test was done with an
> Intel Core i7-4960X @ 4.2GHz
> 
> but some benchmarks show that the game couldn't scale better with more than 4 cores


In that case, the ideal gaming platform would be something like a 4790K or a 6700K. Haswell-E would not be good in that regard because its six cores are around 200 MHz or so slower. We need this test repeated with a 4.9 GHz 4790K or at least a 4.6 GHz or so 6700K. Ideal would be a highly binned 6700K (say 4.8 GHz) with fast RAM (>3200 MHz) to eliminate all CPU bottlenecks.

Quote:


> Originally Posted by *Forceman*
> 
> How does the Fury have negative improvement with async at 1080p?


Looks like it is a CPU bottleneck to me. See my response to Pontiac.

Edit:
A second hypothesis: It may be some inherent bottleneck within the Fury X GPU.

I will note that right up to 4k, the Fury X makes pretty impressive relative gains to the Nvidia GPUs.









I think that something is preventing the shaders from reaching their full potential; perhaps the poor geometry processors or the cache, as Mahigan once suggested.

For a similar reason, the Fury X doesn't see 45% more performance, despite having 45% more shaders.

The only way to tell would be to repeat this whole experiment with a fast enough CPU that won't bottleneck.
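One way to picture that bottleneck argument (a toy model with made-up numbers, not measured data): delivered FPS is roughly min(CPU rate, GPU rate), so a GPU-side async gain only surfaces once the GPU, not the CPU, is the limiting side.

```python
# Toy bottleneck model: delivered FPS is capped by whichever side is
# slower. All numbers below are hypothetical.

def delivered_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

cpu_cap = 70.0  # frames per second the CPU can feed (hypothetical)

# 1080p: the GPU is fast enough that the CPU is the limit,
# so a +10% async boost on the GPU side changes nothing.
gpu_1080, gpu_1080_async = 90.0, 99.0
print(delivered_fps(cpu_cap, gpu_1080), delivered_fps(cpu_cap, gpu_1080_async))

# 1440p: the GPU is the limit, so the same +10% boost shows up fully.
gpu_1440, gpu_1440_async = 55.0, 60.5
print(delivered_fps(cpu_cap, gpu_1440), delivered_fps(cpu_cap, gpu_1440_async))
```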


----------



## Forceman

Quote:


> Originally Posted by *CrazyElf*
> 
> Looks like it is a CPU bottleneck to me. See my response to Pontiac.


Not seeing how a CPU bottleneck explains a 10% gain with async compute at 1440p and a -2% gain at 1080p, especially when the 290X sees a 9% gain at both.


----------



## f1LL

Although done with only 2 different GPUs, to me this is the more interesting news/benchmark:


----------



## kot0005

Hmm, might buy a Polaris instead of Pascal. My 980 Ti is looking horrible and I'm on 1440p.


----------



## Shogon

Quote:


> Originally Posted by *hrockh*
> 
> For example, have you tried the game? how is it?
> What implication will DX12 have on hardware / software? Advantages / Disadvantages of this tech.


The game is pretty fun if you ask me. It's certainly one of the few games I've bought and actually enjoyed playing in the past 6 months. So far though I'm having a tough time getting past the 'Tough' AI setting haha. The AI actually is pretty good as well, and performance is wonderful and looks great on my PG278Q and my Acer Predator 27" with G-SYNC. As far as DX12 goes I'm not seeing much CPU usage at 1440p. Usually hovers around 50% or below on my 4790k @ 4.6, but the GPU gets a workout.

Here's my benchmark on 1440p @ Crazy settings. 4790k @ 4.6, Titan X @ 1439 MHz / 7700 MHz


----------



## PostalTwinkie

Quote:


> Originally Posted by *CrazyElf*
> 
> This quote is interesting:
> Why wouldn't they enable it? My gut feel is that this is face-saving attempt and that they cannot for technical reasons. Hint: It probably will be never because it is not a driver issue - it is a hardware issue.


Literally pure speculation. Why wouldn't they release it?

One reason, other than it just not being ready (have you written a driver for this before?), comes down to time and money. It's that simple.

AoS is one title, in early beta, featuring one aspect of DX 12. Nvidia isn't going to sink developer and engineer time into pushing out something ahead, just to quell a rabid fan-base around one small title. It, from a business perspective, is a Hell of a lot better utilization of resources to do it in a way that fits with the rest of the development schedule. A person would be insane to think Nvidia isn't, and hasn't, actively been working on DX 12 drivers for awhile now. There is no reason to break from that progress to address this one standing issue.

To say they can't do it at all because it isn't out yet is just short-sighted in the extreme.

EDIT:

The other consideration is how prevalent is ASC going to be in DX 12 titles across the board? I don't think anyone has the answer to that yet.

EDIT 2:

As stated by another, I am ready to see DX12 in other titles and what they are doing. At this point Oxide is just getting massive free marketing for their game from outlets beating a dead horse. This issue has been hashed and re-hashed so many times now.


----------



## mtcn77

For reference, the R9 390 via DirectX 12 is faster than the Fury X via DirectX 11, while the Nano is next to the GTX 980 in DirectX 11 but ahead of the GTX 980 Ti *by 10%* in DirectX 12. Makes you wonder...


----------



## Forceman

Yeah, makes you wonder why AMD's DX11 drivers are such a disaster, and what the competitive landscape would have looked like if they had bothered to try to optimize them better a year or so ago.


----------



## GorillaSceptre

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Literally pure speculation. Why wouldn't they release it?
> 
> One reason, other than it just not being ready (have you written a driver for this before?), comes down to time and money, that simple.
> 
> AoS is one title, in early beta, featuring one aspect of DX 12. Nvidia isn't going to sink developer and engineer time into pushing out something ahead, just to quell a rabid fan-base around one small title. It, from a business perspective, is a Hell of a lot better utilization of resources to do it in a way that fits with the rest of the development schedule. A person would be insane to think Nvidia isn't, and hasn't, actively been working on DX 12 drivers for awhile now. There is no reason to break from that progress to address this one standing issue.
> 
> To say they can't do it at all, because it isn't out yet, that is just short sighted to the extreme.
> 
> EDIT:
> 
> The other consideration is how prevalent is ASC going to be in DX 12 titles across the board? I don't think anyone has the answer to that yet.
> 
> EDIT 2:
> 
> As stated by another; I am ready to see DX 12 in other titles and what they are doing. At this point Oxide is just getting massive free marketing for their game by outlets beating a dead horse. This issue has been hashed and re-hashed so many times now.


Besides all the very in-depth information supplied by very knowledgeable people on multiple forums, including B3D where they actually tested it themselves, what about Nvidia's own papers and their "do's and don'ts"? Or how about the fact that AMD flat out said they were the only ones who had the hardware capable of async, and Nvidia never corrected them?

Nvidia are always faster than AMD when it comes to software/drivers, for betas too. People like yourself remind us of that all the time, but on this occasion it's "why would they, beta, early." I'm not buying it; I think there's nothing for them to "fix". They simply can't do graphics + compute.

The writing is on the wall at this point. What other evidence would be needed to convince some of you? It seems like you won't be satisfied until Nvidia themselves turn it into a marketing campaign..

This game is also hardly using async anyway. If this is making the news rounds, I can't imagine what it's going to be like when a game comes out that heavily uses it..

All DX12 is doing is nullifying Nvidia's DX11 advantage and putting them on a more even playing field.


----------



## NightAntilli

Quote:


> Originally Posted by *Forceman*
> 
> Yeah, makes you wonder why AMD's DX11 drivers are such a disaster, and what the competitive landscape would have looked like if they had bothered to try to optimize them better a year or so ago.


Drivers alone could never have fixed the issue under DX11.


----------



## Forceman

Quote:


> Originally Posted by *NightAntilli*
> 
> Drivers alone could never have fixed the issue under DX11.


It certainly could have helped. Look at the gains Nvidia got in DX11 when they put the effort in and released their 337.50 drivers.


----------



## Remij

Oh, I remember the days when this benchmark came out and AMD was leading, and it was the death of Nvidia... then things changed and Nvidia was leading AMD... and now AMD is back leading Nvidia... I wonder if Nvidia will come back and lead again...









This game/benchmark has been all over the place and none of these results mean ****. Weren't we supposed to get multi GPU support by beta 1?

Nope, nothing for us. Yet twice now these tech sites have had access to builds that support multi-gpus. Where's our update enabling this so we can really test things?


----------



## escksu

Quote:


> Originally Posted by *Forceman*
> 
> It certainly could have helped. Look at the gains Nvidia got in DX11 when they put the effort in and released their 337.50 drivers.


It's not just drivers, it's hardware implementation as well. Drivers are not some form of silver bullet for the problem.

GCN is clearly designed with DX12 in mind; they had to sacrifice DX11 in the process. You can't design an architecture that runs well in everything. You need to balance power consumption, performance, cost, die size, etc.

The same then goes for Nvidia. I am sure they would want to have a hardware implementation for async compute and beat AMD in all DX12 benchmarks. But perhaps it would make the die too big and power consumption too high.


----------



## Ha-Nocri

Quote:


> Originally Posted by *NightAntilli*
> 
> Drivers alone could never have fixed the issue under DX11.


This. On a hardware level, AMD can't do much more for single-threaded performance. Polaris will change that.


----------



## escksu

Btw, there is the game itself as well. A GPU is not like a CPU.
Quote:


> Originally Posted by *Ha-Nocri*
> 
> This. On hardware level AMD can't do much for single-threaded performance. Polaris will change that.


IMHO, it also depends on the game itself. If the game is optimised for Nvidia and not AMD, it certainly won't run well on AMD hardware.


----------



## delboy67

Quote:


> Originally Posted by *mtcn77*
> 
> For the reference, R9 390 via Directx 12 is faster than Fury X is via Directx 11 while Nano is next to GTX 980 in Directx 11 whereas it is ahead of GTX 980 Ti *by 10%* in Directx 12. Makes you wonder...


Am I reading this right? A 390x is matching a 980ti?


----------



## Forceman

Quote:


> Originally Posted by *escksu*
> 
> GCN is clearly designed with DX12 in mind they have to sacrifice DX11 in the process.


Well if that's the case, it was a decision that cost them dearly in the short term (or not so short, considering when GCN debuted).


----------



## mtcn77

Quote:


> Originally Posted by *delboy67*
> 
> Am I reading this right? A 390x is matching a 980ti?


I cannot say anything about that, but the results kind of validate my previous expectation that the 290 would match or beat the 'Fury' (and the 290X would match or beat the Fury X) if the expectations for asynchronous shaders, as per developer mentions, held true on the PC.







Quote:


> Originally Posted by *mtcn77*
> 
> Also, asynchronous shaders! Suppose the same performance gain in consoles applies to discrete gpus - you now have a Fury X instead of R9 290X when ASC is utilized. I wouldn't kid about the repercussions of a 42% upgrade.


----------



## ebduncan

Quote:


> Originally Posted by *Catscratch*
> 
> I knew I should've bought a 290 instead of 280x :/ Completely ignored GCN version and the infamous ACEs


the 280x supports Async compute


----------



## escksu

Quote:


> Originally Posted by *Forceman*
> 
> Well if that's the case, it was a decision that cost them dearly in the short term (or not so short, considering when GCN debuted).


Agreed. The GCN decision really cost AMD dearly. Async compute has existed for years, but no games ever took advantage of it. Maybe it would have been better if AMD had optimised their hardware for DX11 first and only added DX12 support with Fury.

Of course, it's easier said than done. I don't think AMD knew when DX12 would be released and what features it would support, either.


----------



## escksu

Quote:


> Originally Posted by *ebduncan*
> 
> the 280x supports Async compute


Even the 7970 supports async. Hard to imagine async already existed back in 2011 (I didn't even know what async compute was back then).

It's been sitting there for 5 long years without anyone utilising it...


----------



## escksu

Quote:


> Originally Posted by *delboy67*
> 
> Am I reading this right? A 390x is matching a 980ti?


It's possible, considering AMD hardware sees quite a big jump from DX11 to DX12. Async compute boosts the performance even further.

We'll need more games to confirm, but I am getting excited about the future.


----------



## Fyrwulf

AMD cards are the most ADD things in the world; they don't really shine until you push them. If I'm right that the Fury lineup was a half-hearted attempt to recoup costs from the development of Polaris, then Polaris is going to be a monster. I hope nVidia put on their big boy britches for this gen, because otherwise this could get ugly.


----------



## 12Cores

They need to run some tests where Windows can detect all of the memory on multi-GPU setups. How about a test with four 390Xs and a usable 32 GB of VRAM? That's what I want to see.


----------



## escksu

https://en.wikipedia.org/wiki/Graphics_Core_Next

Btw, I was reading the Wikipedia article about GCN. It looks like many features like HSA were already supported back in 2011; it's just that nobody talked about them and nobody used them.


----------



## escksu

Quote:


> Originally Posted by *Fyrwulf*
> 
> AMD cards are the most ADD things in the world. They don't really shine until you push them. If I'm right that the Fury lineup was a half-assed attempt to recover on costs related to the development of Polaris, then Polaris is going to be a monster. I hope nVidia put on their big boy britches for this gen, because otherwise this could get ugly.


I don't know much about Polaris/Pascal, but since they're already in testing, I don't think major changes can be made to the architectures. If Pascal doesn't support async compute or some other feature, I don't think Nvidia can do anything about it now.


----------



## ebduncan

Quote:


> Originally Posted by *escksu*
> 
> https://en.wikipedia.org/wiki/Graphics_Core_Next
> 
> Btw, I was reading wiki article about GCN. Looks like many features like HSA etc... already supported back in 2011, just that nobody talks about it and nobody uses it.


Well, you have to figure GCN was designed with HSA in mind via the APUs. AMD, unlike Nvidia, chose to keep their focus on compute power. The change happened with the introduction of Maxwell, where Nvidia removed compute features to focus on gaming performance and performance per watt. It mostly paid off for them, as DX11 driver overhead was in their favor and the compute features of AMD cards were not being used in games until DX12. It is nice to see the 290X/290 and the 390/390X do so well compared to the 970 and 980 these days. When the 980 was released, most games at the time favored it by 20-25%. Now the 390X will often beat the 980. Crazy....

Quote:


> Originally Posted by *escksu*
> 
> Its possible considering AMD hardware sees quite a big jump from DX11 to DX12. Async compute boost the performance even further.
> 
> Will need more games to confirm but I am getting excited about the future.


The performance boost isn't really from async compute, though; it's mainly from reduced driver overhead.


----------



## escksu

Quote:


> Originally Posted by *ebduncan*
> 
> Well you have to figure GCN was designed with HSA in mind via APU. AMD unlike Nvidia choose to keep their focus on compute power. This change happened with the induction of maxwell, were Nvidia removed compute features to focus on gaming performance and performance per watt. It paid off for them mostly as DX11 driver overhead was in their favor and the compute features of AMD were not being used in games until DX 12. It is nice to see the 290x/290 and the 390/390x do so well compared to the 970 and 980 these days. When the 980 was released most games at the time favored the 980 by like 20-25%. Now the 390x will often beat the 980, crazy....
> The performance boost isn't really from Async compute though, its driver overhead mainly.


IMHO, AMD's focus on compute power really cost them dearly. They lost so much market share to Nvidia, and their cards could not keep up. Now, with DX12, I hope things favour AMD. They badly need it.


----------



## PostalTwinkie

Quote:


> Originally Posted by *escksu*
> 
> IMHO AMD's focus on compute power really cost them dearly. They lost so much market share to Nvidia and their cards could not keep up. Now with DX12, I hope things favours AMD. They badly need it.


They badly need it? No, WE badly need it, and we need Zen to work miracles!


----------



## Unkzilla

As a 980 owner, I'm OK with the results.

It seems my card performs 20% worse than the 390X in this title. Even if this is a trend for DX12 games, Maxwell's superb overclocking and power-efficient architecture is going to cut that 20% gap to 0. My 980 runs at a 25% overclock over a reference 980 without any added voltage, and it still consumes less power (with the OC) than a 390X. And I'm still way in front in DX11 titles, which will be the majority of releases for at least another year.

Same deal with the 980 Ti, 970, etc. A lot of people are even getting +30% OCs on non-reference designs...


----------



## escksu

Quote:


> Originally Posted by *PostalTwinkie*
> 
> They badly need it? No, WE badly need it, and Zen to make miracles!


Lol..... true, I think we need it more than AMD does.


----------



## infranoia

Quote:


> Originally Posted by *escksu*
> 
> IMHO AMD's focus on compute power really cost them dearly. They lost so much market share to Nvidia and their cards could not keep up. Now with DX12, I hope things favours AMD. They badly need it.


It cost them, but dearly? I disagree. Don't forget the 70 million+ GCN units in everyone's living rooms. Without GCN's low-level compute advantage, AMD would not have snared the console contracts, which were the toehold they needed to turn Mantle into DX12 and Vulkan, and that is all coming to fruition now.

80/20 doesn't tell the whole story; that's units coming off shelves and out of shopping carts in a given quarter. Steam stats are more accurate for who is using what, when (and clearly Nvidia still has the advantage, but it's more like 66/33 for dGPU market presence, with Intel pulled out of the stats, as of January 2016).

It's the long game versus the short game. Both have advantages and disadvantages.


----------



## steadly2004

So I haven't actually played this game, but do the visuals really justify the brute force required to run at >100 fps? (I'm actually asking people who have played the game.) I've seen some YouTube videos, but they're all compressed, so is this just poor encoding? I have to believe that better-optimized games in the future will use video cards more efficiently. Regardless of the AMD vs Nvidia thing and DX12, do you guys think this justifies the hardware requirements?


----------



## infranoia

Quote:


> Originally Posted by *steadly2004*
> 
> So I haven't actually played this game. But does the visuals really represent the brute force required to be > 100fps? (I'm actually asking people that have and play the game) I've seen some you tube videos but they're all compressed. Is this just poor encoding until the next thing comes along? I gotta believe that better utilized games in the future will use the video cards more efficiently. Regardless of the the AMD vs Nvidia thing, and DX12. Do you guys think this justifies the hardware requirements?


If you watch some of the Oxide videos about the game, it's not just the visuals or the number of units; it's the detail. Each unit has individual turreted guns that all fire and rotate, with physics calculations and whatnot. I got the impression that the level of detail, given the number of units, is madness.

That impression isn't really delivered in the game itself, however, which is too bad. It just looks like a big scrum. I wish there were individual ship damage modeling; that would really show it off.


----------



## escksu

Quote:


> Originally Posted by *steadly2004*
> 
> So I haven't actually played this game. But does the visuals really represent the brute force required to be > 100fps? (I'm actually asking people that have and play the game) I've seen some you tube videos but they're all compressed. Is this just poor encoding until the next thing comes along? I gotta believe that better utilized games in the future will use the video cards more efficiently. Regardless of the the AMD vs Nvidia thing, and DX12. Do you guys think this justifies the hardware requirements?


I have never seen the game running, other than screenshots. IMHO, RTS games are among the most demanding on graphics: each unit rendered needs polygons, textures, etc.

Hence, I don't think it's poor coding.


----------



## ebduncan

Quote:


> Originally Posted by *steadly2004*
> 
> So I haven't actually played this game. But does the visuals really represent the brute force required to be > 100fps? (I'm actually asking people that have and play the game) I've seen some you tube videos but they're all compressed. Is this just poor encoding until the next thing comes along? I gotta believe that better utilized games in the future will use the video cards more efficiently. Regardless of the the AMD vs Nvidia thing, and DX12. Do you guys think this justifies the hardware requirements?


Yes.

If you play RTS games, you will notice just how good the rendering is for a game of this type. It takes some horsepower to render all the unit physics and such.


----------



## escksu

Quote:


> Originally Posted by *infranoia*
> 
> If you watch some of the Oxide videos about the game, it's not just the visuals, or the number of units-- it's the detail. Each unit has individual turreted guns that all fire and rotate, with physics calculations and whatnot. I got the impression that the level of detail given the number of units is madness.
> 
> That impression isn't really delivered in the game itself, however, which is too bad. It just looks like a big scrum. I wish there was individual ship damage modeling, that would really show it off.


Wow, with individual ship damage modeling, I think we'd need perhaps quad Fury Xs just to run 1080p.... lol.... just joking.


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> Yeah, makes you wonder why AMD's DX11 drivers are such a disaster, and what the competitive landscape would have looked like if they had bothered to try to optimize them better a year or so ago.


AMD do support multi-threaded command lists as well as deferred rendering. That's not really the issue; the issue is the command buffer size. It is tailored to match the queue size of the Command Processor's Q0, which is 64 threads wide. So basically, GCN is constantly fetching commands from the DirectX command buffer (kept in system memory), hitting thread 0 of the CPU hard.

GCN is very parallel and thus needs a lot of work items scheduled in order to reach full utilization. So if the CPU is instructed to run some complex simulation, the GCN hardware scheduler sends a fetch command, but the CPU is busy, so you get a GPU hardware stall. The GPU pretty much sits there waiting for more commands. This, of course, hits AMD's draw call rate hard under GCN and DX11.

With DX12, the DirectX driver is spread among many cores. This helps ensure that if one CPU core is busy with other work, another will be able to process the fetch commands and transfer new commands to the Command Processor.

So it is a mix of an API overhead issue and a hardware issue (the queue on the Command Processor is too small).

NVIDIA have an edge here because they've been using a static scheduler ever since Kepler (GTX 680). This means that a large segment of NVIDIA's scheduler lives in the driver (software). Their scheduler is multi-threaded (what AMD call a hidden CPU thread), so when the GPU signals for more work, another thread can process the command. On top of that, NVIDIA's GigaThread Engine can hold many more threads than AMD's Command Processor, so the hardware doesn't need to fetch commands as often.

So if CPU thread 0 is stuck doing other work, CPU thread 1 will be used by NVIDIA's scheduler, and NVIDIA's Kepler and newer architectures never skip a beat.

Evidently, if you move to DX12, NVIDIA won't see much of a performance boost over DX11, but AMD will.

The other side of this is that under DX12 it is now NVIDIA who incur the larger CPU overhead, as their scheduler is in software and takes up CPU cycles that AMD's GCN doesn't.

It's like a role reversal, though not nearly as bad as what AMD suffered under DX11 in draw-call-intensive titles.
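To make the DX11-vs-DX12 submission difference concrete, here's a toy model in Python (plain CPU threads, nothing like real driver code; names such as `record_command_list` are invented for the sketch): several threads record their own command lists independently, DX12-style, and the lists are then handed to a single queue in a well-defined order.

```python
# Toy model of DX12-style multi-threaded command list recording.
# Illustration only; this mirrors no real driver or D3D12 API.
from concurrent.futures import ThreadPoolExecutor

def record_command_list(thread_id, draws_per_thread):
    # Each thread records its own command list independently,
    # with no contention on a single driver thread (unlike DX11).
    return [f"draw:{thread_id}:{i}" for i in range(draws_per_thread)]

def submit_dx12_style(num_threads=4, draws_per_thread=100):
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        lists = list(pool.map(record_command_list, range(num_threads),
                              [draws_per_thread] * num_threads))
    # Submission stays ordered: lists execute in the order they are
    # handed to the queue, even though recording happened in parallel.
    return [cmd for command_list in lists for cmd in command_list]

queue = submit_dx12_style()
print(len(queue))  # 400 commands reached the queue
```

Under DX11 the equivalent recording work is funneled through one driver thread, which is where the stall described above comes from; the DX12 gain is in parallelizing the recording step, not the submission itself.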


----------



## escksu

Quote:


> Originally Posted by *Mahigan*
> 
> AMD do support multi-threaded command listing as well as deferred rendering. That's not really the issue, the issue is the command buffer size. It is tailored to match the queue size in the Command Processor Q0. That means 64 threads wide. So basically, GCN is constantly streaming commands from the DirectX Command Buffer, hitting thread0 of the CPU hard.
> 
> GCN is very parallel and thus needs a lot of work items being scheduled in order to get full utilization. So if the CPU is instructed to run some complex simulation, you get a GPU hardware stall. The GPU pretty much waits there for more commands. This of course hits AMDs draw call rate hard under GCN and DX11.
> 
> With DX12, the DirectX driver is spread amongst many cores. This helps ensure that if one CPU core is busy with other work, another will keep streaming commands to the Command Processor.
> 
> This is a mix of an API overhead and hardware issue (Queue size is too small on the Command Processor).
> 
> NVIDIA have an edge here because they've been using a Static Scheduler ever since Kepler (GTX 680). This means that a large segment of NVIDIAs scheduler is in the driver (software). Their scheduler is multi-threaded (what AMD call a hidden CPU thread). Ontop of that, NVIDIA's Gigathread Engine can hold many more threads than AMDs command processor.
> 
> So if CPU thread 0 is stuck doing other work, CPU thread 1 will be used by NVIDIA's scheduler. So NVIDIA's Kepler and newer architectures never skip a beat.
> 
> Evidently, if you move to DX12, NVIDIA won't see much in the way of a performance boost over DX11 but AMD will.
> 
> The other side of things is that under DX12 it is now NVIDIA who incur a larger CPU overhead as their Scheduler is in software and taking up cycles from the CPU that AMD GCN aren't.
> 
> Like a roll reversal but not nearly as bad as what AMD suffered under DX11 in draw call intensive titles.


OIC. So can I say this is due to hardware design rather than poor drivers (everyone seems to say it's poor drivers)?


----------



## AmericanLoco

There are some hardware/scheduler limitations, but the big issue is that AMD's poor DX11 drivers cannot keep the card fed with instructions. DX12 can keep the card fed.


----------



## BeerPowered

I wouldn't put too much stock in async compute going very far. Just like Mantle, it will most likely be short-lived. Plus, Ashes is an AMD-backed game, which is a HUGE bias, just like the Nvidia-backed games have a huge bias against AMD.

AMD vs Nvidia is ALWAYS back and forth. Some games AMD is better at; other games Nvidia is better at. It's just how the cookie crumbles.


----------



## Mahigan

Quote:


> Originally Posted by *escksu*
> 
> OIC. So can I say this is due to hardware design rather than poor drivers (everyone seems to say its poor drivers).


Yeah, people say that AMD lack multi-threaded command lists as well as deferred rendering, and I thought the same until a little birdie explained it to me.

There's a Microsoft test for deferred rendering and multi-threaded command lists, and it works really well on AMD GCN hardware (meaning the drivers do support the features).

What AMD can try to do is tune their hardware to fetch commands right before a complex simulation begins, but that varies by title, and some titles are getting very complex. This is why it takes AMD some time to "optimize" a game. Take Rise of the Tomb Raider, a draw-call-intensive title: AMD are making gains, but they'll likely never reach NVIDIA's performance levels under DX11. Ashes of the Singularity is even more insane to try to optimize for GCN under DX11, so they left it alone.

Now, if a DX11 title is relatively light on draw calls and heavy on compute, we see GCN reaching parity or near parity on the top cards and passing or tying NVIDIA with the Hawaii/Grenada cards. Luckily for AMD, this has been the trend. That's why Kepler is falling behind and the 290X is passing the GTX 780 Ti. It is not due to NVIDIA abandoning Kepler; it is due to games becoming more compute-bound. (Don't be fooled by Kepler's theoretical compute numbers: it takes Kepler 9-10 ns to complete a MADD instruction, vs. 4 ns on GCN 2/3 and 5-6 ns on Maxwell.)
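As a rough back-of-the-envelope check on why those latency figures matter (a toy model using the forum-claimed numbers at face value, not vendor specs, with a hypothetical issue rate of 1 MADD/ns), Little's law gives the independent work needed to hide ALU latency:

```python
# Little's law sketch: in-flight operations = latency x issue rate.
def inflight_needed(latency_ns, issue_rate_per_ns):
    # Independent operations required to keep the pipeline fully busy.
    return latency_ns * issue_rate_per_ns

# Latencies are the figures claimed above, taken at face value;
# the 1 op/ns issue rate is an assumption for illustration.
for arch, lat in [("Kepler", 10), ("Maxwell", 6), ("GCN 2/3", 4)]:
    print(arch, inflight_needed(lat, 1))
```

The point: the higher the latency, the more independent instructions a design needs in flight to hit its theoretical throughput, which is why paper FLOPS flatter Kepler in compute-bound games.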

Polaris fixes these issues BTW


----------



## infranoia

Quote:


> Originally Posted by *BeerPowered*
> 
> Just like Mantle, it will most likley be short lived.


If your argument hangs on that supposition, it's pretty short-lived as well. Mantle was kind of a big deal, for all the reasons we're discussing here.


----------



## AmericanLoco

Quote:


> Originally Posted by *BeerPowered*
> 
> I wouldn't put too much stock in Async Compute going very far. *Just like Mantle, it will most likley be short lived*. Plus Ashes is an AMD backed game which is a HUGE bias. Just like the Nvidia backed games have a huge bias over AMD.
> 
> AMD vs Nvidia is ALWAYS back and forth. Some games AMD has been better at, other games Nvidia will be better at. Its just how the cookie crumbles.


Async compute is an integral part of DX12. That's like saying tessellation would be short-lived.


----------



## Fyrwulf

Quote:


> Originally Posted by *BeerPowered*
> 
> I wouldn't put too much stock in Async Compute going very far. Just like Mantle, it will most likley be short lived. Plus Ashes is an AMD backed game which is a HUGE bias. Just like the Nvidia backed games have a huge bias over AMD.


You've lost the plot, I fear.

Fact 1) Dx12 and Vulkan were both heavily informed by Mantle, so Mantle's feature set is implemented in both APIs. This includes async compute.
Fact 2) Gamers have been clamoring for full core utilization for ages, and both APIs give developers the ability to deliver it. Any dev that doesn't take advantage of it is going to be lambasted.
Fact 3) Ever since AMD swept the console board, people have been saying this was coming. nVidia can't ride the coattails of GameWorks forever, and if they try, nobody is going to buy Pascal or the following generations.
Fact 4) Even if nVidia wanted to do such a thing, Microsoft's recent habit of rapidly deprecating legacy software won't allow it. They're in Microsoft's sandbox, and when MS decides it's no longer going to support Dx11, nVidia will face the stark choice of stepping up or stepping out.

When you get right down to it, AMD played the long game and ended up flipping the script on everybody. Not only that, but I don't think MS is going to allow a two-year phase-in of Dx12. I figure a year at the outside.


----------



## BeerPowered

Quote:


> Originally Posted by *Fyrwulf*
> 
> You've lost the plot, I fear.
> 
> Fact 1) Dx12 and Vulkan were both heavily informed by Mantle, thus Mantle's feature set is implemented into both APIs. This includes Async Compute.
> Fact 2) Gamers have been clamoring for full core utilization for ages and both APIs give developers the ability to utilize that. Any dev that doesn't take advantage of that is going to be lambasted.
> Fact 3) As soon as AMD swept the console board, people have been saying this is coming. nVidia can't ride the coat tails of GameWorks forever and if they try nobody is going to buy Pascal or following generations.
> Fact 4) Even if nVidia wanted to do such a thing, Microsoft's recent habit of rapid deprecation of legacy software won't allow them. They're in Microsoft's sandbox and when MS decides it's no longer going to support Dx11 then nVidia will be forced to make the stark choice of stepping up or stepping out.
> 
> When you get right down to it, AMD played the long game and ended up flipping the script on everybody. Not only that, but I don't think MS is going to allow a 2 year phase in of Dx12. I figure a year on the outside.


It's all about the money $$$$$, not about what's best for gaming. Right now AMD is broke and in debt; they owe $2 billion. Nvidia took most of the market with their 80% slice. Nvidia has $$$$$.

You think devs will do what's best for gaming while catering to the smaller market?

Nvidia will keep pulling the same slick tricks they always have and give devs $$$ reasons to keep Nvidia on top. AMD can buy a couple of games, but Nvidia can buy the whole table.


----------



## Clocknut

The bench just shows how awful AMD's DX11 overhead is.

If you're serious about gaming right now, AMD is not a recommended buy until DirectX 12 games gain some market share.

IMO, I won't even care about AMD's DirectX 12 advantage until DirectX 12 games hit 50% market share.


----------



## Fyrwulf

Quote:


> Originally Posted by *BeerPowered*
> 
> You think Devs will do whats best for gaming? While catering to the smaller market?


Devs aren't going to have a choice. Their choice is Dx12 or Vulkan. It doesn't matter what nVidia wants, because they missed the vote.
Quote:


> Nvidia will keep pulling the same slick tricks they always have and give Devs $$$ reasons to keep them on top. AMD can buy a couple games but Nvidia can buy the whole table.


Yep, right up until the FTC crushes them. American companies must follow American law, and nVidia doing that would be a massive violation of antitrust law.


----------



## Mahigan

Quote:


> Originally Posted by *BeerPowered*
> 
> Its all about the money $$$$$. Not about whats best for gaming. Right now AMD is broke and indebted, they owe $2 Billion. Nvidia took most of the Market with their 80% slice of it. Nvidia has $$$$$.
> 
> You think Devs will do whats best for gaming? While catering to the smaller market?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Nvidia will keep pulling the same slick tricks they always have and give Devs $$$ reasons to keep them on top. AMD can buy a couple games but Nvidia can buy the whole table.


Yes, NVIDIA have more money. Yes, AMD are in debt. NVIDIA may very well be engaging in shady business practices (licensing agreements that cut out the competition). What you say is either true or more than likely true.

Here's the thing though,


Spoiler: Warning: Spoiler!

[AMD slides: Polaris block diagram and feature overview]
So that's AMD RTG's next-gen Polaris GPU. The first shot is a block diagram; let's look at what it shows:

New Command Processor
New Geometry Processor
New Compute Units
New L2 Cache
New Memory Controller
New Multimedia Cores
New Display Engine
New Instruction Pre-fetching

Now, we're talking about how NVIDIA can buy game developers. Fine, and do what?

- Boost tessellation? Polaris comes with Geometry Processors, not geometry units, including a Primitive Discard Accelerator. In other words, we're looking at a large tessellation performance increase.

- Compel developers to stay on DX11? Polaris has a new Command Processor with either more queue slots or a larger queue, as well as a larger command buffer for increased single-threaded performance. Polaris also boosts shader efficiency with redesigned CUs, so we might see a different number of SIMDs per CU or a new large cache.

- The new multimedia cores are for livestreaming (ShadowPlay-style features), etc.

- The new Display Engine is for HDR (better colors).

- The new L2 cache lowers latencies and allows higher GPU utilization without running into bottlenecks.

- The new memory controller with new color compression compresses pixel data to reduce the memory bandwidth consumed during transfers, making more efficient use of the available bandwidth.

- The new instruction pre-fetching fetches instructions from memory into cache before they are needed, significantly reducing latency.

So Polaris fixes GCN's shortcomings. DX12 is a bonus. Asynchronous compute is an additional bonus.

So what is NVIDIA's money going to buy?


----------



## escksu

Quote:


> Originally Posted by *BeerPowered*
> 
> Its all about the money $$$$$. Not about whats best for gaming. Right now AMD is broke and indebted, they owe $2 Billion. Nvidia took most of the Market with their 80% slice of it. Nvidia has $$$$$.
> 
> You think Devs will do whats best for gaming? While catering to the smaller market?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Nvidia will keep pulling the same slick tricks they always have and give Devs $$$ reasons to keep them on top. AMD can buy a couple games but Nvidia can buy the whole table.


Not really. Nvidia may control 80% of the PC market, but AMD controls almost 100% of the console market.

Devs have to optimise their games for consoles and then port them to PC.


----------



## BeerPowered

Quote:


> Originally Posted by *Fyrwulf*
> 
> Devs aren't going to have a choice. Their choice is Dx12 or Vulkan. It doesn't matter what nVidia wants, because they missed the vote.
> Yep, right up until the FTC crushes them. American companies must follow American laws and nVidia doing that is a massive violation of anti-trust law.


Intel gets away with it, and so does Nvidia. The FTC crush them? Antitrust? Not gonna happen. And just because async compute is included in the API doesn't mean devs have to use it. The console market? DX12 is a Microsoft API; I doubt PS4 games will get it.

GameWorks has been used by Nvidia to purposefully hinder AMD. Nvidia paid for its implementation. The DX12 era won't be so different.


----------



## Mahigan

*Oh and confirmed Asynchronous compute support under*:

Hitman
Gears of War Ultimate Edition
Deus Ex: Mankind Divided
Fable Legends
Battlefield 5
Star Citizen

*Confirmed DX12 titles with no info on Asynchronous Compute*:

Quantum Break
Need For Speed
The Elder Scrolls Online (patch)
Just Cause 3 (patch)

That's not bad.


----------



## degenn

This thread is, as predicted when seeing the title, absolutely hilarious/pathetic.

Oh OCN.... how the mighty have fallen.

Carry on guys, carry on...


----------



## Fyrwulf

Quote:


> Originally Posted by *BeerPowered*
> 
> Intel gets away with it and so does Nvidia. FTC crush them? Anti Trust? Not gonna happen.


The FTC fined Intel to the tune of a $1.5 billion payment to AMD. The only reason they got off so lightly was that back then the relative market shares were far closer than they are today. Intel doesn't do that anymore because there were pointed threats that a repeat offense would result in a breakup, and because they don't have to. I could speculate as to why nVidia has gotten away with it so far, but that would violate site rules.


----------



## Defoler

Quote:


> Originally Posted by *Mahigan*
> 
> So what is NVIDIAs money going to buy?


Pascal?


----------



## Defoler

Quote:


> Originally Posted by *BeerPowered*
> 
> Gameworks has been used by Nvidia to purposefully hinder AMD. Nvidia payed for its implementation.


Do you have any proof of this?
AMD cards perform worse in DX11 than Nvidia cards even in AMD-sponsored games, so how does that statement of yours make any sense?

Also, on that same note, I bet AMD are going to pay developers a lot to put in as much async as they can, even when it's not needed, forcefully trying to hinder Nvidia cards in DX12. I wonder if you'll cry foul then and defend Nvidia.

Two-faced hypocritical people and their made-up conspiracy theories...


----------



## Mahigan

Quote:


> Originally Posted by *Defoler*
> 
> Pascal?


He was talking about NVIDIA paying off developers so that they don't use async shading and instead use GameWorks. I simply replied that with Polaris, there's little NVIDIA could (most likely) do to hamper performance even if they wanted to (not saying they are).

Polaris vs Pascal is another matter, which will be a battle of hardware, not so much software.


----------



## MadjinnSayan

TL;DR of the whole thread:
*yaaaaaaaaaaaaaaawwwnnnn*, so it's just another tech that gives AMD an edge? If current events keep up, I may as well buy a new AMD card and use my 780 Ti as a dedicated PhysX card.


----------



## EightDee8D

I guess Nvidia is silent on this issue because they're having internal communication issues again. Last time it was the 970, and now async. Their engineering team and PR team are not in "sync".


----------



## Mahigan

Quote:


> Originally Posted by *EightDee8D*
> 
> i guess nvidia is silent on this issue because they are having internal communication issues again. last time it was 970 and now async. their Engineering team and pr team is not in "sync"


NVIDIA were silent 6 months ago and have been since. In that time they've silently enabled asynchronous compute in their driver. The thing is, however, that what NVIDIA and Microsoft call "asynchronous compute" is not what AMD call it, or how AMD implemented it under Mantle.

For NVIDIA and Microsoft, asynchronous compute is defined the same way it is defined in computer science: it means there is no defined order of execution between jobs in the same batch.

AMD take it a step further. For AMD, asynchronous compute adds both concurrent and parallel execution of jobs. This is why AMD gain more performance: AMD's added hardware redundancy allows them to perform asynchronous compute + graphics, meaning that graphics and compute jobs can be executed both in parallel and concurrently.

So NVIDIA do have asynchronous compute enabled in their driver, as Kollock stated; they just don't, or can't, take advantage of it. That's precisely what I predicted almost 6 months ago.
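The distinction can be sketched very loosely in Python, with CPU threads standing in for GPU queues (everything here, including the `job` function, is invented for the illustration): "asynchronous" alone only says the jobs have no fixed ordering, while overlapping them in time, as AMD's hardware does with graphics + compute, is what actually saves wall-clock time.

```python
# Serial vs overlapped execution of two equal-length jobs.
import time
from concurrent.futures import ThreadPoolExecutor

def job(name, duration):
    # Stand-in for a GPU workload; sleep releases the GIL, so two
    # jobs genuinely overlap when run on separate threads.
    time.sleep(duration)
    return name

def run_serial():
    start = time.perf_counter()
    [job("graphics", 0.2), job("compute", 0.2)]
    return time.perf_counter() - start

def run_overlapped():
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=2) as pool:
        list(pool.map(job, ["graphics", "compute"], [0.2, 0.2]))
    return time.perf_counter() - start

t_serial = run_serial()       # ~0.4 s: one job after the other
t_overlap = run_overlapped()  # ~0.2 s: both jobs in flight at once
print(t_overlap < t_serial)   # True
```

Hardware without spare execution resources gets the serial timing even when the API marks the jobs asynchronous; hardware that can co-issue them gets the overlapped timing.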


----------



## kyrie74

Quote:


> Originally Posted by *BeerPowered*
> 
> Its all about the money $$$$$. Not about whats best for gaming. Right now AMD is broke and indebted, they owe $2 Billion. Nvidia took most of the Market with their 80% slice of it. Nvidia has $$$$$.
> 
> You think Devs will do whats best for gaming? While catering to the smaller market?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Nvidia will keep pulling the same slick tricks they always have and give Devs $$$ reasons to keep them on top. AMD can buy a couple games but Nvidia can buy the whole table.


Do you know who has much deeper pockets than Nvidia? I'll give you a hint: the company that makes DirectX 12 and the Xbox One. Also, Sony has 8 ACEs in the PS4 (which is where the design for Hawaii came from), so I can guarantee you that async compute will be used.


----------



## SystemTech

So, looking at async compute: AMD are king of the hill for performance, and we know that in the next year or two there will be a good dozen AAA titles released with async compute.
Clearly there is something not quite right with Maxwell; otherwise we would see Nvidia doing better with async compute on, even marginally better, not worse.

Pascal is around the corner, and I think we can all agree there will be some form of async compute from launch to try to put AMD in second place again, even if it only matters in the one or two DX12 titles we have (they already own DX11, so why bother with that).

My question is: will async compute degrade Pascal's DX11 performance and put it a lot closer to the AMD cards, or is it possible to build in a switch to a "single-threaded mode" for DX11 titles?

I'm not that clued up on the architecture, so I don't know whether that's possible.

However, a bit of speculation here....
If they cannot just turn parallel processing (async) on and off like a switch and get maximum performance in either mode (i.e. if one mode gets hampered by the other), then I think we'll see stagnant gains from Pascal in non-async titles but big gains in async titles. Or would they skip async for Pascal and focus on more "single-threaded" DX11 performance?

Basically, what I'm saying is: unless they can figure out how to switch between the two without hampering performance, I think we'll either see a very close async battle between Polaris and Pascal, or we'll see Pascal completely own non-async titles while Polaris dominates in async titles.


----------



## Olivon

DX11 performance is really low for AMD ???
Do they actually know that's the main current API ???


----------



## MadRabbit

Can someone please explain to me why AMD can't/don't want to fix the DX11 flaws they have?


----------



## EightDee8D

Quote:


> Originally Posted by *MadRabbit*
> 
> Can someone please explain me why AMD cant/dont want to fix the DX11 flaws they have?


http://www.overclock.net/t/1592431/anand-ashes-of-the-singularity-revisited-a-beta-look-at-directx-12-asynchronous-shading/70#post_24930629


----------



## infranoia

Quote:


> Originally Posted by *MadRabbit*
> 
> Can someone please explain me why AMD cant/dont want to fix the DX11 flaws they have?


For the same reason that Nvidia can't fix the Async Compute flaws they have with Maxwell. It's not a driver issue, it's a fundamental architectural issue. Mahigan describes the situation in this thread and others.

That said, AMD is doing *something* about DX11, which is why we see Tahiti and Hawaii punching above their launch weight in DX11 benchmarks today.


----------



## Mahigan

Quote:


> Originally Posted by *Olivon*
> 
> DX11 performance are really low for AMD ???
> Do they actually know that's the main current API ???


AMD haven't bothered optimizing DX11 for Ashes of the Singularity, as it would be a pretty complicated affair and the gains would be quite low.

http://www.overclock.net/t/1592431/anand-ashes-of-the-singularity-revisited-a-beta-look-at-directx-12-asynchronous-shading/60#post_24930629
http://www.overclock.net/t/1592431/anand-ashes-of-the-singularity-revisited-a-beta-look-at-directx-12-asynchronous-shading/60#post_24930667

Sources: http://gpuopen.com/performance-tweets-series-command-lists/
https://www.amd.com/Documents/GCN_Architecture_whitepaper.pdf
http://developer.amd.com/wordpress/media/2013/06/2620_final.pdf
And the latest AMD YouTube Polaris video where the engineer mentions that they increased the Command Buffer size for better single thread performance.
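As a purely illustrative aside (this is a toy model in Python, not real D3D or driver code; every name in it is invented), the command-buffer point in those links boils down to this: under DX11 one thread records every draw into a single immediate context, while DX12 lets several threads record their own command lists and submit them together, spreading the CPU cost of recording across cores:

```python
# Toy model (not real D3D code) of single-threaded DX11-style recording
# vs DX12-style parallel command-list recording. The work encoded is
# identical either way; only who records it differs.

from concurrent.futures import ThreadPoolExecutor

DRAWS = list(range(1000))

def record(chunk):
    # Stand-in for encoding draw calls into a command list.
    return [("draw", d) for d in chunk]

# "DX11": one thread records everything into one stream.
dx11_stream = record(DRAWS)

# "DX12": four threads each record their own command list,
# then the lists are submitted together in one go.
chunks = [DRAWS[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    command_lists = list(pool.map(record, chunks))
dx12_stream = [cmd for lst in command_lists for cmd in lst]

print(len(dx11_stream), len(dx12_stream))  # 1000 1000
```

Both approaches encode the same 1000 draws; the DX12-style path just stops a single CPU core from being the bottleneck, which is why a bigger command buffer also helps single-thread performance.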


----------



## Olivon

Really clever on their part, knowing that there are no DX12 games available, just DX11-only games


----------



## EightDee8D

Quote:


> Originally Posted by *Olivon*
> 
> Really clever from their part knowing that there is no DX12 game available and just DX11 only games


The future is DX12, not DX11; otherwise nvidia wouldn't be making DX12 cards.


----------



## Defoler

One of the issues we will see with DX12, is windows 10 adaptation.
For people who refuse currently to install and move on to windows 10, DX12 is meaningless.

Also AoS CPU bottlenecking the game, seems to be part of the impact done on nvidia, as their need for CPU in order to schedule the work on async compute, might be hampered.

Still, I'm going to wait until the game is *actually out* and drivers are out with full implementations of DX12 done.


----------



## Defoler

Quote:


> Originally Posted by *EightDee8D*
> 
> future is dx12 not dx11, otherwise nvidia wouldn't make dx12 cards.


Yes but you are not buying a GPU so it can be useful in 2 years. You are buying a GPU to be useful now. In 2 years, you will have better (or worse) options which will fit the bill.


----------



## Mahigan

Quote:


> Originally Posted by *Olivon*
> 
> Really clever from their part knowing that there is no DX12 game available and just DX11 only games


Oh, AMD will still be optimizing for DX11 games, just not Ashes.


----------



## Olivon

Quote:


> Originally Posted by *Defoler*
> 
> Yes but you are not buying a GPU so it can be useful in 2 years. You are buying a GPU to be useful now. In 2 years, you will have better (or worse) options which will fit the bill.


Exactly, and when you buy a card you don't want to wait 2 or 3 years for the drivers to be optimized.
At launch, right from the start, nVidia's drivers seem way more optimized than AMD's.
AMD talk a lot about DX12 because they know they are having a hard time against nVidia's driver team.


----------



## EightDee8D

Quote:


> Originally Posted by *Defoler*
> 
> Yes but you are not buying a GPU so it can be useful in 2 years. You are buying a GPU to be useful now. In 2 years, you will have better (or worse) options which will fit the bill.


Or, you can buy a GPU which gets a performance boost over the next 2 years and save money? People bought the 780, ditching their 680; on the other side, the 7970 gained performance through drivers and better support, saving you money so you can buy a big 14nm GPU.


----------



## Mahigan

Quote:


> Originally Posted by *Defoler*
> 
> Yes but you are not buying a GPU so it can be useful in 2 years. You are buying a GPU to be useful now. In 2 years, you will have better (or worse) options which will fit the bill.


And AMD are competitive, with their cards beating all of their intended NVIDIA price competitors under DX11, save for the Fury-X vs GTX 980 Ti.
https://www.techpowerup.com/mobile/reviews/ASUS/GTX_980_Ti_Matrix/23.html

In recent DX11 titles, AMD graphics cards have been quite surprising.

What DX12 does is allow for:

390 > GTX 970 by a long shot instead of a little
390X > GTX 980
Fury uncontested
Fury-X = GTX 980 Ti
TitanX uncontested

What Async compute allows for is:

390 > GTX 980
390X = GTX 980 Ti / Titan X
Fury uncontested
Fury-X uncontested.

So really, DX11 or DX12 w/wo Async doesn't change AMDs price competition figures for the worse. It only makes AMD GCN that much better.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Clocknut*
> 
> The bench just shows how awful AMD DX11 overhead is.
> 
> If u serious about gaming now AMD is not a recommended buy until DirectX12 game gain some market share.
> 
> IMO, I dont think I will even care for AMD DirectX12 advantage until DirectX12 game market share hit 50%.


All the posts like these are ridiculously funny..









AMD cards are hamstrung by DX11, that's true. But what's also true is that even with that disadvantage, Hawaii has competed with and beaten Kepler, and now competes very well with Maxwell, in some cases even beating it. All while being "awful" at using the archaic API known as DX11..

Do any of you even pay attention to the performance of anything without a green sticker on it? AMD are a lot closer than you think:



The titles those averages come from are here: http://imgur.com/a/OcJUK

Let that sink in for a second.. They have Nvidia matched in most tiers except the 980 Ti, while being "awful". What's it going to be like when they are actually good?


----------



## escksu

I get a strong feeling that some people are feeling butthurt about the benchmark results... ok, maybe I am being overly sensitive, lol.....


----------



## escksu

Quote:


> Originally Posted by *GorillaSceptre*
> 
> All the posts like these are ridiculously funny..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> AMD cards are hamstrung by DX11, that's true. But what's also true, is even with that disadvantage Hawai has competed with and beaten Kepler, and now competes very well with Maxwell, and in some cases even beats it. All while being "awful" at using the archaic API known as DX11..
> 
> Do any of you even pay attention to the performance of anything without a green sticker on it? AMD are a lot closer than you think:
> 
> 
> 
> The titles those averages come from are here: http://imgur.com/a/OcJUK
> 
> Let that sink in for a second.. They have Nvidia matched under most tiers except the 980Ti, while being awful, what's it going to be like when they are actually good?


The Fury X is almost as fast as the 980 Ti in DX11, so I just wonder in which way Nvidia is supposed to be much faster than AMD?????

As I have mentioned before, people who do not own and use AMD GPUs keep talking about how bad AMD drivers are.


----------



## Mahigan

Quote:


> Originally Posted by *escksu*
> 
> Fury X is almost as fast as 980ti in DX11. I just wonder in which way is Nvidia much faster than AMD?????
> 
> As I have mentioned before, pple who do not own and use AMD GPU keep talking about how bad AMD drivers are.


NVIDIA can overclock


----------



## Charcharo

Quote:


> Originally Posted by *Defoler*
> 
> Yes but you are not buying a GPU so it can be useful in 2 years. You are buying a GPU to be useful now. In 2 years, you will have better (or worse) options which will fit the bill.


When I buy a GPU I think long term. In this case 5-6 years.
We are PC Gamers, we should be capable of thinking long term for both gaming and the industry as well as our fellows that can not upgrade every year or two or three.


----------



## Olivon

5-6 years long term ? It's time to buy a console dude !


----------



## Charcharo

Quote:


> Originally Posted by *Olivon*
> 
> 5-6 years long term ? It's time to buy a console dude !


That is heresy mate!








Consoles cost a lot more. Especially in the long run. Besides, they do not have my mods, nor do they have my old games (very limited backwards compatibility) nor can they do real work or even emulate older consoles well (or they can but again, in a very limited manner).

Power and performance has always been icing on top of the PC Gaming cake. There is a huge amount more to it.


----------



## Mahigan

Just because of historical parallels...
http://forums.anandtech.com/showpost.php?p=38053246&postcount=804

http://www.anandtech.com/show/2549/7

Seems that this isn't the first time NVIDIA have *exaggerated* their support for a new API's features.

For those waiting on full Async compute + graphics support, I believe the phrase is "when Hell freezes over".

Oh and..
Quote:


> NVIDIA insists that if it reveals it's true feature set, AMD will buy off a bunch of developers with its vast hoards of cash to enable support for DX10.1 code NVIDIA can't run. Oh wait, I'm sorry, NVIDIA is worth twice as much as AMD who is billions in debt and struggling to keep up with its competitors on the CPU and GPU side. So we ask: who do you think is more likely to start buying off developers to the detriment of the industry?


NVIDIA GameWorks comes to mind...

And then this:
Quote:


> One of the other interesting tidbits about the PC port of AC is that it's one of the first titles to launch with support for DirectX 10.1... sort of. Version 1.0 does indeed support DirectX 10.1 features, but Ubisoft decided to remove this functionality in the 1.02 patch. Whether it will make a return in the future is unclear, but signs point to "no". Given that we're late to the game in terms of reviewing AC, we are going to spend a decent chunk of this review looking at the technology side of the equation and what it means to gamers.


http://www.anandtech.com/show/2536/6
Quote:


> So why did Ubisoft remove DirectX 10.1 support? The official statement reads as follows: "The performance gains seen by players who are currently playing AC with a DX10.1 graphics card are in large part due to the fact that our implementation removes a render pass during post-effect which is costly." An additional render pass is certainly a costly function; what the above statement doesn't clearly state is that DirectX 10.1 allows one fewer rendering pass when running anti-aliasing, and this is a good thing. We contacted AMD/ATI, NVIDIA, and Ubisoft to see if we could get some more clarification on what's going on. Not surprisingly, ATI was the only company willing to talk with us, and even they wouldn't come right out and say exactly what occurred.
> 
> *Reading between the lines, it seems clear that NVIDIA and Ubisoft reached some sort of agreement where DirectX 10.1 support was pulled with the patch.* ATI obviously can't come out and rip on Ubisoft for this decision, because they need to maintain their business relationship. We on the other hand have no such qualms. Money might not have changed hands directly, *but as part of NVIDIA's "The Way It's Meant to Be Played" program*, it's a safe bet that NVIDIA wasn't happy about seeing DirectX 10.1 support in the game -- particularly when that support caused ATI's hardware to significantly outperform NVIDIA's hardware in certain situations.
> 
> Last October at NVIDIA's Editors Day, we had the "opportunity" to hear from several gaming industry professionals about how unimportant DirectX 10.1 was, and how most companies weren't even considering supporting it. Amazingly, even Microsoft was willing to go on stage and state that DirectX 10.1 was only a minor update and not something to worry about. NVIDIA clearly has reasons for supporting that stance, as their current hardware -- and supposedly even their upcoming hardware -- will continue to support only the DirectX 10.0 feature set.


Who paid off a developer? NVIDIA.

Gee, I wonder why Ark's DX12 patch was delayed... And no, it is not unreasonable to think this way.

Who got all pissy when Oxide refused to pull Asynchronous Compute? NVIDIA. Kollock did say that he does not bend to either AMD or NVIDIA, and he emphasized NVIDIA's bad behavior. Let me guess, NVIDIA tried to come to an agreement with Oxide as well. Kollock will never admit to this, but with historical precedent in place, that IS what NVIDIA do.

Why was DX12 pulled from Rise of the Tomb Raider? Gee, I wonder.


----------



## degenn

Quote:


> Originally Posted by *Mahigan*
> 
> "When Hell freezes over"


Is that Nvidia's code-name for Volta?


----------



## specopsFI

Financially, it's taken too long for GCN to bloom. But right now, it is the better deal. I had a good time with the 780's and I even enjoyed my 970, but the circle is finally closing. Three years ago, I was struggling with 7970CF. When I had had enough with them, I moved to OG GTX Titan. Titan was simply a lot better user experience back then. Looking at it now...









The GTX 780 was better than the 290X for me two years ago. The next time I tried the 290X, in the summer of 2014, it was better, but still not enough to keep me from going with a 780 Classy a little later. I then went with the 970, which was better than the 780 Classy. And when the whole 3.5GB issue came up, I returned the 970 and finally went back to GCN with my current 290, which is looking like a great deal right now. So yeah, me and GCN go way back; it wasn't always rosy, but it's got staying power. I just had my bit of fun with Nvidia in between. Nothing wrong with that, you've got to do what feels right to you


----------



## EightDee8D

Quote:


> Originally Posted by *Mahigan*
> 
> Just because of historical parallels...
> http://forums.anandtech.com/showpost.php?p=38053246&postcount=804
> 
> http://www.anandtech.com/show/2549/7
> 
> Seems that this isn't the first time NVIDIA have *exaggerated* their support for a new APIs features.
> 
> For those waiting on full Async compute + graphics support, I believe the term is "When Hell freezes over" is when you will get it.
> 
> Oh and..
> NVIDIA GameWorks comes to mind...
> 
> And then this:
> http://www.anandtech.com/show/2536/6
> Who paid off a developer? NVIDIA.
> 
> Gee, I wonder why Ark's DX12 patch was delayed... And no, it is not unreasonable to think this way.
> 
> Who got all pissy when Oxide refused to pull out Asynchronous Compute? NVIDIA. Kollock did say that he does not bend to either AMD or NVIDIA and he emphasized NVIDIAs bad behavior. Let me guess, NVIDIA tried to come to an agreement with Oxide as well. Kollock will never admit to this but evidently, historical precedent in place, that IS what NVIDIA do.
> 
> Why was DX12 pulled from Rise of the Tomb Raider? Gee, I wonder.


haha , they don't support dx11.2/dx11.1 on kepler. they also don't support pci-e3 on SB-E processors. nvidia "the way it means to be shady"


----------



## Fyrwulf

Quote:


> Originally Posted by *Olivon*
> 
> 5-6 years long term ? It's time to buy a console dude !


Sane people buy a top of the line graphics card and stick with it until it doesn't run well anymore. I had a TNT2 Ultra in my first comp and when it died I got a 9100XT and rode that until my comp bricked (Crapaq, what can you do?) Then I got an Alienware with an X800 Pro and used that until the PSU blew and took everything with it; I had that computer for 8 years. I will never understand why somebody would spend $1k on a Titan, which will run all games nowadays just fine, and then go through two succeeding generations of very expensive cards for minor performance gains.


----------



## Charcharo

Quote:


> Originally Posted by *Fyrwulf*
> 
> Sane people buy a top of the line graphics card and stick with it until it doesn't run well anymore. I had a TNT2 Ultra in my first comp and when it died I got a 9100XT and rode that until my comp bricked (Crapaq, what can you do?) Then I got an Alienware with an X800 Pro and used that until the PSU blew and took everything with it; I had that computer for 8 years. I will never understand why somebody would spend $1k on a Titan, which will run all games nowadays just fine, and then go through two succeeding generations of very expensive cards for minor performance gains.


Neither will I, mate. It seems like a waste of money, honestly.

Unless you get good sales on old hardware, but that is usually not the case (props if you can manage to do it though).

Still, even lower-end cards do fine








Here is my previous PC's card reviewed in new games:
https://www.youtube.com/watch?v=GkLyx_tiUgc


----------



## Mahigan

Quote:


> Originally Posted by *Fyrwulf*
> 
> Sane people buy a top of the line graphics card and stick with it until it doesn't run well anymore. I had a TNT2 Ultra in my first comp and when it died I got a 9100XT and rode that until my comp bricked (Crapaq, what can you do?) Then I got an Alienware with an X800 Pro and used that until the PSU blew and took everything with it; I had that computer for 8 years. I will never understand why somebody would spend $1k on a Titan, which will run all games nowadays just fine, and then go through two succeeding generations of very expensive cards for minor performance gains.


Consumerism, it's a mental disorder. I had it BIG time. I had a basement full of PC hardware. When I say full, I mean full. I had to have the latest GPU for bragging rights. My own self esteem was tied to the size of my hardware.

Many folks will deny it, like an alcoholic denies being alcoholic, but they're suffering the same thing.

The reason people get "butthurt" by benchmarks showcasing their hardware in a bad light is because it touches their ego.


----------



## Mampus

Speaking of long-term, I still use my 1440 x 900 monitor. With that, plus some 970/390-like (or even better, 980 Ti/Fury X) performance, I still think I can play next year's AAA games with relative ease









Plus I don't like anti-aliasing that much, 1X or 2X is fine for me at least


----------



## Desolutional

Quote:


> Originally Posted by *kot0005*
> 
> Hmm might buy a polaris instead of pascal. My 980ti is looking horrible and am on 1440p.


Just imagine how I'm feeling at 4K then!


----------



## spyshagg

Karma is happening. This is a pivotal point for the industry, and it's fascinating to see how it will pan out.

My $500 investment (2 x 290X) from mid 2015 is looking to last me 2 more years @ 1440p, and perhaps even longer @ 1080p


----------



## Defoler

Quote:


> Originally Posted by *Mahigan*
> 
> And AMD are competitive, with their cards beating, all of their intended NVIDIA price competitors under DX11 save for the Fury-X vs GTX 980 Ti.
> https://www.techpowerup.com/mobile/reviews/ASUS/GTX_980_Ti_Matrix/23.html
> 
> In recent DX11 titles, AMD graphics cards have been quite surprising.
> 
> What DX12 does is allow for:
> 
> 390 > GTX 970 by a long shot instead of a little
> 390X > GTX 980
> Fury uncontested
> Fury-X = GTX 980 Ti
> TitanX uncontested
> 
> What Async compute allows for is:
> 
> 390 > GTX 980
> 390x = GTX 980 Ti
> TitanX
> Fury uncontested
> Fury-X uncontested.
> 
> So really, DX11 or DX12 w/wo Async doesn't change AMDs price competition figures for the worse. It only makes AMD GCN that much better.


Yet nvidia had a record selling year, again, and AMD has been losing ground, again.
So if your claim was always right, why aren't we all running AMD cards, and why hasn't nvidia disappeared for having higher priced cards?

Price is not everything. The fact that nvidia, for a bit more money, sells their better GPUs is the bottom line. No beta or alpha results are going to matter unless, once the game is out, AMD still shows performance gains and better results after nvidia's driver team have done their job.

To remind you, Ashes is a game fully optimised and developed with AMD, but not with nvidia. Meaning that of course we will get drivers optimised for AMD, and alpha and beta results will always favour AMD.
This is also a bit hypocritical of AMD, as games developed with nvidia aren't constantly advertised as "look at those beta performance results! Aren't they better than AMD?".


----------



## FLCLimax

Quote:


> Originally Posted by *keikei*
> 
> Quote:
> 
> 
> 
> NVIDIA sent a note over this afternoon letting us know that asynchronous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled.
Click to expand...

They'll enable it when Pascal launches, lol.


----------



## EightDee8D

Quote:


> Originally Posted by *Defoler*
> 
> Yet nvidia had a record selling year, again, and AMD have been losing ground, again.
> So if your claim was always right, why aren't we all having AMD cards, and why nvidia didn't suddenly disappear for having higher priced cards?
> 
> Price, is not everything. The fact that nvidia for a bit more money, sells their better GPUs, is the bottom line. No beta or alpha results are going to matter unless once the game is out, AMD will show performance gains and better results once nvidia driver team do their job.
> 
> To remind you, that ashes is a game fully optimised and developed with AMD, but not with nvidia. Meaning that of course we will get optimised drivers to AMD, and alpha and beta results will always favour AMD.
> This is also a bit hypocritical from AMD, as games developed but nvidia, aren't being constantly advertised at "look at those beta performance results! aren't they better than AMD?".


AMD is allowing developers to do whatever they want with nvidia; there's no blocked code or anything. On the other hand, nvidia doesn't provide source code to AMD, nor are devs allowed to share anything with AMD if a title is GameWorks sponsored. How is that being hypocritical? Heck, they even made GPUOpen so even nvidia can optimize the code for their arch.

nvidia gives you more inefficient visual effects which slow down the fps.
AMD gives you a performance boost which can be used for/with more visual effects.

Where is the hypocrisy?


----------



## mtcn77

Quote:


> Originally Posted by *Defoler*
> 
> Yet nvidia had a record selling year, again, and AMD have been losing ground, again.
> So if your claim was always right, why aren't we all having AMD cards, and why nvidia didn't suddenly disappear for having higher priced cards?
> 
> Price, is not everything. The fact that nvidia for a bit more money, sells their better GPUs, is the bottom line. No beta or alpha results are going to matter unless once the game is out, AMD will show performance gains and better results once nvidia driver team do their job.
> 
> To remind you, that ashes is a game fully optimised and developed with AMD, but not with nvidia. Meaning that of course we will get optimised drivers to AMD, and alpha and beta results will always favour AMD.
> This is also a bit hypocritical from AMD, as games developed but nvidia, aren't being constantly advertised at "look at those beta performance results! aren't they better than AMD?".


Name any two GW titles whose performance was better than this beta...


----------



## Sleazybigfoot

Quote:


> Originally Posted by *NightAntilli*
> 
> CPU limitation.
> nVidia cards can do async on the compute queue. They can't do async on compute AND graphics at the same time. We call that concurrent async. All GCN cards can do concurrent async. None of nVidia's cards can, and probably neither will Pascal.


I'm thinking this: from what I understood, Pascal will be the complete version of Maxwell 2. The current Maxwell 2 is stripped down because of the 28nm limitation.

At least this is what I read, so don't quote me on this.









Quote:


> Originally Posted by *Defoler*
> 
> Yet nvidia had a record selling year, again, and AMD have been losing ground, again.
> So if your claim was always right, why aren't we all having AMD cards, and why nvidia didn't suddenly disappear for having higher priced cards?
> 
> Price, is not everything. The fact that nvidia for a bit more money, sells their better GPUs, is the bottom line. No beta or alpha results are going to matter unless once the game is out, AMD will show performance gains and better results once nvidia driver team do their job.
> 
> To remind you, that ashes is a game fully optimised and developed with AMD, but not with nvidia. Meaning that of course we will get optimised drivers to AMD, and alpha and beta results will always favour AMD.
> This is also a bit hypocritical from AMD, as games developed but nvidia, aren't being constantly advertised at "look at those beta performance results! aren't they better than AMD?".


Well, one thing that comes to mind is the constant AMD hate. When a consumer who doesn't know a lot about GPUs looks up NVidia and AMD GPUs to see which one he should buy, he'll jump to NVidia 80% of the time, because a lot of posts bash AMD for having worse performance (even if it's just 5 - 10 frames).


----------



## SpeedyVT

Quote:


> Originally Posted by *Robenger*
> 
> It looks like doing Async compute on the driver isn't working out very well. Nvidia has been very good with their drivers, but it seems even they can't overcome the hardware advantage AMD has right now.


They did cut an awful lot of corners with this generation, although people will be tremendous naysayers to this statement. Seriously, the 980 is an older architecture than the 970, and the 970 has only 3.5GB of effective memory (performance loss or not). They threw out a GT 930 which wasn't even better than iGPUs and hasn't been released in US markets yet. The only crème de la crème of this generation worth owning, in my book, is the 980 Ti and the 960.

You can buy an R9 390 w/ 8GB of VRAM for the same price as a 970. Yeah, it consumes more, but recently it has been scoring a good 20-25% more than the 970. Some DX12 based benchmarks, and even Vulkan, have it above the 980.


----------



## mtcn77

ExtremeTech renewed their editorial, too.
[Source]


----------



## Glottis

Quote:


> Originally Posted by *Mahigan*
> 
> NVIDIA were silent 6 months ago and have been since. In that time they've enabled Asynchronous compute, silently, in their driver. The thing is, however, what NVIDIA and Microsoft call "Asynchronous Compute" is not how AMD calls it or implemented it under Mantle.
> 
> For NVIDIA and Microsoft, Asynchronous compute is defined the same way as it is defined in computer science. It means that there is no defined order of execution between jobs in the same batch.
> 
> AMD take it a step further. For AMD, Asynchronous compute adds both concurrent and parallel execution of jobs. This is why AMD gains more performance. AMDs added hardware redundancy allows them to perform Asynchronous compute + graphics. Meaning that both Graphics and Compute jobs can be executed in parallel as well as concurrently.
> 
> So NVIDIA do have Asynchronous compute enabled in their driver, as Kollock stated, they just don't or can't take advantage of it. That's precisely what I had predicted almost 6 months ago.


Some of your posts are good, but I've never once seen you mention to people that Ashes is a game heavily sponsored by AMD, with a big AMD logo on the game's website. You always seem to conveniently leave this info out.

Care to comment on DX12 performance in this game, which is a bit more neutral and isn't sponsored by either AMD or Nvidia? http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/2
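For what it's worth, the distinction the quoted post draws between mere asynchronous submission and actual concurrent graphics + compute execution can be sketched with a toy timing model (illustrative made-up numbers in Python, not real GPU code):

```python
# Toy timing model of the difference described above: "async" submission
# alone still executes jobs one after another, while concurrent
# graphics + compute overlaps them on separate engines.

def serialized_time(jobs):
    """Single engine: every job runs back to back."""
    return sum(cost for _kind, cost in jobs)

def concurrent_time(jobs):
    """Two engines: graphics and compute jobs overlap in time,
    so the frame is bounded by the busier engine, not the sum."""
    gfx = sum(cost for kind, cost in jobs if kind == "gfx")
    comp = sum(cost for kind, cost in jobs if kind == "compute")
    return max(gfx, comp)

# One hypothetical frame's worth of work, in arbitrary time units.
frame = [("gfx", 8), ("compute", 3), ("gfx", 4), ("compute", 5)]
print(serialized_time(frame))  # 20
print(concurrent_time(frame))  # 12
```

Under this (very simplified) model, a GPU that only reorders jobs gains nothing, while one that can run the compute queue alongside graphics hides the compute work entirely, which is the gain the benchmarks attribute to GCN's ACEs.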


----------



## Desolutional

Quote:


> Originally Posted by *Glottis*
> 
> Some of your posts are good, but I never saw you mention once to people that Ashes is a game heavily sponsored by AMD with big AMD logo on game's website. You always seem to conveniently leave this info out.


----------



## sage101

Quote:


> Originally Posted by *SpeedyVT*
> 
> They did cut an awful lot of corners with this generation although people will be tremendous naysayers to this statement. Seriously 980 is an older architecture than the 970, 970 has only 3.5GB of effective memory. (Performance loss or not) They threw out a GT 930 which wasn't even better than iGPUs, hasn't been released in US markets yet. *The only crème de la crème of this generation worth owning in my book is the 980 ti and the 960.*
> 
> You can buy a R9 390 w/ 8gb of VRAM for the same price as a 970. Yeah it consumes more but recently has been scoring a good 20-25% more than the 970. Some DX12 based benchmarks, even Vulkan has it above the 980.


The GTX 960 isn't even worth it since the R9 380 exists. Like I've said countless times, the only nvidia card worth owning over its AMD counterpart is the GTX 980 Ti.


----------



## sugarhell

Quote:


> Originally Posted by *Glottis*
> 
> Some of your posts are good, but I never saw you mention once to people that Ashes is a game heavily sponsored by AMD with big AMD logo on game's website. You always seem to conveniently leave this info out.
> 
> Care to comment about DX12 performance in this game, which is a bit more neutral and isn't sponsored by neither AMD or Nvidia? http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/2


But Oxide has a branch that is open to all the IHVs; they can see the source code and make changes if they want.

http://www.extremetech.com/gaming/223567-amd-clobbers-nvidia-in-updated-ashes-of-the-singularity-directx-12-benchmark
Quote:


> [W]e have created a special branch where not only can vendors see our source code, but they can even submit proposed changes. That is, if they want to suggest a change our branch gives them permission to do so&#8230;
> 
> This branch is synchronized directly from our main branch so it's usually less than a week from our very latest internal main software development branch. IHVs are free to make their own builds, or test the intermediate drops that we give our QA.
> 
> Oxide also addresses the question of whether or not it optimizes for specific engines or graphics architectures directly.
> 
> Oxide primarily optimizes at an algorithmic level, not for any specific hardware. We also take care to avoid the proverbial known "glass jaws" which every piece of hardware has. However, we do not write our code or tune with any specific GPU in mind. We find this is simply too time consuming, and we must run on a wide variety of GPUs. We believe our code is very typical of a reasonably optimized PC game.


----------



## NightAntilli

Quote:


> Originally Posted by *Glottis*
> 
> Some of your posts are good, but I never saw you mention once to people that Ashes is a game heavily sponsored by AMD with big AMD logo on game's website. You always seem to conveniently leave this info out.
> 
> Care to comment about DX12 performance in this game, which is a bit more neutral and isn't sponsored by neither AMD or Nvidia? http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/2


Because it isn't what you think it is... From Kollock, who works on the dev team:

Quote:
_Certainly I could see how one might see that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;(). Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD. As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) for Ashes with AMD. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with NVidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles as they have also lined up a few other D3D12 games.

If you use this metric, however, given Nvidia's promotions with Unreal (and integration with GameWorks) you'd have to say that every Unreal game is biased, not to mention virtually every game that's commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal being an engine with many titles should give it particular weight, and I wouldn't disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot. (Which I am super excited about! But that's a completely other topic.)

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only 'vendor' specific code is for Nvidia, where we had to shut down async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path._

http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1200_20#post_24356995
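
For reference, the "Vendor ID" dispatch Kollock mentions is a common pattern in renderers. A minimal sketch, assuming the standard PCI vendor IDs; the function name and shape here are illustrative, not Oxide's actual code:

```cpp
#include <cstdint>

// Standard PCI vendor IDs, as reported in DXGI_ADAPTER_DESC::VendorId.
constexpr uint32_t kVendorAmd    = 0x1002;
constexpr uint32_t kVendorNvidia = 0x10DE;
constexpr uint32_t kVendorIntel  = 0x8086;

// The one vendor-specific decision Kollock describes: submit compute work
// on a separate queue (async) unless the adapter reports NVIDIA, in which
// case the work is serialized on the graphics queue instead.
bool useAsyncCompute(uint32_t vendorId) {
    return vendorId != kVendorNvidia;
}
```

In a real D3D12 renderer the ID would come from the adapter description, and the `false` branch would record the compute work into the graphics queue rather than a dedicated compute queue.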


----------



## escksu

Quote:


> Originally Posted by *Glottis*
> 
> Some of your posts are good, but I never saw you mention once to people that Ashes is a game heavily sponsored by AMD with big AMD logo on game's website. You always seem to conveniently leave this info out.
> 
> Care to comment about DX12 performance in this game, which is a bit more neutral and isn't sponsored by neither AMD or Nvidia? http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/2


If Nvidia hardware and drivers are superior, the AMD logo won't make any difference, right?


----------



## CasualCat

Quote:


> Originally Posted by *Mahigan*
> 
> *Oh and confirmed Asynchronous compute support under*:
> 
> Hitman
> Gears of War Ultimate Edition
> Deus Ex: Mankind Divided
> Fable Legends
> Battlefield 5
> Star Citizen
> 
> *Confirmed DX12 titles with no info on Asynchronous Compute*:
> 
> Quantum Break
> Need For Speed
> The Elder Scrolls Online (patch)
> Just Cause 3 (patch)
> 
> That's not bad.


Do you have any sources on Star Citizen? I thought people had tried to get them to comment on it before but they wouldn't. I know they plan to support DX12, but I don't recall anything specific re: async compute. Thanks.

Quote:


> Originally Posted by *Mahigan*
> 
> Yes, NVIDIA have more money. Yes, AMD are in debt. NVIDIA may very well be participating in shady business practices (licensing agreements that cut out the competition). What you say is either true or more than likely true.
> 
> Here's the thing though,
> 
> 
> Spoiler: [Polaris block diagram and slide images]
> So that's AMD RTG's next-gen Polaris GPU. The first shot is a block diagram; let's look at what it shows:
> 
> New Command Processor
> New Geometry Processor
> New Compute Units
> New L2 Cache
> New Memory Controller
> New Multimedia Cores
> New Display Engine
> New Instruction pre-fetching
> 
> Now we're talking about how NVIDIA can buy game developers. Fine, and do what?
> 
> - Boost tessellation? Polaris comes with Geometry Processors, not units, so it has a Primitive Discard Accelerator. In other words, we're looking at a large tessellation performance increase.
> 
> - Compel developers to stay on DX11? Polaris has a new Command Processor with either more queue slots or a larger queue, as well as a larger Command Buffer for increased single-threaded performance. Polaris is also boosting its shader efficiency by redesigning its CUs. So we might see a different number of SIMDs per CU or a new large cache.
> 
> - The new Multimedia cores are for livestreaming (ShadowPlay-style capture), etc.
> 
> - New Display Engine is for HDR (better colors).
> 
> - New L2 Cache is for lowering latencies as well as allowing a higher GPU utilization without running into bottlenecks.
> 
> - New memory controller with new color compression is for compressing pixel data so that it reduces the amount of memory bandwidth during transfers. More efficient use out of available memory bandwidth as well.
> 
> - New Instruction pre-fetching allows instructions to be fetched from memory into cache before they are needed. This significantly reduces latency.
> 
> So Polaris fixes GCN's shortcomings. DX12 is a bonus. Asynchronous Compute is an additional bonus.
> 
> So what is NVIDIAs money going to buy?


So the new command processor is intended to address the DX11 architecture performance issues you discussed earlier in the thread?

Might be minor, but I appreciate the multimedia cores upgrade. I use ShadowPlay all the time, and it was a bummer to hear that current AMD cards' VCE (not sure about the Fury/Fury X) couldn't actually encode above 1080p.


----------



## NightAntilli

Quote:


> Originally Posted by *Defoler*
> 
> Yet nvidia had a record selling year, again, and AMD have been losing ground, again.
> So if your claim was always right, why aren't we all having AMD cards, and why nvidia didn't suddenly disappear for having higher priced cards?


Because people are biased towards nVidia. They've fallen for nVidia's marketing schemes. They are emotionally attached to the brand rather than objective. I've been telling everyone that AMD's R9 390 is the better buy right now, but everyone still flocked to the GTX 970 like sheep. I hope they feel the burn as time progresses. They should use their brains when buying products.
Quote:


> Originally Posted by *Defoler*
> 
> Price, is not everything. The fact that nvidia for a bit more money, sells their better GPUs, is the bottom line. No beta or alpha results are going to matter unless once the game is out, AMD will show performance gains and better results once nvidia driver team do their job.


This will not be fixed by a driver. It is representative of the performance in the final game.
Quote:


> Originally Posted by *Defoler*
> 
> To remind you, that ashes is a game fully optimised and developed with AMD, but not with nvidia. Meaning that of course we will get optimised drivers to AMD, and alpha and beta results will always favour AMD.


Wrong. See my prior post above which has Kollock's quote.
Quote:


> Originally Posted by *Defoler*
> 
> This is also a bit hypocritical from AMD, as games developed but nvidia, aren't being constantly advertised at "look at those beta performance results! aren't they better than AMD?".


nVidia works differently. They hamper AMD's performance close to release with some GameWorks stuff, so that nVidia's cards look better in the benchmarks. After a while, AMD catches up or even surpasses them, but most people don't look for those benchmarks, and lots of sites don't even re-run their benchmarks with newer drivers. So nVidia appears better even when they're not.


----------



## Glottis

Quote:


> Originally Posted by *NightAntilli*
> 
> Because people are biased towards nVidia. They've fallen for nVidia's marketing schemes. They are emotionally attached to the brand rather than objective. I've been telling everyone that AMD's R9 390 is the better buy right now, but everyone still flocked to the GTX 970 like sheep. I hope they feel the burn as time progresses. They should use their brains when buying products.


And there are people biased towards AMD; that's how it always was and always will be. Your comment makes this perfectly clear: you seem more concerned about people getting "owned" by buying the "wrong" graphics card than about actually helping them decide. Maybe if you were less arrogant when helping people decide, they would have chosen a 390? Just a thought.
Quote:


> Originally Posted by *NightAntilli*
> 
> This will not be fixed by a driver. It is representative of the performance in the final game.


No, this isn't representative of the final game; even the devs said that it's just a beta and shipping product performance will be different.
Quote:


> Originally Posted by *NightAntilli*
> 
> Wrong. See my prior post above which has Kollock's quote.


This dev confirmed they partner with AMD, which has been known for a long time, with the AMD logo on their website and Oxide promoting AMD products at various tech conferences.
Quote:


> Originally Posted by *NightAntilli*
> 
> nVidia works differently. They hamper AMD's performance close to release with some GameWorks stuff, so that nVidia's cards look better in the benchmarks. After a while, AMD catches up or even surpasses them, but, most people don't look for those benchmarks, and lots of sites don't even re-run their benchmarks with newer drivers. So nVidia appears better even when they're not.


You mean like Project CARS, when people like you screamed it was sabotaged by Nvidia, but it turned out AMD was just slacking off with drivers, and now Project CARS performance with AMD cards is about on par with Nvidia cards? You mean people don't look for such benchmarks?


----------



## kingduqc

This just proves that AMD has been slacking for years on DX11 CPU overhead... Quite sad. Just look at the 1440p and 1080p results: 3 fps difference. Way to bottleneck your cards, AMD, and it's been like that for years. I'm glad it will help them in the new DX12 market, but frankly I buy cards for games I play now, not a handful in a year that might see improvement over the competition.


----------



## cowie

Quote:


> Originally Posted by *sage101*
> 
> The GTX 960 isn't even worth it since the R9 380 exist. Like I've said countless times the only nvidia card worth owing over amd counterparts is the GTX 980Ti.


Ahhhh, where have you been for the last year-plus, when the 970 and 980 were both performing better than any AMD card?

AMD is still behind on what really matters, and that's DX11; they let you suffer for a while without improvements. Now I guess you can jump on those cards.

Great, see you when conservative rasterization and the other DX12 "bombs" drop; they might really bring something new to the looks of upcoming games.


----------



## mtcn77

Quote:


> Originally Posted by *kingduqc*
> 
> This just prove that AMD has been slacking for years about dx11 CPU overhead... Quite sad. Just look at the 1440p and 1080p result. 3 fps difference, way to bottleneck your cards AMD and it's been like that for years. I'm glad it will help em in the new DX12 market, but frankly I buy cards for games I play now, not a handful in a year that might see improvement over the competition.


It just lowers the fps threshold somewhat. You can still use the GPU for extra filtering.
Besides, if you look at the charts, lower-end GPUs are freed from the burden of DirectX 11 single-threading.


----------



## Defoler

Quote:


> Originally Posted by *EightDee8D*
> 
> amd is allowing developers to do whatever they want with nvidia, there's no blocked code or anything. on the other hand nvidia doesn't provide source code to amd nor the devs are allowed to share anything with amd if title is gameworks sponsored . how is that being hypocritical ? heck they even made gpu open so even nvidia can optimize the code for their arch.
> 
> nvidia gives you more inefficient visual effects which slows down the fps.
> Amd gives you performance boost which can be used for/with more visual effects.
> 
> where is hypocrisy ?


Did AMD provide Nvidia with the Mantle source code?
Did AMD provide the TressFX 1.0 and 2.0 source code?
Both of those were "open source free of charge". The answer to those two questions is no.
And that answers your talk about blocked code or open code. Also, TressFX went open *AFTER* the last Tomb Raider came out. Why didn't AMD release it earlier to allow Nvidia to optimise for it, I wonder?

Nvidia-specific visuals can be turned off. New technology and visuals are sometimes too demanding. Can the 280X run 4K with ultra settings at 60fps? Why not? Will a mid-range card in two years be able to do it? Most likely yes. C'est la vie. They also affect more: when running HairWorks, it affects all types of hair visuals, while TressFX only runs on a single character in the game. Now do the opposite and guess what happens?

What performance boost with more visuals did we get? TressFX 3.0 running on max takes about 5-10fps of performance. On Nvidia it takes a bit more.

Yes, it is hypocrisy. When nvidia does it, it is so wrong and bad. When AMD are doing it, it is so good and revolutionary.


----------



## Charcharo

For what it is worth...
People are forgetting that most AMD cards are competitive in DX11. The R9 380, 380X, 390, 390X and Fury non-X (and the Nano, I guess) are not below their competition. Usually they win DX11 comparisons against their equivalent competitor. The Fury X was the problematic one (as well as... I guess the 370).

People are acting as if those cards are under their competitor when they ... are not. DX12 just adds a lot of performance on top. So does Async.
And that matters.


----------



## Defoler

Quote:


> Originally Posted by *NightAntilli*
> 
> Because people are biased towards nVidia. They've fallen for nVidia's marketing schemes. They are emotionally attached to the brand rather than objective. I've been telling everyone that AMD's R9 390 is the better buy right now, but everyone still flocked to the GTX 970 like sheep. I hope they feel the burn as time progresses. They should use their brains when buying products.


Right. That statement is not emotionally favouring anything...

Sorry, but "Nvidia's marketing schemes" are pretty simple. And TBH, I hadn't seen a single Nvidia "marketing scheme" in a very, very long time.
The only thing I have seen is AMD calling GameWorks the devil, Nvidia the devil, and developers the devil because they aren't siding with AMD; everything not AMD is pretty much the spawn of the devil.
I only see AMD bashing Nvidia, claiming "we are the best" and falling on their arse with every iteration of GPUs; I see AMD saying one thing and doing the other.

I don't hate AMD. I hate the fact that they are trying to be the "good guys" by bashing "Nvidia the devil" instead of sticking to their own GPUs and performance, and not acting like hate-filled children. And I want fans of theirs, like you, not to act like that, but it is too much to ask.

I don't hate AMD. But if GPU X is running better than GPU Y in the games I play, and the price I can put, I see no reason to buy GPU Y just for the sake of a "maybe one day it will be better".


----------



## airfathaaaaa

Quote:


> Originally Posted by *Defoler*
> 
> Did AMD provided nvidia with mantle source code?
> Did AMD provided TressFX 1.0 and 2.0 source code?
> Both of those were "open source free of charge". The answer to those to questions is no.
> And that answer your talk about blocked code or opened code. Also TressFX went open *AFTER* the last tomb raider came out. Why didn't AMD release it earlier to allow nvidia to optimise to it I wonder?
> 
> Nvidia specific visuals can be turned off. New technology and visuals are sometimes too big. Can the 280x run a 4K with ultra settings at 60fps? Why not? Will the mid range card in 2 years will be able to do it? Most likely yes. C'est la vie. They are also affecting more. When running hairworks it affects all types of hair visuals, while tressfx is only running on a single character in the game. Now do the opposite and guess what happens?
> 
> What performance boost with more visuals did we get? TressFX 3.0 running on max takes about 5-10fps of performance. On Nvidia its takes a bit more.
> 
> Yes, it is hypocrisy. When nvidia does it, it is so wrong and bad. When AMD are doing it, it is so good and revolutionary.


I don't understand what you are saying.
It's crystal clear that Nvidia would never have used Mantle even if AMD had paid them... a company that loves closed software and lying about its cards and capabilities won't go open source so easily.
This is why Vulkan came in... and the sudden love from Nvidia towards OpenGL.


----------



## NightAntilli

The nVidia salt in this thread is present in deadly amounts....
Quote:


> Originally Posted by *Glottis*
> 
> And there are people biased towards AMD, that's how it always was and always will be. Your comment makes this perfectly clear, you seem to be more concerned about people getting "owned" buying "wrong" graphics card, than actually helping them with decision. Maybe if you were less arrogant when helping people decide they would have chosen a 390? Just a thought.


Arrogant? Actually, I'm annoyed and disgusted that people are so blind. nVidia has had multiple fiascos and people still support them. Keeping DX10.1 from taking off despite all its benefits: people still buy nVidia. Gimping the performance of the gaming industry through tessellation: people still buy nVidia. Providing bad support for their older cards, forcing people to upgrade pretty much every two years: people still buy nVidia. They lie about the memory on a graphics card: people still buy nVidia.
In contrast, AMD has a track record of performance increasing over time, having future proof features in their cards and supporting open source.

To put things into perspective, we have the following;...

GCN 1.0 supports FL11_1
GCN 1.1 supports FL12_0
GCN 1.2 supports FL12_0

Fermi supports FL11_0
Kepler supports FL11_0
Maxwell supports FL11_0
Maxwell 2 supports FL12_1 (sort of)

Note that GCN 1.1 and GCN 1.2 support FL12_0. This means that since 2013, AMD has had GPUs on the market supporting pretty much all DX12 features. On top of that, they support the majority of these features at the highest tiers, including the 'hidden' 11_2 feature level. They are only missing conservative rasterization and ROVs. Compare that to nVidia: Maxwell, which was released in 2014, was still only capable of FL11_0!!! In fact, AMD's GCN 1.0 from 2011 has a higher rank in feature-level support than 2014's Maxwell... That means you still really don't have to upgrade a card from 2011 in terms of features, while nVidia's GTX 900 series is pretty much already outdated.

Let that sink in for a moment... And yet, people STILL invest in nVidia. It makes no sense whatsoever. So yes, I want people to feel the burn in their wallets. When I try to help people, they choose differently anyway, so it's their problem then. And, well, I enjoy it then, because when people don't listen they have to carry the consequences, and I can say I f-ing told you so, so don't come back to me now.
People have to wake up to their bad investments. AMD has been the superior hardware manufacturer for a while now, but is not rewarded for it, when it obviously should be. And if AMD dies, things will be worse for all of us, including nVidia users. People need to wake up from their slumber of fanboyism and do what's right for once.
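
The feature-level comparison above can be made concrete with the numeric encoding D3D12 uses, where a higher enum value guarantees a superset of features. A small sketch; the enum values match the real `D3D_FEATURE_LEVEL` constants, but the architecture-to-level mapping is taken from the post itself, not independently verified:

```cpp
#include <cstdint>

// Numeric encoding mirroring the D3D_FEATURE_LEVEL enum: a higher value
// means a strictly larger guaranteed feature set.
enum FeatureLevel : uint32_t {
    FL11_0 = 0xb000,
    FL11_1 = 0xb100,
    FL12_0 = 0xc000,
    FL12_1 = 0xc100,
};

// The post's claim, expressed as a comparison: GCN 1.0 (2011) reports
// FL11_1, while first-generation Maxwell (2014) reports FL11_0, so the
// older AMD part outranks the newer NVIDIA part.
constexpr FeatureLevel kGcn10Fl    = FL11_1;
constexpr FeatureLevel kMaxwell1Fl = FL11_0;

static_assert(kGcn10Fl > kMaxwell1Fl,
              "a 2011 GCN part outranks a 2014 first-gen Maxwell part");
```
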
Quote:


> Originally Posted by *Glottis*
> 
> No this isn't representative of final game, even devs said that it's just a beta and shipping product performance will be different.


There have been multiple beta iterations, and with each iteration, the gap just widens. And, considering that it has been officially stated that they worked closer with nVidia than AMD, your point is completely moot.
Quote:


> Originally Posted by *Glottis*
> 
> this dev confirmed they partner with AMD. which is known for a long time with AMD logo on their website and oxyde promoting AMD products on various tech conferences.


Did you actually understand what was said?
Quote:


> Originally Posted by *Glottis*
> 
> you mean like Project CARS when people like you screamed it was sabotaged by Nvidia, but turns out AMD was just slacking off with drivers and now Project CARS performance with AMD cards is about on par as with Nvidia cards. you mean people don't look for such benchmarks?


It was more than obvious.


----------



## Defoler

Quote:


> Originally Posted by *NightAntilli*
> 
> This means that since 2013, AMD has had GPUs on the market supporting pretty much all DX12 features.


I'm still trying to figure out the point of buying an underpowered card to use now (in 2013) on the chance that maybe in 2016 it will run equal to its competitor's counterpart.
This shows completely flawed logic.


----------



## NightAntilli

Quote:


> Originally Posted by *Defoler*
> 
> I'm still trying to figure out what is the point of buying an underpowered card to use now (in 2013), for a chance of maybe in 2016 it will run equal to its competitor counterpart?
> This shows a completely flawed logic.


Except it's not underpowered. The R9 390 and GTX 970 were equal in performance. Right now, the R9 390 is leaving the GTX 970 in the dust, and the gap will only keep widening. It's not inferior performance with the hope that it will increase. It's equal performance with pretty much a guarantee that it will increase. Of course, only the ones that do their homework and understand what's going on will see that it's guaranteed. The rest prefer to stay in their speculative nonsense in order to support their favorite manufacturer.

I'm out. It's useless trying to explain sound to a willingly deaf person.


----------



## EightDee8D

Quote:


> Originally Posted by *Defoler*
> 
> Did AMD provided nvidia with mantle source code?
> Did AMD provided TressFX 1.0 and 2.0 source code?
> Both of those were "open source free of charge". The answer to those to questions is no.
> And that answer your talk about blocked code or opened code. Also TressFX went open *AFTER* the last tomb raider came out. Why didn't AMD release it earlier to allow nvidia to optimise to it I wonder?
> 
> Nvidia specific visuals can be turned off. New technology and visuals are sometimes too big. Can the 280x run a 4K with ultra settings at 60fps? Why not? Will the mid range card in 2 years will be able to do it? Most likely yes. C'est la vie. They are also affecting more. When running hairworks it affects all types of hair visuals, while tressfx is only running on a single character in the game. Now do the opposite and guess what happens?
> 
> What performance boost with more visuals did we get? TressFX 3.0 running on max takes about 5-10fps of performance. On Nvidia its takes a bit more.
> 
> Yes, it is hypocrisy. When nvidia does it, it is so wrong and bad. When AMD are doing it, it is so good and revolutionary.


1. Yes, it's called DX12.
2. Yes, after the game launched.

With more performance you can implement more visuals.

It's not hypocrisy, because:
when Nvidia does it, they tie the devs' hands and give no source;
when AMD does it, Nvidia can do whatever they want, and developers are free to implement what Nvidia wants.

One game vs. a lot of GimpWorks titles. Talk about hypocrisy here.









It's surprising how people always talk about wanting AMD to do better, and here we are, eyes closed, not willing to see that THEY ARE DOING BETTER!!


----------



## cowie

Quote:


> Originally Posted by *Charcharo*
> 
> For what it is worth...
> People are forgetting that most AMD cards are competitive in DX11. The R9 380, 380X, 390, 390X and Fury Non X (and the Nano I guess) are not under their competition. Usually they win DX11 comparisons to their equivalent competitor. The Fury X was the problematic one (as well as... I guess the 370).
> 
> People are acting as if those cards are under their competitor when they ... are not. DX12 just adds a lot of performance on top. So does Async.
> And that matters.


Now, maybe with some driver improvements they have made, but the 3xx series is just a rebrand of the 2xx series, so it did take a long time for them to catch up.

And that's only fps-wise, not power use or other variables like CF profiles (anyone who knows Inspector can easily make or refine SLI profiles) or other features like the best driver support around of any company... Sure, you can't please everyone, but it's about more than just fps numbers.

But anyway, it's good that they stepped up after all these years... that is, if this is not more hype and they didn't cut a lot out of the game to make it more user/hardware friendly. The numbers are not really too good at higher res on anything right now; not that this game needs 60+, but if you are on PC you do want something better than console looks.


----------



## Charcharo

Quote:


> Originally Posted by *Defoler*
> 
> I'm still trying to figure out what is the point of buying an underpowered card to use now (in 2013), for a chance of maybe in 2016 it will run equal to its competitor counterpart?
> This shows a completely flawed logic.


Actually, it makes sense.

Remember, not everyone upgrades as often as you guys (nor should they).
Besides, AMD's 2013 GPU lineup was AS good as Nvidia's in DX11 and DX10. Not worse, so you did not really sacrifice anything.


----------



## p4inkill3r

Quote:


> Originally Posted by *Defoler*
> 
> Sorry, but "nvidia's marketing schemes" are pretty simple. And TBH, I had't seen a single nvidia "marketing scheme" for a very, very, long time.
> .


Selling a 3.5GB card as 4GB certainly is a scheme of some sort, and the millions of units sold mean that people are willing to accept the ~~lie~~ unfortunate mistake because Nvidia is great, or something.


----------



## NightAntilli

Not to mention the DX12.1 marketing...


----------



## cowie

Quote:


> Originally Posted by *p4inkill3r*
> 
> Selling a 3.5GB card as 4GB certainly is a scheme of some sort, and the millions of units sold means that people are willing to accept the lie unfortunate mistake because nvidia is great, or something.


Yes, because at the time it was head and shoulders above the 290, or even the 290X.

Not only that, it seemed more AMD fans cared about it than even us 970 owners... Have you ever OC'd a 970? It totally rocks a 290, or a 390X for that matter.
Just like with a hot bimbo, sometimes you overlook some things. But it does have 4GB, so by this time you should have it down pat: 3.5 + 0.5.


----------



## Charcharo

Quote:


> Originally Posted by *cowie*
> 
> now maybe with some driver improvement's they have made but the 3xxx is just rebrands of the 2xxx so it did take a long time for them to catch up.
> 
> and that's only fps wise not power use or other variables like cf profiles(anyone who knows inspector can easy make or refine sli profiles) or other features like the best driver support around of any company....sure you can not please everyone but its more then just fps numbers
> 
> but anyway its good that they step up after all these years...that is if this is not more hype and they cut a lot out of the game to make it more user/hardware friendly.the number are not really too good at higher rez on anything right now not that this game needs 60+ but if you are on pc you do want some better then console looks


Refreshes. Not exactly rebrands









Power usage is not that big a difference. Especially since FRTC exists and idle power usage is very good, so long-term differences will be small.
I have no idea on CF; I've only ever used SLI. Though the extra VRAM and the XDMA engine do seem like they may give AMD an edge when it works. But I can't tell you much here, as my personal knowledge of CF is not great. Only the VRAM thing is certain.

That VRAM also allows for extreme modding and great performance YEARS from now. So it is a long-term investment. I am a PC gamer; I mod my games.

I am a PC gamer because of modding, backwards compatibility and emulation. Also the cheaper long-term costs of being a PC gamer. Better visuals and performance (and the satisfaction of having fun with hardware) are JUST icing on top of the cake!


----------



## cowie

Quote:


> Originally Posted by *Charcharo*
> 
> Refreshes. Not exactly rebrands
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Power Usage is not that big a difference. Especially since FRTC exists and idle power usage is very good, so long term differences will be small.
> I have no idea on CF. Only ever used SLI. Though the more VRAM and XDMA engine does seem like it may give AMD an edge when it works. But I can not tell you much here as my personal knowledge on CF is not great. Only the VRAM thing is certain.
> 
> That VRAM thing also allows for extreme modding and great performance YEARS from now. So it is a long term investment. I am a PC Gamer, I mod my games
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I am a PC Gamer because of modding, Backwards Compatibility and Emulation. Also cheaper long term costs being a PC Gamer. Better visuals and performance (and the satisfaction of having fun with hardware) is JUST icing on top of the cake
> 
> 
> 
> 
> 
> 
> 
> !


Sorry man, refresh.

But with no improvements in power use, most of the lower power results came from just a fan. The reference 290(X)s were just a big heat bomb, and a 390(X) with the same reference fan would be the same exact thing. The same BS that NV pulled with the 9800-to-250 refresh.

So if you are a gamer, you spend upwards of 300-500 a year on games, but you will not upgrade hardware for your hobby?

I can't follow you there, because games in the last decade have really gone down the tubes... What devs/game makers do you buy from? I am 100% sure they've lied to you more than AMD/NV ever could.

I try everything, then complain or be happy. I am easy.


----------



## Serios

Quote:


> Originally Posted by *Defoler*
> 
> Did AMD provided nvidia with mantle source code?
> Did AMD provided TressFX 1.0 and 2.0 source code?
> Both of those were "open source free of charge". The answer to those to questions is no.


Actually the answer is:

Yes, yes and yes.


----------



## Serios

Quote:


> Originally Posted by *airfathaaaaa*
> 
> i dont understand what you are saying
> its crystal clear that nvidia would have never used mantle even if amd gave money to them....a company that loves closing down software lying about their cards and capabilities wont go open source so easy
> this is why vulkan came in..and the sudden love of nvidia towards opengl


Nvidia has full access to Vulkan, which was built with full access to Mantle's source code.
Khronos incorporated Mantle's best parts into Vulkan.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Serios*
> 
> Nvidia has full access to Vulkan which has full access to Mantle's source code.
> Khronos incorporated Mantle's best parts in Vulkan.


I mean Mantle itself, back when the rumor was that Nvidia asked AMD for it and they said it was a beta and stuff.


----------



## gamervivek

Computerbase has the 390X equal to the 980 Ti, while the Fury X is 25% faster.

So despite using async compute, the Fury X is not scaling well over the 390X. AMD still has work to do on optimizing for GCN3.

edit: Guru3D tested with the Crazy settings preset, and things are a little crazy.

Maybe it's DX12 problems, and not just the settings, hitting Nvidia hardware way harder.


----------



## Serios

Quote:


> Originally Posted by *airfathaaaaa*
> 
> I mean Mantle itself, back when the rumor was that Nvidia asked AMD for it and they said it was a beta and stuff.


Nvidia said right from the start that they were not interested in using Mantle; you are talking about Intel.


----------



## PlugSeven

Quote:


> Originally Posted by *gamervivek*
> 
> Computerbase has the 390X equal to the 980 Ti, while the Fury X is 25% faster.
> 
> So despite using async compute, the Fury X is not scaling well over the 390X. AMD still has work to do on optimizing for GCN3.
> 
> edit: Guru3D tested with the Crazy settings preset, and things are a little crazy.
> 
> Maybe it's DX12 problems, and not just the settings, hitting Nvidia hardware way harder.


It's the same front-end on both Fury and 390X; I bet even under DX12 the Fury cards are not fully utilising all their CUs.


----------



## Charcharo

Quote:


> Originally Posted by *cowie*
> 
> Sorry man, refresh.
> 
> But with no improvements in power use, most of the lower power results came from just a better fan. The reference 290Xs were a big heat bomb, and a 390X with the same reference fan would be the same exact thing. The same BS Nvidia pulled with the 9800-to-250 refresh.
> 
> So if you are a gamer, you spend upwards of 300-500 a year on games, but you will not upgrade hardware for your hobby?
> 
> I can't follow you there, because games over the last decade have really gone down the tubes... what devs/game makers do you buy from? I am 100% sure they have lied to you more than AMD/NV ever could.
> 
> I try everything, then complain or be happy. I am easy.


Actually the 390 and 390X achieve higher results than the 290 and 290X whilst using a tad less power on average. This means AMD has improved efficiency overall. See http://www.anandtech.com/show/9387/amd-radeon-300-series/3 for more info.

The reference 290s were bad with heat, though the 3XX series does well thanks to mature custom coolers.

I do not think I spend 500 dollars a year on games. Or leva, for that matter.
I upgrade my hardware when the time comes. Why waste hard-earned money on it when I do not need it? The old ATI 5770 plays even new games. The R9 390 should do better than it in the coming years, so it is more future-proof in a sense.

I buy games from good companies. For example, GSC, 4A or CD Projekt Red.


----------



## MadRabbit

Quote:


> Originally Posted by *Serios*
> 
> Nvidia said right from the start that they were not interested in using Mantle; you are talking about Intel.


Yep. That was Intel not nVidia.

Source

Still a crappy move from AMD if you ask me.


----------



## iLeakStuff

I don't trust Oxide. Too many close connections to AMD, and the way they have been boosting AMD with DX12 ever since we saw the first benchmarks.

This benchmark shows what AMD can do with async compute, not with DX12 in general. There are other DX12 tests Anandtech has done where Nvidia came out on top because it did lighting etc. better. So the total sum is that both have their own strengths and weaknesses in the different feature sets of DX12.


----------



## Greenland

I don't trust Oxide either; they could sabotage Nvidia performance by adding tons and tons of tessellation... oh wait.


----------



## infranoia

Quote:


> Originally Posted by *MadRabbit*
> 
> Yep. That was Intel not nVidia.
> 
> Source
> 
> Still a crappy move from AMD if you ask me.


Was it? In retrospect it's clear that AMD knew something we didn't at the time, about Mantle's future, in that they had already begun working with MS on DX12 in the XBox One. Given that, they didn't want Intel to go whole-hog on Mantle if there was no future in it as a specific API. Better Intel work with MS on DX12 once it was final. AMD did Intel a favor, as far as I can tell. Mantle was a ploy from the beginning to get DX12 and OpenGL to go low-level and play to GCN's strengths.

The API refocus was explicitly stated by AMD, by the way, when they announced Mantle's EOL in favor of DX12 and OpenGL-Next (now Vulkan).


----------



## ku4eto

Quote:


> Originally Posted by *Charcharo*
> 
> Actually the 390 and 390X achieve higher results than the 290 and 290X whilst using a tad less power on average. This means AMD has improved efficiency overall. See http://www.anandtech.com/show/9387/amd-radeon-300-series/3 for more info.
> 
> The reference 290s were bad with heat, though the 3XX series does well thanks to mature custom coolers.
> 
> I do not think I spend 500 dollars a year on games. Or leva, for that matter.
> I upgrade my hardware when the time comes. Why waste hard-earned money on it when I do not need it? The old ATI 5770 plays even new games. The R9 390 should do better than it in the coming years, so it is more future-proof in a sense.
> 
> I buy games from good companies. For example, GSC, 4A or CD Projekt Red.


I spent around $500 (900 leva) to upgrade my PC with second-hand and new parts, which I am going to use for at least 3 more years. My entire system, which I am going to use for 3+ years, costs less than a flagship GPU that some people change every year. Most of those people are on the green team, and do it either for braggin' rights or for work-related stuff. The first type are sadly prevalent.


----------



## Gedm5

Quote:


> Originally Posted by *gamervivek*
> 
> Computerbase has the 390X equal to the 980 Ti, while the Fury X is 25% faster.
> 
> So despite using async compute, the Fury X is not scaling well over the 390X. AMD still has work to do on optimizing for GCN3.
> 
> edit: Guru3D tested with the Crazy settings preset, and things are a little crazy.
> 
> Maybe it's DX12 problems, and not just the settings, hitting Nvidia hardware way harder.


I think it could be a limitation due to the Fury's 4GB frame buffer running out, but without seeing a VRAM usage test it's hard to say. With that said, I wouldn't rule out an architectural issue.


----------



## f1LL

Quote:


> Originally Posted by *gamervivek*
> 
> Computerbase has the 390X equal to the 980 Ti, while the Fury X is 25% faster.
> 
> So despite using async compute, the Fury X is not scaling well over the 390X. AMD still has work to do on optimizing for GCN3.
> 
> edit: Guru3D tested with the Crazy settings preset, and things are a little crazy.
> 
> Maybe it's DX12 problems, and not just the settings, hitting Nvidia hardware way harder.


R7 370 performing better than R9 380X is a little crazy indeed!


----------



## Charcharo

Quote:


> Originally Posted by *iLeakStuff*
> 
> Or they want a system that can play at settings other than 720p and low textures


This might surprise you, but even a 6-year-old mid-range ATI 5770 can do better than that.

So no, a 500 dollar PC can do VERY well. Above consoles even, and it has the inherent advantages of a PC (which are awesome).

And upgrading a PC with that much money? A very good choice.


----------



## keikei

Quote:


> Originally Posted by *Charcharo*
> 
> This might surprise you, but even a 6-year-old mid-range ATI 5770 can do better than that.
> 
> So no, a 500 dollar PC can do VERY well. Above consoles even, and it has the inherent advantages of a PC (which are awesome).
> 
> And upgrading a PC with that much money? A very good choice.


Due to the industry standard of platform parity, the time frame for needing to upgrade to play the latest games is becoming more and more extended.


----------



## Greenland

The 7970 struggles a lot, oh yeah:



30% faster than 960, 13% slower than the 780.


----------



## PontiacGTX

Quote:


> Originally Posted by *BeerPowered*
> 
> You think devs will do what's best for gaming while catering to the smaller market?


Xbox One and PC sharing similar CPU and GPU architectures, a similar VRAM system, and the same graphics API will make a big difference for them, since DX12 might save them time by letting them work on a single game port.
Quote:


> Originally Posted by *Mahigan*
> 
> And AMD are competitive, with their cards beating all of their intended NVIDIA price competitors under DX11, save for the Fury-X vs GTX 980 Ti.
> https://www.techpowerup.com/mobile/reviews/ASUS/GTX_980_Ti_Matrix/23.html
> 
> In recent DX11 titles, AMD graphics cards have been quite surprising.
> 
> What DX12 does is allow for:
> 
> 390 > GTX 970 by a long shot instead of a little
> 390X > GTX 980
> Fury uncontested
> Fury-X = GTX 980 Ti
> TitanX uncontested
> 
> What Async compute allows for is:
> 
> 390 > GTX 980
> 390x = GTX 980 Ti
> TitanX
> Fury uncontested
> Fury-X uncontested.
> 
> So really, DX11 or DX12, with or without async, doesn't change AMD's price competition figures for the worse. It only makes AMD GCN that much better.


The R9 Nano would be an interesting example: theoretically, at Fury X clocks, it can get 100% of that performance while being cheaper, and under DX11 and DX12 it couldn't be matched.
Quote:


> Originally Posted by *iLeakStuff*
> 
> I don't trust Oxide. Too many close connections to AMD, and the way they have been boosting AMD with DX12 ever since we saw the first benchmarks.


Then any game dev that uses the Nvidia logo and GameWorks should not be trusted either, since they use Nvidia software that is already biased just to cripple the competitor. Meanwhile, this game developer can work with Nvidia to try to fix their faults.

Quote:


> Spoiler: ExtremeTech Art
> 
> 
> 
> Is Ashes of the Singularity biased?
> Ashes of the Singularity is the first DX12 game on the market, and the performance delta between AMD and Nvidia is going to court controversy from fans of both companies. We won't know if its performance results are typical until we see more games in market. But is the game intrinsically biased to favor AMD? I think not - for multiple interlocking reasons.
> 
> First, there's the fact that Oxide shares its engine source code with both AMD and Nvidia and has invited both companies to both see and suggest changes for most of the time Ashes has been in development. The company's Reviewer's Guide includes the following:
> 
> [W]e have created a special branch where not only can vendors see our source code, but they can even submit proposed changes. That is, if they want to suggest a change our branch gives them permission to do so…
> 
> This branch is synchronized directly from our main branch so it's usually less than a week from our very latest internal main software development branch. IHVs are free to make their own builds, or test the intermediate drops that we give our QA.
> 
> Oxide also addresses the question of whether or not it optimizes for specific engines or graphics architectures directly.
> 
> Oxide primarily optimizes at an algorithmic level, not for any specific hardware. We also take care to avoid the proverbial known "glass jaws" which every hardware has. However, we do not write our code or tune for any specific GPU in mind. We find this is simply too time consuming, and we must run on a wide variety of GPUs. We believe our code is very typical of a reasonably optimized PC game.
> 
> We reached out to Dan Baker of Oxide regarding the decision to turn asynchronous compute on by default for both companies and were told the following:
> 
> "Async compute is enabled by default for all GPUs. We do not want to influence testing results by having different default setting by IHV, we recommend testing both ways, with and without async compute enabled. Oxide will choose the fastest method to default based on what is available to the public at ship time."
> 
> Second, we know that asynchronous compute takes advantages of hardware capabilities AMD has been building into its GPUs for a very long time. The HD 7970 was AMD's first card with an asynchronous compute engine and it launched in 2012. You could even argue that devoting die space and engineering effort to a feature that wouldn't be useful for four years was a bad idea, not a good one. AMD has consistently said that some of the benefits of older cards would appear in DX12, and that appears to be what's happening.
> 
> Asynchronous computing is not itself part of the DX12 specification, but it's one method of implementing a DirectX 12 multi-engine. Multi-engines are explicitly part of the DX12 specification. How these engines are implemented may well impact relative performance between AMD and Nvidia, but they're one of the advantages to using DX12 as compared with previous APIs.
> 
> Third, every bit of independent research on this topic has confirmed that AMD and Nvidia have profoundly different asynchronous compute capabilities. Nvidia's own slides illustrate this as well. Nvidia cards cannot handle asynchronous workloads the way that AMD's can, and the differences between how the two cards function when presented with these tasks can't be bridged with a few quick driver optimizations or code tweaks. Beyond3D forum member and GPU programmer Ext3h has written a guide to the differences between the two platforms - it's a work-in-progress, but it contains a significant amount of useful information.
> 
> Fourth, Nvidia PR has been silent on this topic. Questions about Maxwell and asynchronous compute have been bubbling for months; we've requested additional information on several occasions. Nvidia is historically quick to respond to either incorrect information or misunderstandings, often by making highly placed engineers or company personnel available for interview. The company has a well-deserved reputation for being proactive in these matters, but we've heard nothing through official channels.
> 
> Fifth and finally, we know that AMD GPUs have always had enormous GPU compute capabilities. Those capabilities haven't always been displayed to their best advantage for a variety of reasons, but they've always existed, waiting to be tapped. When Nvidia designed Maxwell, it prioritized rendering performance - there's a reason why the company's highest-end Tesla SKUs are still based on Kepler (aka the GTX 780 Ti / Titan Black).
> 
> It's fair to say that the Nitrous Engine's design runs better on AMD hardware - but there's no proof that the engine was designed to disadvantage Nvidia hardware, or to prevent Nvidia cards from executing workloads effectively.

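The ExtremeTech piece quoted above makes the key distinction: multi-engine is part of the DX12 spec, while async compute is one way hardware can exploit it. As a toy back-of-the-envelope model (not from the article, and deliberately simplified, with made-up function names and numbers): treat a frame as graphics work plus compute work, and assume async only helps to the extent that the graphics pass leaves shader units idle.

```python
# Toy model of DX12 multi-engine scheduling -- illustrative only, not
# real GPU behaviour. A frame consists of graphics work and compute work.

def frame_time_serial(gfx_ms, compute_ms):
    """Without async compute: the two workloads run back to back."""
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms, compute_ms, idle_fraction):
    """With async compute: compute work can hide inside the shader-array
    idle time left over by the graphics queue. idle_fraction is the share
    of the graphics pass during which shader units sit unused (0.0-1.0)."""
    hidden = min(compute_ms, gfx_ms * idle_fraction)
    return gfx_ms + compute_ms - hidden

# A GPU with lots of idle shader capacity (deep queues, many ACEs)
# hides most of the compute work; one with none gains nothing.
print(frame_time_serial(20.0, 5.0))        # 25.0 ms
print(frame_time_async(20.0, 5.0, 0.40))   # 20.0 ms -> ~20% faster
print(frame_time_async(20.0, 5.0, 0.0))    # 25.0 ms -> no gain at all
```

On this model, async gains scale with how much idle shader capacity the architecture exposes rather than being a fixed DX12 bonus, which is consistent with the thread's observation that the same feature helps GCN a lot and Maxwell little or not at all.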

----------



## iLeakStuff

Quote:


> Originally Posted by *Greenland*
> 
> The 7970 struggles a lot, oh yeah:
> 
> 
> 
> 30% faster than 960, 13% slower than the 780.


15-20% faster than the GTX 960 (source)

And that doesn't show anything. Again, there are a growing number of games where a 7970 or even a 290 would struggle today at acceptable frame rates and settings at 1440p. Obviously a 290 would do OK in the majority of games at 1080p, but I wouldn't even be happy with a 1080p display today.


----------



## james8

Why the heck did Anand decide to call it RTG instead of AMD???


----------



## Charcharo

Quote:


> Originally Posted by *iLeakStuff*
> 
> 15-20% faster than the GTX 960 (source)
> 
> And that doesn't show anything. Again, there are a growing number of games where a 7970 or even a 290 would struggle today at acceptable frame rates and settings at 1440p. Obviously a 290 would do OK in the majority of games at 1080p, but I wouldn't even be happy with a 1080p display today.


And honestly, the majority of games today are not worth an install either way... mostly just, well, drivel.
The 280X does great at 1080p and decently at 1440p. Above a console visually, with a better frame rate.


----------



## mejobloggs

Quote:


> Originally Posted by *james8*
> 
> Why the heck did Anand decide to call it RTG instead of AMD???


Yeah that confused me. I still don't know what RTG means


----------



## Ha-Nocri

http://www.amd.com/en-us/press-releases/Pages/radeon-technologies-group-2015sept09.aspx


----------



## Sleazybigfoot

Quote:


> Originally Posted by *Defoler*
> 
> *Did AMD provide Nvidia with Mantle source code?*
> *Did AMD provide TressFX 1.0 and 2.0 source code?*
> Both of those were "open source free of charge". The answer to those two questions is no.
> And that answers your talk about blocked code or opened code. Also, TressFX went open *AFTER* the last Tomb Raider came out. Why didn't AMD release it earlier to allow Nvidia to optimise for it, I wonder?
> 
> Nvidia specific visuals can be turned off. New technology and visuals are sometimes too big. Can the 280x run a 4K with ultra settings at 60fps? Why not? Will the mid range card in 2 years will be able to do it? Most likely yes. C'est la vie. They are also affecting more. When running hairworks it affects all types of hair visuals, while tressfx is only running on a single character in the game. Now do the opposite and guess what happens?
> 
> What performance boost with more visuals did we get? TressFX 3.0 running on max takes about 5-10fps of performance. On Nvidia its takes a bit more.
> 
> Yes, it is hypocrisy. When nvidia does it, it is so wrong and bad. When AMD are doing it, it is so good and revolutionary.


*Did AMD provide Nvidia with Mantle source code?*
As far as I know, Mantle never made it out of its beta state or whatever they called it; there were very few titles actually using the technology.

*Did AMD provide TressFX 1.0 and 2.0 source code?*
This is completely untrue; a while back I downloaded the source code myself to prove someone else wrong, whose claims matched yours.


----------



## Wovermars1996

Quote:


> Originally Posted by *james8*
> 
> Why the heck did Anand decide to call it RTG instead of AMD???


Quote:


> Originally Posted by *mejobloggs*
> 
> Yeah that confused me. I still don't know what RTG means


"Radeon Technologies Group", the division in AMD responsible for graphics cards.


----------



## Greenland

Quote:


> Originally Posted by *iLeakStuff*
> 
> 15-20% faster than the GTX 960 (source)
> 
> And that doesn't show anything. Again, there are a growing number of games where a 7970 or even a 290 would struggle today at acceptable frame rates and settings at 1440p. Obviously a 290 would do OK in the majority of games at 1080p, but I wouldn't even be happy with a 1080p display today.


Quote:


> Originally Posted by *iLeakStuff*
> 
> I'm saying the 7970 is getting old and outdated. Not AMD cards in general.


It's funny because you picked a benchmark from over a year ago. Try to keep up with the current date m8.


----------



## Sleazybigfoot

As for the TressFX optimization: yes, Nvidia didn't get to optimize until after release, but once they did, the performance impact on Nvidia cards was the same as on AMD cards. It ran great on GPUs from *BOTH* vendors.


----------



## doritos93

Quote:


> Originally Posted by *mejobloggs*
> 
> Yeah that confused me. I still don't know what RTG means


Radeon Technologies Group


----------



## Serandur

Nice showing from Hawaii; always thought that chip had legs (on release, the doubling of ROP count and quadrupling of ACE count over Tahiti, along with 4 GB of VRAM, was pretty cool).

Not so impressed with Fiji though. Sure, it's ~20% faster than the reference 980 Ti at 1440p here, but then again, so are aftermarket 980 Ti's...

Most GM200-owning enthusiasts really shouldn't be losing sleep over this.


----------



## magnek

Quote:


> Originally Posted by *Sleazybigfoot*
> 
> As for the TressFX optimization, yes Nvidia didn't get to optimize until after release but once they did the performance impact on Nvidia cards was the same as on AMD cards. It ran great on GPUs from *BOTH* vendors.


You know the thing that really pisses me off about the whole Gameworks/TressFX thing is how inconsistent nVidia is in their stance.

So here's a quote from Cem Cebenoyan, Director of Engineering, Developer Technology:
Quote:


> Seeking some clarification, I asked if perhaps AMD's concern was that they're not allowed to see the game's source code. *Cebenoyan says that a game's source code isn't necessary to perform driver optimization.* "Thousands of games get released, but we don't need to look at that source code," he says. "Most developers don't give you the source code. *You don't need source code of the game itself to do optimization for those games. AMD's been saying for awhile that without access to the source code it's impossible to optimize. That's crazy."*


Ok so when it comes to Gameworks and AMD wants the source code to optimize, nVidia is basically saying it's a pile of BS.

Now compare the above with what was said when TressFX ran poorly:
Quote:


> We are aware of performance and stability issues with GeForce GPUs running Tomb Raider with maximum settings. Unfortunately, NVIDIA didn't receive final game code until this past weekend which substantially decreased stability, image quality and performance over a build we were previously provided. We are working closely with Crystal Dynamics to address and resolve all game issues as quickly as possible.
> 
> *Please be advised that these issues cannot be completely resolved by an NVIDIA driver. The developer will need to make code changes on their end to fix the issues on GeForce GPUs as well.* As a result, we recommend you do not play Tomb Raider until all of the above issues have been resolved.
> 
> In the meantime, we would like to apologize to GeForce users that are not able to have a great experience playing Tomb Raider, as they have come to expect with all of their favorite PC games.


Interesting, so apparently when it's nVidia that needs to optimize for an AMD tech, they somehow suddenly can't do it on their own and need the source code and the dev's help.


----------



## infranoia

Quote:


> Originally Posted by *magnek*
> 
> Interesting, so apparently when it's nVidia that needs to optimize for an AMD tech, they somehow suddenly can't do it on their own and need the source code and the dev's help.


And in the case of AotS, they not only have the source code freely available, they made even more commits to it than AMD did. And the result? Ruh-roh.


----------



## Serandur

Quote:


> Originally Posted by *james8*
> 
> Why the heck did Anand decide to call it RTG instead of AMD???


Because the AMD name is tainted, and "RTG" represents a more independent, ATi-like division, which is more accurate, I suppose. Anandtech might just like the sound of it more for that reason (recognizing the GPU division as separate from the CPU one). I like it personally, though I still wish they just used "ATi" again instead.

Anyway, they label themselves RTG in official Polaris info slides too, so they're taking this separate name thing pretty seriously.


----------



## DeathMade

Quote:


> Originally Posted by *magnek*
> 
> You know the thing that really pisses me off about the whole Gameworks/TressFX thing is how inconsistent nVidia is in their stance.
> 
> So here's a quote from Cem Cebenoyan, Director of Engineering, Developer Technology:
> Ok so when it comes to Gameworks and AMD wants the source code to optimize, nVidia is basically saying it's a pile of BS.
> 
> Now compare the above with what was said when TressFX ran poorly:
> Interesting, so apparently when it's nVidia that needs to optimize for an AMD tech, they somehow suddenly can't do it on their own and need the source code and the dev's help.


DAMN! I didn't even realise that!


----------



## zalbard

Quote:


> Originally Posted by *magnek*
> 
> You know the thing that really pisses me off about the whole Gameworks/TressFX thing is how inconsistent nVidia is in their stance.
> 
> So here's a quote from Cem Cebenoyan, Director of Engineering, Developer Technology:
> Ok so when it comes to Gameworks and AMD wants the source code to optimize, nVidia is basically saying it's a pile of BS.
> 
> Now compare the above with what was said when TressFX ran poorly:
> Interesting, so apparently when it's nVidia that needs to optimize for an AMD tech, they somehow suddenly can't do it on their own and need the source code and the dev's help.


What he says makes perfect sense.

You don't need access to the source code of the game to optimize the driver, but you obviously also cannot rewrite the whole game in the driver - the developer has to make some changes as well. The latter has nothing to do with sharing the code at all.


----------



## Renner

Oh boy. I was aiming to migrate to a DDR4 platform by the end of this year (most likely Zen if it turns out to be good), but after seeing those latest DX12 benches on my FX... Sweet, if only it stays this way for the rest of the upcoming DX12 titles.


----------



## Dargonplay

Quote:


> Originally Posted by *iLeakStuff*
> 
> 15-20% faster than the GTX 960 (source)
> 
> And that doesn't show anything. Again, there are a growing number of games where a 7970 or even a 290 would struggle today at acceptable frame rates and settings at 1440p. Obviously a 290 would do OK in the majority of games at 1080p, but I wouldn't even be happy with a 1080p display today.


A 290 would struggle? A 290 is only 28% slower than the fastest consumer graphics card on the market, the 980 Ti. If the 290 struggles, then 28% extra performance won't make a difference; either you are right and every GPU tier is struggling, or you're absolutely wrong.

Also, the 7970 launched in 2011; that's huge considering its performance has almost doubled since launch. Nobody would expect to play at 1440p with a card launched at a time when that resolution was today's 5K. The 7970 at STOCK clocks is an outperformer at 1080p even today; you saying it struggles is a flat-out lie. And all of this is at STOCK clocks, because we can't just forget that the HD 7970 was/is one of the best overclocking cards of the last decade.

My HD 7970 could reach core clocks of 1250MHz from the 900MHz stock clock and remain absolutely stable in all scenarios with only 90mV of extra voltage. I could push it even further to 1300MHz, but I was concerned about its longevity and temps; with better cooling I'm sure you can forget that issue. The average overclock for this card was basically around 30%, give or take.

And regarding the 290: yes, it doesn't overclock as well, but saying it struggles at 1440p is ridiculous. I'm able to max out games like Battlefront at 1440p with full TSAA and maintain an average of 70 FPS at the stock 1000MHz core clock; at 1150MHz I'm reaching about 80 FPS.

Just no.
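For what it's worth, the overclock figures quoted above are easy to sanity-check. A quick sketch (the helper below is illustrative arithmetic only, not something from the thread):

```python
def oc_percent(stock_mhz, oc_mhz):
    """Overclock headroom as a percentage above the stock clock."""
    return (oc_mhz / stock_mhz - 1.0) * 100.0

# 7970: 900 MHz stock pushed to 1250 MHz
print(round(oc_percent(900, 1250), 1))   # 38.9
# 290: 1000 MHz stock pushed to 1150 MHz
print(round(oc_percent(1000, 1150), 1))  # 15.0
```

So the specific 7970 overclock described is actually closer to 39% than 30%; the "around 30%" figure presumably refers to a typical sample of the card rather than that particular chip.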


----------



## zealord

I just love Dargonplays avatar GIF way too much. It always makes me smile


----------



## magnek

Quote:


> Originally Posted by *zalbard*
> 
> What he says makes perfect sense.
> 
> You don't need access to the source code of the game to optimize the driver, but you obviously also cannot rewrite the whole game in the driver - the developer has to make some changes as well. The latter has nothing to do with sharing the code at all.


It's as simple as this. AMD claims they can't optimize Gameworks games because they don't have access to the source code. nVidia says that's a heap of rubbish and you don't need the game's source code to optimize for Gameworks games, and in fact many devs don't even give you their source code.

Ok forget about code sharing for a moment. What I really want to know then, is why nVidia needed the final build code to Tomb Raider before they could optimize TressFX? I mean given what Cem Cebenoyan said, it just appears completely contradictory.


----------



## rage fuury

Quote:


> Originally Posted by *zealord*
> 
> I just love Dargonplays avatar GIF way too much. It always makes me smile


I'm not sure you are right about that (this could not be a mere avatar, in my book...). I think Dargonplay filmed himself doing stuff and uploads it every time he posts something here...


----------



## infranoia

Quote:


> Originally Posted by *rage fuury*
> 
> I'm not sure you are right about that (this could not be a mere avatar in my book...) . I think Dargonplay filmed himself doing stuff and upload this everytime he post something here...


Cute, but no. It's even better with sound.

https://www.youtube.com/watch?v=9CS7j5I6aOc

It's interesting that the DX12 titles we expected to see by now have all stalled out. I suspect a certain ISV is making backroom deals right about now. Where are Deus Ex, Fable, ARK, even Hitman?


----------



## PostalTwinkie

Quote:


> Originally Posted by *Serandur*
> 
> Because the AMD name is tainted and "RTG" represents a more independent, ATi-like division that's more accurate I suppose. Anandtech might just like the sound of it more for that reason (recognizing the GPU division as separate from the CPU one). I like it, personally, though I still wish they just used "ATi" again instead.
> 
> Anyway, they label themselves RTG in official Polaris info slides too, so they're taking this separate name thing pretty seriously.


I imagine a large part of making it its own division, and recognizing it as such, is to prepare for the event that Zen doesn't save AMD. At that point RTG can be more easily split off from the main body of AMD itself, either in an effort to sell RTG, or to keep RTG and sell AMD.

It also puts some distance between the GPU division and the AMD name. As you said, unfortunately, "AMD" has become tainted in the gaming sector, which is a real shame, because they do have a strong offering. So they want to create a new image with Polaris.

Image is everything in business, which is why Coca-Cola spends about $2B a year on it.


----------



## Fyrwulf

Quote:


> Originally Posted by *Glottis*
> 
> this dev confirmed they partner with AMD. which is known for a long time with AMD logo on their website and oxyde promoting AMD products on various tech conferences.


Yes, but AMD doesn't implement special code that cripples their competition. They're just straight up better in DX12, so it's in their interest to promote it. If the senior people making design decisions at Nvidia weren't so stupid, this wouldn't be a problem for them.


----------



## xxdarkreap3rxx

Late to the thread, but I feel these results don't matter a whole lot when the current-gen Nvidia cards can overclock higher and get more performance out of it. Although I do like that AMD cards are being given a hefty boost; almost like a free OC going from DX11 to DX12.


----------



## Fyrwulf

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Late to the thread, but I feel these results don't matter a whole lot when the current-gen Nvidia cards can overclock higher and get more performance out of it. Although I do like that AMD cards are being given a hefty boost; almost like a free OC going from DX11 to DX12.


The only thing that has a hope in hell of catching a Fury X at this point is a 980Ti and I'm not sure it can. What's really impressive is that the 7970, bone stock, gets playable frame rates in one of the most graphically punishing games to come out since Metro. I want to see what it can do with a max stable overclock.


----------



## delboy67

Quote:


> Originally Posted by *Fyrwulf*
> 
> The only thing that has a hope in hell of catching a Fury X at this point is a 980Ti and I'm not sure it can. What's really impressive is that the 7970, bone stock, gets playable frame rates in one of the most graphically punishing games to come out since Metro. I want to see what it can do with a max stable overclock.


At 'crazy quality' 1440p the 980 Ti is getting soundly beaten by the 390X, 35 fps vs 45 fps. That's impressive! Maybe the 390X is the only other card with any hope of catching the Fury X.


----------



## Desolutional

Quote:


> Originally Posted by *delboy67*
> 
> At 'crazy quality' 1440p the 980ti is getting soundly beaten by the 390x, 35fps vs 45fps, thats impressive! Maybe the 390x is the card with any hope of catching fury x.


AotS is meh; one game is not a sufficient sample size to judge DX12 performance. Forza 6 or FH3 with DX12 is the dream.


----------



## delboy67

Quote:


> Originally Posted by *Desolutional*
> 
> AotS is meh, one game is not sufficient sample size to judge DX 12 performance. Forza 6, FH3 with DX 12 is the dream.


I haven't played it, not my type of game, but I think it's a good tech demo so far. I may buy it anyway. It shows what happens when Hawaii's theoretical teraflops become actual teraflops. If Forza comes to PC, I think my rig will look like Randy Marsh's.


----------



## Themisseble

http://www.extremetech.com/extreme/223654-instrument-error-amd-fcat-and-ashes-of-the-singularity
interesting.


----------



## raghu78

From the benchmarks one thing is clear: AMD's DX11 performance is pathetic. I hope Polaris fixes that with hardware and software (driver) improvements. The DX12 performance, though, is nothing short of amazing. If the R9 390X can match the GTX 980 Ti, it shows how capable the actual hardware is and how poor the resource utilization is in DX11. I am excited to see what Polaris can do. If Polaris can address the flaws of the older GCN cards while retaining their strengths, it will be an excellent architecture.
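The "poor resource utilization in DX11" point above is essentially about single-threaded command submission. As a rough illustration only (a toy cost model with made-up numbers, not real driver behavior), here is the difference between funnelling every draw call through one submission thread and recording DX12-style command lists on several cores in parallel:

```python
DRAW_CALL_COST = 1  # abstract CPU cost units per draw call
CORES = 4

def frame_cost_dx11(draw_calls):
    # DX11-style: one submission thread records everything serially,
    # so CPU cost grows linearly with draw-call count.
    return draw_calls * DRAW_CALL_COST

def frame_cost_dx12(draw_calls, cores=CORES):
    # DX12-style: command lists are recorded on all cores in parallel;
    # the frame is bounded by the busiest core (perfect split assumed).
    per_core = -(-draw_calls // cores)  # ceiling division
    return per_core * DRAW_CALL_COST

# frame_cost_dx11(10_000) -> 10_000 cost units
# frame_cost_dx12(10_000) ->  2_500 cost units on 4 cores
```

This is why a CPU-heavy RTS like Ashes, with tens of thousands of units, is a near best-case for the new API: the same GPU gets fed four times faster in this toy model.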


----------



## Forceman

Quote:


> Originally Posted by *Themisseble*
> 
> http://www.extremetech.com/extreme/223654-instrument-error-amd-fcat-and-ashes-of-the-singularity
> interesting.


Wonder what Microsoft is doing to "increase smoothness".
Quote:


> AMD's driver follows Microsoft's recommendations for DX12 and composites using the Desktop Windows Manager to increase smoothness and reduce tearing.


----------



## Unkzilla

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Late to the thread but I feel these results don't matter a whole lot when the current gen Nvidia cards can overclock higher and get more performance out of it. Although I do like that AMD cards are being given a hefty boost. Almost like a free OC going from 11 to 12.


My thoughts exactly. I've got a big OC on my 980 (25% over reference) without adding any voltage, so I'm in no rush to buy any AMD products at this stage. The next round of GPUs will be more interesting.

I think the gain for AMD is great, and it will push Nvidia's driver team to the limit. It will be interesting to see if any of the deficit can be made up.


----------



## gamervivek

Quote:


> Originally Posted by *f1LL*
> 
> R7 370 performing better than R9 380X is a little crazy indeed!


Yeah, GCN3 (Tonga this time) underperforming again, unless they mislabeled the graph.


----------



## Clocknut

Quote:


> Originally Posted by *raghu78*
> 
> From the benchmarks one thing is clear AMD's DX11 performance is pathetic. I hope Polaris fixes that with hardware and software (driver) improvements. DX12 performance is nothing short of amazing. If R9 390X can match GTX 980 Ti it points to how much capable the actual hardware is and how resource utilization is very poor in DX11. I am excited to see what Polaris can do. If Polaris can address the flaws of older GCN cards while retaining their strengths it would be an excellent architecture.


Well... the naysayers will always bring out their awesome "average" fps charts from those optimized AAA titles and tell you that AMD's DX11 drivers have no problem and are on par with Nvidia's DX11 (just like the one that quoted me a few pages earlier).

They forget it is the minimum fps that matters.

They also seem to forget that the vast majority of games out there are poorly threaded; some are still stuck on two cores, or even one.

If Polaris does not bring that DX11 overhead down to be on par with Nvidia's, AMD is still a no-buy for me. Ashes of the Singularity just showed how awful the AMD DX11 driver is.


----------



## p4inkill3r

Quote:


> Originally Posted by *Clocknut*
> 
> Well.....the nay sayers will always bring their awesome "average" fps chart on those optimized AAA title and telling u that AMD DX11 drivers have no problem & it is on par with Nvidia DX11. (Just like the one that quoted me a few pages earlier).
> 
> They forgot it is the minimum fps that matters.
> 
> they also seems to forgot that vast majority of the games out there are poorly threaded, some are still stuck on dual core, even single core.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> If Polaris did not bring that DX11 overhead down to on par with Nvidia's, AMD is still no buy for me. Ashes of the Singularity just showed how awful AMD DX11 driver is.


Reason #2527 you won't buy an AMD card.


----------



## raghu78

Quote:


> Originally Posted by *Clocknut*
> 
> Well.....the nay sayers will always bring their awesome "average" fps chart on those optimized AAA title and telling u that AMD DX11 drivers have no problem & it is on par with Nvidia DX11. (Just like the one that quoted me a few pages earlier).
> 
> They forgot it is the minimum fps that matters.
> 
> they also seems to forgot that vast majority of the games out there are poorly threaded, some are still stuck on dual core, even single core.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> If Polaris did not bring that DX11 overhead down to on par with Nvidia's, AMD is still no buy for me. *Ashes of the Singularity just showed how awful AMD DX11 driver is*.


yeah but AoS also shows how awful Nvidia's DX12 hardware is.


----------



## SpeedyVT

Quote:


> Originally Posted by *raghu78*
> 
> yeah but AoS also shows how awful Nvidia's DX12 hardware is.


I have doubts that even the next NVidia card will adopt DX12 as thoroughly as GCN has across its iterations.


----------



## Blameless

Quote:


> Originally Posted by *Wovermars1996*
> 
> "Radeon Technology Group" The division in AMD responsible for graphics cards


I have to keep reminding myself of this. RTG has always = radioisotope thermoelectric generator for me.
Quote:


> Originally Posted by *raghu78*
> 
> yeah but AoS also shows how awful Nvidia's DX12 hardware is.


By the time I have more than a handful of DX12 games, Maxwell 2 is going to be two or three generations old.


----------



## dragneel

Quote:


> Originally Posted by *p4inkill3r*
> 
> Reason #2527 you won't buy an AMD card.


Seems to me some people start to make up reasons not to go with AMD after a while. I can't imagine the buyer's remorse I'd have right now if I had spent nearly an extra $100 on a GTX 960 over my 280, which clocks to near-280X performance.


----------



## keikei

Quote:


> Originally Posted by *SpeedyVT*
> 
> I have doubts that even the next NVidia card will adopt DX12 as much as GCN has in it's iterations.


That's the million-dollar question, isn't it? How well will Pascal run DX12?


----------



## tweezlednutball

Just picked it up, half off on Steam this weekend. Played through one mission, pretty fun. It also runs well on my 9590 and dual Tahiti XTs.


----------



## keikei

Quote:


> Originally Posted by *tweezlednutball*
> 
> just picked it up, half off on steam this weekend. played through one mission, pretty fun. also runs well on my 9590 and dual tahiti xt's


AoS looks like a fun game, but I'm terrible at RTS games. Quantum Break (April release) will most likely be the first DX12 game I get this year.


----------



## Wovermars1996

Quote:


> Originally Posted by *keikei*
> 
> AOS looks like a fun game, but im terrible at rts games. Quantum Break (april release) will most likely be the first dx12 game i get this year.


So you're not interested in Hitman?


----------



## raghu78

Quote:


> Originally Posted by *Blameless*
> 
> By the time I have more than a handful of DX12 games, Maxwell 2 is going to be two or three generations old.


Really? So you don't expect more than a handful of DX12 games until 2018-2020? Let's see. Here are a few games coming in 2016:

https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support

HITMAN
DEUS EX: MANKIND DIVIDED
ASHES OF THE SINGULARITY
FABLE LEGENDS
GEARS OF WAR: ULTIMATE EDITION
ARK: SURVIVAL EVOLVED
QUANTUM BREAK

Microsoft is pushing Windows 10 and DX12 more aggressively than any earlier transition. A good example is Quantum Break, which Microsoft is publishing and which will only be available on DX12 and Windows 10. In fact, I would go so far as to say that DX12 gaming is the best reason to move to Windows 10, as it's such a huge leap in efficiency. I expect all major game engines to have a DX12 renderer by the end of 2016.


----------



## Glottis

Quote:


> Originally Posted by *Wovermars1996*
> 
> So you're not interested in Hitman?


Unless I missed something, Hitman is a DX11 game (with a DX12 patch coming at an unannounced date, which doesn't mean much, since we haven't gotten a single DX12 patch out of all the games that promised us one).


----------



## keikei

Quote:


> Originally Posted by *Wovermars1996*
> 
> So you're not interested in Hitman?


Not as a launch buy. I already have an SE game (ROTR), which I haven't touched, but purchased solely due to an insane deal from a fellow member. A good part of the QB allure is definitely DX12. The time mechanic in the game looks badass as well.




Quote:


> Originally Posted by *Glottis*
> 
> unless I missed something, Hitman is a DX11 game (with DX12 patch coming at an unannounced date, which doesn't mean much, since we haven't got a single DX12 patch out of all games that promised us DX12 patch).


http://wccftech.com/hitman-feature-implementation-dx12-async-compute-amd/
http://www.gamersnexus.net/news-pc/2309-amd-hitman-dx12-ace-workload-management

I believe ROTR has the _possible_ DX12 patch.


----------



## escksu

2016 will be AMD's year......


----------



## Liranan

nVidia basically forced MSpy to implement a special layer within DX12 just for themselves and their hardware still isn't up to par. Obviously this is due to Mantle having been made for GCN in the first place but shouldn't that nVidia specific layer have helped nVidia anyway?


----------



## Olivon

Quote:


> Originally Posted by *escksu*
> 
> 2016 will be AMD's year......


Or it will be AMD's end...
But 2017 will be the most important year. If AMD doesn't deliver with Zen, the firm will probably be dismantled.


----------



## Forceman

Quote:


> Originally Posted by *keikei*
> 
> Not as a launch buy. I already have an SE game (ROTR), which i havent touch, but purchased soely due to an insane deal from a fellow member. A good part of the QB allure is definitely DX12. The time mechanic in the game also looks badass as well.
> 
> 
> 
> http://wccftech.com/hitman-feature-implementation-dx12-async-compute-amd/
> http://www.gamersnexus.net/news-pc/2309-amd-hitman-dx12-ace-workload-management
> 
> I believe ROTR has the _possible_ DX12 patch.


Neither of those definitively say it is launching as DX12.


----------



## kyrie74

The next Mirror's Edge is supposed to use DX12 as well.


----------



## keikei

Quote:


> Originally Posted by *Forceman*
> 
> Neither of those definitively say it is launching as DX12.


A DX12 launch? No. Will it eventually have it? Yes. Given there's no info on an implementation date, it could be a while. QB will launch with DX12.


----------



## jologskyblues

Looks to me like things are finally going AMD's way. It took a while, but I'm guessing AMD's investment in getting GCN and the ACEs into the PS4 and XB1 is about to start paying off in the 2016-2017 time frame.

I'm starting to think Nvidia will be on the defensive from now on. I'm very interested in how they leverage the strengths of their software engineers and their market share to remedy the impending DX12/Vulkan async situation.

It would be very disappointing if NV ended up recommending its users stay in DX11 mode while AMD users enjoy DX12 to its fullest.


----------



## Forceman

Quote:


> Originally Posted by *kyrie74*
> 
> The next Mirror's Edge is supposed to use DX12 as well.


That's the one I'm looking forward to.
Quote:


> Originally Posted by *keikei*
> 
> DX12 launch, no. Will it eventually have it? Yes. Given theres no info on an implementation date, it could be a while. QB will launch with DX12.


I think it is a bad sign that the demo is DX11 only, what with it being three weeks from launch and all.

I need to look into Quantum Break. Isn't it rumored to be Windows Store only?


----------



## keikei

Quote:


> Originally Posted by *Forceman*
> 
> That's the one I'm looking forward to.
> I think it is a bad sign that the demo is DX11 only, what with it being three weeks from launch and all.
> 
> I need to look into Quantum Break. Isn't it rumored to be Windows Store only?


That was my initial impression with Hitman, but the game is episodic, so it's possible the later parts will launch fully with DX12 and the earlier parts will be patched later on. Indeed, Quantum Break on Windows 10 is a Windows Store exclusive, for now at least. That's fine by me. The harder pill to swallow will be getting Win10.


----------



## tpi2007

Quote:


> Originally Posted by *raghu78*
> 
> Microsoft is pushing Windows 10 and DX12 more aggressively than any earlier transitions. A good example is Quantum break which Microsoft is publishing and will only be available on DX12 and Windows 10. In fact I would go so far as to say that DX12 gaming is the best use case for why to move to Windows 10 as its such a huge leap in efficiency. I expect all major game engines to have a DX12 renderer by the end of 2016.


That is not really true regarding DX 12. It's been almost seven months since Windows 10 was released, and how many finished games are there with even superficial DX 12 support? There is exactly one unfinished released game: the first episode of a game called Caffeine, whose gameplay lasts around an hour.

Windows 7 had at least three titles with DX 11 support, even if superficial, at or very near the launch of the OS: BattleForge, S.T.A.L.K.E.R.: Call of Pripyat and Colin McRae: Dirt 2.

And even if you consider that DX 11 is just a superset of DX 10, then look at how it went with DX 10: a better start (four games with support after seven months) and otherwise similar to what is expected from DX 12. Windows Vista launched on the 30th of January 2007, and the first games with DX 10 support were:


Company of Heroes, patch 1.7, May 29, 2007;
Call of Juarez with patch v1.1.1.0, June 20, 2007;
Lost Planet: Extreme Condition: June 26, 2007;
Bioshock, August 21, 2007;
World in Conflict, September 18, 2007;
Fury, October 16, 2007;
The Lord of the Rings Online, "Book 11", October 24, 2007;
Hellgate: London, October 31, 2007;
Gears of War, 6 November 2007;
Crysis, November 13, 2007;
Universe at War: Earth Assault, December 10, 2007.

As to Quantum Break, it will be Windows Store only, making it one of the worst showcases for DX 12, simply because the Universal Windows Platform is gimped compared to Win32. Take the current Rise of the Tomb Raider available on the Windows Store and compare it with the Steam version, which has none of these downsides. The Store version:

a) doesn't have SLI or Crossfire support;
b) enforces borderless non-exclusive fullscreen (performance penalty);
c) and V-Sync (more input lag);
d) and thus no G-Sync or FreeSync;
e) In addition to that, no modding;
f) no launching from the .exe;
g) no overlays (Fraps, RivaTuner Statistics Server, etc);
h) and no mouse macros.


----------



## STEvil

Quote:


> Originally Posted by *Defoler*
> 
> Yet nvidia had a record selling year, again, and AMD have been losing ground, again.
> So if your claim was always right, why aren't we all having AMD cards, and why nvidia didn't suddenly disappear for having higher priced cards?
> 
> Price, is not everything. The fact that nvidia for a bit more money, sells their better GPUs, is the bottom line. No beta or alpha results are going to matter unless once the game is out, AMD will show performance gains and better results once nvidia driver team do their job.
> 
> To remind you, that ashes is a game fully optimised and developed with AMD, but not with nvidia. Meaning that of course we will get optimised drivers to AMD, and alpha and beta results will always favour AMD.
> This is also a bit hypocritical from AMD, as games developed but nvidia, aren't being constantly advertised at "look at those beta performance results! aren't they better than AMD?".


Not sure if anyone has pointed out that you're wrong yet, so I'll do so: nV has access to the same source-level code for Ashes that AMD does, as mentioned on this forum by the dev.


----------



## airfathaaaaa

The game isn't even out yet and people have already concluded it will be crippled (Quantum Break).


----------



## degenn

Quantum Break looks mediocre from what I've seen of it. They need to spend less time and money casting Hollywood talent and more on coming up with fun stories and gameplay ideas. The gameplay videos I've seen are pretty bland. If there's a game that would make me concede to a Windows 10/Windows Store exclusive, it doesn't look like it'll be Quantum Break.


----------



## Skrillex

Hard to draw anything meaningful or substantial from a single game/benchmark.

Let's wait for a few more native DX12 games to ship, then look at the performance in comparison with DX11 driving the game.

I can smell a faint whiff of AMD favouritism in this test, even if it is just that their DX11 performance is so poor it makes the DX12 delta look huge.


----------



## escksu

Quote:


> Originally Posted by *Skrillex*
> 
> Hard to draw anything meaningful/substantial from a one game/benchmark test.
> 
> Lets wait for a few more native DX12 games to ship then look at the performance in comparison with DX11 driving the game,
> 
> I can smell a faint whiff of AMD favouritism in this test, even if it is just that their DX11 performance is so poor it makes the DX12 delta jump look huge..


It's well known that AMD hardware is designed to run well on DX12.


----------



## Skrillex

Quote:


> Originally Posted by *escksu*
> 
> Its well known that AMD hardware is designed to run well on DX12.


Well you wouldn't purposefully design it to run poorly would you?

I'm sure Nvidia have been designing with DX12 in mind too.


----------



## tpi2007

Quote:


> Originally Posted by *airfathaaaaa*
> 
> the game isnt out yet and people have already concluded it will be crippled (quantum break)


What I said is entirely based on current facts.

http://www.gamespot.com/articles/microsoft-exec-explains-quantum-break-pc-release-c/1100-6434780/
Quote:


> "Quantum Break on Windows 10 is a Windows Store exclusive," Greenberg said when directly asked if the game would come to Steam.


Now, if they considerably improve the Universal Windows Platform to remove all the downsides I listed, we'll see. But for now it is what it is, as can be seen in the latest Tomb Raider game. It's not even about the game(play), which may turn out good or bad, but about the constraints of the platform it will be released on.


----------



## Tivan

Quote:


> Originally Posted by *Skrillex*
> 
> Well you wouldn't purposefully design it to run poorly would you?
> 
> I'm sure Nvidia have been designing with DX12 in mind too.


They designed for drastically different aspects of DX12, though. We'll see what ends up more performant.


----------



## Serios

Quote:


> Originally Posted by *Skrillex*
> 
> Well you wouldn't purposefully design it to run poorly would you?
> 
> I'm sure Nvidia have been designing with DX12 in mind too.


AMD won the console contracts before GCN was introduced, and now we know why.
It's funny how outdated Kepler looks in comparison to GCN.

The thing is, DX12 introduced features that take better advantage of AMD's GPU architecture.
Now we know why AMD introduced Mantle and didn't want to wait for Microsoft to come up (or not) with something similar on PC.

I think Kepler and Maxwell are very well optimized for DX11 but not so much for DX12.
Even if AMD's DX11 performance isn't better, they still keep close to Nvidia, and they take off when DX12 is involved.
The fact that Nvidia doesn't see gains from DX12 is quite disappointing.


----------



## Serios

Quote:


> Originally Posted by *Skrillex*
> 
> Hard to draw anything meaningful/substantial from a one game/benchmark test.
> 
> Lets wait for a few more native DX12 games to ship then look at the performance in comparison with DX11 driving the game,
> 
> *I can smell a faint whiff of AMD favouritism* in this test, even if it is just that their DX11 performance is so poor it makes the DX12 delta jump look huge..


Because it's a DX12 game?
Nvidia had full access to optimize their drivers for AotS, and their DX11 performance is much better; the problem is their DX12 performance.


----------



## SpeedyVT

Quote:


> Originally Posted by *Skrillex*
> 
> Well you wouldn't purposefully design it to run poorly would you?
> 
> I'm sure Nvidia have been designing with DX12 in mind too.


Not really. We've established that NVidia took a limited approach to DX12. The important DX12 features are pretty much emulated on an NVidia card, which defeats the point of pairing a slow CPU with DX12. So if you put an NVidia card in a system with more limited resources, it bottlenecks in DX12 just like in DX11. AMD does not bottleneck in DX12 on low CPU specifications.

It's quite embarrassing.
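The claim that a driver-level fallback serializes what hardware async compute overlaps can be sketched with a toy timing model (an illustration with invented numbers, not a measurement of any real GPU): hardware async fills idle shader cycles during the graphics pass with compute work, while a serialized fallback runs the two queues back to back.

```python
def serialized_time(gfx_ms, compute_ms):
    # No overlap: the compute work simply waits for the graphics pass.
    return gfx_ms + compute_ms

def async_time(gfx_ms, compute_ms, gfx_idle_fraction):
    # Hardware async compute fills idle shader cycles during the graphics
    # pass with compute work; only the leftover compute extends the frame.
    absorbed = min(compute_ms, gfx_ms * gfx_idle_fraction)
    return gfx_ms + (compute_ms - absorbed)

# A 10 ms graphics pass that leaves 30% of the ALUs idle, plus 4 ms of
# compute work:
# serialized_time(10, 4)  -> 14.0 ms
# async_time(10, 4, 0.3)  -> 11.0 ms (3 ms absorbed into idle cycles)
```

In this model a chip with no concurrent execution (idle fraction effectively unused, everything serialized) gains nothing from submitting to separate queues, which matches the flat-to-negative async numbers people are reporting for Maxwell.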


----------



## Glottis

Quote:


> Originally Posted by *SpeedyVT*
> 
> Not really. We've broken down that NVidia has a limited mindset toward DX12. Everything DX12 is pretty much emulated on an NVidia card which defeats the purpose of having a slow CPU with DX12. So if you put an NVidia card in a system with more limited resources it bottlenecks in DX12 along with DX11. AMD does not bottleneck in DX12 on low CPU specifications.
> 
> It's quite embarassing.


oh so now EVERYTHING DX12 is emulated on nvidia. wow things change by the hour here.


----------



## SpeedyVT

Quote:


> Originally Posted by *Glottis*
> 
> oh so now EVERYTHING DX12 is emulated on nvidia. wow things change by the hour here.


Well, all of the important features, though not the core rendering path. Those features matter because they give significant gains over DX11, so you won't see a huge gain from DX12 with NVidia. In the initial DX12 beta benchmarks, without all of these features enabled, NVidia pulled ahead and AMD was just behind. Now that games are using asynchronous compute and the API's other dynamic features, everything NVidia emulates is costing it performance.

HOWEVER, if you pair an NVidia GPU with an AMD GPU in DX12, for that bizarre mixed-vendor multi-GPU that DX12 can do, they seem to accelerate each other, giving larger gains than two of the same GPU.


----------



## dalastbmills

Quote:


> HOWEVER if you pair an NVidia GPU with an AMD GPU in DX12 for graphics for those bizarre dual graphics that DX12 can do! They seem to accelerate each other giving larger gains than two of the same GPUs.


I'm glad you brought this up, because I've read this whole thread and not one person has mentioned that MS/DX12 allows using BOTH NVIDIA and AMD cards. I bought a 980 Ti Classy last week and I have no buyer's remorse. If I need to pick up a 390X/Polaris to run in conjunction, so be it.


----------



## p4inkill3r

Quote:


> Originally Posted by *dalastbmills*
> 
> I'm glad you brought this up because I've read this whole thread and not one person has mentioned that MS/DX12 allows for the use of BOTH NVIDIA and AMD cards. I bought a 980ti Classy last week and I have no buyers remorse. If I need to pick up a 390x/Polaris to run in conjunction, so be it.


Congrats on your new card, but I find it hard to believe that nvidia will allow this mechanic to continue unabated throughout the lifecycle of DX12.


----------



## Glottis

Quote:


> Originally Posted by *p4inkill3r*
> 
> Congrats on your new card, but I find it hard to believe that nvidia will allow this mechanic to continue unabated throughout the lifecycle of DX12.


Your "nvidia is satan" stance is clouding your mind. It's up to game devs to implement this DX12 feature or not; it has nothing to do with nvidia.


----------



## dalastbmills

Most of what I have read is that Microsoft is implementing this; does NVIDIA have the power to push MS around?


----------



## Glottis

Quote:


> Originally Posted by *dalastbmills*
> 
> Most of what I have read is that Microsoft is implementing this; does NVIDIA have the power to push MS around?


Depends on what you mean by push around. There are only three graphics manufacturers in the world (AMD, Nvidia, and Intel), so all of them had a say in shaping DX12. You think MS closed all doors and windows and developed DX12 in a vacuum? That's not how it works.


----------



## p4inkill3r

Quote:


> Originally Posted by *Glottis*
> 
> your "nvidia is the satan" stance is clouding your mind. it's up to game devs to implement this DX12 feature or not, nothing to do with nvidia.


Let me know when you can use an AMD card for PhysX again.


----------



## Glottis

Quote:


> Originally Posted by *p4inkill3r*
> 
> Let me know when you can use an AMD card for PhysX again.


You can use PhysX with AMD cards if I'm not mistaken; it will just be simulated on the CPU instead of the GPU. But that's beyond the scope of this thread, isn't it?


----------



## rickcooperjr

Quote:


> Originally Posted by *Glottis*
> 
> Quote:
> 
> 
> 
> Originally Posted by *p4inkill3r*
> 
> Let me know when you can use an AMD card for PhysX again.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> you can use physx with AMD cards if i'm not mistaken, it will just be simulated on CPU and not GPU. but that's beyond the scope of this thread isn't it?

Yet you can't grasp the fact that AMD GPUs have been found perfectly capable of PhysX, but Nvidia blocked it: when its driver sees an AMD GPU, PhysX is forced onto the CPU.

You also used to be able to have an AMD GPU and an Nvidia GPU in the same rig and force the Nvidia GPU to do the PhysX, but again Nvidia blocked it. I believe that was a stupid move, because that person would still be running an Nvidia card; it was a dirty trick just to be a you-know-what.

I also want to state that Nvidia has tried to legally block just about all of the features that make DX12 different, and has forced many game devs to change things. Look at ARK: Survival Evolved: its DX12 patch was delayed because Nvidia threw a fit when their cards lost performance with their emulation, as opposed to a hardware solution like AMD's. And ARK: Survival Evolved is an Nvidia GameWorks title, by the way.


----------



## p4inkill3r

Quote:


> Originally Posted by *Glottis*
> 
> you can use physx with AMD cards if i'm not mistaken, it will just be simulated on CPU and not GPU. but that's beyond the scope of this thread isn't it?


You posited that nvidia would be hamstrung by MS; I'm merely stating that nvidia has demonstrated the wherewithal to turn off 'features' like this in the past.


----------



## infranoia

Quote:


> Originally Posted by *dalastbmills*
> 
> I'm glad you brought this up because I've read this whole thread and not one person has mentioned that MS/DX12 allows for the use of BOTH NVIDIA and AMD cards. I bought a 980ti Classy last week and I have no buyers remorse. If I need to pick up a 390x/Polaris to run in conjunction, so be it.


Oxide had to work very, very hard to make this happen. Explicit multi-GPU is 100% a dev call, which will limit its adoption.
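The mixed-vendor result makes more sense if you think of explicit multi-GPU as the developer hand-balancing work between two dissimilar cards. A minimal sketch of that idea only (real DX12 explicit multi-adapter involves per-adapter queues, heaps, and cross-adapter copies, none of which is modeled here): split each frame proportionally to each GPU's throughput so both slices finish together.

```python
def split_frame(rows, fps_a, fps_b):
    # Give each GPU a slice of the frame proportional to its throughput,
    # so both slices finish at roughly the same time.
    share_a = fps_a / (fps_a + fps_b)
    rows_a = round(rows * share_a)
    return rows_a, rows - rows_a

def ideal_combined_fps(fps_a, fps_b):
    # With a perfect proportional split and zero sync overhead, each card
    # renders only its share, so throughputs simply add.
    return fps_a + fps_b

# A 60 fps card paired with a 40 fps card on a 1080-row frame:
# split_frame(1080, 60, 40) -> (648, 432)
```

In practice the gain falls short of this ideal because of synchronization and the cross-adapter copy of the finished slices, which is part of why it stays a per-title developer decision rather than a driver feature.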


----------



## Glottis

Quote:


> Originally Posted by *rickcooperjr*
> 
> I want to also state Nvidia tried to legally block all of the features just about that make DX12 different they forced many game dev's to change things look at ARK survival evolved DX12 was delayed because Nvidia threw a fit because theyre cards lost performance over theyre emulation instead of hardware solution like AMD did and well ARK survival evolved is a Nvidia gameworks title by the way.


Interesting allegations. Is there anywhere I can read about that, preferably in a respectable publication?


----------



## dalastbmills

Quote:


> Originally Posted by *Glottis*
> 
> depends on what you mean by push around. there are only 3 graphics manufacturers in the world: AMD, Nvidia and Intel so all of them had a say in shaping DX12. you think MS closed all doors and windows and developed DX12 in vacuum from outside world? that's not how it works.


Well, in financial terms, MS is worth $180B compared to NVIDIA's $7.25B, so I don't think NVIDIA can pay off MS. What else could they possibly do to make MS disallow multi-brand GPUs? I've never had an ATI/AMD card, but I would love to get one IF I could use it in conjunction with my NVIDIA. I just don't like change.


----------



## rickcooperjr

Quote:


> Originally Posted by *Glottis*
> 
> Quote:
> 
> 
> 
> Originally Posted by *rickcooperjr*
> 
> I want to also state Nvidia tried to legally block all of the features just about that make DX12 different they forced many game dev's to change things look at ARK survival evolved DX12 was delayed because Nvidia threw a fit because theyre cards lost performance over theyre emulation instead of hardware solution like AMD did and well ARK survival evolved is a Nvidia gameworks title by the way.
> 
> 
> 
> Interesting allegations, is there anywhere I can read about that, preferably on respectable publication?

Well, let's see: it was ready for the limelight twice, and the devs themselves said it worked perfectly. Then all of a sudden Nvidia blew a gasket because the results for their cards went down. The patch notes once mentioned irregular performance issues with Nvidia cards, and the release was halted because of it. That makes it pretty obvious Nvidia was upset things didn't look good on their end. It has been months since, Nvidia has yet to release drivers that allow it to function, and all that info was removed from the patch notes days later. In short, Nvidia shut it down.

Here is about the only thing I can find at the moment: https://www.reddit.com/r/Amd/comments/3l15mn/dx12_ark_survival_evolved_amd_driver/

Also a bit of info to read; again, I can't find anything definitive: https://steamcommunity.com/app/346110/discussions/0/483366528921253820/

Keep in mind ARK: Survival Evolved is an Nvidia GameWorks title, so yes, Nvidia has a lot of leash to pull and choke the devs with. In short, they can put a stranglehold on them and force them to do almost anything they want. That is why I hate Nvidia GameWorks: it is proprietary, works as an incentive to devs, and puts Nvidia in charge of things. It is Nvidia's choke collar and leash.


----------



## Glottis

Quote:


> Originally Posted by *rickcooperjr*
> 
> well lets see it was ready for lime light 2x and the devs said themselfs it worked perfectly then all of a sudden Nvidia blew a gasket because the results for theyre cards went down and in the patch notes I seen it once say something about irregular performance issues with Nvidia cards and they halted its release because of it. This makes it pretty obvious Nvidia was upset things didn't look good on theyre end it has been months since and Nvidia has yet to release drivers to allow it to function and well all info was removed from the patch notes and such about this days later in short Nvidia shut it down.
> 
> here is about only thing I can find at sec https://www.reddit.com/r/Amd/comments/3l15mn/dx12_ark_survival_evolved_amd_driver/


Oh come on, man, you are grasping at straws here. They just delayed the patch; it will come. Why do you always have to see something criminal in Nvidia? By the way, Hitman is an AMD title, so why is it DX11 only with DX12 nowhere to be seen? Let me guess, the evil Nvidia is at it again?


----------



## f1LL

Quote:


> Originally Posted by *rickcooperjr*
> 
> Yet you can't grasp the facts: AMD GPUs have been found perfectly capable of PhysX, but Nvidia blocked it and forces PhysX onto the CPU whenever it sees an AMD GPU.
> 
> You also used to be able to have an AMD GPU and an Nvidia GPU in the same rig and force the Nvidia GPU to do the PhysX, but again Nvidia blocked it, which I believe was a stupid thing, because that person would still be running an Nvidia card. It is kind of a dirty move Nvidia made just to be a you know what.
> 
> I also want to state that Nvidia tried to legally block just about all of the features that make DX12 different, and they forced many game devs to change things. *Look at ARK: Survival Evolved: DX12 was delayed because Nvidia threw a fit because their cards lost performance* with their emulation instead of a hardware solution like AMD's, and ARK: Survival Evolved is an Nvidia GameWorks title, by the way.


As much as I believe this is actually true, it is still only speculation. It could just as well be something completely different causing the delay.


----------



## rickcooperjr

Quote:


> Originally Posted by *Glottis*
> 
> Quote:
> 
> 
> 
> Originally Posted by *rickcooperjr*
> 
> Well, let's see: it was ready for the limelight twice, and the devs themselves said it worked perfectly. Then all of a sudden Nvidia blew a gasket because the results for their cards went down. The patch notes at one point mentioned irregular performance issues with Nvidia cards, and the release was halted because of it. That makes it pretty obvious Nvidia was upset things didn't look good on their end. It has been months since then, and Nvidia has yet to release drivers that allow it to function; all mention of this was removed from the patch notes days later. In short, Nvidia shut it down.
> 
> Here is about the only thing I can find at the moment: https://www.reddit.com/r/Amd/comments/3l15mn/dx12_ark_survival_evolved_amd_driver/
> 
> 
> 
> Oh come on, man, you are grasping at straws here. They just delayed the patch; it will come. Why do you always have to see something criminal in Nvidia? By the way, Hitman is an AMD title, so why is it DX11 only with DX12 nowhere to be seen? Let me guess, the evil Nvidia is at it again?
Click to expand...

Well, you need to keep in mind that maybe the devs didn't want to have to fight with Nvidia. Given that Nvidia has something like 75% of the market, they have the muscle, money, and fanboys behind them to bash the devs into submission, so that may be part of why DX12 isn't currently implemented.

I also believe Nvidia has had many months to get their DX12 drivers out and working, what, a year or more now, and they have yet to do so. That again is telling of something dodgy, if you ask me.


----------



## Assirra

Quote:


> Originally Posted by *rickcooperjr*
> 
> Yet you can't grasp the facts: AMD GPUs have been found perfectly capable of PhysX, but Nvidia blocked it and forces PhysX onto the CPU whenever it sees an AMD GPU.
> 
> You also used to be able to have an AMD GPU and an Nvidia GPU in the same rig and force the Nvidia GPU to do the PhysX, but again Nvidia blocked it, which I believe was a stupid thing, because that person would still be running an Nvidia card. It is kind of a dirty move Nvidia made just to be a you know what.
> 
> I also want to state that Nvidia tried to legally block just about all of the features that make DX12 different, and they forced many game devs to change things. Look at ARK: Survival Evolved: DX12 was delayed because Nvidia threw a fit because their cards lost performance with their emulation instead of a hardware solution like AMD's, and ARK: Survival Evolved is an Nvidia GameWorks title, by the way.


You'd better have some proof for statements like that. I really doubt Nvidia cares about "open world survival game number 6000," which ran like crap anyway.


----------



## mav451

@rickcooperjr - I'm coming at this from another perspective.
When the developer is given such greater control in DX12, there is only so much that can be optimized at the driver-level.

I would like to hear this directly from Kollock, however, and for him to shed light on the correspondence with nVidia's driver team during game development.
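For what it's worth, the benefit everyone is arguing about can be shown with a toy timing model (purely illustrative; the workload numbers are made up, not measured from any GPU). If the graphics queue leaves idle gaps and a second compute queue can slot work into them, the frame finishes earlier than running the two workloads back to back:

```python
# Toy model of async compute (illustrative only; all numbers are hypothetical).
# A frame is a list of (busy_ms, idle_ms) intervals on the graphics queue.
# "Serial" appends compute after all graphics work; "async" hides compute
# inside the graphics queue's idle gaps first.

def serial_frame_time(graphics, compute_ms):
    """Frame time when compute runs only after all graphics work is done."""
    gfx_time = sum(busy + idle for busy, idle in graphics)
    return gfx_time + compute_ms

def async_frame_time(graphics, compute_ms):
    """Frame time when compute work is absorbed into graphics idle gaps."""
    gfx_time = sum(busy + idle for busy, idle in graphics)
    idle_time = sum(idle for _, idle in graphics)
    hidden = min(compute_ms, idle_time)      # work hidden inside the gaps
    return gfx_time + (compute_ms - hidden)  # only the overflow adds latency

# Hypothetical frame: three graphics bursts with idle gaps, plus 4 ms compute.
frame = [(5, 2), (6, 1), (4, 1)]
print(serial_frame_time(frame, 4.0))  # 23.0 ms
print(async_frame_time(frame, 4.0))   # 19.0 ms
```

In this sketch, a GPU whose scheduler can genuinely interleave the two queues hides the 4 ms entirely, while one that serializes them pays it in full; that matches the direction, if not the magnitude, of the gains being debated here.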


----------



## rickcooperjr

Quote:


> Originally Posted by *Assirra*
> 
> Quote:
> 
> 
> 
> Originally Posted by *rickcooperjr*
> 
> Yet you can't grasp the facts: AMD GPUs have been found perfectly capable of PhysX, but Nvidia blocked it and forces PhysX onto the CPU whenever it sees an AMD GPU.
> 
> You also used to be able to have an AMD GPU and an Nvidia GPU in the same rig and force the Nvidia GPU to do the PhysX, but again Nvidia blocked it, which I believe was a stupid thing, because that person would still be running an Nvidia card. It is kind of a dirty move Nvidia made just to be a you know what.
> 
> I also want to state that Nvidia tried to legally block just about all of the features that make DX12 different, and they forced many game devs to change things. Look at ARK: Survival Evolved: DX12 was delayed because Nvidia threw a fit because their cards lost performance with their emulation instead of a hardware solution like AMD's, and ARK: Survival Evolved is an Nvidia GameWorks title, by the way.
> 
> 
> 
> You'd better have some proof for statements like that. I really doubt Nvidia cares about "open world survival game number 6000," which ran like crap anyway.
Click to expand...

ARK: Survival Evolved was the #1 most streamed, most played, and best-selling game on Steam / Twitch / YouTube for many months, so it was no rinky-dink game. Acting like it was some blah game that didn't get Nvidia's attention is literally insane.


----------



## Assirra

Quote:


> Originally Posted by *rickcooperjr*
> 
> ARK: Survival Evolved was the #1 most streamed, most played, and best-selling game on Steam / Twitch / YouTube for many months, so it was no rinky-dink game. Acting like it was some blah game that didn't get Nvidia's attention is literally insane.


OK, I stand corrected; it seems it was more popular than I thought.
Still, though, what proof do you have that they are actively delaying it? And please, no "I got a feeling" stuff. Nvidia has a lot on its plate right now, including its new generation of GPUs in half a year. I'd rather have them not shouting before they have actually done something, compared to AMD, who has been shouting about DX12 for what now? A year or so, while getting kicked in the nuts and losing market share along the way.
Do not forsake the present for the future.


----------



## Potatolisk

I wouldn't put it past Nvidia to force a developer not to use a better API or other features if they thought it would give them a big enough advantage. But with Ark it's most likely the developer's fault. It's over their head, and the console version still has pretty bad performance.


----------



## DeathMade

Quote:


> Originally Posted by *f1LL*
> 
> As much as I believe this is actually true, it is still only speculation. It could just as well be something completely different causing the delay.


I'd like to believe that, but it is most likely not true. Ark is very trashy from a technical standpoint. A friend of mine works with UE4, and he said that Ark still runs on UE4's basic FPS blueprint. The devs most likely thought they could switch APIs in UE4 and call it a day, but 1) it doesn't work that way, and 2) DX12 in UE4 is a DX11 wrapper at the moment.

I wouldn't be surprised if Nvidia left Ark completely just to stay out of it.


----------



## Shogon

Quote:


> Originally Posted by *Serios*
> 
> AMD won the console contracts before GCN was introduced and now we know why.
> It's funny how outdated Kepler looks in comparison to GCN.


Yet with those millions of units sold and total console dominance, they have yet to post a profitable quarter.

It's also unfortunate that it's taken this long for GCN to really shine. Things would be better today if AMD had had this performance back then; the 680 would never have been priced the way it was.

Quote:


> Originally Posted by *dalastbmills*
> 
> I'm glad you brought this up because I've read this whole thread and not one person has mentioned that MS/DX12 allows for the use of BOTH NVIDIA and AMD cards. I bought a 980ti Classy last week and I have no buyers remorse. If I need to pick up a 390x/Polaris to run in conjunction, so be it.


Maybe I'll get a Nano if this thing pans out. I enjoy this game quite a bit; I just wish more people would buy it, instead of using it as a benchmark for one company or another.

Quote:


> Originally Posted by *Glottis*
> 
> you can use physx with AMD cards if i'm not mistaken, it will just be simulated on CPU and not GPU. but that's beyond the scope of this thread isn't it?


Goal posts always change when bias clouds thought. You get used to his stance after a while.


----------



## superstition222

Anandtech is always amusing. No 390 or 390X in the charts?

L - O - L

Not having the 390X in there is beyond unprofessional.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *raghu78*
> 
> Really? So you don't expect more than a handful of DX12 games until 2018-2020. Let's see; here are a few games in 2016:
> 
> https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support
> 
> HITMAN
> DEUS EX: MANKIND DIVIDED
> ASHES OF THE SINGULARITY
> FABLE LEGENDS
> GEARS OF WAR: ULTIMATE EDITION
> ARK: SURVIVAL EVOLVED
> QUANTUM BREAK
> 
> Microsoft is pushing Windows 10 and DX12 more aggressively than any earlier transition. A good example is Quantum Break, which Microsoft is publishing and which will only be available on DX12 and Windows 10. In fact, I would go so far as to say that DX12 gaming is the best use case for moving to Windows 10, as it's such a huge leap in efficiency. I expect all major game engines to have a DX12 renderer by the end of 2016.


That's still just a handful of games for the year. I don't agree that it will be 2 or 3 generations old, but I can see Pascal being launched before DX12 has gained traction. Then it won't really matter how poorly the 980 Ti performs.


----------



## PontiacGTX

Quote:


> Originally Posted by *Skrillex*
> 
> Well you wouldn't purposefully design it to run poorly would you?
> 
> I'm sure Nvidia have been designing with DX12 in mind too.


Kepler indeed isn't built for it... Kepler cards don't even support the same feature level on DX12 or the same resource binding tier... instead Nvidia released some drivers to improve a pretty limited API (DX11).
Quote:


> Originally Posted by *Potatolisk*
> 
> I wouldn't put it past Nvidia to force a developer not to use a better API or other features if they thought it would give them a big enough advantage. But with Ark it's most likely the developer's fault. It's over their head, and the console version still has pretty bad performance.


http://www.dsogaming.com/news/oxide-developer-nvidia-was-putting-pressure-on-us-to-disable-certain-settings-in-the-benchmark/


----------



## superstition222

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I can see Pascal being launched before DX12 has gained traction. Then it won't really matter at all how poorly the 980 Ti performs.


Anyone who is smart, though, will wait until Polaris is also out to avoid paying the early adopter tax on top of Nvidia's inflated pricing.


----------



## Assirra

Quote:


> Originally Posted by *superstition222*
> 
> Anyone who is smart, though, will wait until Polaris is also out to avoid paying the early adopter tax on top of Nvidia's inflated pricing.


Yeah, because Nvidia is the only one with inflated prices.
Do you really think AMD will lower general pricing if they stomp this generation?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *superstition222*
> 
> Anyone who is smart, though, will wait until Polaris is also out to avoid paying the early adopter tax on top of Nvidia's inflated pricing.


For the Titan, sure. But for the other cards, maybe not. Did waiting 2 weeks for the Fury X make the 980 Ti cheaper afterwards? AFAIK the 980 Ti is still about the same price.


----------



## superstition222

Quote:


> Originally Posted by *Assirra*
> 
> Yeah, because Nvidia is the only one with inflated prices.
> Do you really think AMD will lower general pricing if they stomp this generation?


You missed the point.

The point is that Nvidia will have to drop prices when Polaris is in the market, making that the only intelligent time to buy a next-gen GPU - or a bit later once the Polaris early adopter pricing is over.


----------



## superstition222

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> For the Titan, sure. But for the other cards, maybe not. Did waiting 2 weeks for the Fury X make the 980 Ti cheaper afterwards? AFAIK the 980 Ti is still about the same price.


Nvidia won't have the luxury of the lead they have with Maxwell when it's Pascal vs. Polaris.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *superstition222*
> 
> Nvidia won't have the luxury of the lead they have with Maxwell when it's Pascal vs. Polaris.


I'll believe it when I see it.


----------



## superstition222

Quote:


> Originally Posted by *PontiacGTX*


Clearly not an Anandtech chart because the top-performing Hawaii card isn't purposefully excluded in favor of one from 2013 that is probably throttling because they used the stock blower.


----------



## Klocek001

Quote:


> Originally Posted by *superstition222*
> 
> Clearly not an Anandtech chart because the top-performing Hawaii card isn't purposefully excluded in favor of one from 2013 that is probably throttling because they used the stock blower.


Are they using non-reference 980s and 980 Tis, Sherlock?


----------



## superstition222

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I'll believe it when I see it.


Like the way the 390X in that chart I just quoted, a card whose chip was first released in 2013, isn't far behind the vastly more expensive 980 Ti?

Nvidia isn't going to have the advantage of DX11's lack of threading and so on to rest on next time around.


----------



## superstition222

Quote:


> Originally Posted by *Klocek001*
> 
> Are they using non-reference 980s and 980 Tis, Sherlock?


It's 2016, Mr. Holmes.

In 2016 there are tons of 390X cards, none of which Anandtech used, because of their typical need to show AMD as less competitive.

And if you want to see how the vast majority of Hawaii cards perform, you use at least a two-fan design. If you want reference clocks, then run at reference clocks. The point of the review is to review chip performance, not cooler design. But that is a professional way of looking at things, not a fanboy's, which is why Anandtech does it differently.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *superstition222*
> 
> Like the way the 390X in that chart I just quoted, a card whose chip was first released in 2013, isn't far behind the vastly more expensive 980 Ti?
> 
> Nvidia isn't going to have the advantage of DX11's lack of threading and so on to rest on next time around.


Yeah, AMD's cards are built to take advantage of those features. Too bad they're 3 years too early to the game. DX9 and DX11 are what's out right now, and look how poorly AMD performs in comparison. Again, like I've said, by the time DX12 has gained traction it won't matter, as the 980 Ti will be outdated.

Edit: AMD consistently fails to predict the market in both its CPU and GPU divisions.


----------



## AmericanLoco

Quote:


> Originally Posted by *superstition222*
> 
> It's 2016, Mr. Holmes.
> 
> In 2016 there are tons of 390X cards, none of which Anandtech used, because of their typical need to show AMD as less competitive.
> 
> And if you want to see how the vast majority of Hawaii cards perform, you use at least a two-fan design. If you want reference clocks, then run at reference clocks. The point of the review is to review chip performance, not cooler design. But that is a professional way of looking at things, not a fanboy's, which is why Anandtech does it differently.


...and I bet you'd cry bloody murder if they used a non-reference GTX 980 that was able to boost higher/longer thanks to a better cooler...


----------



## PontiacGTX

Quote:


> Originally Posted by *superstition222*
> 
> Clearly not an Anandtech chart because the top-performing Hawaii card isn't purposefully excluded in favor of one from 2013 that is probably throttling because they used the stock blower.


Similar results to Guru3d's over at PCGH.


----------



## superstition222

Quote:


> Originally Posted by *AmericanLoco*
> 
> ...and I bet you'd cry bloody murder if they used a non-reference GTX980 that was able to turbo higher/longer thanks to a better cooler...


Cards should be tested with two-fan, dual-slot coolers at reference clocks so that throttling isn't an issue. Obviously this matters more with a 290X, particularly since die quality in 2013 was below what it is now and AMD made the poor decision to use a subpar cooler (as Nvidia did with the 480), a cooler the vast majority of cards on the market do not use.

So, no. You are not correct with your ad hominem attempt.


----------



## Charcharo

Quote:


> Originally Posted by *Olivon*
> 
> Or it will be AMD's end...
> But 2017 will be the most important year. If AMD don't deliver with Zen, firm will probably be dismantled.


And PC gaming will go even more down the drain that way.

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> That's still just a handful of games for the year. I don't agree that it will be 2 or 3 generations old but I can see Pascal being launched before DX12 has gained traction. Then it won't really matter at all how poorly the 980 Ti performs.


I am pretty sure it matters to the 90+% of people who do not upgrade top-of-the-line GPUs (or even not-quite-top-of-the-line ones) every single time something new arrives.
And it will matter to some of the 980 Ti owners as well...


----------



## superstition222

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yeah, AMD's cards are built to take advantage of those features. Too bad they're 3 years too early to the game. DX9 and DX11 are what's out right now, and look how poorly AMD performs in comparison. Again, like I've said, by the time DX12 has gained traction it won't matter, as the 980 Ti will be outdated.
> 
> Edit: AMD consistently fails to predict the market in both its CPU and GPU divisions.


Too bad for Kepler owners? Hawaii buyers are getting quite a bit of longevity out of their cards, apparently.

If you have unlimited cash and can just replace that overpriced 980 Ti, maybe. As for new buyers, they should wait until both next-gen card platforms are on the market for the best pricing.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Charcharo*
> 
> I am pretty sure it matters to the 90+% of people who do not upgrade top-of-the-line GPUs (or even not-quite-top-of-the-line ones) every single time something new arrives.


I think the only people that haven't upgraded their top of the line card to the newest top of the line card are just waiting for the end of the 28nm GPU era (which this will be).


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *superstition222*
> 
> Too bad for Kepler owners? Hawaii buyers are getting quite a bit of longevity out of their cards, apparently.
> 
> If you have unlimited cash and can just replace that overpriced 980 Ti, maybe. As for new buyers, they should wait until both next-gen card platforms are on the market for the best pricing.


http://www.techpowerup.com/reviews/ASUS/GTX_980_Ti_Matrix/23.html

Looks like the 780 Ti still holds up pretty well. 3 GB is a bit lacking, though, but then again so is 4 GB.


----------



## Charcharo

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I think the only people that haven't upgraded their top of the line card to the newest top of the line card are just waiting for the end of the 28nm GPU era (which this will be).


You do realize that we tech enthusiasts (and even here many people are still on 680s, 780s, 780 Tis, 290Xes and 7970s) are a small, tiny minority?

I mean no disrespect, and I'm not dissing people who do upgrade like that. I see it as misusing money, but if you have it and it gives you joy, then do it. *More power to you*, especially since a lot of those same people are also quite knowledgeable about hardware. But... some sort of reality check must happen.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Charcharo*
> 
> You do realize that we tech enthusiasts (and even here many people are still on 680s, 780s, 780 Tis, 290Xes and 7970s) are a small, tiny minority?
> 
> I mean no disrespect, and I'm not dissing people who do upgrade like that. I see it as misusing money, but if you have it and it gives you joy, then do it. *More power to you*, especially since a lot of those same people are also quite knowledgeable about hardware. But... some sort of reality check must happen.


I do and I've always believed that hanging onto cards for multiple generations is actually worse than just upgrading a top of the line card each generation.


----------



## Charcharo

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I do and I've always believed that hanging onto cards for multiple generations is actually worse than just upgrading a top of the line card each generation.


You are free to believe whatever you wish, mate.
Unless you can sell them at high prices... I do not see how that could actually be the case. And this is not the landscape I am used to seeing (admittedly, in my poor but PC-gaming-loving country).

I will try it for once with Polaris. However, I expect it to be a bad experiment for me. Financially, that is.


----------



## superstition222

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> http://www.techpowerup.com/reviews/ASUS/GTX_980_Ti_Matrix/23.html
> 
> Looks like the 780 Ti still holds up pretty well. 3 GB is a bit lacking though but then again so is 4 GB.


DX11 tests. The context of this topic is "DX12 and asynchronous shading".


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Charcharo*
> 
> You are free to believe whatever you wish, mate.
> Unless you can sell them at high prices... I do not see how that could actually be the case. And this is not the landscape I am used to seeing (admittedly, in my poor but PC-gaming-loving country).
> 
> I will try it for once with Polaris. However, I expect it to be a bad experiment for me. Financially, that is.


Really? Say you bought a 780 for $650 in May 2013, or a 680 for $500 in March 2012. In September 2014 the $330 GTX 970 launches and crushes both of those. Now you have to sell your card at bottom-of-the-barrel prices, and there aren't many people who want to buy a card that old. GPU performance increases pretty drastically each generation compared to CPU performance, so splurging on a top card to hold onto for generations just does not make sense when you can buy mid-range and most likely upgrade each generation.


----------



## NightAntilli

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I do and I've always believed that hanging onto cards for multiple generations is actually worse than just upgrading a top of the line card each generation.


That is only the case when you buy nVidia.


----------



## rickcooperjr

Quote:


> Originally Posted by *NightAntilli*
> 
> Quote:
> 
> 
> 
> Originally Posted by *xxdarkreap3rxx*
> 
> I do and I've always believed that hanging onto cards for multiple generations is actually worse than just upgrading a top of the line card each generation.
> 
> 
> 
> That is only the case when you buy nVidia.
Click to expand...

Yes, I agree: AMD cards age well, while Nvidia cards go down the tubes as soon as the next generation releases. AMD cards also hold their value better, and that is a fact, because AMD cards get better with age: the HD 7970 has gained a whopping 30% or more performance since release, and the same goes for the 290X and so on. Can you say that about a GTX 980 or 780 Ti? No; they get maybe 5% more performance and are all but abandoned when the next generation releases on Nvidia's side. AMD actively supports its cards for many generations with driver and game optimizations, but Nvidia does not.

This is why the AMD R9 290 / 290X and 390 / 390X have gained so much performance, have climbed the ranks past Nvidia cards, and are often hitting above their belt in unbiased games.


----------



## NightAntilli

Look at the latest SteamVR benchmark. With the exception of a few overclocked GTX 780 Ti cards, most of Nvidia’s high-end GTX 700 series graphics cards couldn’t reach the “ready” status. GeForce GTX Titan, 780 Ti and 780 cards that were selling for $500, $700 and even $1000 just a couple of years ago can’t keep up with the R9 290X and 290... So yeah...


----------



## Charcharo

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Really? Say you bought a 780 for $650 in May 2013, or a 680 for $500 in March 2012. In September 2014 the $330 GTX 970 launches and crushes both of those. Now you have to sell your card at bottom-of-the-barrel prices, and there aren't many people who want to buy a card that old. GPU performance increases pretty drastically each generation compared to CPU performance, so splurging on a top card to hold onto for generations just does not make sense when you can buy mid-range and most likely upgrade each generation.


The costs are much higher in Bulgaria, so I would be paying price premiums for cards with terrible price/performance metrics.

I give my old GTX 760 a lot of crap, but to be fair it still games REALLY well at 1080p. It is always above a console in IQ or in FPS (or both), while keeping the heavy-hitting pros of PC gaming.

And the old 5770 lasted me 6 years of playing even new AAA games as well (too bad most AAA games are bad, though).


----------



## f1LL

Quote:


> Originally Posted by *rickcooperjr*
> 
> Yes, I agree: AMD cards age well, while Nvidia cards go down the tubes as soon as the next generation releases. AMD cards also hold their value better, and that is a fact, because AMD cards get better with age: the HD 7970 has gained a whopping 30% or more performance since release, and the same goes for the 290X and so on. Can you say that about a GTX 980 or 780 Ti? No; they get maybe 5% more performance and are all but abandoned when the next generation releases on Nvidia's side. AMD actively supports its cards for many generations with driver and game optimizations, but Nvidia does not.
> 
> This is why the AMD R9 290 / 290X and 390 / 390X have gained so much performance, have climbed the ranks past Nvidia cards, and are often hitting above their belt in unbiased games.


I don't think better or worse support is the cause of this phenomenon. My interpretation is that AMD tries to build for future technology while Nvidia tries to max out performance here and now. My guess is that AMD cards are usually not utilized to their full potential at release because the hardware is optimized for things to come, while Nvidia cards are already more or less min-maxed at launch.


----------



## spyshagg

It's not like every AMD GCN card is a bad performer at release.

Even with all the competition shenanigans, they were still fast, they were still cheaper, and their longevity is proving insane.

...and if we take DX12 into account, it's just ludicrous.

The gamer who bought the i5 2500K at launch in 2011 and the 290X in 2013 must be laughing their socks off. As a PC enthusiast, not recognizing these events is simply unbecoming. All the rest is salt.


----------



## spyshagg

Not only those 2013 buyers, but also those who (like me) bought two 290Xs in 2015 for $240 each. I'm entering the VR era with two golden shoes that cost me peanuts.

It's crazy not to love AMD after deals like these.


----------



## f1LL

Quote:


> Originally Posted by *spyshagg*
> 
> It's not like every AMD GCN card is a bad performer at release.
> 
> Even with all the competition shenanigans, they were still fast, they were still cheaper, and their longevity is proving insane.
> 
> ...and if we take DX12 into account, it's just ludicrous.
> 
> The gamer who bought the i5 2500K at launch in 2011 and the 290X in 2013 must be laughing their socks off. As a PC enthusiast, not recognizing these events is simply unbecoming. All the rest is salt.


Absolutely agree. The sheer number of "bad DirectX 11 performance" comments alone doesn't make it true.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Charcharo*
> 
> The costs are much higher in Bulgaria, so I would be paying price premiums for cards with terrible price/performance metrics.
> 
> I give my old GTX 760 a lot of crap, but to be fair it still games REALLY well at 1080p. It is always above a console in IQ or in FPS (or both), while keeping the heavy-hitting pros of PC gaming.
> 
> And the old 5770 lasted me 6 years of playing even new AAA games as well (too bad most AAA games are bad, though).


Right, so like I said, if you can't afford to upgrade your top-of-the-line card each generation, it makes more sense to go with a mid-range card for the better price/performance ratio.
Quote:


> Originally Posted by *superstition222*
> 
> DX11 tests. The context of this topic is "DX12 and asynchronous shading".


Sorry, I forgot about all the DX12 games that AMD users are currently enjoying.

The point is that people here are comparing AMD cards like the 290X/390X to the 980 Ti in DX12, when the AMD cards were made to better utilize those features. AMD tries (and fails) to predict the market in both its CPU and GPU divisions. They are their own downfall, and they don't even know how to play the market. As other posters have commented, "[Nvidia] get 5% or so advancement in performance and are all but abandoned when next gen releases on Nvidia side". That's how you play the game: like when Apple releases a new phone and a new OS that runs like crap on older phones, it gives people more incentive to upgrade. I'm sure that's exactly what AMD wants: users keeping their old cards and not buying new ones from them.

Edit:
Quote:


> Originally Posted by *f1LL*
> 
> I don't think better or worse support is the cause of this phenomenon. My interpretation is that AMD tries to build for future technology while Nvidia tries to max out performance here and now. My guess is that AMD cards are usually not utilized to their full potential at release because the hardware is optimized for things to come, while Nvidia cards are already more or less min-maxed at launch.


It's the same with their CPUs being beasts at anything that can utilize 6-8 cores. Unfortunately, not a whole lot of applications utilize that many, so for real-world usage it ends up just being a slow CPU that gets eaten by Intel's offerings.


----------



## Tivan

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Sorry, I forgot about all the DX12 games that AMD users are currently enjoying.


Hey, I'm enjoying a wide range of DX9 games here; that API doesn't even have the multithreaded command submission that pushes Nvidia ahead a bit in some DX11 scenarios.


----------



## Assirra

Quote:


> Originally Posted by *Tivan*
> 
> Hey, I'm enjoying a wide range of DX9 games here; that API doesn't even have the multithreaded command submission that pushes Nvidia ahead a bit in some DX11 scenarios.


So you get 200+ frames instead of just 200?


----------



## Blameless

Quote:


> Originally Posted by *raghu78*
> 
> really. so you don't expect more than a handful of DX12 games till 2018-2020.


_I_ don't expect to have more than a handful of DX12 titles until 2018 or so, perhaps even later.

The only 2016 game that has had DX12 support announced that I am at all inclined to consider at this point is the new _Deus Ex_. I don't know what 2017 will bring, but I'll probably be running a Pascal or Polaris part, if not their successors, before I drop any money on any DX12 titles at all.

Even if I were less picky about the games I play, titles that only support DX12, and that run well on current architectures at the settings I'd want to use, are going to be very, very few.


----------



## kyrie74

https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support

Here you go, I think that those are more than a handful of games by 2018. They're all supposed to be out before the end of the year, and that list doesn't even include FFXV which should have DX12 as well.


----------



## Forceman

Quote:


> Originally Posted by *kyrie74*
> 
> https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support
> 
> Here you go, I think that those are more than a handful of games by 2018. They're all supposed to be out before the end of the year, and that list doesn't even include FFXV which should have DX12 as well.


That list includes Just Cause 3 and RotTR, so I don't know how much I'd trust it.


----------



## kyrie74

Quote:


> Originally Posted by *Forceman*
> 
> That list includes Just Cause 3 and RotTR, so I don't know how much I'd trust it.


According to the list, the Just Cause 3 DirectX 12 patch is going to be presented at GDC 2016.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *kyrie74*
> 
> https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support
> 
> Here you go, I think that those are more than a handful of games by 2018. They're all supposed to be out before the end of the year, and that list doesn't even include FFXV which should have DX12 as well.


lol

"Releasing with expansion"
*Alpha*
*Indie*
*Not 12*
*Beta*
*Beta*
*Upcoming patch*
*Upcoming patch*
*Upcoming patch*
*Not implemented*

Upcoming games where 2 are "Windows Store" ones = junk with no vsync, fullscreen, etc.


----------



## Charcharo

Most video games people play nowadays are basically absolute junk. Others are released broken as all hell (Bugout 4). Honestly, those things are terrible but my elitist self does not see the difference...


----------



## Pro3ootector

Nvidia will just use this as part of their marketing to drive sales of new cards.


----------



## airfathaaaaa

well that list has so many nvidia sponsored games that we already know they're not gonna get dx12 updates anytime soon..


----------



## Shogon

http://store.steampowered.com/app/228880/

50% off, buy it now!


----------



## Pro3ootector

But it shows that AMD cards are very good. With this they gain a whole new level of quality.


----------



## clao

cant believe I read thru all 33 pages and all I can see is AMD sucks because it cant do this or Nvidia sucks because it cant do this. Who cares I've been playing at 30fps most my life and i still can not tell the freaking difference between 30, 60, or 100fps

I must be as blind as a snake


----------



## PontiacGTX

Quote:


> Originally Posted by *clao*
> 
> cant believe I read thru all 33 pages and all I can see is AMD sucks because it cant do this or Nvidia sucks because it cant do this. Who cares I've been playing at 30fps most my life and i still can not tell the freaking difference between 30, 60, or 100fps
> 
> I must be as blind as a snake


don't use a 30Hz monitor


----------



## inedenimadam

Quote:


> Originally Posted by *clao*
> 
> I must be as blind as a snake


must be


----------



## SpeedyVT

Inconsistent frames are worse than 30 fps.


----------



## Pro3ootector

Article from Seekingalpha.

_The true gist of this article is that at least when it comes to GPUs, the miracle AMD needed might have arrived.

AMD had long sustained that with DirectX 12 (which is the version that ships with Windows 10), its cards would be more competitive. Customers, however, had no way of knowing this. Indeed, if anything AMD seemed to be overplaying its cards yet again, for as it unveiled its Fury X cards they turned out to apparently once again be slower than already existing nVidia cards (GTX 980 Ti). For instance, AMD provided encouraging benchmarks.

This, alas, then seemingly didn't materialize fully. This once again disappointed the market and put into doubt AMD's ability to recover the lost GPU share.

So what hope remained? The thing here is that the comparisons showing AMD's latest struggling to match the older nVidia competitor were, even if run on Windows 10, still designed for prior DirectX implementations. They thus didn't fully utilize DirectX 12's ability to execute several GPU commands asynchronously, which would take much greater advantage of a GPU's true capacity.

Even now, no game on the market has yet implemented such ability. But as it turns out, the first benchmarks using such feature have become available in the last few days, using a game that's being readied to take advantage of it ("Ashes of the Singularity"). And therein the miracle saw the light.

It turns out that AMD's cards, and not just its latest, do outperform nVidia's once this feature is used. And what's more, they outperform nVidia on account of a structural, hardware-determined, advantage. That is, nVidia can't recoup the difference through driver changes (though it can minimize the disadvantage). Thus, it will take a whole new nVidia generation to bridge this gap, and it's not certain that nVidia will deliver it - whereas it's a near-certainty that AMD will keep this advantage in its next-generation architecture.

While existing games don't utilize this feature, new ones quickly will. Moreover, the gaming community will quickly take notice of this difference. As such, it's likely that until nVidia bridges the gap, AMD will start closing the GPU market share gulf.

This fact thus becomes a possible catalyst favoring AMD going forward. The impact should start right away, as the gaming community becomes aware of AMD's intrinsic advantage when it comes to games fully utilizing DirectX 12._

http://seekingalpha.com/article/3934656-amd-gets-1-2-miracles-needs
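The "ability to execute several GPU commands asynchronously" the article describes comes down to overlapping compute work with graphics work instead of running them back to back. A toy back-of-the-envelope model of why that can help (the numbers and the overlap factor are purely illustrative assumptions, not measurements of any real GPU):

```python
# Toy model of the async-compute benefit: a frame has a graphics workload
# and a compute workload (lighting, post-processing, etc.). On a single
# queue they run back to back; with async queues part of the compute work
# can hide inside shader-array idle time during the graphics pass.
# The 0.7 overlap factor below is a made-up illustration.

def frame_time_serial(graphics_ms, compute_ms):
    """One queue: the costs are simply additive."""
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_ms, overlap=0.7):
    """Separate graphics + compute queues: a fraction of the compute
    work overlaps with graphics (overlap=1.0 is perfect hiding,
    overlap=0.0 degenerates to the serial case)."""
    hidden = compute_ms * overlap
    return graphics_ms + (compute_ms - hidden)

serial = frame_time_serial(10.0, 4.0)      # 14 ms per frame
overlapped = frame_time_async(10.0, 4.0)   # ~11.2 ms per frame
gain = (serial - overlapped) / serial      # ~20% frame-time reduction
```

Whether a given architecture realizes any of that overlap in practice is exactly what the hardware-scheduler debate in this thread is about.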


----------



## PlugSeven

Quote:


> Originally Posted by *Pro3ootector*
> 
> Article from Seekingalpha.
> 
> http://seekingalpha.com/article/3934656-amd-gets-1-2-miracles-needs


The article grossly underestimates fanboyism and brand loyalty, nvidians will buy nvidia regardless. Just watch as all manner of contrived metrics start to pop up to justify the geforce purchase.


----------



## Charcharo

Quote:


> Originally Posted by *PlugSeven*
> 
> The article grossly underestimates fanboyism and brand loyalty, nvidians will buy nvidia regardless. Just watch as all manner of contrived metrics start to pop up to justify the geforce purchase.


Unfortunately (and I hope we are not correct), I do think you have it nailed.

It does not matter what you do. Brand loyalty is above it.


----------



## Blameless

Takes a long time for brand loyalty to shift/die. Best to never develop such loyalties in the first place.

I know plenty of individuals who will buy NVIDIA no matter what they do, and many of these are the same people who refused to transition from AMD to Intel CPUs until very recently, despite Intel having an incontrovertible edge in the market segments they were interested in since 2006.

Anyway, I'm not sure Seeking Alpha is a particularly good source for tech information. Yes, if AMD holds on to its DX12 advantage for the next generation and beyond, market share could gradually switch back (people will eventually start buying the better value, even if they had a brand loyalty elsewhere), but that's a pretty big if at this point. Maybe enough for market speculators though.


----------



## provost

Quote:


> Originally Posted by *Blameless*
> 
> Takes a long time for brand loyalty to shift/die. Best to never develop such loyalties in the first place.
> 
> I know plenty of individuals who will buy NVIDIA no matter what they do, and many of these are the same people who refused to transition from AMD to Intel CPUs until very recently, despite Intel having an incontrovertible edge in the market segments they were interested in since 2006.
> 
> Anyway, I'm not sure Seeking Alpha is a particularly good source for tech information. Yes, if AMD holds on to it's DX12 advantage for the next generation and beyond, market share could gradually switch back (people will eventually start buying the better value, even if they had a brand loyalty elsewhere), but that's a pretty big if at this point. Maybe enough for market speculators though.


You know, of course, I disagree. Nvidia has customers because it was able to provide hardware, combined with above average software support, relative to its competition. AMD has caught up on the software support side, except for maybe day zero drivers for Nvidia sponsored games. Well, the day zero gamers are turning out to be beta testers for the poorly ported PC games, and this will continue to give a lot of people pause before deciding to buy on console hyped AAA titles. What gives AMD edge for some consumers is the longevity of its optimization support and arch advantage over Nvidia vis-à-vis Dx12. I know this is how I am looking at it. I don't believe that you can make the same comparison using Intel/AMD history with Nvidia/AMD.

These two have gone back and forth quite a bit, and the so-called market share is really what Nvidia sold in dGPUs last year vs AMD. If AMD comes out with a better product, this will shift back. I don't know how anyone can look at Nvidia's customers as anything other than informed consumers who will buy what they view as best for them based on performance and support. There is no contractual agreement binding them to Nvidia, so the brand hype is just that. I realize that Nvidia has convinced itself internally that AMD does not exist, and thus it proceeds to make decisions in a vacuum, as apparent by launch day performance benchmarks that start to slide a few months later, even on Nvidia sponsored gimpworks-heavy titles. I don't know why Nvidia can't fathom that maybe its customers are not cut out to be Apple-type customers. And even Apple provides an entire solution, OS and hardware, along with a contractual agreement on phones with carriers. So it's a device with multiple uses, and an extremely important utility as a communication device. Nvidia doesn't have an OS, GPUs don't have multiple diverse uses other than gaming in this segment, and it is not meeting any mission-critical need, such as a phone for communications.

Of course all of this is just my opinion, and I am sure others may have contrary opinions.

And, whoever these "market speculators" are, they better know what they are doing, as even I don't know what a "fan" is in a commoditized product segment…

Nvidia's best hope of making loyal customers is to continue to provide better hardware, software and optimization support relative to its competitor. Why is this such a foreign concept, that all Nvidia keeps parroting is fans and brand loyalty? This is pretty basic blocking and tackling and execution. As long as Nvidia is focused on this consumer gpu segment, not just in words, but in deeds, including dedicating human and financial resources to this segment, rather than just layering proprietary API/libraries onto PC ports... this is not pushing the market forward, this is just pushing your product via pull-through sales. At some point you need to provide a real, sustaining and substantive value proposition relative to your competition, and for the consumers, so that consumers don't substitute out dgpus for other gaming alternatives.


----------



## Blameless

Quote:


> Originally Posted by *provost*
> 
> You know, of course, I disagree. Nvidia has customers because it was able to provide hardware, combined with above average software support, relative to its competition.


AMD had some major multi-GPU issues until recently, but the multi-GPU market is tiny, and outside of this NVIDIA really hasn't had a decisive edge in product value. Some small advantages here and there, which often switched sides with each new release, but you can't really explain NVIDIA's current edge in discrete market share by looking at just the products or their software support. Public perception is often based on exaggerated differences, or even entirely baseless, at the technical level.
Quote:


> Originally Posted by *provost*
> 
> What gives AMD edge for some consumers is the longevity of its optimization support and arch advantage over Nvidia vis-à-vis Dx12.


Longevity of "optimization support" is overstated. I have all sorts of earlier GCN cards and some Kepler cards, and the Kepler parts are by no means unusable or noncompetitive, even if the GCN parts have gained a bit more from drivers. My 7950s are still solid GPUs, as is my reference GTX 780.

The DX12 advantage AMD offers is quite new and there is no telling if it will remain long enough to really influence market share.
Quote:


> Originally Posted by *provost*
> 
> I don't believe that you can make the same comparison using Intel/AMD history with Nvidia/AMD.


My comparison was with respect to the inertia of brand favoritism that can be built up.

I ran predominantly Intel CPUs, barring some budget AMD parts, until the K7...then I used AMD almost exclusively until Core 2 showed up and immediately transitioned over to the newer, and obviously superior (to the K8s that existed at the time) Core 2 based parts. However, many of my peers refused to acknowledge this shift and stuck with AMD. Many of these were the same people who stuck with their Pentium 4s years earlier, even when K7s and K8s were matching or besting them for less money. Should AMD pull a miracle out of its hat and make Zen so impressive that Intel is again forced to play catch up like they were a decade ago, these same people will again cling to their Intel parts.

I saw the same sort of thing with NVIDIA vs. ATI/AMD on the GPU side. I never had any GeForce FX parts of my own. I went straight from having used GeForce parts from the original GeForce 256 DDR through the GeForce 4ti 4800, to having all ATI Radeon 9000 series parts in the span of six months. Then I had a pretty even mix of X8xx Radeons and GeForce 6800s. Later I skipped the Radeon HD 2000 line entirely in favor of the GeForce 7900s. Many of the enthusiasts I knew made either the mistake of sticking with NVIDIA when they should have moved to ATI, or vice versa, or both. I recall ridiculing a few people for going directly from FX 5800 series to the Radeon 2900.

The common theme here is the inclination of people to stick with the name they are familiar with, even if there is strong evidence that a competitor has a product of superior value.


----------



## superstition222

Quote:


> Originally Posted by *PlugSeven*
> 
> The article grossly underestimates fanboyism and brand loyalty, nvidians will buy nvidia regardless. Just watch as all manner of contrived metrics start to pop up to justify the geforce purchase.


And astroturf. But I see you pointed at that with your second sentence.


----------



## provost

Quote:


> Originally Posted by *Blameless*
> 
> AMD had some major multi-GPU issues until recently, but the multi-GPU market is tiny, and outside of this NVIDIA really hasn't had a decisive edge in product value. Some small advantages here and there, which often switched sides with each new release, but you can't really explain NVIDIA's current edge in discrete market share by looking at just the products or their software support. Public perception is often based on exaggerated differences, or even entirely baseless, at the technical level.
> Longevity of "optimization support" is overstated. I have all sorts of earlier GCN cards and some Kepler cards, and the Kepler parts are by no means unusable or noncompetitive, even if the GCN parts have gained a bit more from drivers. My 7950s are still solid GPUs, as is my reference GTX 780.
> 
> The DX12 advantage AMD offers is quite new and there is no telling if it will remain long enough to really influence market share.
> My comparison was with respect to the inertia of brand favoritism that can be built up.
> 
> I ran predominantly Intel CPUs, barring some budget AMD parts, until the K7...then I used AMD almost exclusively until Core 2 showed up and immediately transitioned over to the newer, and obviously superior (to the K8s that existed at the time) Core 2 based parts. However, many of my peers refused to acknowledge this shift and stuck with AMD. Many of these were the same people who stuck with their Pentium 4s years earlier, even when K7s and K8s were matching or besting them for less money. Should AMD pull a miracle out of it's hat and make Zen so impressive that Intel is again forced to play catch up like they were a decade ago, these same people will again cling to their Intel parts.
> 
> I saw the same sort of thing with NVIDIA vs. ATI/AMD on the GPU side. I never had any GeForce FX parts of my own. I went straight from having used GeForce parts from the original GeForce 256 DDR through the GeForce 4ti 4800, to having all ATI Radeon 9000 series parts in the span of six months. Then I had a pretty even mix of X8xx Radeons and GeForce 6800s. Later I skipped the Radeon HD 2000 line entirely in favor of the GeForce 7900s. Many of enthusiasts I knew made either the mistake of sticking with NVIDIA when they should have moved to ATI, or vice versa, or both. I recall ridiculing a few people for going directly from FX 5800 series to the Radeon 2900.
> 
> *The common theme here is the inclination of people to stick with the name they are familiar with, even if there is strong evidence that a competitor has a product of superior value*.


Well, we will have to agree to disagree on the rest of the stuff; I guess time will tell. But I do agree with you on the highlighted part, however, that affinity only lasts through perhaps one more purchase. So, for example, I had been very reluctant to move to AMD after 580, 690, Titan to 780 Ti. I did not move to Maxwell, and I believe I have outlined those reasons in detail before. But, when I saw AMD cards continue to perform better than my Kepler cards game after game, as the new games were released, and after experiencing numerous driver issues with my multiple gpu set up (including crashing, lack of scaling), I started taking serious notice of alternatives, and yes, including the peasant consoles. I also did some work on understanding where the market may be going next, i.e. DX11 vs DX12, and I ultimately decided that moving to AMD in the interim, while I waited for Polaris to come out, was the right thing for me. I didn't have to fear losing optimization support on what I had already purchased, and in my opinion, AMD was better positioned to take advantage of next gen games because of its GCN arch. I don't know how many other customers will think the same way, but if I was able to make the move, which I have been quite happy with so far, there are probably others. And, I had very limited prior exposure to AMD, other than a 7950 in a, I don't know, 4th machine out of 5 or 6.
I still have no clue about AMD CPUs, as Intel hasn't given me a reason yet to change out my aging 3930 in the primary rig, or pushed me to upgrade it every few months (and I mean this in a positive sense). I do have Skylake and Ivy Bridge, and maybe another Intel CPU in some other rigs around my house, but the primary one is the gaming one, even if others are capable.


----------



## infranoia

@Provost, appreciate your position but you are describing an idealistic capitalism, which does not take into consideration brand loyalty. People can be otherwise rational actors but still harbor a complete and total allegiance to a brand.

The only reason I can speak to this is that I have one in my family-- my brother absolutely, resolutely refuses to acknowledge AMD GPUs in any way, shape or form. He would rather plummet off the cliff with Nvidia than entertain, for one moment, that AMD has a compelling and competitive brand. As Blameless states, there are many on the Intel side as well.

That's the very definition of fanboy, and there is more than one example of them on these very forums. They confound idealized market forces completely.

Unfortunately this also describes most of the Best Buy and Fry's salespeople I've had the misfortune to interact with.


----------



## ToTheSun!

Quote:


> Originally Posted by *SpeedyVT*
> 
> Inconsistent frames are worse than 30 fps.


If you have a monitor with strobing, I can assure you that judder will be worse than inconsistent frames.


----------



## f1LL

G-Sync is here to strengthen brand loyalty. It will encourage lots of people to get the "worse deal for their money".


----------



## magnek

Quote:


> Originally Posted by *Blameless*
> 
> AMD had some major multi-GPU issues until recently, but the multi-GPU market is tiny, and outside of this NVIDIA really hasn't had a decisive edge in product value. Some small advantages here and there, which often switched sides with each new release, but you can't really explain NVIDIA's current edge in discrete market share by looking at just the products or their software support. Public perception is often based on exaggerated differences, or even entirely baseless, at the technical level.
> Longevity of "optimization support" is overstated. I have all sorts of earlier GCN cards and some Kepler cards, and the Kepler parts are by no means unusable or noncompetitive, even if the GCN parts have gained a bit more from drivers. My 7950s are still solid GPUs, as is my reference GTX 780.
> 
> The DX12 advantage AMD offers is quite new and there is no telling if it will remain long enough to really influence market share.
> My comparison was with respect to the inertia of brand favoritism that can be built up.
> 
> I ran predominantly Intel CPUs, barring some budget AMD parts, until the K7...then I used AMD almost exclusively until Core 2 showed up and immediately transitioned over to the newer, and obviously superior (to the K8s that existed at the time) Core 2 based parts. However, many of my peers refused to acknowledge this shift and stuck with AMD. Many of these were the same people who stuck with their Pentium 4s years earlier, even when K7s and K8s were matching or besting them for less money. Should AMD pull a miracle out of it's hat and make Zen so impressive that Intel is again forced to play catch up like they were a decade ago, these same people will again cling to their Intel parts.
> 
> I saw the same sort of thing with NVIDIA vs. ATI/AMD on the GPU side. *I never had any GeForce FX parts of my own.* I went straight from having used GeForce parts from the original GeForce 256 DDR through the GeForce 4ti 4800, to having all ATI Radeon 9000 series parts in the span of six months. Then I had a pretty even mix of X8xx Radeons and GeForce 6800s. Later I skipped the Radeon HD 2000 line entirely in favor of the GeForce 7900s. Many of enthusiasts I knew made either the mistake of sticking with NVIDIA when they should have moved to ATI, or vice versa, or both. I recall ridiculing a few people for going directly from FX 5800 series to the Radeon 2900.
> 
> The common theme here is the inclination of people to stick with the name they are familiar with, even if there is strong evidence that a competitor has a product of superior value.


Good call on the bold part. The FX line was absolute trash. What the hell were they thinking sticking a Dustbuster fan on the card?

Quote:


> Originally Posted by *f1LL*
> 
> G-Sync is here to strengthen brand loyalty. It will encourage lots of people to get the "worse deal for their money".


I have a GSync monitor and I definitely wouldn't mind buying Polaris if they offer more performance/a better deal. Granted that's mainly because I'm not all that impressed with GSync, and could most certainly live without it.


----------



## provost

Quote:


> Originally Posted by *infranoia*
> 
> @Provost, appreciate your position but you are describing an idealistic capitalism, which does not take into consideration brand loyalty. People can be otherwise rational actors but still harbor a complete and total allegiance to a brand.
> 
> The only reason I can speak to this is that I have one in my family-- my brother absolutely, resolutely refuses to acknowledge AMD GPUs in any way, shape or form. He would rather plummet off the cliff with Nvidia than entertain, for one moment, that AMD has a compelling and competitive brand. As Blameless states, there are many on the Intel side as well.
> 
> That's the very definition of fanboy, and there is more than one example of them on these very forums. They confound idealized market forces completely.
> 
> Unfortunately this also describes most of the Best Buy and Fry's salespeople I've had the misfortune to interact with.


I guess I should have read Blameless' comments in the last paragraph regarding him running Intel/AMD more carefully, as well as his conclusion. But my ADD added to my reading incomprehension, and after reading the first two paragraphs, which I did not agree with, I assumed the rest was along the same lines, and I went right to the last two lines. You know what they say about someone who makes an assumption, right?

The bottom line is that I don't buy into the marketing pitch notion of "buy now, based on these benchmarks" and never mind what our track record is for support, and ignore that we are about to launch new products in a few months and will be conveniently EOL-ing you out. In other words, it's very much caveat emptor, so everyone should do their own diligence and come to their own conclusion. I don't know how anyone can even suggest someone buy a new product from a vendor now, without also informing the prospective buyer that hey, by the way, new cards may be around the corner, so your decision, Mr. Buyer..., and here is our track record of support, etc...

In any event, you can categorize me as one of the not-too-informed CPU consumers, as I have never had any experience with an AMD CPU, other than knowing the market dynamic that resulted in Intel dominating after being sued for "incentivizing" OEMs, application developers, etc. to gimp AMD CPUs (well, more or less, as I am just summarizing). And the result has not been favorable for consumers both in terms of price and innovation, as Intel moved onto other "stuff" after getting a stranglehold on the consumer side. However, it's great to see AMD coming back with Zen, and although I don't know too much about it just yet, other than the basics, it seems like a promising product for the consumer segment. So, I am looking forward to switching out Intel with Zen, if the performance, support, etc. is there (and I have no reason to believe otherwise at this time). I have always maintained that I view any seller (meaning company) of anything to me as a vendor, and have no issues trading out one for another, depending on who wants my business more, based on what's important to me as a consumer.


----------



## Ha-Nocri

I don't see why devs wouldn't use async compute, it's a great feature. Unless NV pay them not to, which wouldn't surprise me.


----------



## DeathMade

Quote:


> Originally Posted by *iLeakStuff*
> 
> Have you seen their twitter account and what they post there?
> https://twitter.com/oxidegames
> Tell me that they and AMD isnt working together and milking this for everything it got. For AMD I totally understand it since its a feature they do better than Nvidia, but for Oxide which is a game developer?
> 
> They don`t exactly come across as neutral. AMD AMD AMD


So them being AMD sponsored and claiming they work with all is wrong, while PCars being bloated with Nvidia logos and its devs claiming they are not sponsored by Nvidia is right?


----------



## Clocknut

Quote:


> Originally Posted by *Blameless*
> 
> Takes a long time for brand loyalty to shift/die. Best to never develop such loyalties in the first place.
> 
> I know plenty of individuals who will buy NVIDIA no matter what they do, and many of these are the same people who refused to transition from AMD to Intel CPUs until very recently, despite Intel having an incontrovertible edge in the market segments they were interested in since 2006.
> 
> Anyway, I'm not sure Seeking Alpha is a particularly good source for tech information. Yes, if AMD holds on to it's DX12 advantage for the next generation and beyond, market share could gradually switch back (people will eventually start buying the better value, even if they had a brand loyalty elsewhere), but that's a pretty big if at this point. Maybe enough for market speculators though.


I don't think brand loyalty plays the biggest role in shifting the GPU market.

The majority of people buy the product that works best right now, not for future-proofing. The recent shift is because of AMD's lackluster GPUs, nothing to do with Nvidia being the better brand.


----------



## dragneel

Quote:


> Originally Posted by *Clocknut*
> 
> I dont think brand loyalty play the biggest role in shifting the GPU market.
> 
> Majority of people buy product that works best right now, not for future proof. The recent shift is because of AMD's lackluster GPU nothing todo with Nvidia being better brand.


But as far as I can tell, the "lackluster GPU's" from AMD that people talk about were on launch relatively close to the competing card's performance for less money. I can't be the only one willing to sacrifice 5 fps to save $100. I mean, here in Aus, the GTX 960 was $100 more than the R9 280 and performs roughly the same. I don't see that as lackluster, I see that as pretty solid competition.

Edit: Don't get me wrong though, If I had the money at the time for a top end card, there's a good chance I'd have chosen a 980ti over a FuryX. I'm even still considering a 980ti for my new build because of my impatience.


----------



## SuperZan

Quote:


> Originally Posted by *dragneel*
> 
> But as far as I can tell, the "lackluster GPU's" from AMD that people talk about were on launch relatively close to the competing card's performance for less money. *I can't be the only one willing to sacrifice 5 fps to save $100*. I mean, here in Aus, the GTX 960 was $100 more than the R9 280 and performs roughly the same. I don't see that as lackluster, I see that as pretty solid competition.


Particularly when that performance difference is resolved a month later. As someone who doesn't immediately hunker down for 30+ hours of "AAA GAME OF THE WEEK" each Saturday, I'd rarely if ever experience that fabled Day Zero Nvidia Advantage anyways.


----------



## Mahigan

Quote:


> Originally Posted by *Clocknut*
> 
> I dont think brand loyalty play the biggest role in shifting the GPU market.
> 
> Majority of people buy product that works best right now, not for future proof. The recent shift is because of AMD's lackluster GPU nothing todo with Nvidia being better brand.


Based on my experiences, and the backlash I've faced even though I was right, I would have to disagree with you on this, but from a differing perspective.

Individuals are loyal to their bragging rights. If you somehow diminish the halo surrounding a product they've purchased, they see it as an attack on themselves. On their ego.

I'd say that NVIDIA is a halo brand, like Apple, whose users buy their cards (in the majority, aside from hardware enthusiasts) because it's NVIDIA and a GeForce. Not because it is empirically superior.

AMD users, though some exhibit this behavior too, particularly in the CPU arena, tend to be more bang-for-your-buck consumers.


----------



## dragneel

Quote:


> Originally Posted by *Mahigan*
> 
> Based on my experiences, and the backlash I've faced even though I was right, I would have to disagree with you, though from a differing perspective.
> 
> Individuals are loyal to their bragging rights. If you somehow diminish the halo surrounding a product they've purchased, they see it as an attack on themselves. On their ego.
> 
> I'd say that NVIDIA is a halo brand, like Apple, whose users buy their cards (in the majority, aside from hardware enthusiasts) because it's NVIDIA and a GeForce. Not because it is empirically superior.


Gotta be honest though, with the number of times I saw the "Nvidia, the way it's meant to be played" splash screens on my games when I was younger, I'm really not surprised by the brand loyalty. It often made me question my choice in buying an ATi card. It almost felt like a brainwashing tactic, and that's not meant to be an indictment of Nvidia, just how it felt to me personally every time I saw it.
Quote:


> Originally Posted by *SuperZan*
> 
> Particularly when that performance difference is resolved a month later. As someone who doesn't immediately hunker down for 30+ hours of "AAA GAME OF THE WEEK" each Saturday, I'd rarely if ever experience that fabled Day Zero Nvidia Advantage anyways.


Exactly.


----------



## keikei

Quote:


> Originally Posted by *dragneel*
> 
> Gotta be honest though, with the amount of times I saw the "nvidia, the way its meant to be played" splash screens on my games when I was younger I'm really not surprised by the brand loyalty. It often made me question my choice in buying an ATi card. It almost felt like a brainwashing tactic and that's not meant to be an indictment of Nvidia, just how it felt to me personally every time I saw it.
> Exactly.


It's like any form of marketing. The more you get your brand out there, the more familiar the public will be with it. Look at Coke. Millions of people drink it. Are people buying it because it's the best soda ever, or is the name so hammered into their minds (from commercials) that when people want something to drink, 'Coke' automatically comes to mind?


----------



## Kand

Quote:


> Originally Posted by *keikei*
> 
> It's like any form of marketing. The more you get your brand out there, the more familiar the public will be with it. Look at Coke. Millions of people drink it. Are people buying it because it's the best soda ever, or is the name so hammered into their minds (from commercials) that when people want something to drink, 'Coke' automatically comes to mind?


I hate Coca Cola.

It's the worst soda of them all.


----------



## Assirra

Quote:


> Originally Posted by *magnek*
> 
> Being open about a collaboration with an IHV is a bad thing?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Would you rather they delete their Twitter, and erase any and all references that might even hint a collaboration with AMD existed?


I think you might be missing the point.
From what I understand, his point is that AoS is a lot less neutral than people here seem to think it is.
This is basically the equivalent of an Nvidia GameWorks title being benchmarked before the game comes out, where AMD is losing.


----------



## infranoia

Quote:


> Originally Posted by *Assirra*
> 
> I think you might be missing the point.
> From what I understand, his point is that AoS is a lot less neutral than people here seem to think it is.
> This is basically the equivalent of an Nvidia GameWorks title being benchmarked before the game comes out, where AMD is losing.


I was not aware that Nvidia allows developers to give AMD the GameWorks render source code to directly modify and submit to the source tree.

If they don't do that, then it's nowhere near the equivalent.


----------



## dragneel

Quote:


> Originally Posted by *keikei*
> 
> It's like any form of marketing. The more you get your brand out there, the more familiar the public will be with it. Look at Coke. Millions of people drink it. Are people buying it because it's the best soda ever, or is the name so hammered into their minds (from commercials) that when people want something to drink, 'Coke' automatically comes to mind?


Yeah, I agree. I also mostly buy Coke even though I prefer the taste of Pepsi, strange isn't it? Then again that probably has more to do with my severe addiction to caffeine in general.


----------



## Kand

Quote:


> Originally Posted by *dragneel*
> 
> Yeah, I agree. I also mostly buy Coke even though I prefer the taste of Pepsi, strange isn't it?


Both brands are disgusting.


----------



## dragneel

Quote:


> Originally Posted by *Kand*
> 
> Both brands are disgusting.


That is your opinion and you are entitled to it.


----------



## Kollock

Quote:


> Originally Posted by *iLeakStuff*
> 
> Have you seen their twitter account and what they post there?
> https://twitter.com/oxidegames
> Tell me that they and AMD isnt working together and milking this for everything it got. For AMD I totally understand it since its a feature they do better than Nvidia, but for Oxide which is a game developer?
> 
> They don`t exactly come across as neutral. AMD AMD AMD


I'm not sure you understand the nature of this feature. There are 2 main types of tasks for a GPU: graphics and compute. D3D12 exposes 2 main queue types, a universal queue (compute and graphics) and a compute queue. For Ashes, use of this feature involves taking compute jobs which are already part of the frame and marking them up in a way such that the hardware is free to coexecute them with other work. Hopefully, this is a relatively straightforward task. No additional compute tasks were created to exploit async compute. It is merely moving work that already exists so that it can run more optimally. That is, if async compute were not present, the work would be added to the universal queue rather than the compute queue. The work still has to be done, however.

The best way to think about it is that the scene that is rendered remains (virtually) unchanged. In D3D12 the work items are simply arranged and marked in a manner that allows parallel execution. Thus, not using it when you could seems very close to intentionally sandbagging performance.
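The scheduling idea described above can be sketched as a toy timing model (purely illustrative numbers and function names of my own, not real D3D12 code or Oxide's implementation): the same compute work either rides the universal queue serially, or is marked for the compute queue so the hardware can overlap it with graphics work.

```python
# Toy model of the queue behavior described above -- NOT real D3D12 code.
# The same compute work exists either way; async compute only changes
# whether it can overlap ("coexecute") with graphics work on the GPU.

def frame_time_serial(gfx_ms, compute_ms):
    """No async compute: compute jobs are appended to the universal queue."""
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms, compute_ms, overlap=1.0):
    """Async compute: 'overlap' is the fraction of the compute work the
    hardware manages to hide behind graphics work (0.0 to 1.0)."""
    hidden = min(compute_ms * overlap, gfx_ms)
    return gfx_ms + compute_ms - hidden

if __name__ == "__main__":
    # Hypothetical 20 ms of graphics + 5 ms of compute per frame:
    print(frame_time_serial(20.0, 5.0))      # 25.0 ms/frame
    print(frame_time_async(20.0, 5.0, 1.0))  # 20.0 ms/frame with full overlap
    print(frame_time_async(20.0, 5.0, 0.0))  # 25.0 ms -- no gain if the
                                             # hardware can't coexecute
```

Note how `overlap=0.0` matches Kollock's point: hardware that can't coexecute still does all the work, just serially, which is why marking the work up costs (virtually) nothing but can only help.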


----------



## Assirra

Quote:


> Originally Posted by *infranoia*
> 
> I was not aware that Nvidia allows developers to give AMD the Gameworks render sourcecode to directly modify and submit to the source tree.
> 
> If they don't do that, then it's nowhere near the equivalent.


Sigh, instead of actually countering my point you went after a specific sentence and "countered" me that way.
Once again, AoS is NOT neutral. They are hand in hand with AMD while Nvidia hangs back somewhere, which would be completely fine if people stopped pretending that this one game shows the future of the next 5 years.


----------



## magnek

Quote:


> Originally Posted by *Assirra*
> 
> I think you might be missing the point.
> From what I understand, his point is that AoS is a lot less neutral than people here seem to think it is.
> This is basically the equivalent of an Nvidia GameWorks title being benchmarked before the game comes out, where AMD is losing.


Is he implying Oxide is somehow deliberately crippling nVidia's performance then? Because thus far we have seen absolutely no evidence of that.

If by "not neutral" you mean "taking advantage of features that AMD has that nVidia may or may not have", well then whoop-dee-freaking-do. Go buy an AMD card if you have a problem with that. I mean that's the argument thrown around when AMD users complain about Gameworks, so surely it goes the other way too.


----------



## infranoia

Quote:


> Originally Posted by *Assirra*
> 
> Sigh, instead of actually countering my point you went after a specific sentence and "countered" me that way.
> Once again, AoS is NOT neutral. They are hand in hand with AMD while Nvidia hangs back somewhere, which would be completely fine if people stopped pretending that this one game shows the future of the next 5 years.


No, I countered your point, because that sentence was your summary. You seem to suggest that Oxide is hiding some AMD-specific code from Nvidia, when NV has all they need to optimize this game *within their hardware constraints*, because they have access to the source code.


----------



## mav451

Quote:


> Originally Posted by *Assirra*
> 
> I think you might be missing the point.
> From what I understand, his point is that AoS is a lot less neutral than people here seem to think it is.
> This is basically the equivalent of an Nvidia GameWorks title being benchmarked before the game comes out, where AMD is losing.


Exactly my impression as well. This has been a well-executed, pro-AMD campaign. Their first truly successful campaign since the A64 days, really. Credit to them for maintaining the positive narrative, and for muzzling Roy Taylor (to prevent further gaffes).


----------



## Wovermars1996

Quote:


> Originally Posted by *dragneel*
> 
> Yeah, I agree. I also mostly buy Coke even though I prefer the taste of Pepsi, strange isn't it? Then again that probably has more to do with my severe addiction to caffeine in general.


I prefer sprite over coke.


----------



## dragneel

Quote:


> Originally Posted by *Wovermars1996*
> 
> I prefer sprite over coke.


And in the end, Coca Cola Amatil still wins.








I couldn't believe how many drinks they actually own.

Going too far off topic now :L


----------



## provost

Assuming it's some grand, well-thought-out genius campaign by AMD to outsmart the best and most well-funded marketing machinery at Nvidia, the counter is pretty simple: STOP GIVING YOUR CUSTOMERS A REASON TO LEAVE!! At least that's how I would look at it if I ever gave a darn about my customers, rather than thinking of them merely as an "install base" to be milked at will.....


----------



## Mampus

Quote:


> Originally Posted by *provost*
> 
> Assuming it's some grand, well-thought-out genius campaign by AMD to outsmart the best and most well-funded marketing machinery at Nvidia, the counter is pretty simple: *STOP GIVING YOUR CUSTOMERS A REASON TO LEAVE!!* At least that's how I would look at it if I ever gave a darn about my customers, rather than thinking of them merely as an "install base" to be milked at will.....


True. For me it's Adaptive V-Sync. My soon-to-be next card will be a 950 or 960.


----------



## Slomo4shO

Quote:


> Originally Posted by *Mahigan*
> 
> So what is NVIDIAs money going to buy?


If I were a betting man, I would say AMD engineers.


----------



## pengs

Quote:


> Originally Posted by *Wovermars1996*
> 
> I prefer sprite over coke.


Joke's on you, Sprite is owned by Coke







Quote:


> Originally Posted by *Assirra*
> 
> Sigh, instead of actually countering my point you went after a specific sentence and "countered" me that way.
> Once again, AoS is NOT neutral, they are hand in hand with AMD while nvidia hanging on the back somewhere which is completely fine, if people stopped pretending that this 1 game shows the future of the next 5year.


It's funny that you can equate a program which stifles its own performance to get a leg up with a hardware scheduler using a particular method to transport and execute data, an overall advancement in technology.

They are one and the same only in the sense that AMD played its competitor's game better, by slowly and deliberately installing its architecture and an accommodating API into the larger user base. This will be a strategic win via its architecture with minimal damage, if any, to the industry, not by use of brute force, its resources, or persuasion ending in a blast radius which cripples the igniter.


----------



## magnek

Quote:


> Originally Posted by *Assirra*
> 
> Sigh, instead of actually countering my point you went after a specific sentence and "countered" me that way.
> Once again, AoS is NOT neutral. *They are hand in hand with AMD while Nvidia hangs back somewhere*, which would be completely fine if people stopped pretending that this one game shows the future of the next 5 years.


Ok so I won't pretend to know how much collaboration there is between each IHV these days, but Kollock had this to say last August:
Quote:


> Originally Posted by *Kollock*
> 
> Certainly I could see how one might think that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;(). *Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD.*
> 
> 
> 
> 
> 
> 
> 
> As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) for Ashes with AMD. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with NVidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles as they have also lined up a few other D3D12 games.


----------



## oskullop

woooooow this thread


----------



## steadly2004

Quote:


> Originally Posted by *Mahigan*
> 
> Based on my experiences, and the backlash I've faced even though I was right, I would have to disagree with you, though from a differing perspective.
> 
> Individuals are loyal to their bragging rights. If you somehow diminish the halo surrounding a product they've purchased, they see it as an attack on themselves. On their ego.
> 
> I'd say that NVIDIA is a halo brand, like Apple, whose users buy their cards (in the majority, aside from hardware enthusiasts) because it's NVIDIA and a GeForce. Not because it is empirically superior.
> 
> AMD users, though some exhibit this behavior too, particularly in the CPU arena, tend to be more bang-for-your-buck consumers.


I agree to a point.... I did buy my Titan X's because I wanted the best. I wanted the best available when it was available (before the Fury X and 980 Ti). I may like the e-peen that comes with having the best I can afford at any given time.

However..... if my video cards get trampled by a Fury X, I don't get offended or take it as a hit to my ego. Just a learning experience. I bought a 780 Ti back in the day, and then traded it for a 4930K a few months later. I "downgraded" to a 290X, only to later realize I had comparable-to-better performance. Turned out to be a perk of AMD, like many are saying about playing the longevity game. However, it was still a jet engine of a card (or 2-3 of them). I actually thought about posting a "want to trade" for my 2 Titan X's for 2x Fury X's when this benchmark came out. The biggest factor that stopped me was having 4GB of VRAM and a 4K monitor. 4000 shaders as opposed to 3000 sounds good right off the bat. On the flipside, I do like the idea of running 1400-1500MHz.

On a side note, I do appreciate you wording your statement as "attack on themselves or ego" instead of just "butt-hurt".


----------



## p4inkill3r

Quote:


> Originally Posted by *Kollock*
> 
> I'm not sure you understand the nature of this feature. There are 2 main types of tasks for a GPU: graphics and compute. D3D12 exposes 2 main queue types, a universal queue (compute and graphics) and a compute queue. For Ashes, use of this feature involves taking compute jobs which are already part of the frame and marking them up in a way such that the hardware is free to coexecute them with other work. Hopefully, this is a relatively straightforward task. No additional compute tasks were created to exploit async compute. It is merely moving work that already exists so that it can run more optimally. That is, if async compute were not present, the work would be added to the universal queue rather than the compute queue. The work still has to be done, however.
> 
> The best way to think about it is that the scene that is rendered remains (virtually) unchanged. In D3D12 the work items are simply arranged and marked in a manner that allows parallel execution. *Thus, not using it when you could seems very close to intentionally sandbagging performance*.


Emphasis mine.

It is good to see you refute the charges of being an AMD-puppet dev team in such a classy manner.


----------



## venom55520

All the planets have aligned for AMD in both the CPU and GPU market. This is good, we need competition. The only issue is, if anyone can poop the bed, it's AMD.


----------



## Catscratch

2500k 4ghz - 280x trix 1020mhz stock speed and power.
*1080p*

dx11 - standard - 36.8 fps


dx12 - standard - 43 fps


dx11 - crazy - 25.3 fps


dx12 - crazy - 26.5 fps


The beta 2 bench has texture glitches. The 280X just doesn't have the cojones for the crazy preset. However, I have to emphasize this: because of driver overhead (also visible in the screenshots), the DX11 frametimes were erratic, and thus the gameplay was jittery regardless of FPS and preset. DX12 was smooth regardless of preset and FPS, because low overhead means steady frametimes.
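The FPS-vs-frametime distinction here is worth a quick worked example (made-up frame sequences, not measurements from the benchmark): two runs can report identical average FPS while one feels far smoother.

```python
import statistics

def avg_fps(frametimes_ms):
    """Average FPS over a sequence of per-frame times in milliseconds."""
    return 1000.0 / statistics.mean(frametimes_ms)

def jitter_ms(frametimes_ms):
    """Standard deviation of frametimes -- a rough smoothness metric."""
    return statistics.pstdev(frametimes_ms)

# Both sequences average 25 ms/frame (40 fps), but one is erratic:
steady  = [25.0, 25.0, 25.0, 25.0]
erratic = [10.0, 40.0, 10.0, 40.0]

print(avg_fps(steady), jitter_ms(steady))    # 40 fps, 0 ms of jitter
print(avg_fps(erratic), jitter_ms(erratic))  # 40 fps, 15 ms of jitter
```

An FPS counter would call these runs equal; the frametime spread is what separates smooth DX12 from jittery DX11 in the post above.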


----------



## kyrie74

Quote:


> Originally Posted by *pengs*
> 
> Jokes on you, Sprite is owned by Coke
> 
> 
> 
> 
> 
> 
> 
> 
> It's funny that you can equate a program which stifles its own performance to get a leg up with a hardware scheduler using a particular method to transport and execute data, an overall advancement in technology.
> 
> They are one and the same only in the sense that AMD played its competitor's game better, by slowly and deliberately installing its architecture and an accommodating API into the larger user base. This will be a strategic win via its architecture with minimal damage, if any, to the industry, not by use of brute force, its resources, or persuasion ending in a blast radius which cripples the igniter.


Are you saying AMD has been playing a game of chess while Nvidia was playing checkers? /sarcasm, kinda... If you are saying this, I agree completely, because I said the same when I first joined here after finding out about AMD getting the next-gen console design wins, then announcing Mantle and the Apple deal. Oh yeah, I almost forgot 5 of the most important announcements: the HSA Foundation, HBM designed with SK Hynix, Mantle code being the base for the following (D3D12, Vulkan, Apple Metal, D3D12 for Xbox One, the PS4 API, and the Nintendo NX API), the deal between Samsung/GlobalFoundries/AMD/IBM as a result of them all being part of the HSA Foundation or other various foundations or business deals, and last but not least the 14LPP process from Samsung/GF/IBM.

Edit: for grammar.


----------



## infranoia

Quote:


> Originally Posted by *kyrie74*
> 
> Are you saying AMD has been playing a game of chess, while Nvidia was playing checkers? /sarcasm kinda... If you are saying this? I agree completely because I said this when I first joined here after finding out about AMD getting the Next Gen console design wins, then announcing Mantle and the Apple deal. Oh yeah I almost forgot the 5 of the most important announcements, the HSA Foundation, HBM designed with SK Hynix, Mantle code being the base for the following: (D3D12, Vulkan, Apple Metal, D3D12 for XBox1, the PS4 API, and the Nintendo NX API), the deal between Samsung/GlobalFoundries/AMD/IBM as a result of them all being part of The HSA Foundation or other various foundations or business deals, and last but not least 14LPP process from Samsung/GF/IBM.


For a company doing so poorly, no one can say they haven't had a plan in the last three years.

It would be frustrating if AMD has a heart attack six inches from the finish line, but at least we could say they played the game.


----------



## czin125

Does this game benefit from newer instruction sets?

http://ht4u.net/reviews/2013/intel_core_i7_4770_4670_haswell_cpus_test/index37.php


----------



## Charcharo

Quote:


> Originally Posted by *Clocknut*
> 
> I don't think brand loyalty plays the biggest role in shifting the GPU market.
> 
> The majority of people buy the product that works best right now, not for future-proofing. The recent shift is because of AMD's lackluster GPUs, nothing to do with Nvidia being the better brand.


But the thing is that... apart from the Fury X and 370, AMD GPUs are either better than or just as good as Nvidia's, and cheaper.

Of course, it depends on your country's specific prices. But that is what the average seems to be. Either a very viable alternative or an outright superior GPU.

Hell, the only times Nvidia cards have a real advantage (if a small one) are at launch for _some_ games. Those issues get solved soon enough, via patching and driver updates. And since the drivel that is released these days is also broken and unplayable on release... you should probably wait anyway, for your own sanity.


----------



## Klocek001

Quote:


> Originally Posted by *Charcharo*
> 
> But the thing is that... apart from the Fury X and 370, AMD GPUs are either better or just as good and cheaper than Nvidia ones.
> 
> Of course, here it depends on your country's specific prices. But that is what the average seems to be. Either a very viable alternative or an outright superior GPU.
> 
> Hell the only times Nvidia ones have a real advantage (if a small one) are on launch for _some_ games. Those issues get solved soon though, via patching and driver updates. And since the drivel that is released these days is also broken and unplayable on release... you should probably wait anyway for your own sanity.


totally agreed, but amd's problem isn't being uncompetitive.

who would choose to drop 1100 PLN on a 380X (giving you Polish prices) if you can have a 280X for 900 PLN or a used 7970 for 650 PLN?
same goes for the 390/390X (1500/1900 PLN respectively); there are 290/290X available for 1200/1350...

amd's pricing approach is a curse for their sales


----------



## Mahigan

Quote:


> Originally Posted by *czin125*
> 
> Does this game benefit from newer instruction sets?
> 
> http://ht4u.net/reviews/2013/intel_core_i7_4770_4670_haswell_cpus_test/index37.php


Things may have changed but this is what Oxide sent me when I asked the same question,

1. We have heavily invested in SSE (mostly SSE2, for compatibility reasons) and a significant portion of the engine is executing that code during the benchmark. It could very well be 40% of the frame. Possibly more.

2. While we do have large contiguous blocks of SSE code (mainly in our simulations), it is also rather heavily woven into the entire game via our math libraries. Our AI and gameplay code tend to be very math-heavy.

3. The Nitrous engine is designed to be data-oriented (basically, we know what memory we need and when). Because of this, we can effectively utilize the SSE streaming memory instructions in conjunction with prefetch (both temporal and non-temporal). In addition, because our memory accesses are more predictable, the hardware prefetcher tends to be better utilized.

4. Memory bandwidth is definitely something to consider. The larger the scope of the application, paired with going highly parallel, puts a lot of pressure on the memory system. On my i7-3770S I'm hitting close to peak bandwidth on 40% of the frame.
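For context on point 4, the "peak bandwidth" figure falls out of simple arithmetic on the memory configuration. This is a back-of-the-envelope sketch with assumed numbers (dual-channel DDR3-1600 is a typical i7-3770 pairing, but Oxide didn't state theirs):

```python
def peak_bandwidth_gb_s(channels=2, bus_width_bytes=8, mega_transfers_s=1600):
    """Theoretical peak DRAM bandwidth in GB/s.
    Defaults assume dual-channel DDR3-1600 (64-bit = 8-byte channels)."""
    return channels * bus_width_bytes * mega_transfers_s / 1000.0

def utilization(achieved_gb_s, peak_gb_s):
    """Fraction of theoretical peak actually achieved."""
    return achieved_gb_s / peak_gb_s

peak = peak_bandwidth_gb_s()  # 2 * 8 B * 1600 MT/s = 25.6 GB/s
print(peak)

# If, say, ~20 GB/s were sustained during part of the frame, that is
# already ~78% of theoretical peak -- "close to peak" in practice,
# since real DRAM never sustains 100% of the theoretical figure:
print(round(utilization(20.0, peak), 2))
```

The 20 GB/s achieved figure is purely illustrative; the point is how quickly a math-heavy, highly parallel engine can approach the platform's ceiling.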


----------



## Charcharo

Quote:


> Originally Posted by *Klocek001*
> 
> totally agreed, but amd's problem isn't being uncompetitive.
> 
> who would choose to drop 1100PLN on 380X (giving you polish prices) if you can have 280X for 900 PLN or a used 7970 for 650 PLN.
> same goes for 390/390X (1500PLN/1900PLN respectively) , there are 290/290X available for 1200/1350...........
> 
> amd's approach is a curse for their sales


Is that not also a problem for Nvidia?

And damn, you guys have it good. Where I live, most new 290s cost as much as a 390. The 290X costs as much as a 390X, and I can not find a good deal on a 280X









Also... the GTX 780 TI costs almost as much as a 980 TI. More than a 980, Fury X and a 390X...
Such is life in Bulgaria...


----------



## MoorishBrutha

Quote:


> Originally Posted by *Mahigan*
> 
> Based on my experiences, and the backlash I've faced eventhough I was right, I would have to disagree with you on this but from a differing perspective.
> 
> Individuals are loyal to their bragging rights. If you somehow diminish the halo surrounding a product they've purchased, they see it as an attack on themselves. On their ego.
> 
> I'd say that NVIDIA is a halo brand, like Apple, whose users buy their cards (in the majority a side from hardware enthusiasts) because it's NVIDIA and a GeForce. Not because it is empirically superior.
> 
> AMD users, though some exhibit this behavior and particularly in the CPU arena, tend to be more bang for your buck consumers.


I disagree with both of you. I think a very considerable reason why Nvidia has been dominating thus far is ignorance, and the gaming media preying on it. The so-called gaming media are shills receiving FREE $1300 monitors from Nvidia, and a lot of PC gamers simply don't know any better.

PC gaming is primarily word-of-mouth advertising, so you have to go to other people and ask them what the best parts/components for a build are; hence, forums. Unfortunately, when asking other people for help/advice you expose yourself to propaganda and misinformation.

For example, a lot of PC gamers don't even know that async compute is actually old and has always been on the consoles, especially from a software perspective. A lot of PC gamers don't know what *white papers* are.

Granted, people are intellectually lazy; however, people do defer to others for help/advice, sort of like when you are driving and get lost.

Remember now, a lot of gaming publishers spend more money on *MARKETING* than on actual *DEVELOPMENT*; hence, these shills are getting paid, indeed.

Then you also have misinformation: people simply don't know what they are talking about. *For instance, people saying that Intel's CPUs with 4 cores and 8 threads (hyperthreading) are better than AMD's 8-core CPUs, especially in regards to CPU parallelism*.

Remember, consoles were out before the Internet was even created, but PC gaming is finally starting to make its mark in the entertainment genre. *A lot of PC gamers don't have a clue what 3dfx and ATI were*.


----------



## Klocek001

Quote:


> Originally Posted by *Charcharo*
> 
> Is that not also a problem for Nvidia?
> 
> And damn you guys have it good. Where I live, most new 290s cost as much as a 390. The 290X costs as much as a 390X, I can not find a good deal on a 280X
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also... the GTX 780 TI costs almost as much as a 980 TI. More than a 980, Fury X and a 390X...
> Such is life in Bulgaria...


like I said, bringing price into a discussion on OCN is tricky; prices tend to vary.

here new 780 Ti's are cheaper than 390X's (only slightly), but used ones are about the same price as new 290X's. The one here (Galaxy 780 Ti HoF with a WB) is 1330 PLN, less than a new 970.



I think we have a good ratio between tiers, but the cards are expensive nevertheless. A new 980 Ti costs twice the minimum monthly salary here. "Enthusiast rigs" in Poland usually have 290/2500K-type specs.


----------



## Charcharo

Same here. I live in PC Gaming Land (do we even have console gamers? I know none), where people seem to generally understand gaming to a great degree. At least from personal experience.
Yet my (admittedly not bad) rig is basically one of the highest-end ones I know of.

A GTX 980 Ti costs twice the average monthly salary (before tax) and 4 times the minimum. So does... a 780 Ti.

On the other hand, the Fury X is almost as "cheap" as a 980. However, the R7 370 costs more than a GTX 950... bonkers









----------



## Clocknut

Quote:


> Originally Posted by *Mahigan*
> 
> Based on my experiences, and the backlash I've faced eventhough I was right, I would have to disagree with you on this but from a differing perspective.
> 
> Individuals are loyal to their bragging rights. If you somehow diminish the halo surrounding a product they've purchased, they see it as an attack on themselves. On their ego.
> 
> I'd say that NVIDIA is a halo brand, like Apple, whose users buy their cards (in the majority a side from hardware enthusiasts) because it's NVIDIA and a GeForce. Not because it is empirically superior.
> 
> AMD users, though some exhibit this behavior and particularly in the CPU arena, tend to be more bang for your buck consumers.


The public want to buy a product that works at the time they buy it. Nvidia noticed this and capitalized on it.

It is only AMD's recent driver revisions, async compute & DX12 that have been able to bring AMD's performance up. Nobody talked about these benefits 1-2 years ago.

Besides, if you throw power consumption into the mix, that's already a negative for AMD, which the public will weigh against AMD too. (Not that I care about power consumption, but many might.) If you have 2 similar products, the one that consumes more power is going to be worth less.

So don't just assume that the majority are stupid enough to believe a mere "halo" that Nvidia's marketing created. *It is obvious AMD didn't do enough with their product to match Nvidia's complete package.*


----------



## flopper

Quote:


> Originally Posted by *Clocknut*
> 
> The public want to buy a product that works at the time they buy it. Nvidia noticed this and capitalized on it.
> 
> It is only AMD's recent driver revisions, async compute & DX12 that have been able to bring AMD's performance up. Nobody talked about these benefits 1-2 years ago.
> 
> Besides, if you throw power consumption into the mix, that's already a negative for AMD, which the public will weigh against AMD too. (Not that I care about power consumption, but many might.) If you have 2 similar products, the one that consumes more power is going to be worth less.
> 
> So don't just assume that the majority are stupid enough to believe a mere "halo" that Nvidia's marketing created. *It is obvious AMD didn't do enough with their product to match Nvidia's complete package.*


AMD cards work better; meet people when a new game is out and the 970 owners just complain about it.
Grim Dawn, in this case.

Mahigan is right.


----------



## Klocek001

Quote:


> Originally Posted by *Charcharo*
> 
> Same here. I live in PC Gaming Land (do we even have console gamers? I know none) where people seem to generally understand gaming to a great deal. At least from personal experience.
> Yet my (admittedly not bad) rig is basically one of the highest end ones I know of.
> 
> A GTX 980 TI costs twice the average monthly salary (before tax) and 4 times the minimum. So does... a 780 TI.
> 
> On the other hand the Fury X is almost as "cheap" as a 980. However the R7 370 costs more than a GTX 950... bonkers
> 
> 
> 
> 
> 
> 
> 
> !


here a 980 Ti Hybrid = Fury X, and a 980 = 390X price-wise. But make no mistake, this doesn't mean nvidia is cheap here.
something happened to amd's prices after the r9 3xx "refresh"








the 290 was 1250 PLN, the 290X was 1500-1600 for 4GB and 1800 for 8GB.
now the 390 is 1500 PLN and the 390X is 1900 PLN.
the Fury (air-cooled) launched for as much as reference 980 Ti's (2700-2800 PLN), making it a lot more expensive than the 980.


----------



## Randomdude

I just go with gut feeling when making a purchase, but then again I don't upgrade every generation. The best "feeling" in this hobby for me is buying the big fat die of an architecture. The 580 and Titan X are such dies; between the Fury X and 980 Ti I'd buy the Fury X, simply because it's a full chip and the TX costs twice as much. Everyone has their reasoning.


----------



## Fyrwulf

Quote:


> Originally Posted by *Clocknut*
> 
> Beside if u throw in power consumption into the mix, thats already giving negative value to AMD, which the public will weighted value against AMD too. (not that I care about the power consumption, but many might). Besides if u have 2 similar products, the one that consume more power is going to worth lesser.


This myth that nVidia cards are _always_ more efficient than AMD cards needs to die.

http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x,4196-7.html
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-ti,4164-7.html

It's only under maximum load that nVidia cards are superior, and even the most hardcore gamer won't hit that a majority of the time. Now, I don't care about power consumption because here in America energy is plentiful and cheap, but that's not the case everywhere, and if that's going to be a selling point then let's put the true facts out there.


----------



## SpeedyVT

Quote:


> Originally Posted by *Fyrwulf*
> 
> This myth that nVidia cards are _always_ more efficient than AMD cards needs to die.
> 
> http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x,4196-7.html
> http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-ti,4164-7.html
> 
> It's only under maximum load that nVidia cards are superior and even the most hardcore gamer won't achieve that a majority of the time. Now I don't care about power consumption because here in America energy in plentiful and cheap, but that's not the case everywhere and if that's going to be a selling point then let's put the true facts out there.


Well you look at a Nano and suddenly the whole concept of efficient changes face.


----------



## magnek

Somebody didn't get the memo that efficiency only matters when nVidia is on top.


----------



## Fyrwulf

Quote:


> Originally Posted by *magnek*
> 
> Somebody didn't get the memo that efficiency only matters when nVidia is on top.


Well, to be fair, that's something that AMD started as a marketing campaign. It bit them hard when Intel and nVidia bought in on the idea and made sure their parts were more efficient. The crazy thing is that nVidia actually neutered the performance of their cards to win the efficiency battle and still came out on top in overall performance under Dx11. Now AMD is flipping the script again with the low level APIs. It must be so terribly taxing to work in the marketing department of nVidia and have to constantly come up with ways to maintain their monopoly.


----------



## Olivon

Quote:


> Originally Posted by *Fyrwulf*
> 
> The crazy thing is that nVidia actually neutered the performance of their cards to win the efficiency battle and still came out on top in overall performance under Dx11.


Even with a 200MHz+ overclock on it, power efficiency is still almost the same:

http://www.techpowerup.com/reviews/ASUS/GTX_980_Ti_Matrix/24.html

And ditching DP is not a problem for gamers; AMD did the same with their last two chips, Tonga and Fiji.


----------



## cowie

I think that for AMD's sake we should just keep it to how AMD totally owns in DX12 and not go into power use, because the charts don't lie, but you can misrepresent them at this point in time.


----------



## Blameless

Quote:


> Originally Posted by *Olivon*
> 
> And ditching DP is not a problem for gamers, AMD did the same with their last 2 chips, Tonga and Fiji.


Tonga and Fiji still have the same DP hardware as the other GCN parts...they just don't let consumers use it. No die area was saved. It's quite probable that GCN and DP capability are inseparable.


----------



## PontiacGTX

Quote:


> Originally Posted by *Blameless*
> 
> Tonga and Fiji still have the same DP hardware as the other GCN parts...they just don't let consumers use it. No die area was saved. It's quite probable that GCN and DP capability are inseparable.


The only thing that differs between FirePro/GL and consumer cards is the drivers; people could get a 290X working as a W9100 with a modded driver.


----------



## Charcharo

Quote:


> Originally Posted by *Clocknut*
> 
> The public want to buy a product that works at the time they buy it. Nvidia notice this and capitalize it.
> 
> it is only AMD recent driver revision, async compute & DX12 that able to bring up AMD performance. Nobody talk about these benefit 1-2yrs ago.
> 
> Beside if u throw in power consumption into the mix, thats already giving negative value to AMD, which the public will weighted value against AMD too. (not that I care about the power consumption, but many might). Besides if u have 2 similar products, the one that consume more power is going to worth lesser.
> 
> So dont just go assume that the majority are soo stupid enough to believe just a "halo" marketing the Nvidia create. *It is obviously AMD didnt do enough on their product to match up Nvidia complete package.*


Well... the VAST majority of games these days do not work on day one, so it seems to me like the public has no idea what they want.

Power Consumption is not so simple. Remember Idle power usage and Frame Rate Target Control. For advanced users, "undervolting" as well. It is not so simple at all.

Not to mention that this is not an issue EVEN in Bulgaria, let alone other, more developed countries... but still.

Very often most AMD products stack up very comparably and are cheaper/a bit better (especially at higher resolutions) so IDK. I do not see it like that for MOST of the products.

AMD does not market well enough. That really is it. Nothing more or less. Marketing is key.


----------



## airfathaaaaa

Anyone remember how many months the reviewers were pushing Arkham Knight as the best game and the best bench around?


----------



## Olivon

Quote:


> Originally Posted by *Blameless*
> 
> Tonga and Fiji still have the same DP hardware as the other GCN parts...they just don't let consumers use it. No die area was saved. It's quite probable that GCN and DP capability are inseparable.


I was talking about this :


Quote:


> And the fun doesn't stop there. Along with producing the biggest die they could, AMD has also more or less gone the direction of NVIDIA and Maxwell in the case of Fiji, building what is unambiguously the most gaming/FP32-centric GPU the company could build. With GCN supporting power-of-two FP64 rates between 1/2 and 1/16, AMD has gone for the bare minimum in FP64 performance that their architecture allows, leading to a 1/16 FP64 rate on Fiji. This is a significant departure from Hawaii, which implemented native support for ½ rate, and on consumer parts offered a handicapped 1/8 rate. Fiji will not be a FP64 powerhouse - its 4GB of VRAM is already perhaps too large of a handicap for the HPC market - so instead we get AMD's best FP32 GPU going against NVIDIA's best FP32 GPU.


http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/3


----------



## Defoler

Quote:


> Originally Posted by *Fyrwulf*
> 
> This myth that nVidia cards are _always_ more efficient than AMD cards needs to die.
> 
> http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x,4196-7.html
> http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-ti,4164-7.html
> 
> It's only under maximum load that nVidia cards are superior and even the most hardcore gamer won't achieve that a majority of the time. Now I don't care about power consumption because here in America energy in plentiful and cheap, but that's not the case everywhere and if that's going to be a selling point then let's put the true facts out there.


Those two comparisons are unequal. If you actually look at all the numbers and not just the final number, you can see that on the Nvidia side they also added motherboard consumption, which for some reason is missing from the AMD numbers.
Either they forgot or left it out on purpose; either way, they did not explain why it is missing from one set of numbers and present in the other.

It is also a bit strange that AMD seems to pull 428.80W from the 300W-rated connectors during gaming while Nvidia pulls 358.80W from 225W-rated connectors. Neither number really makes sense, unless both times they are also adding the 75W from the PCIe slot; in which case, why double-count it on the Nvidia side by putting it in an extra set under the motherboard total?


----------



## Worldwin

Quote:


> Originally Posted by *Defoler*
> 
> Those two comparisons are unequal. If you actually look at all the numbers and not the final number, you can see that on the nvidia side they also added motherboard consumption, which for some reason is missing from the AMD numbers.
> Either they forgot, left out on purpose, but either way, they did not explain why it is not there, or why it is on the nvidia numbers.
> 
> Also this is a bit strange, as AMD seems to pull 428.80W from the 300W rated connection during gaming, and nvidia are pulling 358.80W from 225W rated connections. Both of these numbers aren't really making sense, unless on both times they are also adding the 75W from the PCIE lanes, in which case, why double it on the nvidia side as putting it on an extra set under motherboard total?


Motherboard total is just the 3.3V and 12V rails added together; if you do the math, they are the same. As for the wattage exceeding the rated spec, look at it this way: the connector rating is a sustained limit, while instantaneous draw can spike well above it for brief moments. Pull 400W for the first half of a second and 100W for the last half, and you get an average of 250W for that second even though the peak was far above the rating.
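
The peak-vs-average distinction above can be sketched numerically; the sample values here are purely hypothetical, not measurements from either review:

```python
# Average power over a window of instantaneous samples: brief spikes
# above a connector's sustained rating can coexist with a modest average.
samples_w = [400.0] * 50 + [100.0] * 50  # hypothetical samples over one second

peak_w = max(samples_w)
average_w = sum(samples_w) / len(samples_w)
print(f"peak: {peak_w} W, average: {average_w} W")  # peak: 400.0 W, average: 250.0 W
```

Which of the two a review reports (and where in the sampling window it measures) changes the headline number quite a bit.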


----------



## mtcn77

Quote:


> Originally Posted by *Defoler*
> 
> Those two comparisons are unequal. If you actually look at all the numbers and not the final number, you can see that on the nvidia side they also added motherboard consumption, which for some reason is missing from the AMD numbers.
> Either they forgot, left out on purpose, but either way, they did not explain why it is not there, or why it is on the nvidia numbers.
> 
> Also this is a bit strange, as AMD seems to pull 428.80W from the 300W rated connection during gaming, and nvidia are pulling 358.80W from 225W rated connections. Both of these numbers aren't really making sense, unless on both times they are also adding the 75W from the PCIE lanes, in which case, why double it on the nvidia side as putting it on an extra set under motherboard total?


To the contrary, the 980 Ti review is missing some key charts. I'm curious what those results would look like in relation to the Fury X.


----------



## Mahigan

Quote:


> Originally Posted by *Defoler*
> 
> Those two comparisons are unequal. If you actually look at all the numbers and not the final number, you can see that on the nvidia side they also added motherboard consumption, which for some reason is missing from the AMD numbers.
> Either they forgot, left out on purpose, but either way, they did not explain why it is not there, or why it is on the nvidia numbers.
> 
> Also this is a bit strange, as AMD seems to pull 428.80W from the 300W rated connection during gaming, and nvidia are pulling 358.80W from 225W rated connections. Both of these numbers aren't really making sense, unless on both times they are also adding the 75W from the PCIE lanes, in which case, why double it on the nvidia side as putting it on an extra set under motherboard total?


I tore this myth apart over at HardOCP.

Take this review of the Fury against a GTX 980:
http://m.hardocp.com/article/2015/07/10/asus_strix_r9_fury_dc3_video_card_review/1

And I ran the numbers...
Quote:


> GTX980: 320 Watts, average FPS: 51.9
> R9 Fury: 367 Watts, average FPS: 60.4
> 
> GTX980 = 0.16 Frames/watt
> Fury = 0.17 Frames/watt
> 
> Or
> 
> GTX980 = 6.2 watts/frame
> Fury = 6.1 watts/frame
> 
> So mathematically, meaning empirically, speaking... Kyle's conclusion was wrong.
> 
> Bahanime is correct. The R9 Fury delivers more performance per watt than the GTX 980. The R9 Fury is therefore more "efficient" in terms of power usage when compared to delivered performance.


Of course if you run FurMark or otherwise fully load the Fury, it will end up consuming more power, but during gaming? Nope.
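
The quoted arithmetic is easy to reproduce; a small sketch using the HardOCP averages cited above:

```python
# Frames/watt and watts/frame from the averages quoted in the post above.
cards = {
    "GTX 980": {"avg_watts": 320.0, "avg_fps": 51.9},
    "R9 Fury": {"avg_watts": 367.0, "avg_fps": 60.4},
}
for name, c in cards.items():
    fps_per_watt = c["avg_fps"] / c["avg_watts"]
    watts_per_frame = c["avg_watts"] / c["avg_fps"]
    print(f"{name}: {fps_per_watt:.3f} frames/watt, {watts_per_frame:.2f} watts/frame")
```

On these numbers the Fury comes out marginally ahead in frames per watt (about 0.165 vs 0.162), which is the basis of the claim above; at two decimal places the gap all but disappears, so "more efficient" here means "roughly tied".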


----------



## rickcooperjr

Quote:


> Originally Posted by *Mahigan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Defoler*
> 
> Those two comparisons are unequal. If you actually look at all the numbers and not the final number, you can see that on the nvidia side they also added motherboard consumption, which for some reason is missing from the AMD numbers.
> Either they forgot, left out on purpose, but either way, they did not explain why it is not there, or why it is on the nvidia numbers.
> 
> Also this is a bit strange, as AMD seems to pull 428.80W from the 300W rated connection during gaming, and nvidia are pulling 358.80W from 225W rated connections. Both of these numbers aren't really making sense, unless on both times they are also adding the 75W from the PCIE lanes, in which case, why double it on the nvidia side as putting it on an extra set under motherboard total?
> 
> 
> 
> I tore this myth apart over at HardOCP.
> 
> Take this review of the Fury against a GTX 980:
> http://m.hardocp.com/article/2015/07/10/asus_strix_r9_fury_dc3_video_card_review/1
> 
> And I ran the numbers...
> Quote:
> 
> 
> 
> GTX980: 320 Watts, average FPS: 51.9
> R9 Fury: 367 Watts, average FPS: 60.4
> 
> GTX980 = 0.16 Frames/watt
> Fury = 0.17 Frames/watt
> 
> Or
> 
> GTX980 = 6.2 watts/frame
> Fury = 6.1 watts/frame
> 
> So mathematically, meaning empirically, speaking... Kyle's conclusion was wrong.
> 
> Bahanime is correct. The R9 Fury delivers more performance per watt than the GTX 980. The R9 Fury is therefore more"efficient" in terms of power usage when compared to delivered performance.
> 
> 
> Of course if you run furmark or just load up the Fury, it will end up consuming more power but for gaming? Nope.

very very interesting and eye opening


----------



## Clocknut

Quote:


> Originally Posted by *Charcharo*
> 
> Well... the VAST majority of games these days do not work on day one, so it seems to me like the public has no idea what they want.


Not really, I was talking about launch drivers. AMD seemed to suck at those back then. Had Tahiti (the 7970) launched with a polished driver, it would have destroyed GK104 and forced Nvidia to sell it as a GTX 660 Ti.

Another scenario: Hawaii. Had it had a polished driver, it would have been above the 780 Ti at launch instead of below it. You see, every time AMD has something new, Nvidia is able to counter it.

The 290X was also plagued by its crappy reference cooler at launch. Nobody was allowed to make AIB versions for the first few months; you either had to accept the cooler or wait for months. Fury X? It requires closed-loop cooling and had no custom AIB versions at launch. I have no idea why AMD would intentionally restrict their flagship. The Fury X should have been completely open to AIBs at launch.

What about comparing AMD vs Nvidia exclusive features? Nvidia adds extra effects to their PhysX games. Did AMD add extra effect particles with Mantle? Nope. Despite Mantle's impressive draw calls, AMD did not capitalize on that advantage. With extra draw calls, developers could at least have thrown extra things on screen without tanking the fps. These are selling points for the public.

Now let's get back on topic and compare Ashes of the Singularity against an Nvidia-gimped title... say, Assassin's Creed. Clearly Assassin's Creed is the far more popular title.

Oh yeah, I almost forgot: let's talk about WHQL drivers. There was a period where AMD's newest WHQL driver was six months old. It wouldn't matter much to enthusiasts, since they'd use the newer beta driver. But if you're a casual user, AMD recommends you download the WHQL driver, and since it obviously isn't ready for the latest games, the casual user then hits bugs or broken features on it and thinks "AMD drivers suck".
Quote:


> Originally Posted by *Defoler*
> 
> Those two comparisons are unequal. If you actually look at all the numbers and not the final number, you can see that on the nvidia side they also added motherboard consumption, which for some reason is missing from the AMD numbers.
> Either they forgot, left out on purpose, but either way, they did not explain why it is not there, or why it is on the nvidia numbers.
> 
> Also this is a bit strange, as AMD seems to pull 428.80W from the 300W rated connection during gaming, and nvidia are pulling 358.80W from 225W rated connections. Both of these numbers aren't really making sense, unless on both times they are also adding the 75W from the PCIE lanes, in which case, why double it on the nvidia side as putting it on an extra set under motherboard total?


The two products also share the same TDP rating, so it is no surprise they share the same power consumption. Fiji came out after AMD had lost considerable market share.

He seems to conveniently ignore the 290X (Hawaii) vs the GTX 980, the R9 285 vs the GTX 960, the GTX 950 vs the 7870/270X, and the 750 Ti vs the 265. That's the whole product stack being less efficient for the performance delivered.

This is the reason why I said AMD has a lackluster GPU lineup overall. They need a better "complete package" in everything.


----------



## Forceman

Quote:


> Originally Posted by *Mahigan*
> 
> I tore this myth apart over at HardOCP.
> 
> Take this review of the Fury against a GTX 980:
> http://m.hardocp.com/article/2015/07/10/asus_strix_r9_fury_dc3_video_card_review/1
> 
> And I ran the numbers...
> Of course if you run furmark or just load up the Fury, it will end up consuming more power but for gaming? Nope.


And yet:



https://www.techpowerup.com/mobile/reviews/ASUS/R9_Fury_Strix/32.html


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> Hmm.
> 
> 
> 
> https://www.techpowerup.com/mobile/reviews/ASUS/R9_Fury_Strix/32.html


It all comes down to how you tested. If all you did was max out the Fury and then compared its maxed power usage to its avg frames per second you'd net that result.

What Hard OCP did was calculate the average power usage while playing those games. So all I had to do was compare it to the avg FPS numbers from HardOCP.

That gave me those results. Feel free to calculate it yourself.


----------



## Forceman

Quote:


> Originally Posted by *Mahigan*
> 
> It all comes down to how you tested. If all you did was max out the Fury and then compared its maxed power usage to its avg frames per second you'd net that result.
> 
> What Hard OCP did was calculate the average power usage while playing those games. So all I had to do was compare it to the avg FPS numbers from HardOCP.
> 
> That gave me those results. Feel free to calculate it yourself.


So like this:
Quote:


> The following graphs show the efficiency of the cards in our test group. We used the relative performance scores and the *typical gaming power consumption* result. These numbers are based on the performance summary with all games included.


----------



## STEvil

The 1920x1080 performance summary numbers don't seem to add up correctly.

The 980 is consistently 7-30 fps behind the Fury Strix aside from a few games (Project CARS, WoW, CoD), but is rated at 100% performance, the same as the Fury Strix, at the end.

I fully admit I did not break out a calculator and add every number together, but the numbers do look off just from keeping a running tally of fps deviations as I clicked through the pages.

That said, there should be some debate over how the benchmarks are done as well, since running a "timedemo"-style benchmark gives misleading numbers about what actual performance will be.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> And yet:
> 
> 
> 
> https://www.techpowerup.com/mobile/reviews/ASUS/R9_Fury_Strix/32.html


Not saying I agree with whether or not the Fury is more efficient than a 980, because to me 50 watts either way is a moot point for high-end hardware anyway. But you can throw that graph out the window thanks to the disgrace that is Project CARS being used in that average.

Titles like that aren't going to throw off the results, are they?


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> So like this:


Nope read up:
Quote:


> We use Metro: Last Light as a standard test representing typical 3D gaming usage because it offers the following: very high power draw; high repeatability; is a current game that is supported on all cards; drivers are actively tested and optimized for it


https://www.techpowerup.com/mobile/reviews/ASUS/R9_Fury_Strix/29.html

They use a single game. I averaged the games. My results are more accurate than theirs.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Mahigan*
> 
> Nope read up:
> https://www.techpowerup.com/mobile/reviews/ASUS/R9_Fury_Strix/29.html
> 
> They use a single game. I averaged the games. My results are more accurate than theirs.


I always thought they averaged all titles together..







That is a completely useless test then, imo. What's the point of only using one damn game? Furmark too? Wth... Surely AMD's architectures would naturally use more power with a synthetic like that? I'll ignore those results from now on.


----------



## Blameless

Quote:


> Originally Posted by *Olivon*
> 
> I was talking about this :
> http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/3


Ah, I was unaware that 1/16 was a hardware limitation, probably because the GCN whitepaper I've been looking at long predates Fiji.

Thanks for the info.
Quote:


> Originally Posted by *Clocknut*
> 
> The 2 product also share the same TDP rating. So it is not surprise it shares the same power consumption.


Rated TDP often doesn't say much, unless it's being used as a throttle point and the apps in question are reaching it.

I have parts of the exact same model that differ significantly in power consumption. This is often the case for certain half-assed non-reference designs that give every GPU the same voltage regardless of leakage.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> I always thought they averaged all titles together..
> 
> 
> 
> 
> 
> 
> 
> That is a completely useless test then imo. What's the point of only using one damn game..


I didn't read the fine print and made the same assumption at first.

Still, the usefulness of the test would depend on how representative of gaming in general Metro: LL is.


----------



## Worldwin

Also don't forget that AMD uses higher default voltages to maximize yields, as well as to provide a margin in case the consumer's PSU is subpar and can't deliver. If you look at stock voltages you'll find they are excessive; e.g. my 390X was set at 1.3V and I could drop it by 0.1V and maintain stock speeds.


----------



## Forceman

Quote:


> Originally Posted by *Mahigan*
> 
> Nope read up:
> https://www.techpowerup.com/mobile/reviews/ASUS/R9_Fury_Strix/29.html
> 
> They use a single game. I averaged the games. My results are more accurate than theirs.


Quote:


> Originally Posted by *GorillaSceptre*
> 
> I always thought they averaged all titles together..
> 
> 
> 
> 
> 
> 
> 
> That is a completely useless test then imo. What's the point of only using one damn game.. Furmark too? Wth.. Surely AMD's architecture's would naturally use more power with a synthetic like that? I'll ignore those results from now on.


They average all the titles for performance and use that in the perf/watt, but they use Metro for the power consumption number rather than measuring it in all titles.

Does HardOCP measure power in every game and average it, or just take a single reading? They don't say; they just say they "use real gaming".


----------



## GorillaSceptre

Quote:


> Originally Posted by *Blameless*
> 
> I didn't read the fine print and made the same assumption at first.
> 
> Still, the usefulness of the test would depend on how representative of gaming in general Metro: LL is.


Yeah, I just assumed they averaged all the games because TPU is usually very thorough. I didn't think they would half-ass such an "important" aspect.

It seems that Metro isn't very representative at all, as Mahigan showed by averaging other games together. Not to mention Furmark... I don't even.








Quote:


> Originally Posted by *Forceman*
> 
> They average all the titles for performance, they just use Metro for the power consumption rather than measuring it in all titles.
> 
> Does HardOCP measure power in every game and average it, or just a single reading? They don't say, they just say they "use real gaming" .


I'm aware of the performance averages; I was talking about power consumption.


----------



## Forceman

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Yeah, i just assumed they averaged all the games because TPU is usually very thorough. I didn't think they would half-ass such an "important" aspect.
> 
> It seems that Metro isn't very representative at all, as Mahigan showed by averaging other games together. Not to mention Furmark.. I don't even.
> 
> 
> 
> 
> 
> 
> 
> 
> I'm aware of the performance averages, i was talking about power consumption.


The performance average is used in perf/watt, but the watt part (and the power consumption chart) is from Metro and not all titles. The HardOCP review doesn't say where they get their power number.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> The performance average is used in perf/watt, but the watt part is from Metro and not all titles.


? I know.







That was my point; it makes it completely useless. Let's use a performance average across a bunch of titles, including ones like Project CARS, then just use Furmark and Metro to test power consumption, throw them together, and call it an average. Nvidia wins, case closed.


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> The performance average is used in perf/watt, but the watt part is from Metro and not all titles.


If you do the math, they used the Metro numbers. Just add up the avg fps for all the games and divide by 22, then divide that figure by the Metro wattage.

You'll get their figures. If you compare the GTX 980 and Fury differences, you'll get the same percentages.
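
The two methodologies being argued about here can be sketched side by side; the per-game numbers below are hypothetical placeholders, not TPU's or HardOCP's data:

```python
# Two ways to compute performance per watt (hypothetical numbers):
#  - single-game wattage: all-games average FPS / ONE game's wattage (TPU-style)
#  - averaged wattage:    all-games average FPS / all-games average wattage
games = {  # title: (avg_fps, avg_watts) -- placeholder values
    "Game A": (60.0, 350.0),
    "Game B": (80.0, 300.0),
    "Game C": (40.0, 380.0),  # stand-in for the single "typical" game
}
avg_fps = sum(fps for fps, _ in games.values()) / len(games)
avg_watts = sum(w for _, w in games.values()) / len(games)
single_game_watts = games["Game C"][1]

tpu_style = avg_fps / single_game_watts
all_games = avg_fps / avg_watts
print(f"single-game wattage: {tpu_style:.3f}, averaged wattage: {all_games:.3f}")
```

The two figures diverge whenever the single chosen game isn't representative of the whole suite, which is exactly the crux of the disagreement in this thread.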


----------



## Forceman

Quote:


> Originally Posted by *GorillaSceptre*
> 
> ? I know.
> 
> 
> 
> 
> 
> 
> 
> That was my point, it makes it completely useless. Lets use a performance average across a bunch of titles, including ones like Pcars, and then we'll just use Furmark and metro to test power consumption, throw them together and call it an average. Nvidia wins, case closed.


Quote:


> Originally Posted by *Mahigan*
> 
> If you do the math, they used the metro numbers. Just add up all the avg fps for all the games and divide by 22 then take that figure and divide the metro voltage by it.
> 
> You'll get their figures. If you compare the GTX 980 and Fury differences, you'll get the same percentages.


Does anyone measure power consumption in all games? You are making the assumption that the HardOCP number is across all games, but they don't say that specifically, just that it is measured in gaming use.


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> Does anyone measure power consumption in all games? You are making the assumption that the HardOCP number is across all games, but they don't say that specifically, just that it is measured in gaming use.


No idea,

I only stumbled upon this recently. Of course the GTX 980 Ti offers more performance per watt than the Fury X, but the Fury is surprisingly competitive.


----------



## Forceman

Quote:


> Originally Posted by *Mahigan*
> 
> No idea,
> 
> I only stumbled upon this recently. Of course the GTX 980 Ti offers more performance per watt over the FuryX but the Fury is surprisingly competitive.


Actually, it looks like they do. I looked back over old reviews, and the launch Fury X review mentions that the power draw was different across different games, so they must measure it multiple times.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> Actually, it looks like they do. I looked back over old reviews, and the launch Fury X review mentions that the power draw was different across different games, so they must measure it multiple times.


It's all irrelevant really. Based on [H]'s own data they came to the incorrect conclusion that the 980 is more efficient than the Fury. So whether or not they measured across all games is beside the point; how and why they concluded the 980 is "better" than the Fury in that respect is what's suspicious. Based on their actions and comments on that forum, it's pretty obvious they don't care for AMD too much; even when AMD deserves credit for something, they still paint Nvidia in a better light.

The whole [H] thing aside, TPU need to change the way they get perf-per-watt, their current method is nonsensical, how they got the idea to use Furmark power draw to show perf-per-watt across games they never tested is beyond me..


----------



## Forceman

It's all pretty irrelevant anyway, since straight performance per watt is so skewed towards the low-end. The 750Ti crushes everything, but that hardly matters.

Total power is somewhat more important (in a given performance category) since that's what you have to dissipate, but ultimately there isn't really much difference in a given tier.

Edit: where do you see TPU using Furmark for perf/watt? They use Metro for that, Furmark is just for the maximum power consumption chart.


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> Actually, it looks like they do. I looked back over old reviews, and the launch Fury X review mentions that the power draw was different across different games, so they must measure it multiple times.


Well that's not what they said and you quoted it to me..
Quote:


> The following graphs show the efficiency of the cards in our test group. We used the relative performance scores and the *typical gaming power consumption* result. These numbers are based on the performance summary with all games included.


So they took the relative performance scores, which means averaging out all the games avg FPS for both cards, and compared that to a single result (their typical gaming power consumption result). This single result is based on a single game.

So they didn't do like I did. I calculated the avg FPS per watt (as well as avg watts per frame). They calculated the FPS per Metro Last Light Wattage. Or Metro Last Light Wattage per frame.

It's not the same thing at all.

None of this surprises me at all, not at all. And I can understand why many folks at AMD were frustrated and refused a Nano to Kyle. The guy is, well, not honest. After what he did, does it make sense to send him a more power efficient card if he's not going to be honest about it as well?

After I pointed it out to him, he proceeded to mock me and to change my forum "title" (name under your username) to a bunch of insults and award me a bunch of insult awards. He's not well in the head.

He also claimed that I should understand that HardOCP is about "The Gaming Experience", which is subjective and not an objective metric. So I hit back that he should therefore not call it performance per watt but instead Gaming Experience per watt. He didn't like that very much either (another round of insulting titles).


----------



## Forceman

Quote:


> Originally Posted by *Mahigan*
> 
> Well that's not what they said and you quoted it to me..
> So they took the relative performance scores, which means averaging out all the games avg FPS for both cards, and compared that to a single result (their typical gaming power consumption result). This single result is based on a single game.
> 
> So they didn't do like I did. I calculated the avg FPS per watt (as well as avg watts per frame). They calculated the FPS per Metro Last Light Wattage. Or Metro Last Light Wattage per frame.
> 
> It's not the same thing at all.
> 
> None of this surprises me at all, not at all. And I can understand why many folks at AMD were frustrated and refused a Nano to Kyle. The guy is, well, not honest.


It's not the same thing if the HardOCP power number is an average across all the games they test (which it appears to be, but they don't say specifically). It's exactly the same if their power number is a representative number based off a single sample. Someone should ask them how they get their power consumption number, since they don't make it clear in their review.

Bringing it back around to this thread, I wonder what DX12 and async compute do to AMD's power draw? Someone with the beta should check it real quick.

Edit: Oh, I see where you are coming from. This quote is about HardOCP, not TPU. I was saying that it looks like HardOCP measures it in every test, even though they don't say for sure.
Quote:


> Originally Posted by *Forceman*
> 
> Actually, it looks like they do. I looked back over old reviews, and the launch Fury X review mentions that the power draw was different across different games, so they must measure it multiple times.


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> It's not the same thing if the HardOCP power number is an average across all the games they test (which it appears to be, but they don't say specifically). It's exactly the same if their power number is a representative number based off a single sample. Someone should ask them how they get their power consumption number, since they don't make it clear in their review.
> 
> Bringing it back around to this thread, I wonder what DX12 and async compute do to AMD's power draw? Someone with the beta should check it real quick.
> 
> Edit: Oh, I see where you are coming from. This quote is about HardOCP, not TPU. I was saying that it looks like HardOCP measures it in every test, even though they don't say for sure.


Ahh ok, I knew that because Kyle confirmed it to me in the forums.

As for async, power draw will rise: idling CUs get put to work, along with the caches, ACEs, etc.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> Edit: where do you see TPU using Furmark for perf/watt? They use Metro for that, Furmark is just for the maximum power consumption chart.


I worded what I meant wrong. There's actually nothing wrong with their results; the problem is that those graphs get thrown around on every forum I know of. When people read a video card review from them, they go through pages and pages of detailed results for each specific game, so at the end it's easy to assume they tested the power consumption for each game too. And when you see a maximum power draw chart, it's easy to assume those results are the peak during gaming (something I had always thought). Admittedly it's my fault for not reading properly, but it seems I'm not the only one.

For instance, you used the perf-per-watt graph to disprove what Mahigan was saying, even though they only test power draw in one title. Someone else ages ago used the maximum power draw chart to disprove something I said, but unlike Mahigan, I can't read.. So I'm not saying there's anything wrong with TPU's conclusions; I just think it's irresponsible to throw the results together like that. They should label the graph "perf-per-watt using Metro for power draw", but then that would look stupid, wouldn't it? Because it is.


----------



## Forceman

Quote:


> Originally Posted by *GorillaSceptre*
> 
> I worded what i meant wrong. There's actually nothing wrong with their results, the problem is those graphs are thrown around every forum i know of, when people read a video card review from them they go through pages and pages of detailed results for each specific game, so at the end it's easy to make the assumption that they tested the power consumption for each game too. So when you see a maximum power draw chart it's easy to assume that those results are the peak during gaming (something i have always thought), admittedly it's my fault for not reading properly, but it seems I'm not the only one.
> 
> For instance, you used the perf-per-watt graph to disprove what mahigan was saying, even though they only test the power draw from one title. Someone else ages ago used the maximum power draw one to disprove something i said, but unlike mahigan, i can't read.. So I'm not saying there's anything wrong with TPU's conclusions, i just think it's irresponsible to throw the results together like that. They should label the graph, "perf-per-watt using Metro as power draw", but then that would look stupid wouldn't it? Because it is.


Well now I'm curious, because HardOCP shows it drawing 281W (367W load minus 86W idle for the system) while TPU's Metro: LL average number is only 200W. And looking at a few other reviews, TPU's numbers are lower across the board: the 980 Ti Strix is 332W at [H] but only 236W at TPU, and the 380X Strix is 237W at [H] but only 171W at TPU. So either Metro: LL isn't as stressful a load as TPU thinks, or something else is going on (I think [H] measures at the wall, so PSU efficiency is a factor there, but it shouldn't account for that much of a discrepancy).
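Part of that gap is explainable if [H] really does measure at the wall, since wall draw includes the PSU's conversion loss. A rough sketch (the 85% efficiency figure is an assumption for a typical unit at mid load, not a measured value):

```python
# Rough wall-power vs component-power sanity check.
# PSU efficiency is ASSUMED at 85% (typical mid-load for a decent unit);
# the [H]-style wattages are the ones quoted in the thread.

def dc_power_from_wall(wall_watts, psu_efficiency=0.85):
    """Estimate DC power delivered to components from AC wall draw."""
    return wall_watts * psu_efficiency

# [H] system delta for the Fury: 367W load - 86W idle = 281W at the wall
wall_delta = 367 - 86
dc_delta = dc_power_from_wall(wall_delta)
print(round(dc_delta))  # -> 239 (watts of actual DC power)
```

Even after stripping ~15% PSU loss from the 281W wall delta, roughly 39W of the discrepancy with TPU's 200W remains, which would have to come from the extra CPU/board load that a system-level delta also captures.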


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> Well now I'm curious, because HardOCP show it drawing 281W (367-86 for the system) while TPU's Metro:LL average number is only 200W? And looking at a few other reviews, TPU numbers are lower across the board. Like 980 Ti Stirx is 332W at [H] but only 236W at TPU, and the 380X Strix is 237W at [H] but only 171W at TPU. So either Metro:LL isn't as stressful on load as TPU thinks, or something else is going on (I think [H] measures at the wall so PSU efficiency is a factor there, but it shouldn't account for that much of a discrepancy).


Something is not right that's for sure...

One thing worth mentioning: system power usage should always be higher when an NVIDIA card is present during gaming, because of static scheduling and the multi-threaded driver.


----------



## magnek

HardOCP's power consumption numbers have always been whack and a fair bit deviated from all the other major sites, so I completely disregard their efficiency tests.


----------



## Clocknut

Quote:


> Originally Posted by *Forceman*
> 
> It's all pretty irrelevant anyway, since straight performance per watt is so skewed towards the low-end. The 750Ti crushes everything, but that hardly matters.
> 
> Total power is somewhat more important (in a given performance category) since that's what you have to dissipate, but ultimately there isn't really much difference in a given tier.
> 
> Edit: where do you see TPU using Furmark for perf/watt? They use Metro for that, Furmark is just for the maximum power consumption chart.


ohhh forget that 750Ti......they are going to only take 980Ti vs fury X for comparison and ignore everything below.


----------



## Mahigan

Quote:


> Originally Posted by *Clocknut*
> 
> ohhh forget that 750Ti......they are going to only take 980Ti vs fury X for comparison and ignore everything below.


No, not at all, I'm talking about Fury vs. GTX 980 not FuryX vs. GTX 980 Ti.

When it comes to the R9 390/x, both have a lower performance per watt figure than the GTX 970.

Something tells me that when Polaris releases, and if its performance per watt is better than Pascal's, many folks will stop caring about performance per watt, just as they didn't care when Fermi was around. It's a game as old as 3dfx, ATi and NVIDIA. It also resembles politics, where Democrats will be all anti-war when Republicans are in power, and then when they're in power and bombing some country they'll talk about respecting the Commander in Chief, call disagreement treason, and praise the national security state. It's a complete reversal when Republicans are out of power: they yell "Benghazi", "the economy", etc.

Partisanship likes to masquerade as having higher principles and morals, but it never does. It's always about supporting your team no matter what.

We see that behavior across AMD and NVIDIA partisans.


----------



## DeathMade

Quote:


> Originally Posted by *Mahigan*
> 
> Well that's not what they said and you quoted it to me..
> So they took the relative performance scores, which means averaging out all the games avg FPS for both cards, and compared that to a single result (their typical gaming power consumption result). This single result is based on a single game.
> 
> So they didn't do like I did. I calculated the avg FPS per watt (as well as avg watts per frame). They calculated the FPS per Metro Last Light Wattage. Or Metro Last Light Wattage per frame.
> 
> It's not the same thing at all.
> 
> None of this surprises me at all, not at all. And I can understand why many folks at AMD were frustrated and refused a Nano to Kyle. The guy is, well, not honest. After what he did, does it make sense to send him a more power efficient card if he's not going to be honest about it as well?
> 
> After I pointed it out to him, he proceeded to mock me and to change my forum "title" (name under your username) to a bunch of insults and award me a bunch of insult awards. He's not well in the head.
> 
> He also claimed that I should understand that HardOCP is about "The Gaming Experience", which is subjective and not an objective metric. So I hit back that he should therefore not call it performance per watt but instead Gaming Experience per watt. He didn't like that very much either (another round of insulting titles).


I've read his thread about AMD 16.2 driver. That was hilarious read indeed


----------



## Defoler

Quote:


> Originally Posted by *Mahigan*
> 
> I tore this myth apart over at HardOCP.
> 
> Take this review of the Fury against a GTX 980:
> http://m.hardocp.com/article/2015/07/10/asus_strix_r9_fury_dc3_video_card_review/1
> 
> And I ran the numbers...
> Of course if you run furmark or just load up the Fury, it will end up consuming more power but for gaming? Nope.


That is not really correct.

If you look at apples to apples, with GameWorks enabled for example, the 980 gives almost the same average FPS.
So those calculations end up in NVIDIA's favor.
Also, in some games even with GameWorks off, the 980 gives similar average FPS to the Fury; sometimes even better, sometimes much worse.

So depending on the game you are playing, the 980 can have much better performance per watt than the Fury, and sometimes it's the opposite; it depends on which vendor the developer worked with on the game.

Since HardOCP reviews have been a bit... shady... in the past, I don't really take their word on performance, but even if you want to go by their numbers, your conclusion isn't really correct.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Mahigan*
> 
> No, not at all, I'm talking about Fury vs. GTX 980 not FuryX vs. GTX 980 Ti.
> 
> When it comes to the R9 390/x, both have a lower performance per watt figure than the GTX 970.
> 
> Something tells me that when Polaris releases, and if its performance per watt is better than Pascal, many folks will stop caring about performance/watt just as they didn't care when Fermi was around. It's a game as old as 3Dfx, ATi and nVidia. It also resembles politics where Democrats will be all anti-war when Republicans are in power and then when they're in power and bombing some country they'll talk about respecting the Commander and Chief, disagreeing is treason, praise the National Security State. It's a complete reversal when Republicans are out of power. They yell "Benghazi", the "Economy" etc.
> 
> Partisanship likes to masquerade as having higher principles and morals but it never does. Its always about supporting your team no matter what.
> 
> We see that behavior across AMD and NVIDIA partisans.


Yeah, first it was the noise,
then it was the FPS,
then it was even having the biggest card,
after that it was the heat,
and now it's the perf/watt.
I guess the next one will be the name.


----------



## specopsFI

Quote:


> Originally Posted by *Mahigan*
> 
> No, not at all, I'm talking about Fury vs. GTX 980 not FuryX vs. GTX 980 Ti.
> 
> When it comes to the R9 390/x, both have a lower performance per watt figure than the GTX 970.
> 
> Something tells me that *when Polaris releases, and if its performance per watt is better than Pascal*, many folks will stop caring about performance/watt just as they didn't care when Fermi was around. It's a game as old as 3Dfx, ATi and nVidia. It also resembles politics where Democrats will be all anti-war when Republicans are in power and then when they're in power and bombing some country they'll talk about respecting the Commander and Chief, disagreeing is treason, praise the National Security State. It's a complete reversal when Republicans are out of power. They yell "Benghazi", the "Economy" etc.
> 
> Partisanship likes to masquerade as having higher principles and morals but it never does. Its always about supporting your team no matter what.
> 
> We see that behavior across AMD and NVIDIA partisans.


Look, now, I think you've posted a lot of interesting stuff on the forums and indeed many of the things you've pointed out are beginning to make sense. But lately you've started to go off the deep end with statements such as the bolded part above. You remind me of raghu78 who is able to give very good advice based on factual information, but who also has a tendency to extrapolate based on biased and/or incomplete data. There is simply no way you could know how Polaris vs Pascal will end up going. When you make such wildly speculative statements with such arrogant self-confidence, and in the same post preach about partisanship...


----------



## SuperZan

Not for nothing but he did say 'if'.


----------



## magnek

Quote:


> Originally Posted by *specopsFI*
> 
> Look, now, I think you've posted a lot of interesting stuff on the forums and indeed many of the things you've pointed out are beginning to make sense. But lately you've started to go off the deep end with statements such as the bolded part above. You remind me of raghu78 who is able to give very good advice based on factual information, but who also has a tendency to extrapolate based on biased and/or incomplete data. There is simply no way you could know how Polaris vs Pascal will end up going. When you make such wildly speculative statements with such arrogant self-confidence, and in the same post preach about partisanship...


Well to be fair if Polaris did indeed end up having better perf/watt than Pascal, I'm sure that metric will lose a significant amount of importance.


----------



## specopsFI

But that is just it. Partisanship is the attitude that makes people make speculative predictions about the future and use them as arguments. To use that kind of argument against partisanship as a phenomenon is just hypocrisy at its purest.


----------



## Olivon

Quote:


> Originally Posted by *Mahigan*
> 
> It all comes down to how you tested. If all you did was max out the Fury and then compared its maxed power usage to its avg frames per second you'd net that result.
> 
> What Hard OCP did was calculate the average power usage while playing those games. So all I had to do was compare it to the avg FPS numbers from HardOCP.
> 
> That gave me those results. Feel free to calculate it yourself.


Looking at Hardware.fr's power efficiency numbers, Fiji is better than Hawaii, but NVIDIA still has a big advantage in BF4 (reasonable GPU load) and Anno 2070 (high GPU load):

 

http://www.hardware.fr/articles/937-7/consommation-efficacite-energetique.html

A retail Fury X (not an AMD sample) offers somewhat worse results, but variation between samples is quite normal.

If we're looking at Fury scores:

 

http://www.hardware.fr/articles/938-4/consommation-efficacite-energetique.html

The Strix sample got better power efficiency than the Sapphire; Maxwell cards are still ~28% more power efficient.

Nano scores now :

 

http://www.hardware.fr/articles/942-4/consommation-efficacite-energetique.html

By capping power consumption by 50%, Fiji is now able to play in the same ballpark as Maxwell in power efficiency.

What can we conclude from these graphs?

Fiji, like Hawaii, offers great performance scaling with GPU clock, but achieving higher clocks costs a lot of power.
On the other hand, NVIDIA continues to offer great efficiency even with custom cards, which allows another level of performance and tons of model possibilities for partners.
A good old 7970 is a tiny bit more power efficient than a 390X, for example, and that's quite a sad achievement.
Not being able to reach high clocks without blowing through every power standard is a real, massive problem for AMD, particularly when you know their reference cards are already pushed quite high and clock margin is really low.

Power efficiency is not just a power-bill question; it's a strategic factor. When NVIDIA saw the wall coming with Fermi, they turned back.
Now it's time for AMD to follow the same path, and we will see what Polaris and its derivatives bring to the PC gaming scene.
For the good of all, we can only hope that AMD will show a different face. Without competition, things get boring.


----------



## Themisseble

Quote:


> Originally Posted by *Olivon*
> 
> Loking at Hardware.fr power efficiency number, Fiji is better than Hawaii but nVidia still got a big advantage in BF4 (reasonnable GPU load) and Anno 2070 (high gpu load) :
> 
> 
> 
> http://www.hardware.fr/articles/937-7/consommation-efficacite-energetique.html
> 
> Fury-X direct from the street (not an AMD sample) offers lesser good results but seeing variations between samples is quite normal.
> 
> If we're looking at Fury scores :
> 
> 
> 
> http://www.hardware.fr/articles/938-4/consommation-efficacite-energetique.html
> 
> Srix sample got better power efficiency than Sapphire, Maxwell cards are still ~28% better power efficient.
> 
> Nano scores now :
> 
> 
> 
> http://www.hardware.fr/articles/942-4/consommation-efficacite-energetique.html
> 
> By capping power consumption by 50%, Fiji is now able to play in the same ballpark than Maxwell regarding power efficiency.
> 
> What we can conclude from this graphs ?
> 
> Fiji, llike Hawai,offers great perfromance scaling with GPU clock, but to achieve higher clocks it costs a lot of power.
> On the other hand, nVidia continue to offer great efficiency even with custom cards that allow another level of performance and tons of models possibility for partners.
> A good old 7970 is a tiny more power effcient than a 390X for example and it's quite a sad achievment.
> Not being able to reach high clocks without breaking all standards is a real massive problem for AMD, particulary when you know that their reference cards are already quite pushed high and clock margin is really low.
> 
> Power efficiency is no just a power bill question, it's a strategic factor. When nVidia saw the wall coming with Fermi, they turned back.
> Now it's time for AMD to follow the same path and we will see what Polaris and derivates will bring to the PC gaming scene.
> For the good of all, we can only hope that AMD will show a different face. Without competition things become boring.


Power efficiency tests are a big failure.
Never trust a power efficiency test, never!

So let's say my PC uses around 210-225W while gaming.
PSU efficiency is "gold".

I put an R9 390 in my system too, and gaming power consumption was 310-340W.
The R9 390 was around 80% faster, while the whole PC with the R9 390 used only 50% more watts.
So basically, the PC with the R9 390 was more efficient.

That's the problem with testing GPU efficiency.
I am truly sorry, but everyone stating that a PC with an R9 390 uses 100W more than the same PC with a GTX 970 is wrong!

_It's around 40-50W, and how much does that mean for a PC?
Example:
A PC with an R9 390 uses 310-340W while gaming.
A PC with a GTX 970 uses 260-290W while gaming.

So the PC with the R9 390 uses about 17% more.
If that normally comes to $100 per year for the PC, that's $17 more per year for AMD.

If you want power efficiency on GCN, just clock it down to *850-925MHz* and the 5-year-old architecture will do wonders._

Even an R9 390 will use less than 200W in gaming if you just clock it around 10% lower.
I bet I can undervolt a stock R9 390 by -50mV or more.
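The running-cost arithmetic in this post can be sanity-checked with a short script (electricity price and daily hours are illustrative assumptions):

```python
# Annual electricity cost difference between two system loads.
# Price per kWh and daily gaming hours are ASSUMED for illustration.

def annual_cost(system_watts, hours_per_day, price_per_kwh):
    """Yearly electricity cost for a given sustained load."""
    kwh_per_year = system_watts / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

price = 0.25   # USD per kWh (a high European rate)
hours = 3      # gaming hours per day

cost_390 = annual_cost(325, hours, price)  # midpoint of 310-340W
cost_970 = annual_cost(275, hours, price)  # midpoint of 260-290W
print(round(cost_390 - cost_970, 2))  # -> 13.69 (USD per year)
```

At a ~50W delta, even a high European rate works out to roughly $14 per year for three hours of daily gaming, in the same ballpark as the ~17% figure above.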


----------



## Clocknut

Quote:


> Originally Posted by *Mahigan*
> 
> No, not at all, I'm talking about Fury vs. GTX 980 not FuryX vs. GTX 980 Ti.
> 
> When it comes to the R9 390/x, both have a lower performance per watt figure than the GTX 970.
> 
> Something tells me that when Polaris releases, and if its performance per watt is better than Pascal, many folks will stop caring about performance/watt just as they didn't care when Fermi was around. It's a game as old as 3Dfx, ATi and nVidia. It also resembles politics where Democrats will be all anti-war when Republicans are in power and then when they're in power and bombing some country they'll talk about respecting the Commander and Chief, disagreeing is treason, praise the National Security State. It's a complete reversal when Republicans are out of power. They yell "Benghazi", the "Economy" etc.
> 
> Partisanship likes to masquerade as having higher principles and morals but it never does. Its always about supporting your team no matter what.
> 
> We see that behavior across AMD and NVIDIA partisans.


Power efficiency has its merit when you compare two products at the same speed. The one that consumes less power will always look more attractive. AMD's entire product stack below Fury is losing to NVIDIA in this area, and that's the biggest market. It's no surprise they lost market share.

If you actually read my post, the overall value of a product is about more than just power efficiency. To win over the market, AMD needs a complete package (same for NVIDIA).

So if Polaris trumps Pascal in every area, including drivers and features, with WHQL driver releases as frequent as NVIDIA's, then it's the better product.


----------



## PlugSeven

If Google Translate is correct, Hardware.fr is measuring power consumption at the card, which is just wrong. There was a test done, and I hope someone here remembers it so they can link it, where the 680 showed a ~45W advantage over the 7970 when measured at the card. That dropped to ~10W when power was measured at the system level. Your card ain't nothing but an expensive paperweight outside of a system to feed it, so why measure its power use in isolation?


----------



## Themisseble

Here is something:
http://www.tomshardware.com/reviews/msi-afterburner-undervolt-radeon-r9-fury,4425.html

GCN 1.0-3.0 wasn't designed for perf/watt; it was designed for pure compute power.

GCN 4.0 will be designed for perf/watt.


----------



## Blameless

Quote:


> Originally Posted by *Mahigan*
> 
> One thing worth mentioning, system power usage should always be higher when an NVIDIA card is present during gaming because of Static scheduling and a multi threaded driver.


The modest differences in CPU load aren't going to make much difference in total power on many systems, not anywhere near as much as differences in GPU power.

Even at 4.8GHz with no power saving features, gaming load on a 3770k is going to be under 100w, and the differences from driver threading/scheduling won't change that much. On a more conservatively clocked CPU and/or one with power saving features enabled, the CPU is such a tiny portion of total system load during gaming that it will probably fall within margin of error.


----------



## Smanci

Quote:


> Originally Posted by *Blameless*
> 
> The modest differences in CPU load aren't going to make much difference in total power on many systems, not anywhere near as much as differences in GPU power.
> 
> Even at 4.8GHz with no power saving features, gaming load on a 3770k is going to be under 100w, and the differences from driver threading/scheduling won't change that much. On a more conservatively clocked CPU and/or one with power saving features enabled, the CPU is such a tiny portion of total system load during gaming that it will probably fall within margin of error.


Would be interesting to see how AMD's DX11 CPU overhead affects this in a CPU-heavy game.


----------



## Sleazybigfoot

Quote:


> Originally Posted by *magnek*
> 
> Good call on the bold part. The FX line was absolute trash. What the hell were they thinking sticking a Dustbuster fan on the card?
> 
> *I have a GSync monitor and I definitely wouldn't mind buying Polaris if they offer more performance/a better deal. Granted that's mainly because I'm not all that impressed with GSync, and could most certainly live without it.*


Well, looking at your sig rig I'd say you have more money available for your hobby than most people, and it's those people that will keep using Nvidia BECAUSE of their Gsync monitor. They paid premium for the gsync module, they're going to want to use it.


----------



## sages

Tom has a review out about ashes of the singularity. It´s an interesting read , particularly the power consumption part.
http://www.tomshardware.co.uk/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,review-33476.html


----------



## Forceman

Quote:


> Originally Posted by *sages*
> 
> Tom has a review out about ashes of the singularity. It´s an interesting read , particularly the power consumption part.
> http://www.tomshardware.co.uk/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,review-33476.html


So Nvidia has a bigger drop in CPU draw going to DX12. That's interesting. (edit: On second thought, maybe not so surprising.)

But 100W extra for the 390X in DX12. Oh my.


----------



## caswow

Quote:


> Originally Posted by *Forceman*
> 
> So Nvidia has a bigger drop in CPU draw going to DX12. That's interesting. (edit: On second thought, maybe not so surprising.)
> 
> But 100W extra for the 390X in DX12. Oh my.


They only tested the MSI, which has a high TDP. Other models would have been good to see too, but they didn't test them.


----------



## keikei

Quote:


> Originally Posted by *Forceman*
> 
> So Nvidia has a bigger drop in CPU draw going to DX12. That's interesting. (edit: On second thought, maybe not so surprising.)
> 
> But 100W extra for the 390X in DX12. Oh my.


Yeah, that's the one big drawback of the 390X: the power consumption. In terms of frames, though, it beats the 980, which costs $100 more. It's one reason the second-hand market for the 290/290X is busy right now; gamers are seeing the value of the card in the current market.


----------



## Sleazybigfoot

I never really understood the high power draw is bad thing, I'm a gamer looking for max performance, I don't care if the card uses more power.


----------



## GorillaSceptre

Yeah, the 390X is a damn power hog.. The MSI in particular. Does anyone know why the MSI card is so power hungry?


----------



## Cyro999

Quote:


> Originally Posted by *Sleazybigfoot*
> 
> I never really understood the high power draw is bad thing, I'm a gamer looking for max performance, I don't care if the card uses more power.


Most people don't really care about the wattage number for the power bill, but it affects more than that.

The wattage is also a direct measure of the heat output of the GPU. Even if you perfectly remove that heat from the GPU without causing noise that bothers you (which becomes harder and harder as you get to higher wattages and multi-gpu) then it will heat up the inside of the case at that rate.

In high-power situations, it will have a noticeable effect on room temperature, which can be very bad in hot parts of the world, or when you have to run air conditioning to offset it.

It does also affect PSU choice / viability when there is a significant gap in efficiency.
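The heat argument can be put in numbers: essentially every watt a card draws ends up as heat in the room, and 1 watt equals about 3.412 BTU/h, the unit air conditioners are rated in. A quick conversion (the card wattages are illustrative):

```python
# Every watt of GPU power ultimately becomes heat in the room.
# 1 watt = 3.412 BTU/h, the unit air conditioners are rated in.

def watts_to_btu_per_hour(watts):
    """Convert sustained electrical draw to heat output in BTU/h."""
    return watts * 3.412

# Illustrative comparison: a 150W card vs a 300W card
for watts in (150, 300):
    print(watts, "W ->", round(watts_to_btu_per_hour(watts)), "BTU/h")
# -> 150 W -> 512 BTU/h
# -> 300 W -> 1024 BTU/h
```

A 300W card dumps about 1000 BTU/h into the room, roughly a fifth of a small 5000 BTU/h window AC's capacity, which is why the extra draw matters in hot climates.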


----------



## keikei

Quote:


> Originally Posted by *Sleazybigfoot*
> 
> I never really understood the high power draw is bad thing, I'm a gamer looking for max performance, I don't care if the card uses more power.


U.S. residents enjoy relatively cheap electric bills. I don't think our Euro brothers can say the same. It's a buying factor if one runs said GPU for long hours during the day (folding, perhaps).


----------



## ku4eto

Quote:


> Originally Posted by *keikei*
> 
> U.S. residents enjoy relatively cheap electric bills. I dont think our Euro brothers can say the same. Its a buying factor if one runs said gpu for long hours during the day (folding perhaps).


I live in Bulgaria, where the wage-to-electricity-price ratio is really bad compared to the USA/Western Europe.

I can tell you that no one here in his right mind considers power consumption a deciding characteristic when buying a CPU/GPU. We buy the cheapest possible solution that gets the job done, not the most power efficient one. And we don't run these power hogs 24/7; hell, I only hit 100% GPU load in Crysis 3 or TW3. 75% of the time the entire system isn't even pulling more than 150W.


----------



## Smanci

Quote:


> Originally Posted by *keikei*
> 
> U.S. residents enjoy relatively cheap electric bills. I dont think our Euro brothers can say the same. Its a buying factor if one runs said gpu for long hours during the day (folding perhaps).


$0.25/kWh hardly matters to anyone spending 400€+ on a GPU. But when you start thinking about being able to run two GPUs for the consumption of one, with the two being quieter, cooler, and much faster...


----------



## Robenger

Quote:


> Originally Posted by *keikei*
> 
> U.S. residents enjoy relatively cheap electric bills. I dont think our Euro brothers can say the same. Its a buying factor if one runs said gpu for long hours during the day (folding perhaps).


http://www.neo.ne.gov/statshtml/204.htm

A list of price per kilowatt hour by state if anyone was interested.


----------



## Robenger

In Idaho we pay around $0.08 (8 cents) per kilowatt hour.


----------



## Charcharo

I am from Bulgaria. The average here is MUCH worse than in Western Europe or the USA. And even to us, power consumption is a small deal.

First off, you obviously don't use your PC at full load all the time. So there's that. People also seem to forget *idle power consumption*, as well as consumption under *light loads*; those are actually... very important.
And for those who care so much, there is *Frame Rate Target Control*. There is also *undervolting*.

I'm pretty certain I would not have saved much money, if any at all, by going with a GTX 970. And we are cheap as hell here; these GPUs are super expensive.


----------



## Mahigan

Quote:


> Originally Posted by *specopsFI*
> 
> Look, now, I think you've posted a lot of interesting stuff on the forums and indeed many of the things you've pointed out are beginning to make sense. But lately you've started to go off the deep end with statements such as the bolded part above. You remind me of raghu78 who is able to give very good advice based on factual information, but who also has a tendency to extrapolate based on biased and/or incomplete data. There is simply no way you could know how Polaris vs Pascal will end up going. When you make such wildly speculative statements with such arrogant self-confidence, and in the same post preach about partisanship...


While I understand your concerns, I used the term "if". "If" is not a statement of fact.


----------



## Themisseble

Quote:


> Originally Posted by *sages*
> 
> Tom has a review out about ashes of the singularity. It´s an interesting read , particularly the power consumption part.
> http://www.tomshardware.co.uk/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,review-33476.html


Sure, the R9 390X uses 150W more than the GTX 980... what a lie.
I also had an R9 390X, and the whole PC was using 330W with a bronze PSU. Is Tom's Hardware trying to tell me I'm stupid? Or maybe I need glasses?


----------



## Mahigan

Quote:


> Originally Posted by *sages*
> 
> Tom has a review out about ashes of the singularity. It´s an interesting read , particularly the power consumption part.
> http://www.tomshardware.co.uk/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,review-33476.html


Thank you,

Those charts prove my statements:









CPU power consumption is higher on NVIDIA Maxwell under DX11 (multi-threaded driver and static scheduling).

And Async compute does push the power usage up on AMD GCN.


----------



## Themisseble

Quote:


> Originally Posted by *Mahigan*
> 
> Thank you,
> 
> Those charts prove my statements
> 
> 
> 
> 
> 
> 
> 
> 
> 
> CPU power consumption is higher on NVIDIA Maxwell under DX11 (multi threaded driver and static scheduling).
> 
> And Async compute does push the power usage up on AMD GCN.


Yep, but Tom's Hardware doesn't even back up its own statements. In a year all that may change!
http://www.tomshardware.com/reviews/radeon-r9-290-and-290x,3728-4.html

Do I see it right?
250W gaming power consumption for an overclocked R9 290X?

Tom's Hardware is not a great site... they contradict their own statements.


----------



## iLeakStuff

Quote:


> Originally Posted by *Themisseble*
> 
> sure R9 390X use 150W more than GTX 980.. what a lie.
> Also had R9 390X and whole PC was using 330W with bronze PSU. Tomshardware is trying to tell me that I am stupid? or maybe I need glasses?


Everyone knows that the 390X is an extremely inefficient GPU. Get down to earth, man.
The 290X was already using a lot of power. The more you increase clocks past a certain point, the less efficient the card becomes, and power draw climbs steeply even with small changes.


----------



## Themisseble

Quote:


> Originally Posted by *iLeakStuff*
> 
> Everyone knows that the 390X is an extremely unefficient GPU. Get down to earth man.
> 290X was already using a lot of power. The more you increase clocks after a certain point, the less efficient it becomes


I know; I had one when I was building a PC for my friend. But don't bull.... me when his whole PC was running under 300W while gaming, on a bronze PSU.

*TOMSHARDWARE AND MANY SITES WILL CONTRADICT THEMSELVES!*

*I CANNOT BELIEVE A MAN WHO WANTS TO CONVINCE ME THAT A SINGLE R9 290X WILL USE 3x MORE POWER THAN A SINGLE R9 270X!!*

Large dies are more power efficient!


----------



## iLeakStuff

Quote:


> Originally Posted by *Themisseble*
> 
> I know I had it when I was building PC for my friend. But dont bull.... me when his whole PC was running under 300W while gaming with bronze PSU.
> 
> *TOMSHARDWARE AND MANY SITES WILL CONTRADICT THEMSELVES!*


That's what the cards themselves draw.
I guess AotS isn't as demanding on DX11 due to less shader usage (async compute, perhaps), which causes the power draw to drop like that.
You see, Nvidia's cards stay the same between DX11 and DX12. That's because they don't have async compute.

Which makes me wonder if this is a sign of what future reviews of Pascal and Polaris will be about: AMD cards extremely inefficient. Culprit: async compute. I foresee tons of threads discussing this in the future.


----------



## KyadCK

Quote:


> Originally Posted by *iLeakStuff*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Themisseble*
> 
> I know I had it when I was building PC for my friend. But dont bull.... me when his whole PC was running under 300W while gaming with bronze PSU.
> 
> *TOMSHARDWARE AND MANY SITES WILL CONTRADICT THEMSELVES!*
> 
> 
> 
> Thats what the cards itself draw.
> I guess AOTS isnt as demanding on DX11 due to less usage on shaders (async compute perhaps) which cause the power draw to drop like that.
> You see Nvidia`s cards stay the same during DX11 and DX12. Thats because they dont have async compute
Click to expand...

They don't stay at the same power. The 980 Ti gains 40-50W. There are separate numbers for DX11, DX12, and DX12+async if you actually read the source. There is no guessing to be done.

You are also not listening to what he has to say. He measured it, from the wall. His actual numbers are worth more than your in-my-head-it-works numbers.


----------



## caswow

They tested the most power-hungry 390X you can possibly buy. Relax, guys...


----------



## Themisseble

Quote:


> Originally Posted by *caswow*
> 
> they tested the most powerhungry 390x you can possibly buy relax guys...


You are right, the R9 390X is power hungry, partly because of its 512-bit bus.

So basically the same chip is using 70W more two years later?
http://www.tomshardware.com/reviews/radeon-r9-290-and-290x,3728-4.html
http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663-17.html

Undervolting an AMD CPU/GPU makes a lot of difference; again, Tom's Hardware:
http://www.tomshardware.com/reviews/msi-afterburner-undervolt-radeon-r9-fury,4425.html

Different results, different sites... whom to trust?


----------



## iLeakStuff

Quote:


> Originally Posted by *KyadCK*
> 
> They don't stay the same power. 980TI gains 40-50w. There are separated numbers for DX11, 12, and 12+ASYNC if you actually read the source. There is no guessing to be done,
> 
> You are also not listening to what he has to say. He measured it. From the wall. His actual numbers are worth more than your in-my-head-it-works numbers.


Except the numbers are there, from Tom's Hardware.









From DX11 to DX12 with compute on, the 390X's efficiency drops by 12%, while the 980 Ti's increases by 10%. That's what, a 20% advantage to Nvidia?
The Fury X doesn't increase with compute on, though. It will be very interesting to see other comparisons of this.

Could just be that AMD isn't focusing on older GCN cards with DX12. Yet another exciting topic for the future.


----------



## Themisseble

Quote:


> Originally Posted by *iLeakStuff*
> 
> Except the numbers are there, by Tomshardware.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> From DX11 to DX12 with Compute on, 390X efficiency drops by 12%. 980Ti increase by 10%. Thats what, 20% advantage to Nvidia?
> Fury X doesnt increase with Compute On though. Will be very interesting to see other comparisons with this though.
> 
> Could just be that AMD isnt focusing on older GCN cards with DX12. Yet another exciting topic in the future


Nope, but you might see the R7 370 with better efficiency than the GTX 950 and GTX 960 in DX12 + async compute.


----------



## BlitzWulf

You are right to take these results with a grain of salt; that particular model of 390X is ridiculously inefficient. I mean, those numbers are close to what a max-overclocked 390X pulls in DX11. I don't know why they use that card for efficiency testing.

http://www.tomshardware.com/reviews/amd-radeon-r9-390x-r9-380-r7-370,4178-10.html

Back in June of last year they said themselves that this model draws 53W more than the reference model at the same clocks, even going so far as to say that the difference in draw is not an acceptable design deviation from reference. Yet they use this same card to measure the power efficiency of 3xx-series Hawaii chips?


----------



## Mahigan

Quote:


> Originally Posted by *iLeakStuff*
> 
> Thats what the cards itself draw.
> I guess AOTS isnt as demanding on DX11 due to less usage on shaders (async compute perhaps) which cause the power draw to drop like that.
> You see Nvidia`s cards stay the same during DX11 and DX12. Thats because they dont have async compute
> 
> Which makes me wonder if this is a sign of what future reviews of Pascal and Polairs will be about. AMD cards extremely inefficient. Cultprint: Async compute. I foresee tons of threads discussing this in the future


And NVIDIA GPUs already have a multi-threaded driver, which translates command lists into ISA using more than one CPU core and then submits those commands to the command buffer in a multi-threaded fashion.

So NVIDIA has little to gain going from DX11 to DX12. NVIDIA might gain a bit because their DX12 driver can use more cores than their DX11 driver, but their GPUs are already operating at close to 100% capacity under DX11.
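That's the crux of the async debate in one toy model. The sketch below is a deliberate oversimplification (made-up utilization numbers, ideal overlap, no scheduling overhead), but it shows why async compute only helps when the graphics workload leaves shader capacity idle, and why a GPU already near 100% busy gains nothing from it:

```python
def frame_time_serial(graphics_ms, compute_ms):
    # One queue: the compute work runs only after graphics finishes.
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_ms, graphics_util):
    # Idealized overlap: compute soaks up the shader capacity that the
    # graphics pass leaves idle (graphics_util = busy fraction, 0..1).
    spare = 1.0 - graphics_util
    hidden = min(compute_ms, graphics_ms * spare)  # compute absorbed "for free"
    return graphics_ms + (compute_ms - hidden)

# 10 ms of graphics keeping 70% of the shaders busy, plus 3 ms of compute:
serial = frame_time_serial(10, 3)        # 13 ms
overlap = frame_time_async(10, 3, 0.70)  # 10 ms -- compute fully hidden
full = frame_time_async(10, 3, 1.00)     # 13 ms -- GPU already full, no gain
```

On this toy model, Maxwell under DX11 is the `graphics_util ≈ 1.0` case, which is exactly why enabling async buys it little to nothing, while a GCN card with idle CUs sits closer to the 0.70 case.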

When it's all said and done, the R9 390X is becoming even more competitive against the GTX 980 in terms of bang for your buck. I'd wager that with the API overhead gone, under properly threaded DX12 engines, Grenada is superior to the GM204 found in the GTX 980 (except in performance per watt).

AMD's old Hawaii architecture is mighty impressive indeed.


----------



## mtcn77

Quote:


> Originally Posted by *BlitzWulf*
> 
> You are right to take these results with a grain of salt, that particular model of 390x is ridiculously inefficient I mean those numbers are close to what a max overclocked 390x pulls in DX11. I dont know why they use that card for efficiency testing,
> 
> http://www.tomshardware.com/reviews/amd-radeon-r9-390x-r9-380-r7-370,4178-10.html
> 
> Back in June of last year they said themselves that this model draws 53W more than the reference model at the same clocks ,even going so far as to say that the difference in draw is not an acceptable design deviation from reference, yet they use this same particular card to measure the power efficiency of 3XX series Hawaii chips?


Inefficient? LOL! The results speak for themselves.


----------



## BlitzWulf

Quote:


> Originally Posted by *mtcn77*
> 
> Inefficient? LOL! The results speak for themselves.


Oh, don't get me wrong, I own an overclocked 390X. I didn't care that it uses as much power as it does to compete with a 980 in DX11, and I certainly don't mind it using that power to scrap with the 980 Ti in DX12.
I'm pleased as punch.









However, an MSI Gaming 390X is not an accurate representation of the power draw of your average 390X.


----------



## magnek

Quote:


> Originally Posted by *iLeakStuff*
> 
> Except the numbers are there, by Tomshardware.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> From DX11 to DX12 with Compute on, 390X efficiency drops by 12%. 980Ti increase by 10%. Thats what, 20% advantage to Nvidia?
> Fury X doesnt increase with Compute On though. Will be very interesting to see other comparisons with this though.
> 
> Could just be that AMD isnt focusing on older GCN cards with DX12. Yet another exciting topic in the future


Are you for real?



The 390X going from DX11 to DX12 w/ compute uses 13% more watts/FPS, while the 980 Ti also uses 10.5% more watts/FPS. I'm really not sure where you got the "980 Ti increases [efficiency] by 10%. Thats what, 20% advantage to Nvidia?" bit from.

The correct conclusion should've been that nVidia has a 2.5% advantage, which is likely within the margin of error anyway, so basically a wash.
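For what it's worth, the watts-per-FPS arithmetic is easy to sanity-check yourself. The wattage and FPS figures below are placeholders chosen to mirror the quoted percentages, not Tom's Hardware's actual measurements:

```python
def watts_per_fps(watts, fps):
    # Lower is better: power cost of each frame per second delivered.
    return watts / fps

def pct_change(old, new):
    return (new - old) / old * 100.0

# Placeholder figures: both cards get faster under DX12+async but also
# draw more power, so watts/FPS rises by a similar fraction on each.
r390x = pct_change(watts_per_fps(230, 46), watts_per_fps(282.5, 50))      # ~ +13.0%
gtx980ti = pct_change(watts_per_fps(250, 50), watts_per_fps(303.9, 55))   # ~ +10.5%
```

The gap between the two deltas, about 2.5 points, is the whole "advantage": basically within the noise, as the post says.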


----------



## Mahigan

Quote:


> Originally Posted by *mtcn77*
> 
> Inefficient? LOL! The results speak for themselves.


I foresee goal-post shifting soon. Absolute performance won't matter, but performance per watt will.

And if Polaris is slower than Pascal but offers more performance per watt, the goal posts shift again, and absolute performance becomes the prime metric.

It's always like that. Not for everyone, but for the partisans. I personally buy based on bang for your buck at the high end. So if I were to choose on the AMD side, it would be the Fury, not the Fury X. On the NVIDIA side, the GTX 980 Ti, not the Titan X.


----------



## Mahigan

Quote:


> Originally Posted by *BlitzWulf*
> 
> You are right to take these results with a grain of salt, that particular model of 390x is ridiculously inefficient I mean those numbers are close to what a max overclocked 390x pulls in DX11. I dont know why they use that card for efficiency testing,
> 
> http://www.tomshardware.com/reviews/amd-radeon-r9-390x-r9-380-r7-370,4178-10.html
> 
> Back in June of last year they said themselves that this model draws 53W more than the reference model at the same clocks ,even going so far as to say that the difference in draw is not an acceptable design deviation from reference, yet they use this same particular card to measure the power efficiency of 3XX series Hawaii chips?


Good catch.


----------



## Themisseble

Quote:


> Originally Posted by *BlitzWulf*
> 
> You are right to take these results with a grain of salt, that particular model of 390x is ridiculously inefficient I mean those numbers are close to what a max overclocked 390x pulls in DX11. I dont know why they use that card for efficiency testing,
> 
> http://www.tomshardware.com/reviews/amd-radeon-r9-390x-r9-380-r7-370,4178-10.html
> 
> Back in June of last year they said themselves that this model draws 53W more than the reference model at the same clocks ,even going so far as to say that the difference in draw is not an acceptable design deviation from reference, yet they use this same particular card to measure the power efficiency of 3XX series Hawaii chips?


Don't forget their test from two years ago... at that time an OCed R9 290X was using 255W in gaming mode.


----------



## Smanci

Quote:


> Originally Posted by *BlitzWulf*
> 
> Back in June of last year they said themselves that this model draws 53W more than the reference model at the same clocks ,even going so far as to say that the difference in draw is not an acceptable design deviation from reference, yet they use this same particular card to measure the power efficiency of 3XX series Hawaii chips?


Just like the other cards there. What's the problem?
The 960 Gaming G1 there isn't an efficient 960, and those 980s and 980 Tis aren't the most efficient ones, either.


----------



## Themisseble

Quote:


> Originally Posted by *Smanci*
> 
> Just like the other cards there. What's the problem?
> The 960 gaming G1 there isn't an efficient 960 and those 980s and 980Tis aren't the most efficient ones, either.


Yes, you are right. Larger dies = more efficiency... well, that's why I am surprised that the R9 390X uses 3x more watts than the R9 270X. Why is it not the same for NVIDIA...

The problem is not the architecture, it's the memory. Still, I don't believe anyone who says the R9 390X uses more than 250W in gaming. I had it, I tested it.


----------



## Cyber Locc

Quote:


> Originally Posted by *Mahigan*
> 
> I forsee goal post shifting soon. Absolute performance won't matter but instead performance per watt.
> 
> And if Polaris is slower than Pascal but offers more performance per watt, goal post shift, and now absolute performance will be the prime metric.
> 
> It's always like that. Not for everyone but for the partisans. I personally buy based on bang for your buck on the highend. So say I was to choose on the AMD side, it would be Fury not FuryX. On the NVIDIA side, GTX 980 Ti not TitanX.


No, P/W is always the top metric, and Nvidia is always the top card, because you know it costs more, and it's green like money, so it's better, duh.

/Thread


----------



## Themisseble

Quote:


> Originally Posted by *Cyber Locc*
> 
> No P/W is always top metric, and Nvidia is always top card, because you know it cost more, and its green like money so its better, duh.
> 
> /Thread


Sure.
It depends on the game.
Far Cry Primal.
Let's say GTX 960 vs R9 270X, or R7 370 vs GTX 960/950.
http://www.computerbase.de/2016-02/far-cry-primal-benchmarks/2/#diagramm-far-cry-primal-1920-1080

Well, we should wait for the Hitman DX12 benchmark.


----------



## Cyber Locc

Quote:


> Originally Posted by *Themisseble*
> 
> Sure.
> It depends on the game.
> Far Cry primal
> Lets say that GTX 960 vs R9 270X or R7 370 vs GTX 960/50.
> http://www.computerbase.de/2016-02/far-cry-primal-benchmarks/2/#diagramm-far-cry-primal-1920-1080


LOL,


----------



## mtcn77

Quote:


> Originally Posted by *Themisseble*
> 
> Yes, you are right. Larger dies = more efficiency.... well thats why I am surprised why is R9 390X using 3x more W than R9 270X. Why is not the same for NVIDIA...
> 
> The problem is not architecture, its memory. Still I dont believe anyone who will say that R9 390X is using more than 250W in gaming. I had it, I tested it.


That is because Nvidia GPUs are waiting on the CPU *HALF* of the time.

It could be summed up as needing a better CPU when paired with an Nvidia GPU in DirectX 12, or even falling behind in the GPU tiering system entirely.


----------



## Themisseble

Quote:


> Originally Posted by *mtcn77*
> 
> That is because Nvidia gpus are waiting on the cpu *HALF* of the time.
> 
> It could be surmised as a necessity for a better cpu when paired with Nvidia gpus in Directx 12, or even falling behind in the gpu tiering system entirely.


So you are telling me that a GTX 960 is as fast as a GTX 980 Ti and faster than a GTX 970?


----------



## Mahigan

This section of the Tom's review is an eye-opener:

http://www.tomshardware.co.uk/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,review-33476-3.html


----------



## Cyber Locc

Quote:


> Originally Posted by *Themisseble*
> 
> So you are saying me that GTX 960 is as fast as GTX 980TI and faster than GTX 970?


No, he is telling you that all Nvidia GPUs are poorly designed for DX12. They are giving too much work to the CPU, which is slowing them down, from what I am seeing. Now, his statement that "We Need Faster CPUs" is comical; it isn't going to happen. What we need is for NV to get their cards fixed (in Pascal).


----------



## Mahigan

Quote:


> Originally Posted by *Themisseble*
> 
> Sure.
> It depends on the game.
> Far Cry primal
> Lets say that GTX 960 vs R9 270X or R7 370 vs GTX 960/50.
> http://www.computerbase.de/2016-02/far-cry-primal-benchmarks/2/#diagramm-far-cry-primal-1920-1080
> 
> Well we should wait for Hitman DX12 benchmark.


That's the trend, because architecturally GCN is more powerful than Maxwell. Newer DX11 games are even showing that.









Why? They're coded for consoles. So developers have been using compute resources in new ways to get playable framerates out of the consoles, and this code is making its way into PC games.


----------



## Cyber Locc

Quote:


> Originally Posted by *Mahigan*
> 
> That's the trend. Because architecturally, GCN is more powerful than Maxwell. Newer DX11 games are even showing that


What was funny about this post was what he was replying to. I was being sarcastic, which is pretty easy to see from the way I said it, and of course from the AMD trifire in my sig.


----------



## Themisseble

Quote:


> Originally Posted by *Mahigan*
> 
> That's the trend. Because architecturally, GCN is more powerful than Maxwell. Newer DX11 games are even showing that


But will the next GCN be as powerful as this one if we compare them on the same node (TFLOPS per die size, performance per die size)? Or will it go for P/W?

GCN is not aging; it is getting younger every day.


----------



## BlitzWulf

Quote:


> Originally Posted by *Smanci*
> 
> Just like the other cards there. What's the problem?
> The 960 gaming G1 there isn't an efficient 960 and those 980s and 980Tis aren't the most efficient ones, either.


You're right, it does cast doubt on the validity of all the P/W results, and it leaves me with two points:

1.) Each AIB partner has enough control over the components and ROM design of the final shipping GPU that using any particular company's model is an inaccurate way to gauge the power usage of a particular uArch. It *IS* an accurate way to gauge how well a particular card manufacturer has optimized the power usage of their cards to match the clocks of the reference design.

2.) #1 is no secret to these companies or the people who review their products. It's likely that most of the time the difference in power draw between reference designs and AIB partners' designs is not so pronounced, which would explain why Tom's Hardware felt it prudent to mention that this particular card used 53W more than the reference design *AT THE SAME CLOCKS*, saying that this was beyond reasonable expectations, implying that a non-reference model using that much power at the same clocks was not only abnormal but unreasonable.

"MSI R9 390X Gaming 8G

Registering 368W, we left the reasonable range far behind. A whopping 53W more than our comparable reference graphics card with a hybrid cooler running at the same clock rate is just too much. It's surprising that MSI's cooler is still effective, even though it does get pretty loud in this scenario. The trend we saw for a gaming load continues here."


----------



## mtcn77

Quote:


> Originally Posted by *Themisseble*
> 
> So you are saying me that GTX 960 is as fast as GTX 980TI and faster than GTX 970?


Expect lower performance in a mirrored system match up with the gpu as the only variable.
Quote:


> Originally Posted by *Themisseble*
> 
> But will be next GCN as powerfull as this one, if we would compare it on same node (TFLOPS per die size, performance per die size)? Or will it go for P/W.
> *
> GCN is not aging, it is getting younger every day.*


Like Mickey Mouse.


----------



## Mahigan

Quote:


> Originally Posted by *Cyber Locc*
> 
> No he is telling you that all nvidia gpus are poorly designed for DX12. They are giving too much work to the CPU which is slowing them down, from what I am seeing. Now his statement of "We Need Faster CPUs". Is comical, it isn't going to happen, what we need is Nv to get there cards fixed (in pascal).


They're not poorly coded, they're poorly suited (architecturally). That static scheduling is hammering the CPU in more ways than one. Of course you spend more time on the CPU because of the scheduling.


----------



## Cyber Locc

Quote:


> Originally Posted by *Mahigan*
> 
> They're not poorly coded, they're poorly suited (architecturally). That static scheduling is hammering the CPU in more ways than one. Of course you spend more time in the CPU because of the scheduling.


That is why I edited my post to say "designed"; you must not have seen that.

However, like the other guys said, that's not NV's fault. It's Intel's; we need faster CPUs, HAHAHA.


----------



## specopsFI

Quote:


> Originally Posted by *Mahigan*
> 
> While I understand your concerns,
> I used the term "if". If is not a statement of fact.


You said "if" once and then built a whole scenario on it. You described it as a fact just waiting to happen, and it is a perfect example of the kind of argumentation that is typical of people who have a side in the argument. I'm tired of the line of thinking where someone says things like "just wait till the shoe is on the other foot, you'll be saying the exact opposite". That is taking a personal stance, and what's more, it's usually not what happens. It's actually *not* that those same guys are moving goal posts; it's that the product will actually have different strengths and weaknesses than its predecessor. It means that the new product will rightfully be criticized on those metrics where it falls short of the new competition. That is just how things go, and being annoyed by it, especially in advance, means that you have picked your team.

Caring about what people *think* of a product such as a GPU is taking a side. If one's interests are purely technical, one should concentrate on the technical aspects. You are exceptionally good at shedding light on those aspects, so please don't take this as a personal attack. I just don't like the way you are increasingly criticizing the partisanship aspects when you yourself are subject to the very thing you criticize.


----------



## Mahigan

Quote:


> Originally Posted by *Cyber Locc*
> 
> That is why I edited my post to designed
> 
> 
> 
> 
> 
> 
> 
> , you must not have seen that
> 
> 
> 
> 
> 
> 
> 
> 
> 
> However like the other guys said, thats not NVs fault. Its intels we need faster CPUs, HAHAHA.


It's a role reversal from DX11 where AMD suffered the higher API overhead. Of course moving tasks to compute reduces the amount of rendering calls (draw calls). And we've seen that in the Doom Alpha, Far Cry Primal, Hitman, Battlefront etc recently.


----------



## Mahigan

Quote:


> Originally Posted by *specopsFI*
> 
> You said "if" once and then build a whole scenario on it. You described it as a fact just waiting to happen, and it is a perfect example of the kind of argumentation that is typical to people who have a side in the argument. I'm tired of the line of thinking, where someone says things like "just wait till the shoe is on the other food, you'll be saying the exact opposite". That is taking a personal stance, and what's more, it's usually not what happens. It's actually *not* that those same guys are moving goal posts, it's that the product will actually have different strengths and weaknesses than its predecessor. It means that the new product will rightfully be criticized on those metrics where it falls short of the new competition. That is just how things go, and being annoyed by it especially in advance means that you have picked your team.
> 
> Caring about what people *think* of a product such as GPU's is taking a side. If one's interests are purely technical, one should concentrate on the technical aspects. You are exceptionally good in bringing light in those aspects, so please don't take this as a personal attack. I just don't like the way you are increasingly criticizing the partisanship aspects when you yourself are suspect to the very thing you criticize.


That's called logic...

If blah blah blah
Then blah blah blah
Else blah blah blah

It's the same structure used in programming. The entire scenario painted can be ignored if the primary claim turns out not to be true.

And yes, I am suspect to those who rejected the claims I was making out of hand. With each passing day, however, I think the claims I made are being realised. So it wasn't partisanship which drove my claims, but rather facts.

Once Hitman, Deus Ex, Fable Legends, BF5, etc. release, my statements will likely be perceived in a different light. Rather than seeing me as some AMD fanboi, I think people will see me as a person who spoke truths when it was unpopular to do so.

I was raised by an activist father, so telling the truth, even when it can get you in trouble, is what I was taught.


----------



## PontiacGTX

Quote:


> Originally Posted by *Mahigan*
> 
> This section of the toms review is an eye opener:
> 
> http://www.tomshardware.co.uk/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,review-33476-3.html


So DX12 overhead can be lowered even more?


----------



## cowie

Quote:


> Originally Posted by *Mahigan*
> 
> It's a role reversal from DX11 where AMD *sufferers* the higher API overhead. Of course moving tasks to compute reduces the amount of rendering calls (draw calls). And we've seen that in the Doom Alpha, Far Cry Primal, Hitman, Battlefront etc recently.


Fixed that for you.
The proof is in this very game.


----------



## Cyber Locc

Quote:


> Originally Posted by *cowie*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Mahigan*
> 
> It's a role reversal from DX11 where AMD *suffered* the higher API overhead. Of course moving tasks to compute reduces the amount of rendering calls (draw calls). And *we've seen that in the Doom Alpha, Far Cry Primal, Hitman, Battlefront etc recently.*
> 
> 
> 
> fixed that for you .
> proof is in this very game
Click to expand...

"Sufferers" is not the correct word.

He also pretty clearly meant "suffered", as he was speaking of DX11, which will begin to be phased out, so he spoke in the past tense. Also, a lot of that overhead has begun to be fixed by AMD's drivers, which is why their performance has increased, AFAIK.

Just FTR, however:

Suffers: experiences or is subjected to (something bad or unpleasant).

Suffered: past tense; some or all of the suffering has been alleviated.

Sufferers: people suffering from an illness, or those who suffer for the sake of principle.
E.g., people who see this info and still think Nvidia is the winner may be sufferers of partisanship.

Oh, and fixed that for you.


----------



## cowie

Quote:


> Originally Posted by *Cyber Locc*
> 
> Sufferers is not the correct word.
> 
> He also pretty clearly meant Suffered, as he was speaking of DX11, which will begin to be phased out, so he spoke in past tense. Also alot of that overhead has began to be fixed by AMDs drivers which is why there performance has increased AFAIK.
> 
> Just FTR however.
> 
> Suffers, experience or be subjected to (something bad or unpleasant).
> 
> Suffered, Past Tense, Some or all of the Suffering has been alleviated.
> 
> Sufferers, A person suffering from an illness, or, one who suffers for the sake of principle
> IE, People that see this info and still think Nvidia are the winners may be sufferers of partisanship.
> 
> Oh and fixed that for you.


Fair enough, spellcheck was even correcting me.

DX11? Phased out? Of course, just like DX12 will be too, right?
But it's the main DX we have right now, is it not?

Do you think they will sell more of this game to W10 owners or to W7/8.1 owners? Or do you think W10 owners will buy it because there is nothing else to buy that's truly DX12?

The fact is that the game runs slower on AMD hardware in DX11; that alone shows you how poor DX11 still is on that hardware... come on, DX12 already.

No one even really tests this game in DX11 anymore. I wonder why? I bet it's AMD telling them not to,
since it shows them in a very, very bad light. You know, with the 8GB 390X besting the flagship and all...


----------



## Cyber Locc

Quote:


> Originally Posted by *cowie*
> 
> fair enough spellcheck was even correcting me
> 
> dx11? phased out? of coarse just like dx12 will be too? right?
> but its the main dx we have right now is it not?
> 
> you think they will sell more of this game to w10 owners or w7-8.1?or you think w10 owners will buy it because there is nothing else to buy that's truly dx12?
> 
> the fact is that the game runs slower on amd hardware in dx11 that alone shows you how poor dx11 still is on that hardware...come on dx12 already
> 
> no one even really tests this game In dx11 anymore I wonder why? I bet its amd telling them not too.
> since it shows them in very very bad light.you know with the 8g 390x besting the flagship and all...


Well, the thing is that DX12 is the future. Yes, DX11 is what we have now, but not for long. As for people using DX12 and Windows 10: seeing how Microsoft is telling everyone you do not have a choice, well, yeah, you don't have a choice. It's not a question of if people move to Windows 10, it's when. Like it or not, Microsoft is forcing Windows 10 upgrades on everyone; they are very, very serious about it.

I said that AMD's performance has been increasing in DX11 and will continue to do so. Also, as has been illustrated a million times already, it's not just this game that will show the gains; it's 80% of the current DX12 lineup. The reason for the gains is async compute, which AMD has and Nvidia does not, and that is a major aspect of DX12. I agree that with current cards it really doesn't matter much, but Pascal had better seriously have async.

Microsoft is being seriously pushy with Windows 10, even stating a few times that they will forcefully upgrade users without their consent via Windows Update. Do you think they are not going to push devs toward DX12 the same way, if for no other reason than to get more people to eat up that Win 10 pie? Never mind the fact that the Xbox One uses DX12 and AMD hardware, and games go there first, especially Xbox games like Fable, and are then ported to PC.


----------



## mtcn77

Quote:


> Originally Posted by *cowie*
> 
> fair enough spellcheck was even correcting me
> 
> dx11? phased out? of coarse just like dx12 will be too? right?
> but its the main dx we have right now is it not?
> 
> you think they will sell more of this game to w10 owners or w7-8.1?or you think w10 owners will buy it because there is nothing else to buy that's truly dx12?
> 
> the fact is that the game runs slower on amd hardware in dx11 that alone shows you how poor dx11 still is on that hardware...come on dx12 already
> 
> no one even really tests this game In dx11 anymore I wonder why? I bet its amd telling them not too.
> since it shows them in very very bad light.you know with the 8g 390x besting the flagship and all...


Either way, you know that Nvidia offers way too many ROPs only to have them wait on the CPU. While I agree that AMD is cutting corners (80% of the necessary ROPs in Pitcairn and Hawaii, 60% in Tahiti, 57% in Fiji), there appear to be more vital units to spare GPU space for.


----------



## cowie

I don't think they will force me

you give in too easy

Anyways, yeah, you are right: AMD, like always, is very, very late to the party.

A quote from Tom's, since the link is a few posts above ours (you're smart, you'll get it):
Quote:


> With asynchronous shading/compute, AMD has a clear winner on its hands. At least for now. But where are the games that take advantage of the technology? It will be a long time until developers use DirectX 12 to its full advantage. By then, GPU architectures like Fiji or Maxwell might already be history, instead of the flagship designs they are now.


So this game still has so many things to fix, and this is supposedly one of the leading devs working on DX12, so really DX11 is here to stay a bit longer than you think. It will be when DX12 really brings some new features that most will adopt it, just as with DX11; and you remember DX11, it was the same smoke screen, "AMD is the king", blah blah blah.

So far DX12 does not bring a whole lot to the end user at all. It does to some big companies, but that's about it. The big corporate bank is counting on it and they want it now... lol


----------



## cowie

damn it xxxxx

was in response to

Cyber Locc


----------



## Cyber Locc

Quote:


> Originally Posted by *cowie*
> 
> I don't think they will force me
> 
> you give in too easy
> 
> anyways yeah you are right amd like always very very late to the party
> 
> quote from toms since the link is a few posts from ours?
> are's?
> so your smart you get it
> so this game still has so many things to fix and I guess 1 of the leading devs to work on dx12 sooo really dx11 is here to stay just a bit longer then you think
> it will be when dx12 really brings some new features that most will adopt it just as dx11 and you can remember dx11 it was the same smoke screen amd is the king blah blah blah.
> 
> so far dx12 does not bring a whole lot to the end user at all. it does to some big company's but that's about it...the big corporate bank is counting on it and they want it now....lol


Here we go again lol, " it will be when dx12 really brings some new features that most will adopt it just as dx11 and you can remember dx11 it was the same smoke screen amd is the king blah blah blah."

You don't get to make the call on when DX12 is adopted; the people that make the games do, and when Microsoft says no more DX11, they do not have much of a choice. You keep thinking that we have the control, and that is hilarious.

I do not give in too easily. I like Windows 10; I shut off the spying and I am happy. However, one thing you will notice when moving to Windows 10 is that a lot of apps from the Windows 7 days won't work; once that goes into effect in reverse, you will change over. The changeover rate already is insane; they are pushing hard and it is working. The techies like us that would care, do care, or know how to stop it are few and far between; the majority of gamers and other PC users have already switched.

People are sheep. We may not be, 99% of this forum isn't, but 90% of the world is, and they will take whatever MS forces down their throats.

I don't remember seeing any "AMD is king of DX11" talk. Many people have already fully documented exactly why NV will lose in DX12 unless they include async support, which they have yet to do. When they do, things will change. However, you are free to continue to stick your head in the sand if that's what you want to do.

Also, it's no surprise that AMD will have the upper hand at first, seeing how DX12 is heavily influenced by Mantle. AMD wrote Mantle to work very well with their GPUs; this is Gimpworks baked into an API. No one is saying that NV can't make changes to future cards to increase performance; we are simply stating they haven't yet, and that it will require hardware changes, so it may take a while.

Also, just to clarify, I honestly don't care; I will buy the best cards for my usage at the time and be happy. I have no partisanship. I bought 290s because they were the best card at the time for the 4K I play at; if Pascal changes that, then I will buy NV. I actually prefer NV, and these 290s are basically the first Radeons I have ever owned. However, at the time they were smoking 780s at 4K, so I went with them.


----------



## mtcn77

Quote:


> Originally Posted by *cowie*
> 
> I don't think they will force me
> 
> you give in too easy
> 
> anyways yeah you are right amd like always very very late to the party
> 
> quote from toms since the link is a few posts from ours?
> are's?
> so your smart you get it
> so this game still has so many things to fix and I guess 1 of the leading devs to work on dx12 sooo really dx11 is here to stay just a bit longer then you think
> it will be when dx12 really brings some new features that most will adopt it just as dx11 and you can remember dx11 it was the same smoke screen amd is the king blah blah blah.
> 
> so far dx12 does not bring a whole lot to the end user at all. it does to some big company's but that's about it...the big corporate bank is counting on it and they want it now....lol


I'm all for freeing up the gaming perspective. The last time a title could zoom this freely on the horizon was beyond reckoning. Time for close ups and full landscapes.


----------



## Blameless

Quote:


> Originally Posted by *GorillaSceptre*
> 
> The MSI in particular. Does anyone know why the MSI card is so power hungry?


It's clocked higher than most of the others and probably has a larger voltage offset as well.


----------



## escksu

http://www.tweaktown.com/articles/7578/dx12-benched-fury-cf-vs-titan-sli-ashes-singularity/index.html

Tweaktown just released a Fury X CF vs Titan X SLI review.

I have to say it's really mind-blowing. Can't believe that Fury X is ahead of Titan X by such a wide margin.


----------



## gamervivek

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Yeah, the 390X is a damn power hog.. The MSI in particular. Does anyone know why the MSI card is so power hungry?


MSI had their version at 1100MHz and most likely volted it higher. HIS stayed close to the 290 cards.

http://www.techspot.com/review/1019-radeon-r9-390x-390-380/page7.html


----------



## BradleyW

Quote:


> Originally Posted by *mejobloggs*
> 
> Yeah that confused me. I still don't know what RTG means


Radeon Technologies Group.
Edit: Does AotS work with 144Hz and fps limiters?


----------



## czin125

http://www.tweaktown.com/articles/7578/dx12-benched-fury-cf-vs-titan-sli-ashes-singularity/index3.html

http://www.guru3d.com/articles-pages/ashes-of-singularity-directx-12-benchmark-ii-review,7.html

5960x at 4.4 / Fury average is 51 at 2560x1440 on crazy
stock 5960x / Fury X average is 44.7 at 2560x1440 on crazy
should be able to reach 60-62+ at 4.7 on the Fury X?


----------



## Themisseble

Quote:


> Originally Posted by *escksu*
> 
> http://www.tweaktown.com/articles/7578/dx12-benched-fury-cf-vs-titan-sli-ashes-singularity/index.html
> 
> Tweaktown just released a Fury X CF vs Titan X SLI review.
> 
> I have to say its really mind blowing. CAn't believe that Fury X is ahead of Titan X by such a wide margin.


As I was saying, Techspot just confirmed my words:

PC with R9 390 = PC with GTX 970 + 40-50W, not 150W
PC with R9 390X = PC with GTX 980 + 40-50W, not 150W


----------



## magnek

Quote:


> Originally Posted by *Sleazybigfoot*
> 
> Well, looking at your sig rig I'd say you have more money available for your hobby than most people, and it's those people that will keep using Nvidia BECAUSE of their Gsync monitor. They paid premium for the gsync module, they're going to want to use it.


Dude, have you seen the guys rocking a 5960X and multiple Titan Xs? My rig is nothing compared to theirs. OK, so they're in the minority even on OCN, but it's really not that hard to find people with 980 Tis and hex-core CPUs in their rigs here.

OT aside, yeah I see your point, but like I said it all depends on how impressed they are with G-Sync. Since I'm not too impressed by it, I don't mind not using the feature. But to be fair, I didn't buy my monitor for G-Sync, I bought it because it was the first 144Hz 1440p IPS panel on the market. I actually was going to wait for the MG279Q since it's cheaper, but not only was it 2 months late to market, the initial batch had frameskipping issues at 144Hz and were all recalled for a firmware upgrade, resulting in further delays in rolling out to market.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Sleazybigfoot*
> 
> Well, looking at your sig rig I'd say you have more money available for your hobby than most people, and it's those people that will keep using Nvidia BECAUSE of their Gsync monitor. They paid premium for the gsync module, they're going to want to use it.


I love the "G-Sync premium" statement people make, yet they love to ignore that it is a $150 (or less now) part in the display. We pay $700-$800 for our displays not just because of G-Sync, but because of G-Sync plus 144 Hz plus an IPS-like 1440p panel, first to market, which also provides a VRR experience that FreeSync has yet to exactly match.

But we will ignore all the other overwhelming reasons we bought it, instead focusing on one aspect by itself in order to lend legitimacy to an unfounded argument.

......logic.


----------



## airfathaaaaa

Quote:


> Originally Posted by *escksu*
> 
> http://www.tweaktown.com/articles/7578/dx12-benched-fury-cf-vs-titan-sli-ashes-singularity/index.html
> 
> Tweaktown just released a Fury X CF vs Titan X SLI review.
> 
> I have to say its really mind blowing. CAn't believe that Fury X is ahead of Titan X by such a wide margin.


Fury X is a compute beast; why do you wonder?


----------



## DeathMade

A question for those who understand how DX12 works: how significant will drivers be in DX12? Is it going to be like now, where you almost can't even run a game without a driver optimized for it?

Thanks


----------



## Glottis

Quote:


> Originally Posted by *DeathMade*
> 
> Question to those who understand how DX12 works. How significant will drivers be in DX12? Is it going to be like now that you almost cant even run the game without an optimised driver for it?
> 
> Thanks


Not sure what you mean about "now". Over the years I've had to stick with older Nvidia drivers because of some bug or other reason, as I'm sure many people have had to do at one time or another, and I didn't have any problems playing the latest games. And when I finally was able to use the latest driver, I didn't notice any performance improvements, so this "optimized for game X and Y" is just marketing bluff as far as I'm concerned. Maybe it's a bit different with AMD, but I haven't used AMD GPUs for gaming in many years.


----------



## BradleyW

Quote:


> Originally Posted by *DeathMade*
> 
> Question to those who understand how DX12 works. How significant will drivers be in DX12? Is it going to be like now that you almost cant even run the game without an optimised driver for it?
> Thanks


We are less dependent on drivers and more on the developer, and on Microsoft's chosen behaviour for their WDDM 2.0 / DX12 pipeline.


----------



## DeathMade

Guys! Gears of War silently launched today. It is DX12 and supposedly has async too.

And it is a mess!

http://www.forbes.com/sites/jasonevangelho/2016/03/01/gears-of-war-ultimate-edition-on-pc-is-a-disaster-for-amd-radeon-gamers/#5fd6ac17e7eb


----------



## NightAntilli

Apparently a 390X runs fine. Memory issue maybe? 8GB vs 4GB?


----------



## DeathMade

Quote:


> Originally Posted by *NightAntilli*
> 
> Apparently a 390X runs fine. Memory issue maybe? 8GB vs 4GB?


Dude, they state that a 970 runs better than a 980 Ti, and that a 370 doesn't have any issues.


----------



## NightAntilli

Spoke too soon then.


----------



## Charcharo

Broken game really. Nothing more or less


----------



## dragneel

Quote:


> Originally Posted by *DeathMade*
> 
> Guys! Gears Of War silently launched today. It is DX12 and supposedly has Async too
> 
> And it is a mess!
> 
> http://www.forbes.com/sites/jasonevangelho/2016/03/01/gears-of-war-ultimate-edition-on-pc-is-a-disaster-for-amd-radeon-gamers/#5fd6ac17e7eb










Seems like they may have lazily ported the DX9 version to DX12? Is that a thing? I know software development about as well as I understand nuclear physics: not at all.


----------



## Cyber Locc

"Ahah, so it turns out that Stardock is gimping NV cards to make AMD look good; GoW proves exactly that. We knew it all along........"

In all seriousness though, what is MS doing? This is hilarious.


----------



## BlitzWulf

Quote:


> Originally Posted by *Cyber Locc*
> 
> "Ahah so in turns out that Stardock is gimping NV cards to make AMD look good, GoW proves exactly that. We knew it all along........"
> 
> INB4
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In all seriousness though, what is MS doing this is hilarious.


I realize you're joking, but here is a video of a 980 Ti getting 30 fps at 1080p, and G-Sync is also broken, so the grass is hardly greener on the other side.

https://www.youtube.com/watch?v=8yK_GBeC55c


----------



## Cyber Locc

Quote:


> Originally Posted by *BlitzWulf*
> 
> I realize you're joking but here is a video of a 980ti getting 30fps at 1080p, and g sync is also broken , so the grass is hardly greener on the other side
> 
> https://www.youtube.com/watch?v=8yK_GBeC55c


Oh I know, but it's just a matter of time before someone comes in saying that; sadly, that's honestly the truth.

Edit: just saw this: http://wccftech.com/amd-nvidia-gears-of-war-ultimate-broken/

Looks like Fury cards have more issues than 980 Tis and 290/390(X)s do. GCN 1.1 and NV cards have the stutter, but GCN 1.2 also has AI issues.


----------



## Mahigan

What a piece of you know what release...


----------



## Remij

DX12 has been the biggest flop ever so far. The API shouldn't have even been announced yet since anything associated with it has been complete garbage up to this point.

The best thing about DX12 at this very moment is its implementation in the Dolphin Emulator... but you know... MS had to hype Win10...


----------



## dagget3450

Anyone know what's in the Forbes link? It won't let me view it without allowing ads on my system. I'd rather hear it second-hand or not at all.


----------



## BlitzWulf

Quote:


> Originally Posted by *dagget3450*
> 
> Anyone know whats in the forbes link? IT wont let me view without allowing ads on my system. I'd rather hear it second hand or not at all.


TL;DR: GoW Ultimate runs like crap on everything, but especially so on GCN 1.2. The guy glosses over Nvidia's performance problems and lays the entirety of the blame at MS's feet.
All companies involved are promising to rectify these issues with updates very soon.


----------



## Mahigan

Oxide deserves a round of applause. Their alpha worked better on a Microsoft API. Microsoft couldn't even get a Microsoft title purchased from a Microsoft store running properly on a Microsoft API running on a Microsoft OS. Wow!


----------



## mav451

To be fair, MS is typically not a high bar to surpass hahah


----------



## dagget3450

Quote:


> Originally Posted by *Mahigan*
> 
> Oxide deserves a round of applause. Their alpha worked better on a Microsoft API. Microsoft couldn't even get a Microsoft title purchased from a Microsoft store running properly on a Microsoft API running on a Microsoft OS. Wow!


This made me


----------



## Cyber Locc

Quote:


> Originally Posted by *Remij*
> 
> DX12 has been the biggest flop ever so far. The API shouldn't have even been announced yet since anything associated with it has been complete garbage up to this point.
> 
> The best thing about DX12 at this very moment is its implementation in the Dolphin Emulator... but you know... MS had to hype Win10...


Up to this point all we have had is this garbage release and Ashes, and Ashes seems fine, so I don't understand the logic here.

Edit: Oh, never mind, I see your avatar and understand your logic, hahaha. No worries, more are coming; the tables are turning.

It's okay though, NV will still have the best performance per watt, so there is that.
Quote:


> Originally Posted by *dagget3450*
> 
> Anyone know whats in the forbes link? IT wont let me view without allowing ads on my system. I'd rather hear it second hand or not at all.


Yeah, pretty much:

Blah blah blah, AMD sucks, blah blah blah, Nvidia is working okay-ish, proving AMD sucks, blah blah blah.

Seriously, to me that's the TL;DR version. As I said before, they are ignoring that 980s are having the exact same issue (with AI on) and just bashing the AMD cards.

To give you an idea, the only such mention in the entire (long) article is here:

"It's not just AMD's Radeon Fury that's affected, either. The R9 Nano, which typically outperforms a Radeon 390x and Nvidia GTX 980, is similarly brought to its knees. Microsoft touted 4K textures with this PC remaster, so let's witness the Nano running the Gears of War: Ultimate Edition benchmark at 4K resolution with High quality settings:" Source

Every other part references AMD.........

Also, in other news, check this one, guys. You thought Forbes was bad; this dude blames the entire thing solely on AMD: http://www.extremetech.com/gaming/223858-the-new-gears-of-war-ultimate-edition-is-a-dx12-disaster

I have to quote from there as well: "The developers recommend disabling ambient occlusion altogether on AMD cards, and state that G-Sync causes "significant performance issues." According to Nvidia, G-Sync works perfectly well in-game, but cutscenes may not render properly with G-Sync enabled. An upcoming driver from Team Green will solve this issue."

Haha, he also says in the comments, when he is told that 980 Tis are having the same issue, that he doesn't think so. Are you kidding me right now? This is a joke.


----------



## dagget3450

I can see it in our gaming future, not far away: graphical-settings DLC, wireframe DLC, and 2010-texture DLC, coming to a Windows 10 Store near you!!! To get optimal usage of your DLC, please disable it. Disable all settings, as they may cause the game to perform badly. For the best performance and gaming experience, please make sure to power off the PC.


----------



## SpeedyVT

Quote:


> Originally Posted by *Cyber Locc*
> 
> Up to this point all we have had was this garbage release and Ashes , and Ashes seems fine so I understand the logic here.
> 
> Edit: Oh never mind I see your avatar and understand your logic hahaha. No worries more are coming, tables are turning.
> 
> Its okay though, NV will still have the best performance per watt so there is that.
> Ya pretty much,
> 
> Blah Blah Blah, AMD Sucks, Blah Blah Blah, Nvidia is working okayish, proving AMD sucks, Blah Blah blah.
> 
> Seriously to me thats the TLDR version, like said before they are ignoring that 980s are having the same exact issue (with AI on) and just bashing the AMD cards.
> 
> To give you an idea, the only time in the entire (long) article is here
> 
> "It's not just AMD's Radeon Fury that's affected,either. The R9 Nano, which typically outperforms a Radeon 390x and Nvidia GTX 980, is similarly brought to its knees. Microsoft touted 4K textures with this PC remaster, so let's witness the Nano running the Gears of War: Ultimate Edition benchmark at 4K resolution with High quality settings:" Source
> 
> Every other part references AMD.........
> 
> Also in other news check this one guys, you thought forbes was bad, this dude blames this entire thing solely on AMD..... http://www.extremetech.com/gaming/223858-the-new-gears-of-war-ultimate-edition-is-a-dx12-disaster
> 
> Have to make a quote from there as well, "The developers recommend disabling ambient occlusion altogether on AMD cards, and state that G-Sync causes "significant performance issues." According to Nvidia, G-Sync works perfectly well in-game, but cutscenes may not render properly with G-Sync enabled. An upcoming driver from Team Green will solve this issue."
> 
> HAHA, he also says in the comments when he is told that 980tis are having the same issue, that he doesn't think so are you kidding me right now this is a joke.


DX12 done bad isn't AMD's fault if it doesn't work on any hardware.


----------



## Mahigan

It runs on my friend's Titan X; it stutters often enough, but it runs. Looks like everyone needs to buy a Titan X to play a game that looks like it's straight off the Xbox 360.


----------



## Cyber Locc

Quote:


> Originally Posted by *SpeedyVT*
> 
> DX12 done bad isn't AMD's fault if it doesn't work on any hardware.


I know that; apparently ExtremeTech does not, lol.
Quote:


> Originally Posted by *Mahigan*
> 
> It runs on my friends TitanX, it stutters often enough but it runs. Looks like everyone needs to buy a TitanX to play a game that looks like its straight off the XBox 360


From what I have read it runs on all cards, it's just a stuttery mess is all. Also, 290Xs play it just fine (stuttery, though); it's just GCN 1.2 that seems to be having severe issues.


----------



## BlitzWulf

Apparently Rise of the Tomb Raider is getting its DX12 update soon, or at least soon enough that Microsoft is showing it off.
Thankfully it's Nixxes handling the work, so you should be able to count on a solid release, and as a plus we will get to see the differences between DX11 and DX12 in a game that is really representative of modern AAA titles.

http://www.dsogaming.com/news/rise-of-the-tomb-raider-was-running-in-directx-12-at-microsoft-xbox-showcase-patch-incoming/


----------



## MonarchX

I can understand why AMD Radeon is doing better than the NVidia GTX series, but why the hell is GTX slower in DirectX 12 than in DirectX 11 in some benchmarks? It makes no sense; it should get some benefit, even at 1080p with low, medium, and high quality settings. This tells me that AoS is still in an early beta stage, or is very AMD-oriented. Does it even utilize DirectX 12 features other than asynchronous shaders?


----------



## MonarchX

Quote:


> Originally Posted by *BlitzWulf*
> 
> Apparently Rise of the Tomb Raider is getting it's DX12 update soon or at least soon enough if Microsoft is showing it off.
> Thankfully it's Nixxes handling the work so you should be able to count on a solid release,and as a plus we will get to see the differences between DX11 and DX12 on a game that is really representative of modern AAA games.
> 
> http://www.dsogaming.com/news/rise-of-the-tomb-raider-was-running-in-directx-12-at-microsoft-xbox-showcase-patch-incoming/


It could very well run slower on both AMD and NVidia cards, or at least on NVidia cards. Is it known whether RotTR uses asynchronous shaders?


----------



## NightAntilli

Quote:


> Originally Posted by *MonarchX*
> 
> I can understand why AMD Radeon is doing better than NVidia GTX series, but why the hell is GTX slower in DirectX 12 than in DirectX 11 in some benchmarks? It makes no sense. It should get some benefit, even at 1080p with low, medium, and high quality settings. This tells me that AoS is still in early Beta stage or very AMD-oriented. Does it even utilize DirectX 12 features other than Asynchronous Shaders?


nVidia cards are pretty much fully utilized in DX11, while AMD's cards run at maybe 50%-75% of their true capacity. Under DX12, nVidia uses more CPU than under DX11, which can cause the drop in performance under DX12. nVidia users are better off running under DX11 as of now.


----------



## Pantsu

Quote:


> Originally Posted by *MonarchX*
> 
> I can understand why AMD Radeon is doing better than NVidia GTX series, but why the hell is GTX slower in DirectX 12 than in DirectX 11 in some benchmarks? It makes no sense. It should get some benefit, even at 1080p with low, medium, and high quality settings. This tells me that AoS is still in early Beta stage or very AMD-oriented. Does it even utilize DirectX 12 features other than Asynchronous Shaders?


The biggest benefit AotS gets from DX12 is the draw call increase. I'm not sure about asynchronous shaders; I thought they said they didn't use much of it at all when they first showed the benchmark last year. Maybe things have changed since.

I wouldn't draw big conclusions about performance based on a couple of early attempts at DX12. It'll take some time before the devs and AMD/Nvidia get the support working properly. Unfortunately, when developing new stuff, things just don't work magically from the get-go.
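The draw call point can be sketched with a toy model (all numbers here are hypothetical, purely for illustration): treat CPU-side frame cost as the number of draw calls times the per-call submission overhead, where DX12's thinner abstraction shrinks the per-call cost.

```python
# Toy model of CPU-side submission cost: frame_ms = draws * per-call overhead.
# The overhead figures below are invented; the point is the ratio, not the
# absolute numbers.
def cpu_frame_ms(draw_calls: int, overhead_us: float) -> float:
    """CPU milliseconds spent just submitting draw calls for one frame."""
    return draw_calls * overhead_us / 1000.0

scene_draws = 20_000  # an RTS like AotS can issue a huge number of draws

dx11_ms = cpu_frame_ms(scene_draws, overhead_us=40.0)  # assumed thick-driver cost
dx12_ms = cpu_frame_ms(scene_draws, overhead_us=5.0)   # assumed thin-API cost

print(dx11_ms, dx12_ms)  # 800.0 100.0
```

Same scene, same GPU work; only the assumed per-call CPU cost changes, which is why DX12 helps most when a game is CPU-bound on submission.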


----------



## MonarchX

Quote:


> Originally Posted by *NightAntilli*
> 
> nVidia cards are pretty much fully utilized in DX11, while AMD's cards run at maybe 50%-75% percent of their true capacity. Under DX12, nVidia uses more CPU than under DX11, which can cause the drop in performance under DX11. nVidia users are better off running under DX11 as of now.


Why would NVidia cards use more CPU under DirectX 12? It would have to be either poor game optimization or poor driver optimization...


----------



## Tivan

Quote:


> Originally Posted by *MonarchX*
> 
> Why would NVidia cards use more CPU under DirectX 12? It has to be either poor game optimization or poor driver optimization...


Software emulation of async compute = the CPU does the scheduling that the async compute hardware on a fully featured DX12 GPU would do.

Though this might not be that much work in many games; we'll see how this specifically plays out in the future.
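A rough way to picture the difference (toy numbers, not measurements): with hardware async compute, the graphics and compute queues can overlap within a frame, while a serialized fallback pays for both back to back.

```python
# Toy per-frame queue timings in milliseconds (invented for illustration).
graphics_ms = [8.0, 7.5, 9.0]   # graphics-queue work per frame
compute_ms  = [2.0, 3.0, 2.5]   # compute-queue work per frame

def serial_frames(g, c):
    """No overlap: compute work is paid on top of graphics work."""
    return [gi + ci for gi, ci in zip(g, c)]

def overlapped_frames(g, c):
    """Idealized hardware overlap: the frame costs only the longer queue."""
    return [max(gi, ci) for gi, ci in zip(g, c)]

print(serial_frames(graphics_ms, compute_ms))      # [10.0, 10.5, 11.5]
print(overlapped_frames(graphics_ms, compute_ms))  # [8.0, 7.5, 9.0]
```

In this idealized sketch, overlapping hides the compute work entirely; real hardware lands somewhere in between, and a software fallback adds CPU scheduling cost on top of the serialized timings.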


----------



## BlitzWulf

Quote:


> Originally Posted by *MonarchX*
> 
> It could very much run slower on both AMD and NVidia cards, or at least on NVidia cards. Is it known whether RoTR uses Asynchronous Shaders?


They used async for global illumination and ambient occlusion on the Xbox One, I believe.
Sorry, I don't think that will happen. As an Nvidia user I can see why you might be tentative about the shift to DX12, but we have only seen two examples of it so far. Gears of War is a joke, and I don't think it represents what DX12 performance will be like for either side. And we have Ashes; say what you want about it, I think it definitively shows that under DX12 AMD's driver overhead and draw call limit issues are fixed. I only see a benefit... as long as the patch is coded properly.


----------



## airfathaaaaa

Quote:


> Originally Posted by *BlitzWulf*
> 
> They used async for global illumination and ambient occlusion on the XBOX1 I believe.
> Sorry I dont think that will happen. as an Nvidia user I can see why you might be tentative about the shift to DX12 but we have only seen two examples of it so far Gears of war is a joke and I don't think it represents what DX12 performance will be like for either side,and we have Ashes say what you want about it i think it definitively shows that under DX12 AMD's driver overhead and draw call limit issues are fixed. I only see a benefit as long as the patch is coded properly.


Gears of War is the perfect example of how not to do a DX port, from the company that invented DX...


----------



## Forceman

Quote:


> Originally Posted by *NightAntilli*
> 
> nVidia cards are pretty much fully utilized in DX11, while AMD's cards run at maybe 50%-75% percent of their true capacity. *Under DX12, nVidia uses more CPU than under DX11*, which can cause the drop in performance under DX11. nVidia users are better off running under DX11 as of now.


Where do you get that idea?


----------



## EightDee8D

Quote:


> Originally Posted by *Forceman*
> 
> Where do you get that idea?




http://www.tomshardware.com/reviews/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,4479-3.html


----------



## Forceman

Quote:


> Originally Posted by *EightDee8D*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> http://www.tomshardware.com/reviews/ashes-of-the-singularity-beta-async-compute-multi-adapter-power-consumption,4479-3.html


Those charts are comparing DX12 to DX12, not DX12 to DX11. Nvidia uses more CPU than AMD does in DX12 (or at least spends more time waiting on the CPU), but not more in DX12 than they do in DX11. DX12 is reducing CPU load on both, although AMD has more to gain since their DX11 CPU optimization isn't as good.


----------



## EightDee8D

Quote:


> Originally Posted by *Forceman*
> 
> Those charts are comparing DX12 to DX12, not DX12 to DX11. Nvidia uses more CPU than AMD does in DX12 (or at least spends more time waiting on the CPU), but not more in DX12 than they do in DX11. DX12 is reducing CPU load on both, although AMD has more to gain since their DX11 CPU optimization isn't as good.


Oh, my bad. I didn't notice it's not DX11 vs DX12.


----------



## Kollock

Quote:


> Originally Posted by *Forceman*
> 
> Where do you get that idea?


Interesting, though these charts need to be normalized for FPS. Faster FPS will result in more power, since more frames are rendered.


----------



## BradleyW

Quote:


> Originally Posted by *Kollock*
> 
> Interesting, though these charts need to be normalized for FPS. Faster FPS will result in more power, since more frames are rendered.


Not always, though. Increasing resolution, for instance: load increases, power usage increases, FPS decreases.


----------



## NightAntilli

Quote:


> Originally Posted by *Kollock*
> 
> Interesting, though these charts need to be normalized for FPS. Faster FPS will result in more power, since more frames are rendered.


Look at this:
The Fury X goes from:
using 0.6 W/fps more under DX11, to
pretty much equal under DX12 with no async, to
0.1 W/fps more efficient under DX12 with async.

Looking at the graphs on the website, we can reach maybe 50 fps. So that means the Fury X uses 30 W more power under DX11, and 5 W less under DX12 with async.

And we do know power usage of the Fury X and its CPU increases under DX12, so... what we're seeing is the Fury X becoming more efficient, and the 980 Ti less efficient.
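The arithmetic above can be written out explicitly. The W/fps deltas are the ones quoted in the post; the ~50 fps average is the assumption read off the review graphs.

```python
fps = 50  # assumed rough average from the review graphs

# Fury X power delta relative to the 980 Ti, in watts per frame-per-second,
# as quoted in the post above.
delta_w_per_fps = {
    "DX11":           +0.6,  # Fury X draws 0.6 W/fps more
    "DX12, no async":  0.0,  # roughly equal
    "DX12, async":    -0.1,  # Fury X draws 0.1 W/fps less
}

# Scale by frame rate to get the absolute wattage difference.
delta_watts = {mode: round(d * fps, 1) for mode, d in delta_w_per_fps.items()}
print(delta_watts)  # {'DX11': 30.0, 'DX12, no async': 0.0, 'DX12, async': -5.0}
```

This is exactly the normalization Kollock asked for: comparing W/fps rather than raw watts, so a card rendering more frames isn't penalized for the extra work.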


----------



## Themisseble

Quote:


> Originally Posted by *NightAntilli*
> 
> Look at this;


Well, where are the midrange GPUs, and where is the R9 Nano?


----------



## BradleyW

Please read carefully.
The list I made on Store and DX12 issues holds true at this very moment, today (02/03/2016, dd/mm/yyyy).
"Store only" issues are highlighted on that list.
The list is based on the Store versions of RotTR and GoW, PC Per's article on AotS, and personal testing with DX12 and Store games.
The list may become redundant, as software changes all the time through progression and development.
The list may become somewhat redundant if AMD enable FlipEx.

Fingers crossed that HITMAN's D3D12 won't suffer from the points on my list; this is achievable through FlipEx and the devs' implementations. As for Store versions, hopefully MS remove the forced Vsync, no G-Sync, no fps readout, no modding, and no fullscreen (which are ALL true AT THIS TIME). As for DX12 games across the board, as of right now frames above 60 are dropped, due to DX12's pipeline; again, that is as of right now, based on limited information and testing. This can be changed, of course, or possibly even bypassed via FlipEx. It is early days and things can and will change. All this info is based on the situation as of right now (02/03/2016, dd/mm/yyyy) and may change at any time through DX12's and the stores' development, leaving the list possibly redundant. If/when that happens, I will be jumping with absolute joy!


----------



## airfathaaaaa

Quote:


> Originally Posted by *Kollock*
> 
> Interesting, though these charts need to be normalized for FPS. Faster FPS will result in more power, since more frames are rendered.


Didn't you say that some of the effects were dumbed down or forcibly disabled by the Nvidia driver some time ago? Wouldn't such a move have an impact on the charts we see here?


----------



## Catscratch

I'm waiting for an FPS, which has a lot fewer characters and less action on screen, to really judge DX12. I mean, you could still have 1-billion-piece explosions, clothing that actually acts like clothing instead of just being a texture, and also better hair with DX12, because the CPU can assist the calculations now. But I'm still skeptical about fps increases in genres outside RTS.


----------



## MonarchX

Quote:


> Originally Posted by *Tivan*
> 
> software emulation of async compute = the cpu does what the async compute hardware on a fully featured Dx12 GPU would do.
> 
> Though this might not be that much to do in many games, we'll see how specifically this plays out in the future.


Then why use Async at all? Why not use whatever the hardware GPU can do instead? It's kind of dumb that with all this low-overhead stuff, DirectX 12 games will actually run SLOWER on current NVidia cards...


----------



## NightAntilli

Which is exactly why nVidia wants to stay on DX11 as long as possible. AMD's hardware looks too good under DX12, and nVidia's cards don't really get a boost from it.


----------



## Cyber Locc

Quote:


> Originally Posted by *MonarchX*
> 
> Then why use ASync at all? Why not use whatever hardware GPU can do instead? Its kind of dumb that with all this low overhead stuff, DirectX 12 games will actually run SLOWER on current NVidia cards...


That's not the industry's fault, that is NV's. I honestly wouldn't be surprised if NV doesn't support async for a long time, as they are stubborn and proprietary, so employing AMD's ideas is tough to swallow for them.

Why use async at all? Because it can improve what games can do. Why did they use PhysX at all, when AMD had to have the CPU render it? Tables have turned and NV finds themselves getting a dose of their own medicine.

Despite what fanboys seem to think, NV does not rule the world. Actually, if you want to get technical, AMD has the larger market share. NV may have the larger market share in dGPUs, however AMD has APU shares and 100% of consoles, which will also use DX12 and async.

Also, the AMD dGPU share is nowhere near as small as people seem to think. As has been proven time and time again, AMD's cards age better; just because their market sales this year are 80/20 is irrelevant. In truth it's more like a 60/40 split of cards owned, if not 50/50. Then add the APUs and consoles, and AMD is the majority lead in graphics processing.


----------



## PontiacGTX

Quote:


> Originally Posted by *Catscratch*
> 
> I'm waiting for a FPS which has a lot less chars and action on screen, to really judge dx12. I mean you could still have 1billion piece explosions, clothing that actually acts like clothing instead of just being texture and also better hair with DX12 because cpu can assist the calculations now but i'm still skeptical on fps increases on other genre outside RTS.


Mantle already showed that Battlefield had performance gains in CPU-limited scenarios.
Quote:


> Originally Posted by *MonarchX*
> 
> Then why use ASync at all? Why not use whatever hardware GPU can do instead? Its kind of dumb that with all this low overhead stuff, DirectX 12 games will actually run SLOWER on current NVidia cards...


Because they are trying to sell marketing, but they can't catch up with just-released updates for a game, because they "fixed" that in software by offloading to the CPU?


----------



## Cyber Locc

Quote:


> Originally Posted by *PontiacGTX*
> 
> Mantle already show that in CPU limited scenarios Baftlefield had performance gains
> because they are trying to sell marketing but they cant catch up with just released updates for a game because they fixed that with software or offloading to cpu?


I am confused, what does this mean?

Are you talking about Windows?

If yes, well it's clear why their stuff is jacked: they're using a very old engine on a very old game and trying to do a lazy, bad port, and it backfired.

If you are talking about Nvidia, their cards are not capable of async on a hardware level. Drivers and software won't change this either, and I would be inclined to agree that, given where they are with Pascal, it will not change there either. So this will be the case with DX12 until at least Volta (knowing Nvidia and their stubbornness, probably later than that). That said, as has been pointed out, I am sure most DX12 games coming up will also support DX11, so NV users can just use that. You may lose some graphics settings and the features of DX12, so you can thank NV for trying to be the new Apple for that.

That said, AMD still has their own issues, so it still won't be a clear choice for anyone that isn't a fanboy. Every generation of cards has had pros and cons; NV never wins in every single way, although many people want you to believe that. It's a matter of give and take: what do you need and want, and what cons are you prepared to deal with. To be clear, I am not an AMD fan at all; as I stated earlier, this is my first set of AMD cards ever. I bought them for the simple fact that they smashed 780 Tis in 4K when I bought them. I wanted 4K, so I bought the best 4K card at the time. That goes in line with what I am saying: I took a lot of cons for that pro (plus they were pretty cheap). It is all a matter of balancing, and not being a blind fanboy that sees what they want instead of what is reality.

I also don't mean to insult anyone with the "fanboy" comment. A lot of us have been there or are there. I myself broke my NV fanism with these 290s, and just recently (kinda due to lack of an option) broke my Asus one. I have only used Asus boards for the better part of a decade, but due to there being only a few mATX X99 boards, I went EVGA, as the Asus board kinda sucks lol. (However, full disclosure, I was an EVGA-only GPU user before going AMD, so I only deviated a little, but I am getting there.)

To be perfectly honest I am actually glad that I seen this thread when I did as right now I have some of my 290s up for sale here on the site and planned on buying a Titan X for my office build (the MATX one), but now I am starting to debate it more. Though I will most likely still switch as I cannot leverage AMD cards in Adobe AFAIK unless that changed, and gaming with this PC is more of an afterthought and occasional thing.


----------



## Olivon

Quote:


> Originally Posted by *NightAntilli*
> 
> Which is exactly why nVidia wants to stay on DX11 as long as possible. AMD's hardware looks too good under DX12, and nVidia's cards don't really get a boost from it.


You can conclude that from just one sponsored benchmark? Congrats!

Myself, I prefer to wait for more concrete things and see what DX12 gaming will become.
I remember people saying years ago that Mantle would save AMD.
They certainly didn't know that Mantle would have such a short life, with few games (some even working better in DX11), and that AMD would give up on it.

AMD talks a lot, but from experience, delivering is kind of problematic.


----------



## Cyber Locc

Quote:


> Originally Posted by *Olivon*
> 
> You can conclude that with just one sponsored benchmark ? Congratz !
> 
> Myself, I prefer to wait for more concrete things and see what DX12 gaming will become.
> I remembered people saying that Mantle will save AMD years ago.
> They certainly didn't know that Mantle did have a so small life with few games sometimes working better in DX11 and that AMD gave up it.
> 
> AMD talk a lot but acting is kinda problematic from experience.


You realize that DX12 is basically Mantle, right? So what you are saying makes no sense; they abandoned Mantle to make DX12, to benefit everyone. NV would have never adopted Mantle; they will adopt DX12, and that's why AMD abandoned it.

One thing you do have to give AMD is that they as a company have always tried to help the industry, not just themselves, unlike NV; even when they were on top, everything they did they shared. That said, it is kind of business suicide, but it's pro-consumer, so that is cool.

From what experience exactly? From their cards being in close competition for the past 3 years, and them beating NV pre-Fermi? It wasn't always as cut and dry as NV winning until the past few years, and AMD has been closing that gap.

It also has zero to do with any game, sponsored or not. A major aspect of DX12 is async, and NV can't handle async; it's really that simple, so stop sticking your head in the sand and trying to hide from it. Games that have async will cause NV cards to suffer, and most DX12 games will have async. It's a really simple equation.

That said, that can be solved by Nvidia adopting async; we will have to see how that plays out. Also, in the next gen it will be 16nm NV vs 14nm AMD, and that is a huge advantage to AMD.

I will give NV one thing though: they are Apple 2.0. Their closed-end bull just gets sucked right up by people.


----------



## Fyrwulf

Quote:


> Originally Posted by *Olivon*
> 
> You can conclude that with just one sponsored benchmark ? Congratz !


What sponsored benchmark? If you mean AotS, it's up to the GPU companies to properly implement Dx12/Vulkan features. AMD has, for the most part, and will probably completely implement them with Polaris. nVidia has not for the most part because they didn't plan for the paradigm shift that low level APIs represent and they may, or may not, fully implement Dx12 with Pascal. If not, they certainly will with Tesla.

If you're such a die hard nVidia fanboy that you're going to cry every time AMD's cards do better than nVidia, you're probably going to be in for a depressing year or two. Or you could, you know, _surrender_ to the inevitable and just buy an AMD card.
Quote:


> They certainly didn't know that Mantle did have a so small life with few games sometimes working better in DX11 and that AMD gave up it.


They gave Mantle's code wholesale to the Vulkan group and forced MS to put out Dx12 when there weren't plans for another Dx release. I'd rather say they achieved their objectives.


----------



## Cyber Locc

Quote:


> Originally Posted by *Fyrwulf*
> 
> What sponsored benchmark? If you mean AotS, it's up to the GPU companies to properly implement Dx12/Vulkan features. AMD has, for the most part, and will probably completely implement them with Polaris. nVidia has not for the most part because they didn't plan for the paradigm shift that low level APIs represent and they may, or may not, fully implement Dx12 with Pascal. If not, they certainly will with *Tesla*.


I think you mean Volta; Tesla is their computing GPU line.

Quote:


> Originally Posted by *Fyrwulf*
> 
> They gave Mantle's code wholesale to the Vulkan group and forced MS to put out Dx12 when there weren't plans for another Dx release. I'd rather say they achieved their objectives.


I agree 100%. I think their plan was for Mantle's adoption all around, and they thought maybe, just maybe, NV would stop being stuck-up morons like they always are. When they realized they were wrong, they put that code into DX12, as NV has to support that. NV to me is like a 15-year-old kid: doesn't want to listen to anyone or get any kind of help, has to do everything themselves, is always right. Apple 2.0 for the win, the NV CEO screaming "no, close that end to end."

We have seen what NV wants many times in the past; nForce, anyone? They are so full of themselves and stubborn it is seriously sickening. They want what they have been touting lately, the NV ecosystem: everyone running NV GPUs with NV Tegras on NV systems that only support NV equipment. They want to be Apple, and they're shooting outside of their scope.


----------



## Olivon

It's easier to treat me as an nVidia fanboy (which I'm not) than to have a real reflection on AMD's problems.
Yes, AMD is the blablabla king, and they pay the price with poor sales numbers.
We need AMD to be strong, not the hot-air specialist we've seen in past years.


----------



## Cyber Locc

Quote:


> Originally Posted by *Olivon*
> 
> It's more easy to treat me as a nVidia fanboy (which I'm not) than having a real reflexion on AMD problems.
> Yes, AMD is the blablabla king and they pay the price with poor selling numbers.
> We need AMD to be strong and not an hot-air specialist like we seeen past years.


No one is saying AMD hasn't had issues in the past; however, they were not as bad as you are making it sound. NV has a cult following; that's why AMD's sales are low. Watch: even when AMD does smoke NV in DX12, people will still buy NV, and as has been predicted they will say "ya well, the P/PW is better on NV, that is what matters". It is seriously like talking to Apple fans. I am not claiming you are a fanboy, and if you are not, then you see what I am saying in all kinds of news outlets etc.: it doesn't matter what NV does, they are god, end of story.

That said, again, NV knocked it out of the park with Maxwell; with Fermi and Kepler, the lines were a little more blurred. No one is trying to take Maxwell from them, it was a home run, 100%; however, in the current state it looks like the tables will turn.

I do not think that NV will use async, as it is AMD's tech, their ideas, their API design. They are going to kick and scream about it the whole way, like they have every other time they didn't get their way. NV's hardware is not the issue here; their EGO is.

Also, I feel like AMD was a little sneaky here. NV perfected 28nm, and AMD knew they lost that fight; they can't win in DX11, NV has it on lock. So they made the smart play and got a head start on DX12, where they could get a jump. That is a very good idea; the world doesn't end with DX11. NV was focused on the battle where AMD was looking at the war.

That was, IMO, the best move for them to make: put out something that works well with DX11 and absolutely smashes DX12. That's the sucker punch that NV didn't see coming.


----------



## Fyrwulf

Quote:


> Originally Posted by *Olivon*
> 
> It's more easy to treat me as a nVidia fanboy (which I'm not) than having a real reflexion on AMD problems.
> Yes, AMD is the blablabla king and they pay the price with poor selling numbers.
> We need AMD to be strong and not an hot-air specialist like we seeen past years.


I'm fully aware of AMD's supposed problems. Thing is, I actually play games rather than masturbate to numbers. Right now I play MWO on a laptop with an A8 APU that has the equivalent of an R5 series GPU. I'm in the process of building a gaming desktop and just about anything I buy would be an upgrade at this point. I'm buying Zen and Polaris because I've never had a truly bad experience with AMD products. Will I cry if Volta (yes, I mistyped) is better than Polaris? No, because until nVidia straightens out their business practices I'm not going to consider an nVidia product. Also, I don't do epeen envy, I find it extremely distasteful. The only reason I enjoy "taking the mickey" out of nVidia's fanboys is because they're such self-congratulatory meatheads.


----------



## Cyber Locc

Quote:


> Originally Posted by *Fyrwulf*
> 
> I'm fully aware of AMD's supposed problems. *Thing is, I actually play games rather than masturbate to numbers.*


Laughing so hard, good one

Quote:


> Originally Posted by *Fyrwulf*
> 
> I'm buying Zen and Polaris because I've never had a truly bad experience with AMD products. Will I cry if Volta (yes, I mistyped) is better than Polaris? No, because until nVidia straightens out their business practices I'm not going to consider an nVidia product.


Volta is still a year off and will be competing with Greenland in 2017.


----------



## Olivon

Quote:


> Originally Posted by *Fyrwulf*
> 
> I'm fully aware of AMD's supposed problems. Thing is, I actually play games rather than masturbate to numbers. Right now I play MWO on a laptop with an A8 APU that has the equivalent of an R5 series GPU. I'm in the process of building a gaming desktop and just about anything I buy would be an upgrade at this point. I'm buying Zen and Polaris because I've never had a truly bad experience with AMD products. Will I cry if Volta (yes, I mistyped) is better than Polaris? No, because until nVidia straightens out their business practices I'm not going to consider an nVidia product. Also, I don't do epeen envy, I find it extremely distasteful. The only reason I enjoy "taking the mickey" out of nVidia's fanboys is because they're such self-congratulatory meatheads.


Good for you. Just hope AMD will send you cookies to congratulate you!
And yes, I'm a big PC gamer too; I play a lot (AAA, indies... I love video games) and am not masturbating to numbers (???).

Myself, I dunno what my next buy will be, because I go with the best.
If AMD delivers big products, I will go AMD. If not, I will stay with Intel/nVidia. As simple as that.


----------



## Fyrwulf

Quote:


> Originally Posted by *Cyber Locc*
> 
> Volta is a still a year off and will be competing with Greenland, in 2017.


Polaris _is_ Greenland, IIRC. AMD revised the names of their GPU lineup for internal and external consistency. In fact, I remember reading that the Fury lineup has been retroactively codenamed Vega.


----------



## Fyrwulf

Quote:


> Originally Posted by *Olivon*
> 
> masturbating to numbers (???).


It's a turn of phrase. It means people get so caught up in benchmarks they don't really consider any other factor.


----------



## Cyber Locc

Quote:


> Originally Posted by *Fyrwulf*
> 
> Polaris _is_ Greenland, IIRC. AMD revised the names of their GPU lineup for internal and external consistency. In fact, I remember reading that the Fury lineup has been retroactively codenamed Vega.


So I googled this; you were half right. Polaris is still Polaris, and Greenland is now Vega. Maybe they are trying to stay closer to NV code names: they changed Arctic Islands to Polaris (vs Pascal), and now they've changed Greenland to Vega (vs Volta), lol.


----------



## Mahigan

Quote:


> Originally Posted by *Cyber Locc*
> 
> So I googled this; you were half right. Polaris is still Polaris, and Greenland is now Vega. Maybe they are trying to stay closer to NV code names: they changed Arctic Islands to Polaris (vs Pascal), and now they've changed Greenland to Vega (vs Volta), lol.


According to current rumors,

Baffin/XT is Polaris 11. Greenland/XT is Vega 11.

Baffin XT is likely only 232mm2, which is about equivalent to a 464mm2 GPU on 28nm. So in terms of performance per watt, it is 2.5x that of an equivalent AMD Fiji GPU on the 28nm process, assuming similar clockspeeds.

AMD have stated that Polaris' performance gains are 70% due to the 14LPP process and 30% due to architectural improvements. So it's not a big GPU, but it packs a punch. An AMD Hawaii part (290/290X) or Grenada part (390/390X) has a die size of 438mm2, so that should give you an idea of the relative density of Polaris 11.

So Polaris 11 will likely be faster than Fiji (Fury/Fury X) but will likely also include fewer Stream Processors (thus fewer CUs).

This is all based on rumors, shipping manifests etc. So it could all be bs.
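The die-size arithmetic above can be sanity-checked. A naive full-node shrink would quadruple density, while the 232mm2-equals-464mm2 claim implies only ~2x; the multiplicative 70/30 split of the 2.5x perf/watt figure below is my own reading of AMD's statement, not their published math:

```python
# Sanity check on the die-size claim above. A naive shrink from 28nm to
# 14nm would quadruple density ((28/14)**2 = 4x), but real processes
# deliver much less; the 232mm^2 ~= 464mm^2-on-28nm rumor implies ~2x.
naive_scaling = (28 / 14) ** 2       # idealized area scaling: 4.0x
implied_scaling = 464 / 232          # what the rumor actually assumes: 2.0x
print(naive_scaling, implied_scaling)

# AMD's stated split of the gain: 70% process, 30% architecture.
# Treating the 2.5x perf/W figure multiplicatively (an assumption):
total_gain = 2.5
process_part = total_gain ** 0.7     # ~1.90x attributed to 14LPP
arch_part = total_gain ** 0.3        # ~1.32x attributed to architecture
print(round(process_part, 2), round(arch_part, 2))
```

Either way you slice it, most of the jump comes from the node, which is consistent with Polaris being a small die that still lands near Fiji performance.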


----------



## Cyber Locc

Quote:


> Originally Posted by *Mahigan*
> 
> According to current rumors,
> 
> Baffin/XT is Polaris 11. Greenland/XT is Vega 11.
> 
> BaffintXT is likely only 232mm2 which is about equivalent to a 464mm2 GPU on 28nm. So in terms of performance per watt, it is 2.5x that of an equivalent AMD Fiji GPU on the 28nm process assuming similar clockspeeds.
> 
> AMD have stated that Polaris' performance gain are 70% due to the 14LPP process and 30% due to architectural improvements. So it's not a big GPU but it packs a punch. An AMD Hawaii part (290/290x) or Grenada part (390/390x) has a die size of 438mm2. So that should give you an idea of the relative density of Polaris 11.
> 
> So Polaris 11 will likely be faster than Fiji (Fury/FuryX) but will likely also include less Stream Processors (thus less CUs).
> 
> This is all based on rumors, shipping manifests etc. So it could all be bs.


232mm2 seems rather small to compete with NV's speculated 300mm2+. I know it's 14 vs 16, but does it really make that much difference? Or are we going to see GP104 being the stronger chip?

I am also seeing rumors that GP104 will not have HBM2, and if we look at the Pascal chip touted by NV's CEO for the car computer, it's using GDDR5X. So will the first Pascal run have GDDR5?


----------



## airfathaaaaa

Quote:


> Originally Posted by *Cyber Locc*
> 
> 232mm seems rather small, to compete with Nvs 300+ that is being speculated I know its 14 vs 16 but does it really make that much difference? or Are we going to see GP 104 being the stronger chip?
> 
> I am also seeing rumors that GP104 will not have HMB2, which if we look at the pascal chip touted by NV ceo for the carputer its using DDR5x, so will the first pascal run have ddr5?


It's a completely different process node. Nvidia's 16FF is actually a 20nm process with low-power FinFETs, not a complete jump to a true 16nm.


----------



## PontiacGTX

Quote:


> Originally Posted by *Mahigan*
> 
> According to current rumors,
> 
> Baffin/XT is Polaris 11. Greenland/XT is Vega 11.
> 
> BaffintXT is likely only 232mm2 which is about equivalent to a 464mm2 GPU on 28nm. So in terms of performance per watt, it is 2.5x that of an equivalent AMD Fiji GPU on the 28nm process assuming similar clockspeeds.
> 
> AMD have stated that Polaris' performance gain are 70% due to the 14LPP process and 30% due to architectural improvements. So it's not a big GPU but it packs a punch. An AMD Hawaii part (290/290x) or Grenada part (390/390x) has a die size of 438mm2. So that should give you an idea of the relative density of Polaris 11.
> 
> So Polaris 11 will likely be faster than Fiji (Fury/FuryX) but will likely also include less Stream Processors (thus less CUs).
> 
> This is all based on rumors, shipping manifests etc. So it could all be bs.


Polaris sounds like an AMD codename, but using stars instead of islands.


----------



## MonarchX

Quote:


> Originally Posted by *Cyber Locc*
> 
> Thats not the industrys fault that is NV's, i honestly woildnt be suprised if NV doesnt support async for a long time as they are stubborn and properiarty so employing AMDs ideas os tough to swallow for them.
> 
> Why use Async at all, because it can improve what games can do. Why did they use phys X at all? When AMD had to have the CPU render it? Tables have turned and NV finds themselves getting a dose of there own medicine.
> 
> Despite what fan boys seem to think, NV does not rule the world. Actually if you want to get techincal AMD has the larger market share, NV may have the larger market share in DGPUs however AMD has APU shares and 100% of consoles, that will also use DX12 and ASync.
> 
> Also the AMD Dgpu is no where near as small as people seem to think. As has been proven time and time again AMDs cards age better, just because there market sales this year are 80/20 is irrelvant in truth its more like a 60/40 split of cards owned, if not 50/50. Then add the APUs and Consoles and AMD is the majority lead in Graphics Proccesing.


I don't care about the NVidia vs. AMD thing. I only care to understand why Async Shaders have to be used to render something in DirectX 12. I thought NVidia drivers could just skip it entirely.


----------



## sugarhell

Quote:


> Originally Posted by *MonarchX*
> 
> I don't care about NVidia vs. AMD thing. I only care to understand why Async Shaders have to be used to render something in DirectX 12. I thought NVidia drivers could just skip it entirely.


Why not? It's only logical to evolve your software to match your hardware.

You render something and it uses all your ROPs. Half of your shaders sit idle because the bottleneck is on the GPU's ROPs. With async shaders you can use the idle shaders to do compute work.

It's the natural evolution of software, nothing else. And it's already late.
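The ROP-bound scenario above can be sketched with a toy timing model. The millisecond figures are invented for illustration; this is the scheduling idea, not how a driver actually works:

```python
# Toy timing model: a graphics pass is ROP-bound for 10 ms while shader
# ALUs sit partly idle. A 4 ms compute job scheduled serially extends the
# frame; run asynchronously on the idle shaders, it can hide entirely
# under the graphics pass. All numbers are invented.
GRAPHICS_MS = 10.0   # ROP-bound graphics pass
COMPUTE_MS = 4.0     # compute work (e.g. lighting, post-processing)

serial_frame = GRAPHICS_MS + COMPUTE_MS     # no async: work back-to-back
async_frame = max(GRAPHICS_MS, COMPUTE_MS)  # ideal overlap on idle units

print(f"serial: {serial_frame} ms -> {1000 / serial_frame:.1f} fps")
print(f"async:  {async_frame} ms -> {1000 / async_frame:.1f} fps")
```

In this idealized case, async compute is "free" performance: 14 ms drops to 10 ms without any unit getting faster, purely by filling idle shader time. Real gains are smaller because the workloads contend for caches and bandwidth.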


----------



## Cyber Locc

Quote:


> Originally Posted by *airfathaaaaa*
> 
> its a complete different nm process the 16ff of nvidia is actually 20nm with low power finfet not a complete jump to a true 16nm


That is interesting. Any source for that, or reasons you think that? I would really like to see.


----------



## MonarchX

Yeah, but it is still ******ed IMHO. Maxwell supports most DirectX 12 features and is considered a DirectX 12.1 card, except for Async Shaders. DirectX 12 is a low-level API and was expected to provide performance improvements to practically all recent video cards. Now all of a sudden that's all crap, and Maxwell, with a single missing hardware feature, Async Shaders, performs worse in DirectX 12 despite all of the improvements.

- If the same exact graphics in AoS can be rendered with DirectX 11 without the use of Async Shaders, then why on Earth can't these Async Shaders be skipped entirely with DirectX 12 on NVidia cards?
- Does Maxwell perform worse only in DirectX 12 games that use the Async Shader feature, or in ALL DirectX 12 games?
- Is Async Shaders the only major feature that makes DirectX 12 faster? It seems developers are very fond of it.
- What about all the other DirectX 12 features? Are developers even using them?
- What about DirectX 12 utilizing CPUs better? That ALONE should improve DirectX 12 performance on NVidia cards.

NVidia actually promised partial Async Shader hardware support, and I doubt they have integrated it yet, considering it all gets offloaded to the CPU. They also haven't really planned for Maxwell to be a DirectX 12 card. Pascal will be the DirectX 12 card, and they planned it that way, as it forces people to upgrade. The degree of performance difference between Maxwell and Pascal will be at least 2-3x the degree of difference between Kepler and Maxwell, which will set a new standard, and those with Maxwell architecture will be in a bigger crapper performance-wise than today's Kepler owners. It's just business, good business (for NVidia, that is).


----------



## sugarhell

Quote:


> Originally Posted by *MonarchX*
> 
> Yeah, but it is still ******ed IMHO. Maxwell supports most DirectX 12 feature and its considered a DirectX 12.1 card, except for Async Shaders. DirectX 12 is a low-level API and was expected to provide performance improvements to practically recent video cards. Now all of a sudden that's all crap and Maxwell with a single missing hardware feature, Async Shaders, performs worse in all DirectX 12 despite all of the improvements.
> 
> - If the same exact graphics in AoS can be rendered with DirectX 11 without the use of Async Shaders, then why on Earth can't these Async shaders be skipped entirely with DirectX 12 on NVidia cards?
> - Does Maxwell perform worse only in DirectX 12 games that use Async Shader feature or ALL DirectX 12 games?
> - Is Async Shader the only major feature that makes DirectX 12 faster? It seems developers are very fond of it.
> - What about all other DirectX 12 features? Are developers even using them?
> - What about DirectX 12 utilizing CPU's better? That ALONE should improve DirectX 12 performance in NVidia cards
> 
> NVidia actually promised partial Async Shader hardware support and I doubt they have integrated it yet, considering it all gets offloaded to the CPU. They also haven't really planned Maxwell to be a DirectX 12 card. Pascal will be the DirectX 12 card and they predicted to be that way as it forces people to upgrade. The degree of performance difference between Maxwell and Pascal will be at least 2-3x the degree of difference between Kepler and Maxwell, which will set a new standard and those with Maxwell architecture will be in a bigger crapper performance-wise than today's Kepler owners. Its just business, good business (for NVidia that is).


Both AMD and Nvidia miss a lot of things, not just async shaders.

1) You can turn off async shaders. It's the same performance for Nvidia because they don't support it.
2) Nvidia's DX11 performance is really good and their DX12 performance is at least mediocre, if you take into account that their architecture is way better suited to a serial pipeline than to the DX12 pipeline.
3) No, it is not. But DX12 alone is never gonna improve GPU performance; it fixes the CPU overhead and the draw-call limitation of DX11. The only ways to improve GPU performance are async shaders and new rendering techniques, which either improve IQ or performance.
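The draw-call point can be put in back-of-envelope numbers. The per-draw CPU costs below are invented round figures for illustration, not measurements of either API:

```python
# Back-of-envelope for the draw-call argument: DX12 lowers per-draw-call
# CPU cost; it does not make the GPU itself faster. The per-draw costs
# here are hypothetical round numbers, not measured values.
FRAME_BUDGET_US = 1_000_000 / 60   # ~16,667 us of CPU time per frame at 60 fps

DX11_US_PER_DRAW = 40              # hypothetical high-overhead driver path
DX12_US_PER_DRAW = 4               # hypothetical low-overhead path

dx11_max_draws = int(FRAME_BUDGET_US / DX11_US_PER_DRAW)
dx12_max_draws = int(FRAME_BUDGET_US / DX12_US_PER_DRAW)

print(dx11_max_draws, dx12_max_draws)  # ~416 vs ~4166 draws per frame
```

That 10x headroom is why a draw-call-heavy RTS like Ashes shows big DX12 gains on CPU-limited rigs, while a GPU-bound scene sees little change.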


----------



## Cyber Locc

Quote:


> Originally Posted by *MonarchX*
> 
> Yeah, but it is still ******ed IMHO. Maxwell supports most DirectX 12 feature and its considered a DirectX 12.1 card, except for Async Shaders. DirectX 12 is a low-level API and was expected to provide performance improvements to practically recent video cards. Now all of a sudden that's all crap and Maxwell with a single missing hardware feature, Async Shaders, performs worse in all DirectX 12 despite all of the improvements.
> 
> - If the same exact graphics in AoS can be rendered with DirectX 11 without the use of Async Shaders, then why on Earth can't these Async shaders be skipped entirely with DirectX 12 on NVidia cards?
> - Does Maxwell perform worse only in DirectX 12 games that use Async Shader feature or ALL DirectX 12 games?
> - Is Async Shader the only major feature that makes DirectX 12 faster? It seems developers are very fond of it.
> - What about all other DirectX 12 features? Are developers even using them?
> - What about DirectX 12 utilizing CPU's better? That ALONE should improve DirectX 12 performance in NVidia cards
> 
> NVidia actually promised partial Async Shader hardware support and I doubt they have integrated it yet, considering it all gets offloaded to the CPU. They also haven't really planned Maxwell to be a DirectX 12 card. Pascal will be the DirectX 12 card and they predicted to be that way as it forces people to upgrade. The degree of performance difference between Maxwell and Pascal will be at least 2-3x the degree of difference between Kepler and Maxwell, which will set a new standard and those with Maxwell architecture will be in a bigger crapper performance-wise than today's Kepler owners. Its just business, good business (for NVidia that is).


1. Because that is one of the main features of DX12.
2. We don't know; we have access to two DX12 games, one is a complete bust, and you have seen AoS. They do both feature async.
3. I don't think so, but it is a very big deal and, like sugar said, long since needed.
4. Yes.
5. That is the issue with NVidia: they are placing more work on the CPU by making it do async.

6. They didn't lie. Nvidia cards can do hardware async AFAIK, they just can't do it in parallel with graphics. They can do one or the other, but not both at once, is how I understood it. I may be wrong, however; we need someone more knowledgeable about it than I.

I think this may be why we are seeing a massive gain with Maxwell and GCN 1.2 together in a rig; they make up for each other's faults and give a perfect balance.


----------



## airfathaaaaa

Actually, we don't know if the Maxwell cards CAN do anything of DX12. All we know is that the driver exposes all the features of DX12 as supported, which we already know is a lie.
And considering how they keep forcibly disabling effects in their driver, the reality must be even worse than we think.


----------



## Glottis

Quote:


> Originally Posted by *airfathaaaaa*
> 
> actually we dont know if the maxwell cards CAN do anything of dx12..all we know is the driver exposes all the features of dx12 as being true which we already know is a lie
> and considering how they keep forcibly closing down effects on their driver the reality must be really worse than we think


you have no idea what you are talking about, do you?


----------



## mtcn77

Quote:


> Originally Posted by *Glottis*
> 
> you have no idea what you are talking about, do you?


That is an Unreal game title. Previously there have been rumours that Unreal wouldn't support Asynchronous Shaders, so you tell me whether that is a good Directx 12 benchmark, or not.


----------



## Glottis

Quote:


> Originally Posted by *mtcn77*
> 
> That is an Unreal game title. Previously there have been rumours that Unreal wouldn't support Asynchronous Shaders, so you tell me whether that is a good Directx 12 benchmark, or not.


UE4 DX12 benchmarks are very important because that's the engine a big majority of the games we'll all be playing over the next 5 years will be using.


----------



## Charcharo

Whilst that is possible and reasonably likely... it is not a certainty.
UE3 was used widely. UE4 might not strike again.

For better or worse, the past is not always indicative of the future. At least not readily so.


----------



## airfathaaaaa

Using UE as an example of DX12 features when they only recently told us they will use async is a bit funny, isn't it?


----------



## mtcn77

Quote:


> Originally Posted by *Glottis*
> 
> UE4 DX12 benchmarks are very important because that's the engine that big majority of games will be using that we all will be playing over the next 5 years.


Personally, I don't care for any of their games, sorry. You inadvertently still use "Unreal" and "DirectX 12" interchangeably, so let me prepare you ahead of time for your journey.
Quote:


> The Rendering Hardware Interface (RHI) now supports asynchronous compute (AsyncCompute) *for Xbox One.*
> ...
> USE_ASYNC_COMPUTE_CONTEXT is defined to *0 on PC*, since it's not supported in D3D11.1.


----------



## p4inkill3r

Quote:


> Originally Posted by *airfathaaaaa*
> 
> using ue as an example of dx12 features that only recently told us they will use async is a bit funny isnt it?


Not when you're trying to make a point.


----------



## airfathaaaaa

Quote:


> Originally Posted by *p4inkill3r*
> 
> Not when you're trying to make a point.


But it is, since even my grandmother knows this engine favors Nvidia; it's practically an Nvidia-sponsored engine.


----------



## Glottis

Quote:


> Originally Posted by *airfathaaaaa*
> 
> but it is since even my grandmother knows this engine favors nvidia its practicly a nvidia sponsored engine


Ashes has a big AMD logo on its official website, and the devs of this game speak at AMD panels at every gaming conference they attend and promote AMD products. So your point?


----------



## PontiacGTX

Quote:


> Originally Posted by *Glottis*
> 
> Ashes has big AMD logo on official website and devs of this game speak in AMD panel of every gaming conference they attend and promote AMD products. So your point?


Only that UE4 is totally biased from its roots, while Nitrous put out the first benchmark where Nvidia was faster (Star Swarm). And even though AMD was ahead of Nvidia in the first AotS DX12 benchmark, Oxide still allowed Nvidia to work with them to optimize the game for their side...









Where are the devs that used Nvidia GameWorks optimizing AMD tessellation performance?


----------



## airfathaaaaa

Quote:


> Originally Posted by *Glottis*
> 
> Ashes has big AMD logo on official website and devs of this game speak in AMD panel of every gaming conference they attend and promote AMD products. So your point?


The point is we are talking about an engine that is fully compliant with DX12 versus an engine that is DX11.3-compliant, misses most of the key DX12 features, and incorporates only the ones that actually benefit a certain company (in UE3 they even had a menu where you could pick the Nvidia branches from a drop-down... I assume it's the same in UE4), and that only on PC...


----------



## Glottis

just like always, more AMD glorifying and Nvidia demonizing.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Glottis*
> 
> just like always, more AMD glorifying and Nvidia demonizing.


gee i wonder why

http://www.geek.com/games/futuremark-confirms-nvidia-is-cheating-in-benchmark-553361/
http://www.fudzilla.com/news/graphics/8769-nvidia-cheats-in-hd-hqv-test
http://www.overclock.net/t/465934/inq-nvidias-big-dishonesty
They even tried to call out AMD for using their own version of HDR instead of Nvidia's, which was locked down in certain Xbox 360 ports (ring any bells? GameWorks?)
http://www.pcauthority.com.au/Feature/232215,ati-cheating-benchmarks-and-degrading-game-quality-says-nvidia.aspx/7


----------



## mtcn77

Thread derail squad,
Please, no more appeals to motives.


----------



## looniam

Quote:


> Originally Posted by *airfathaaaaa*
> 
> gee i wonder why
> 
> http://www.geek.com/games/futuremark-confirms-nvidia-is-cheating-in-benchmark-553361/
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> http://www.fudzilla.com/news/graphics/8769-nvidia-cheats-in-hd-hqv-test
> http://www.overclock.net/t/465934/inq-nvidias-big-dishonesty
> they even tried to call out amd for using their own version of hdr instead of the nvidia's one that was locked down on certain xbox 360 ports (ring any bells gameworks?)
> http://www.pcauthority.com.au/Feature/232215,ati-cheating-benchmarks-and-degrading-game-quality-says-nvidia.aspx/7


that tells only HALF the story:
Futuremark Caught NVIDIA *and ATI Technologies* On Cheating In 3DMark03.

and (back then) ATI's first 7000 series drivers raised more than a few eyebrows with IQ. Bottom line is _both companies_ have had their hands caught in the cookie jar over how they represent their products for years, and only n00bs still argue about it.








Quote:


> Originally Posted by *mtcn77*
> 
> Thread derail squad,
> Please, no more appeals to motives.


for once i wholeheartedly agree with you.


----------



## mtcn77

Quote:


> Originally Posted by *looniam*
> 
> for once i wholeheartedly agree with you.


Our feelings are not mutual. I'm tired of getting injunctions for your transference issues.
Quote:


> Originally Posted by *looniam*
> 
> and (still then) ATI's first 7000 series drivers raised more than a few eye brows with IQ.


Afaik, IQ is a tricky proposition with Radeon drivers. Since AMD has better anisotropic filtering quality, the implementation is very aggressive in its utilization. I believe I saw it in a RadeonMod guide: the driver's best mipmapping quality setting defaults to "-1 LOD", and hence the shimmering protests are voiced. The driver is actually too hard on itself. The recent layout made a lot of the settings less accessible; however, that can perhaps be worked around with tools such as RadeonMod.
One key point I forgot to mention is that the GCN driver can clamp the LOD level in accordance with user preference, so it might not be as simple as that.


----------



## MonarchX

Quote:


> Originally Posted by *Cyber Locc*
> 
> 1. Because that is one of the main features of DX12.
> 2. We dont know we have access to DX12 games 1 is a complete bust and you have seen AoS, they do both feature Async.
> 3. I dont think so, but it is a very big deal and like sugar said, long since needed.
> 4. Yes
> 5. That is the issue with NVidia, they are placing more work on the CPU by making it do Async.
> 
> 6. The didn't lie, Nvidia cards can do Hardware Async AFAIK, they just cant do it in parallel with Graphics. They can do one or the other but not both at once is how I understood it, I may be wrong however, we need someone more knowledgeable about it then I.
> 
> *I think this may be why we are seeing a miasssive gain with Maxwell and GCN 1.2 together in a rig, they make up for each others faults and give a perfect balance*.


Huh... The scores I saw showed that DirectX 12 is SLOWER on Maxwell than DirectX 11. No gain, only loss... Are you being sarcastic?


----------



## ku4eto

Quote:


> Originally Posted by *MonarchX*
> 
> Huh... The scores I saw showed that DirectX 12 is SLOWER on Maxwell than DirectX 11. No gain, only loss... Are you being sarcastic?


That was tested with Async Compute enabled. With AC on, AMD gains huge FPS over DX11, while nVidia can actually lose. With AC off, AMD gains not so huge FPS over DX11, but nVidia scores better when compared to AC on.


----------



## mtcn77

Context switching takes its toll...


----------



## looniam

Quote:


> Originally Posted by *mtcn77*
> 
> Our feelings are not mutual. I'm tired of getting injunctions for your transference issues.


hey, i have long since tired of your _fishing expeditions_ and unfounded accusations, but nice to see you confess that we can never agree to agree.


----------



## Forceman

Quote:


> Originally Posted by *MonarchX*
> 
> Huh... The scores I saw showed that DirectX 12 is SLOWER on Maxwell than DirectX 11. No gain, only loss... Are you being sarcastic?


I think he's talking about mixed-GPU operation.


----------



## MonarchX

Quote:


> Originally Posted by *ku4eto*
> 
> That was tested with Async Compute enabled. With AC on, AMD gains huge FPS over DX11, while nVidia can actually lose. With AC off, AMD gains not so huge FPS over DX11, but nVidia scores better when compared to AC on.


Oh, so AC CAN be turned off in the game. What does it do for the game actually? Improve graphics?


----------



## MonarchX

Quote:


> Originally Posted by *Forceman*
> 
> I think he's talking about mixed-GPU operation.


Oh. I was talking about Maxwell-alone tests...


----------



## Irked

Quote:


> Originally Posted by *MonarchX*
> 
> Oh, so AC CAN be turned off in the game. What does it do for the game actually? Improve graphics?


Here is a link http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading


----------



## magnek

Quote:


> Originally Posted by *Glottis*
> 
> UE4 DX12 benchmarks are very important because that's the engine that big majority of games will be using that we all will be playing over the next 5 years.


UE4 doesn't natively support multi-GPU and may never provide native support. So if a big majority of games will be using UE4 in the next 5 years, you should tell AMD and nVidia to stop selling CF and SLI.


----------



## Cyber Locc

Quote:


> Originally Posted by *MonarchX*
> 
> Huh... The scores I saw showed that DirectX 12 is SLOWER on Maxwell than DirectX 11. No gain, only loss... Are you being sarcastic?


As has been pointed out, I was talking about using a Maxwell GPU and an AMD GPU in the same rig; you can do that with DX12, and it gets the highest performance.
Quote:


> Originally Posted by *MonarchX*
> 
> Oh, so AC CAN be turned off in the game. What does it do for the game actually? Improve graphics?


We have told you what it does: it allows more utilization of GPU resources that previously just sat there unused. It gives the GPU a major performance boost. Can you live without it? Yeah, and if you have NV atm you will have to. What it will do, however, is make AMD cards that use it more powerful than NV cards, whether they have it turned off or on, until NV adds the hardware support, which will likely be Volta. So if AMD cards can get within 10% of Pascal's DX11 performance, they will beat it in DX12.

What we are seeing here is AMD saying the hell with DX11, working on it on the side, and going full bore on DX12. This may sound dumb, but it really isn't: NV had them in a chokehold in DX11; they were never going to outperform them there. While Nvidia was busy winning DX11 that much more, AMD quietly worked on DX12. That gives them the advantage, and now the chokehold has flipped. It was a very smart play, if you ask me: they knew they'd lost DX11, so they started thinking about the future rather than beating a dead horse.

Depending on when NV finally gets async, AMD will by then have already been using and working with it for years; NV will have a hard time catching up.


----------



## dagget3450

Quote:


> Originally Posted by *magnek*
> 
> UE4 doesn't natively support multi-GPU and may never provide native support. So if a big majority of games will be using UE4 in the next 5 years, you should tell AMD and nVidia to stop selling CF and SLI.


Right on right on!

I play Descent: Underground, and it's based on UE4; another upcoming title that isn't out yet is also talking about going UE4. So this is how I feel: you nailed it. CF and SLI should either be discontinued, or they need to start providing support somehow.


----------



## cowie

Quote:


> Originally Posted by *MonarchX*
> 
> Oh, so AC CAN be turned off in the game. What does it do for the game actually? Improve graphics?


not one bit at all


----------



## MonarchX

Quote:


> Originally Posted by *Cyber Locc*
> 
> As has been pointed out, I was talking about using a Maxwell GPU and a AMD GPU in the same rig, you can do that with DX12 and it gets the highest performance.
> We have told you what it does, it allows more utilization of your GPU that previously just sat there unused. It gives the GPU a major performance boost, can you live without ya, and if you have NV atm you will have to. What it will do however is make AMD cards that use it more powerful, more powerful than NV cards that have it turned off or on, that is until NV add the hardware support. which will likely be Volta, so if AMD cards can get with in 10% of the performance in DX11 on Pascal, they will beat it in DX12.


Then we were talking about entirely different things and you got me further confused...

- Can Async Shaders be turned off in DirectX 12 games or not?
- Are you saying Async Shaders = DirectX 12's most important improvement and is the main performance-gaining driver for all future DirectX 12 games?
- If a DirectX 11 game can be optimized by using the DirectX 12 API without Async Shaders, then would it not run faster on both AMD AND NVidia's current hardware?
- If DirectX 11 version of AoS runs WITHOUT Async Shaders, then WHY DOES IT HAVE TO use Async Shaders in DirectX 12? Why can't this feature be turned off in a game to allow Maxwell cards to run faster using OTHER DirectX 12 features? Hell, why can't DirectX 12 version just be a copy of DirectX 11 version (without Async Shaders), but just utilize CPU better?

DirectX 12 is probably hard to optimize for overall, since it deals with such low levels. Thus developers decided to rely on Async Shaders alone to improve performance, not caring about optimizing other DirectX 12 aspects, like lower CPU overhead and some 5-6 other major performance-improving features. If they can stick it all on Async Shaders, then they technically have a DirectX 12 game. It's like today's Assassin's Creed games that use DirectX 11 features for the sake of it, not because they are actually optimized for it, while games like Rise of the Tomb Raider and Witcher 3 fully utilize DirectX 11 and run exceptionally well for the awesome graphics they provide.

AoS is simply a poorly optimized singular benchmark in pre-beta stage that keeps AMD fans raving, because finally there's some light in that dark tunnel, while NVidia fans were basking in the sun the whole time and finally see a dark cloud coming. I guarantee you that if Maxwell did support Async Shaders and showed better performance gains than AMD cards, then it wouldn't be a big deal, because it would be a common occurrence. Considering how incredibly badly AMD drivers utilize the CPU in DirectX 11 games, once NVidia releases Pascal with Async Shaders support, history will repeat itself once more.

What makes you think NVidia will add Async Shaders support in Volta and Pascal?

Considering all this mess with MS Store owning DirectX 12, I think Vulkan is something to look forward to. Does Vulkan support Async Shaders or similar feature? Does Maxwell support that Vulkan Async Shader-like feature?


----------



## mtcn77

Quote:


> Originally Posted by *MonarchX*
> 
> Then we were talking about entirely different things and you got me further confused...
> 
> - Can Async Shaders be turned off in DirectX 12 games or not?
> - Are you saying Async Shaders = DirectX 12's most important improvement and is the main performance-gaining driver for all future DirectX 12 games?
> - If a DirectX 11 game can be optimized by using DirectX 12 API without Async Shaders, then would it nor run faster on both AMD AND NVidia's current hardware?
> - If DirectX 11 version of AoS runs WITHOUT Async Shaders, then WHY DOES IT HAVE TO use Async Shaders in DirectX 12? Why can't this feature be turned off in a game to allow Maxwell cards to run faster using OTHER DirectX 12 features? Hell, why can't DirectX 12 version just be a copy of DirectX 11 version (without Async Shaders), but just utilize CPU better?
> 
> DirectX 12 is probably overall hard to optimize for since it deals with such low-levels. Thus developers decided to rely on Async Shaders alone to improve performance, not caring about optimizing other DirectX 12 aspects, like lower CPU overhead and some 5-6 other major performance-improving features. If they can stick it all on Async Shaders, then they technically have a DirectX 12 game. Its like today's Assassin's Creed games that use DirectX 11 features for the sake of it, not because they are actually optimized for it. While games like Rise of the Tomb Raider and Witcher 3 fully utilize DirectX 11 and run exceptionally well for the awesome graphics they provide.
> 
> AoS is simply a poorly optimized singular benchmark in pre-beta stage that keeps AMD fans raving because finally there's some light in that dark tunnel, while NVidia fans were basking in the sun the whole time and finally see a dark cloud coming. I guarantee you that if Maxwell did support Async Shaders and showed better performance gains than AMD cards, then it wouldn't be a big deal because it would be a common occurrence. Consideirng how incredibly bad AMD drivers utilize CPU in DirectX 11 games, once NVidia released Pascal with Async Shaders support, the history will repeat itself once more.
> 
> What makes you think NVidia will add Async Shaders support in Volta and Pascal?
> 
> Considering all this mess with MS Store owning DirectX 12, I think Vulkan is something to look forward to. Does Vulkan support Async Shaders or similar feature? Does Maxwell support that Vulkan Async Shader-like feature?


For your ease of mind, I think it is better to refer to Directx 12 as ATi Stream Computing in its scope.


----------



## Catscratch

I think you can still mimic DX11-like behavior with AC in DX12. Nvidia is probably trying to do that with their drivers, unscrambling all that mumbo jumbo into something their architecture can understand. It might not be possible without too much of a performance hit. Tough to comprehend. It could further distinguish vendor-specific games: Nvidia supporting certain DX12 code paths and AMD supporting heavy AC.


----------



## mtcn77

Quote:


> Originally Posted by *Catscratch*
> 
> I think you can still mimic dx11 like behavior with AC in dx12. Nvidia is probably trying to do that with their drivers. Unscrambling all that mumbo jumbo into what their architecture can understand. Might not be possible without too much perf. hit. Tough to comprehend. Could further distinguish vendor specific games; Nvidia supporting certain dx12 coding and AMD supporting heavy AC.


Nvidia cannot use AC: it requires _"context switches"_, and switching between queues takes time. Their GPUs don't run more than _'1 queue'_ at a time, so I don't think they will be interested, since they get penalized the more they switch between queues. It only makes sense if you can alternate between multiple queued workloads.
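The context-switch penalty described above can be illustrated with a toy timeline model. All numbers here are made up for illustration; this is not real GPU behavior or measured data, just arithmetic showing why switching in and out of a single active context scales badly compared to genuinely concurrent queues:

```python
# Toy timeline model (not real GPU code): illustrative numbers only.
# A GPU that can hold only one active context must flush and switch
# between its graphics and compute work; one with multiple hardware
# queues can run both workloads concurrently.

def serialized_with_switches(graphics_ms, compute_chunks_ms, switch_ms):
    """Total frame time when every hop between queues costs a context switch."""
    total = graphics_ms
    for chunk in compute_chunks_ms:
        total += switch_ms + chunk + switch_ms  # switch in, run, switch back
    return total

def concurrent(graphics_ms, compute_chunks_ms):
    """Total frame time when compute overlaps graphics on spare units."""
    return max(graphics_ms, sum(compute_chunks_ms))

graphics = 12.0      # ms of graphics work per frame (hypothetical)
compute = [1.0] * 6  # six 1 ms compute jobs (hypothetical)
switch = 0.1         # ms per context switch (hypothetical)

print(serialized_with_switches(graphics, compute, switch))  # ≈19.2 ms
print(concurrent(graphics, compute))                        # 12.0 ms
```

The more compute jobs you interleave, the more the switch cost compounds in the serialized case, which is the "penalized the more they switch" point in a nutshell.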


----------



## Forceman

Async compute is an optional part of DX12.

It is possible to turn it on and off as long as the game supports that (as AotS does).

Nvidia can run without async compute and still get the other benefits on DX12 (like lower overhead and more draw calls).

There are other optional parts of DX12, like conservative rasterization, that Nvidia supports in hardware and AMD does not.

It's yet to be seen what kind of impact/usage those features will get, although it would stand to reason that Nvidia sponsored titles will probably use more of them, and less of async compute. Unless, of course, Pascal has hardware support for async compute, in which case all bets are off.
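The "lower overhead and more draw calls" benefit, which applies even with async compute off, can be sketched with a toy CPU-budget model. The per-draw costs below are invented for illustration, not measurements of any real driver:

```python
# Toy CPU-budget model: how many draw calls fit in a 60 fps frame
# if each draw costs a fixed amount of driver CPU time. The per-draw
# costs below are hypothetical, chosen only to show the shape of the
# argument: a ~10x cheaper submission path raises the draw-call
# ceiling by ~10x.

FRAME_BUDGET_US = 16_667  # ~16.67 ms of CPU time per frame at 60 fps

def max_draw_calls(per_draw_cost_us, frame_budget_us=FRAME_BUDGET_US):
    """Draw calls that fit in the frame budget at a given per-draw cost."""
    return frame_budget_us // per_draw_cost_us

dx11_cost = 40  # µs of driver overhead per draw (hypothetical)
dx12_cost = 4   # µs per draw with recorded command lists (hypothetical)

print(max_draw_calls(dx11_cost))  # 416 draws/frame
print(max_draw_calls(dx12_cost))  # 4166 draws/frame
```

This is why a unit-heavy game like AotS leans on DX12 regardless of the async question: the draw-call ceiling moves, not the GPU's shading speed.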


----------



## BlitzWulf

Quote:


> Originally Posted by *MonarchX*
> 
> Then we were talking about entirely different things and you got me further confused...
> 
> - Can Async Shaders be turned off in DirectX 12 games or not?
> *That depends on the developer, Oxide has an option to turn them off, as far as i know gears of war does not have the option to disable Async compute.*
> 
> - Are you saying Async Shaders = DirectX 12's most important improvement and is the main performance-gaining driver for all future DirectX 12 games?
> 
> *Nobody has said that but it is free performance that is wasted otherwise so developers seem to have taken an almost universal interest in this feature.*
> 
> - If a DirectX 11 game can be optimized by using DirectX 12 API without Async Shaders, then would it nor run faster on both AMD AND NVidia's current hardware?
> 
> *No. Async shaders are by their nature a performance-enhancing feature. They are the same shaders used by the non-async codepath, but modified to run asynchronously. Their function as shaders does not change; only the order in which they can be executed, and the amount of time it takes to execute them, is affected. (Oversimplified)*
> 
> _- If DirectX 11 version of AoS runs WITHOUT Async Shaders, then WHY DOES IT HAVE TO use Async Shaders in DirectX 12? Why can't this feature be turned off in a game to allow Maxwell cards to run faster using OTHER DirectX 12 features? Hell, why can't DirectX 12 version just be a copy of DirectX 11 version (without Async Shaders), but just utilize CPU better?
> _
> 
> *It doesn't HAVE to; you can turn them off. Developers use them because they give free performance on hardware that supports it, and if Nvidia would come clean about whether or not they truly support this feature, then I'm sure developers would turn it off to maximize performance for their cards. But that would take Nvidia publicly confirming that they don't truly support the hot new feature of the newest API in a significant way, even though they claimed they did support it.*
> 
> _DirectX 12 is probably overall hard to optimize for since it deals with such low-levels. Thus developers decided to rely on Async Shaders alone to improve performance, not caring about optimizing other DirectX 12 aspects, like lower CPU overhead and some 5-6 other major performance-improving features. If they can stick it all on Async Shaders, then they technically have a DirectX 12 game. Its like today's Assassin's Creed games that use DirectX 11 features for the sake of it, not because they are actually optimized for it. While games like Rise of the Tomb Raider and Witcher 3 fully utilize DirectX 11 and run exceptionally well for the awesome graphics they provide._
> 
> *You are speculating wildly on things you dont understand*.
> 
> AoS is simply a poorly optimized singular benchmark in pre-beta stage that keeps AMD fans raving because finally there's some light in that dark tunnel, while NVidia fans were basking in the sun the whole time and finally see a dark cloud coming. I guarantee you that if Maxwell did support Async Shaders and showed better performance gains than AMD cards, then it wouldn't be a big deal because it would be a common occurrence. Considering how incredibly bad AMD drivers utilize CPU in DirectX 11 games, once NVidia released Pascal with Async Shaders support, the history will repeat itself once more.
> 
> *That's an emotional argument and extremely speculative; it holds no weight.*
> 
> What makes you think NVidia will add Async Shaders support in Volta and Pascal?
> 
> *They would be insane not to, and they already claim that their current GPUs support it.*
> 
> Considering all this mess with MS Store owning DirectX 12, I think Vulkan is something to look forward to. Does Vulkan support Async Shaders or similar feature? Does Maxwell support that Vulkan Async Shader-like feature?
> 
> *Yes, it supports async.*


Edit: I misread your last question. Vulkan does indeed include async compute. Truthfully, the only people who know whether or not Maxwell supports any kind of async shading in DX12 or Vulkan are Nvidia themselves. AMD has publicly stated that only the GCN architecture supports async shaders.

I attempted to address your questions.
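The "free performance" idea above, compute work soaking up shader units that sit idle during parts of the graphics frame, can be sketched as a toy occupancy model. The idle fraction and timings are hypothetical, purely to show how overlap differs from running compute after the frame:

```python
# Toy occupancy model: during a frame, graphics work leaves some
# fraction of the shader ALUs idle (e.g. during shadow or depth
# passes). Async compute fills that idle capacity instead of running
# after the frame. All numbers are hypothetical.

def frame_time_serial(graphics_ms, compute_ms):
    """Compute runs after graphics: the times simply add."""
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_ms, idle_fraction):
    """Compute first soaks up idle ALU time inside the graphics frame;
    any leftover compute spills past the end of the frame."""
    free_capacity = graphics_ms * idle_fraction
    spill = max(0.0, compute_ms - free_capacity)
    return graphics_ms + spill

g, c = 14.0, 4.0  # ms of graphics and compute work (hypothetical)
print(frame_time_serial(g, c))       # 18.0 ms
print(frame_time_async(g, c, 0.30))  # 14.0 ms: all compute fit the gaps
print(frame_time_async(g, c, 0.10))  # ≈16.6 ms: only 1.4 ms of gaps
```

The gain is "free" only to the extent the architecture actually exposes those gaps to a second queue, which is exactly what the Maxwell-versus-GCN argument in this thread is about.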


----------



## STEvil

Quote:


> Originally Posted by *cowie*
> 
> not one bit at all


False.

Improved performance allows the use of higher graphics settings, or perceived increased fluidity due to higher framerate; thus AC can "increase" graphical quality.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> Async compute is an optional part of DX12.
> 
> It is possible to turn it on and off as long as the game supports that (as AotS does).
> 
> Nvidia can run without async compute and still get the other benefits on DX12 (like lower overhead and more draw calls).
> 
> There are other optional parts of DX12, like conservative rasterization, that Nvidia supports in hardware and AMD does not.
> 
> It's yet to be seen what kind of impact/usage those features will get, although it would stand to reason that Nvidia sponsored titles will probably use more of them, and less of async compute. Unless, of course, Pascal has hardware support for async compute, in which case all bets are off.


Okay, however, you are missing something: we already have a large list of games that are going to support DX12. You know how many of those games (there are like 10, I think) don't have async? 2 or 3, lol.

Here is something you have to realize: VR is huge and benefits greatly from async, and most of our games are coded for consoles first, consoles that use async and AMD hardware. The games will be ported and will still use async. AMD won big time with the consoles, and that will become apparent as time goes on.
Quote:


> Originally Posted by *MonarchX*
> 
> Then we were talking about entirely different things and you got me further confused...
> 
> - Can Async Shaders be turned off in DirectX 12 games or not?
> - Are you saying Async Shaders = DirectX 12's most important improvement and is the main performance-gaining driver for all future DirectX 12 games?
> - If a DirectX 11 game can be optimized by using DirectX 12 API without Async Shaders, then would it nor run faster on both AMD AND NVidia's current hardware?
> - If DirectX 11 version of AoS runs WITHOUT Async Shaders, then WHY DOES IT HAVE TO use Async Shaders in DirectX 12? Why can't this feature be turned off in a game to allow Maxwell cards to run faster using OTHER DirectX 12 features? Hell, why can't DirectX 12 version just be a copy of DirectX 11 version (without Async Shaders), but just utilize CPU better?
> 
> DirectX 12 is probably overall hard to optimize for since it deals with such low-levels. Thus developers decided to rely on Async Shaders alone to improve performance, not caring about optimizing other DirectX 12 aspects, like lower CPU overhead and some 5-6 other major performance-improving features. If they can stick it all on Async Shaders, then they technically have a DirectX 12 game. Its like today's Assassin's Creed games that use DirectX 11 features for the sake of it, not because they are actually optimized for it. While games like Rise of the Tomb Raider and Witcher 3 fully utilize DirectX 11 and run exceptionally well for the awesome graphics they provide.
> 
> AoS is simply a poorly optimized singular benchmark in pre-beta stage that keeps AMD fans raving because finally there's some light in that dark tunnel, while NVidia fans were basking in the sun the whole time and finally see a dark cloud coming. I guarantee you that if Maxwell did support Async Shaders and showed better performance gains than AMD cards, then it wouldn't be a big deal because it would be a common occurrence. Consideirng how incredibly bad AMD drivers utilize CPU in DirectX 11 games, once NVidia released Pascal with Async Shaders support, the history will repeat itself once more.
> 
> What makes you think NVidia will add Async Shaders support in Volta and Pascal?
> 
> Considering all this mess with MS Store owning DirectX 12, I think Vulkan is something to look forward to. Does Vulkan support Async Shaders or similar feature? Does Maxwell support that Vulkan Async Shader-like feature?


"The history will repeat itself once more." How long have you been in the PC industry, lol? What history will repeat itself? It has always been up and down: AMD is on top, then NV is. For the past few years it has been pretty much NV on top; the 290X was the only time that was not true, but then NV fans love to say "no, 980 Tis smash 290Xs" and completely miss the fact that those cards are not competing. The 780 Ti got handled by the 290X, and more so with every new driver. Before that, NV was on top with the 600 series (and through most of the 700 series). Then we go back to the 500 series, which was winning; the 400 series was destroyed by AMD, who were the winners all around back then. Should I go on, or do you get the point and see how saying "history will repeat, NV always wins" makes utterly no sense and is straight fanboy logic?

NV for the most part won the DX11 era; however, most of the DX9 era belonged to AMD. So yeah, history will repeat itself: it's AMD's turn to be on top, which they will be for DX12.

"What makes you think NVidia will add Async Shaders support in Volta and Pascal?"
Okay, look above at the graph that was posted. Async increases GPU performance massively, and it is also extremely important and needed for VR. NV would be stupid not to support it; they are going to have to.

"Considering all this mess with MS Store owning DirectX 12, I think Vulkan is something to look forward to. Does Vulkan support Async Shaders or similar feature? Does Maxwell support that Vulkan Async Shader-like feature?"

There seems to be some confusion here: they support async right now, however they use the CPU to do it. They do not have hardware support for async, and that is why their performance is lower. Maxwell cards will never have HW async; Pascal may, and if not, then most likely Volta will. What does this mean for a Maxwell user? It means your card will be borderline worthless within 2 years and won't perform as well for VR right now. Not many people hold on to video cards for long; the ones that do usually go AMD, as they age much better. The issue is how much advancement AMD can make while NV plays catch-up, and how much market share they can take back during the time of no async support.

Here, watch this, man; it should help you understand.




So after watching that, if you still don't understand: any game that offers async support will show AMD having massive FPS gains compared to NV. In layman's terms, NV gets smashed into the ground.

The reason we keep saying that Pascal may not have it is that it requires a complete rework of the way NV makes their GPUs; they didn't plan for this, and their GPUs are not designed that way.


----------



## airfathaaaaa

And the fact that, for the first time in 10 years, Nvidia is virtually silent about one of their cards just adds to that. They probably thought they would be able to hold the industry for 1-2 years in order to fine-tune Volta; now the fact that their updated roadmap showed Pascal almost side by side with Maxwell 2.0 isn't really helpful.


----------



## Charcharo

And here I am seeing most AMD cards win DX10 and 11 against their competitors already...
I guess I am blind or something...









As for the game, it seems reasonably optimized for what is happening in it.
Nvidia will improve things with drivers. They just won't improve THAT much.


----------



## Devnant

Yep
Quote:


> Originally Posted by *Charcharo*
> 
> And here I am seeing most AMD cards win DX10 and 11 against their competitors already...
> I guess I am blind or something...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> As for the game, it seems reasonably optimized for what is happening in it.
> Nvidia will improve things with drivers. Just wont improve THAT much.


As a 980 Ti owner, I say GOOD! Competition benefits us all, and AMD needs market share. GPU prices are way too high!

Polaris stomping Pascal would also be a huge benefit for the community as a whole, and that might happen if Pascal doesn't have hardware support for async shaders.


----------



## Assirra

Quote:


> Originally Posted by *Devnant*
> 
> Yep
> As a 980 TI owner, I say GOOD! Competition benefits us all, and AMD needs market share. GPU prices are way too high!
> 
> Polaris stomping Pascal would also be a huge benefit for the community as a whole, and that might happen if Pascal doesn't have hardware support for async shaders.


GPU prices won't go down even if Polaris stomps the crap out of Pascal.


----------



## mtcn77

Quote:


> Originally Posted by *Assirra*
> 
> GPU prices won't go down even if Polaris stomps the crap out of Pascal.


As in saying Nvidia won't be affected by another HD4870.


----------



## NightAntilli

Quote:


> Originally Posted by *MonarchX*
> 
> Yeah, but it is still ******ed IMHO. Maxwell supports most DirectX 12 features and it's considered a DirectX 12.1 card, except for Async Shaders. DirectX 12 is a low-level API and was expected to provide performance improvements to practically all recent video cards. Now all of a sudden that's all crap, and Maxwell, with a single missing hardware feature, Async Shaders, performs worse across DirectX 12 despite all of the improvements.
> 
> - If the same exact graphics in AoS can be rendered with DirectX 11 without the use of Async Shaders, then why on Earth can't these Async Shaders be skipped entirely with DirectX 12 on NVidia cards?
> - Does Maxwell perform worse only in DirectX 12 games that use the Async Shader feature, or in ALL DirectX 12 games?
> - Is Async Shaders the only major feature that makes DirectX 12 faster? It seems developers are very fond of it.
> - What about all the other DirectX 12 features? Are developers even using them?
> - What about DirectX 12 utilizing CPUs better? That ALONE should improve DirectX 12 performance on NVidia cards.
> 
> NVidia actually promised partial Async Shader hardware support, and I doubt they have integrated it yet, considering it all gets offloaded to the CPU. They also never really planned for Maxwell to be a DirectX 12 card. Pascal will be the DirectX 12 card, and they planned it that way, as it forces people to upgrade. The degree of performance difference between Maxwell and Pascal will be at least 2-3x the degree of difference between Kepler and Maxwell, which will set a new standard, and those with the Maxwell architecture will be in a bigger crapper performance-wise than today's Kepler owners. It's just business, good business (for NVidia that is).


You're misinformed.

There's no such thing as DirectX 12.1. There's a difference between feature levels and DirectX versions. nVidia marketing has got you good.

To put things into perspective, we have the following:

GCN 1.0 supports FL11_1
GCN 1.1 supports FL12_0
GCN 1.2 supports FL12_0

Fermi supports FL11_0
Kepler supports FL11_0
Maxwell supports FL11_0
Maxwell 2 supports FL12_1 (sort of)

Note that GCN 1.1 and GCN 1.2 support FL12_0. This means that since 2013, AMD has had GPUs on the market supporting pretty much all DX12 features. On top of that, they support the majority of these features on the highest tiers, including the 'hidden' 11_2 feature level. They are only missing conservative rasterization and ROV. Compare that to nVidia. Maxwell, which was released in 2014, was still only capable of FL11_0! In fact, AMD's GCN 1.0 from 2011 ranks higher in feature level support than 2014's Maxwell... That means you still really don't have to upgrade a card from 2011 in terms of features, while nVidia's GTX 900 series is pretty much already outdated. Let that sink in for a moment... And yet, people STILL invest in nVidia. It makes no sense whatsoever.

nVidia and Microsoft for some reason made a deal to make conservative rasterization and ROV a separate feature level. This allowed nVidia to do some marketing of DX12.1 which doesn't exist, and to pretend that their cards are superior to AMD's. But in fact, these features are not really that revolutionary nor that important. The other features that are part of FL12_0 are in fact way more important, and believe it or not, AMD is better at most of them.

- AMD is still a higher tier in resource binding, even GCN 1.0, which is from 2011.
- Stencil reference value from the pixel shader is still only supported by GCN cards, again starting from GCN 1.0.
- All GCN cards have the full heap available for UAV slots for all stages; Maxwell 2 cards are limited to 64.
- GCN 1.2 is the only architecture (even among GCN) that has minimum float precision. Maxwell 2 doesn't have it at all.
- Lastly, the obvious one. GCN 1.0 cards have two asynchronous compute engines (ACEs) with two queues per unit (a total of 4), which allow concurrent execution of graphics + compute. Maxwell 2 still can't do this, since it is limited by its required preemption context switch. It can do asynchronous compute, but it can't do concurrent graphics + compute. GCN doesn't have this limit, since no context switch is required. GCN 1.1 increased the ACE count from two to eight compared to GCN 1.0, and the queues from 2 to 8 per unit (a total of 64). And this isn't even being used properly yet; developers are starting to experiment with it. We're gonna see what Hitman will bring to the table, as that one is supposedly the best one yet.
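The queue arithmetic in that last point works out as a quick sketch (a toy calculation using only the figures quoted above; not tied to any real driver API):

```python
# Total async compute queues = engines x queues per engine,
# using the ACE/queue figures from the post above.
def total_queues(engines, queues_per_engine):
    return engines * queues_per_engine

gcn_1_0_queues = total_queues(engines=2, queues_per_engine=2)  # GCN 1.0
gcn_1_1_queues = total_queues(engines=8, queues_per_engine=8)  # GCN 1.1
print(gcn_1_0_queues, gcn_1_1_queues)
```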

The GCN architecture is one of the best that has ever been designed in terms of longevity and being future-proof. It has supported so many features since 2011 that are still not being used nowadays. It has already been stated that Polaris is a revamped GCN again. No surprise, for the reasons mentioned above. And it will iron out the biggest weaknesses of the current GCN cards. As for your questions:

- That's what they're doing. Async is by default disabled on nVidia hardware in Ashes of the Singularity.
- As of now, all DX12 benchmarks put nVidia's performance the same as DX11 or slightly worse.
- Async is indeed the main feature that increases the efficiency of GPU usage and thus increases performance. Most other features are nice-to-haves for certain graphical effects, but will not boost performance, only visuals.
- Most other DX12 features are not widely used yet, speaking in terms of GPU acceleration. Most of them have been implemented through the CPU in a game one time or another in the past.
- Unlike AMD's cards, nVidia cards are generally not CPU bottlenecked under DX11. They are pretty much already at their max capabilities.

To elaborate on nVidia's async... nVidia requires a context switch. To explain what that means, I have to highlight some other things first so that you understand what's going on.
When people are talking about AMD's async compute, what they actually mean is that graphics/shader tasks and compute tasks are processed in parallel, AND at the same time they are processed in a 'random' order. The latter part is not exactly accurate, but it makes it easy to understand. Some tasks are long and some are short, and what I mean by processing in this random order is that you can basically insert the short tasks in between the long tasks, preventing the GPU from idling. The GCN compute units can handle all the long + short graphics/shader tasks AND the long + short compute tasks mixed within each other like a soup. This blending pushes the efficiency of the GPU very high.

nVidia's hardware cannot do this in the same way. It can handle either the mixing of long and short graphics/shader tasks, OR the mixing of long and short compute tasks. This is what we mean when we say it requires a context switch: you have to keep switching between graphics/shader and compute tasks. This is obviously less efficient than AMD's hardware solution. And yes, being able to blend the short and long graphics/shader tasks is more efficient than doing them in order; same for compute. But a context switch is costly. If you're doing async graphics now, you basically have to throw out your whole bowl of graphics soup to create a new compute soup, and this causes delay. What you gain by running the graphics/shader soup and the compute soup separately in an asynchronous manner is lost by having to switch between them.

Obviously, nVidia is claiming it can do async, and they would not be lying, but it's completely different from AMD's, and, well, it's borderline useless for performance gains. And they will not be admitting this, because they advertised their graphics cards as superior for DX12, due to the DX12.1 thing. And they would get some backlash if it turns out that some graphics cards from 2011 do some things better under DX12 than their "DX12.1" 2015 cards.
I hope this clarifies some things for you.
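The scheduling difference described above can be illustrated with a toy model (all task durations and the switch cost are made-up numbers, not real GPU timings): filling idle slots from both queues at once finishes sooner than draining one queue, paying a context switch, then draining the other.

```python
# Toy "soup" scheduler: tasks are durations, the GPU is a set of
# parallel slots, and each task goes to the least-loaded slot.

def makespan(tasks, slots):
    """Greedy longest-first scheduling; returns total completion time."""
    loads = [0] * slots
    for t in sorted(tasks, reverse=True):
        loads[loads.index(min(loads))] += t
    return max(loads)

def concurrent(graphics, compute, slots):
    # GCN-style: graphics and compute tasks share the slots freely.
    return makespan(graphics + compute, slots)

def switched(graphics, compute, slots, switch_cost):
    # Context-switch style: drain graphics, switch, then drain compute.
    return makespan(graphics, slots) + switch_cost + makespan(compute, slots)

graphics = [8, 1, 6, 2, 7, 1]   # hypothetical task durations
compute = [5, 2, 4, 1]

print(concurrent(graphics, compute, slots=4))
print(switched(graphics, compute, slots=4, switch_cost=2))
```

With these invented numbers the blended schedule finishes noticeably earlier than the switched one, which is the whole point of the soup analogy.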


----------



## MonarchX

OK
- So DirectX 12 games CAN be made with both Async Shaders enabled and disabled to fit either current NVidia or current AMD cards' best interests in terms of performance?
- Wasn't DirectX 12 supposed to even help Kepler cards once WDDM 2.0 drivers came out? It doesn't support the same features as Maxwell.
- All in all, DirectX 12 was some kind of a lie about its ability to decrease CPU overhead and improve performance on CURRENT hardware. It obviously makes performance worse in many cases.


----------



## mtcn77

Quote:


> Originally Posted by *MonarchX*
> 
> OK
> - So DirectX 12 games CAN be made with both Async Shaders enabled and disabled to fit either current NVidia or current AMD cards' best interests in terms of performance?
> - Wasn't DirectX 12 supposed to even help Kepler cards once WDDM 2.0 drivers came out? It doesn't support the same features as Maxwell.
> - All in all, *DirectX 12 was some kind of a lie* about its ability to decrease CPU overhead and improve performance on CURRENT hardware. It obviously makes performance worse in many cases.


How so - when compliant hardware DOES exhibit less cpu utilization and more gpu utilization?

This isn't even the end of the road for DirectX 11. Draw calls aren't as high as the DirectX 11 limit (which is around 20,000 per frame, afaik).
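As a back-of-envelope check on that figure (assuming the ~20,000-per-frame number above and a 60 fps target, both purely for illustration):

```python
# Rough draw-call submission rate the CPU must sustain per second.
draw_calls_per_frame = 20_000  # approximate DX11 practical limit cited above
target_fps = 60                # assumed frame rate
calls_per_second = draw_calls_per_frame * target_fps
print(calls_per_second)
```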


----------



## Themisseble

Quote:


> Originally Posted by *mtcn77*
> 
> How so - when compliant hardware DOES exhibit less cpu utilization and more gpu utilization?
> 
> This isn't even the end of the road for Directx 11. Draw calls aren't as high as Directx 11 limit (which is around 20,000 per frame, afaik).


How can a GTX 960 be more CPU-bound than a GTX 980 Ti?


----------



## mtcn77

Quote:


> Originally Posted by *Themisseble*
> 
> How can GTX 960 be more CPU bound than GTX 980TI.


It would be pure speculation if I answered that.


----------



## Themisseble

Quote:


> Originally Posted by *mtcn77*
> 
> It would be pure speculation if I answered that.


I don't believe that the GTX 960 or GTX 980 Ti are being bottlenecked by the CPU at 4K.


----------



## sugarhell

Quote:


> Originally Posted by *Themisseble*
> 
> I dont belive that GTX 960 or GTX 980TI are being bottlenecked by CPU at 4K.


Should I choose your speculation, or the graphs with real data?


----------



## Themisseble

Quote:


> Originally Posted by *sugarhell*
> 
> Should i choose your speculation or the graphs with real data?


Yep, because sometimes in-game benchmarks show that my GPU is bottlenecked by the CPU... but MSI Afterburner shows 99% GPU usage all the time.

So no, I don't believe that the GTX cards were bottlenecked by the CPU at 4K. Tom's Hardware did this test wrong... or they simply lied.


----------



## mtcn77

Quote:


> Originally Posted by *Themisseble*
> 
> *Yep*, because sometimes in-game benchmarks show that my GPU is bottlenecked by the CPU... but MSI Afterburner shows 99% GPU usage all the time.


Good call.


----------



## Cyber Locc

Quote:


> Originally Posted by *MonarchX*
> 
> OK
> - So DirectX 12 games CAN be made with both Async Shaders enabled and disabled to fit either current NVidia or current AMD cards' best interests in terms of performance?
> - Wasn't DirectX 12 supposed to even help Kepler cards once WDDM 2.0 drivers came out? It doesn't support the same features as Maxwell.
> *- All in all, DirectX 12 was some kind of a lie about its ability to decrease CPU overhead and improve performance on CURRENT hardware.* *It obviously makes performance worse in many cases.*


No, there were no lies, only NV being stupid and not supporting the features that make DX12 great, features they knew were coming. AMD GPUs have supported async for years; they designed Mantle to use it, and DX12 was designed based off Mantle. Async is AMD's tech, and NV are egotistical and refuse to use anything from AMD, whether it is a standard or not. They are like children: it must be proprietary and it must be theirs, or they ignore its existence.

You are employing fanboy logic and saying that DX12 did something wrong, when in fact this is NV's fault. This is their issue, not MS's, not AMD's, not DX's; this is solely the fault of NV's ego.


----------



## MonarchX

Quote:


> Originally Posted by *Cyber Locc*
> 
> No, there were no lies, only NV being stupid and not supporting the features that make DX12 great, features they knew were coming. AMD GPUs have supported async for years; they designed Mantle to use it, and DX12 was designed based off Mantle. Async is AMD's tech, and NV are egotistical and refuse to use anything from AMD, whether it is a standard or not. They are like children: it must be proprietary and it must be theirs, or they ignore its existence.
> 
> You are employing fanboy logic and saying that DX12 did something wrong, when in fact this is NV's fault. This is their issue, not MS's, not AMD's, not DX's; this is solely the fault of NV's ego.


So MS designed DirectX 12 off AMD's specific technology, Async Shaders? That's quite biased, don't you think? Besides, how can NVidia even integrate Async Shader technology into their future cards if that technology is proprietary and belongs to AMD?

They say Vulkan is based off Mantle too, but NVidia decided to very much support it.


----------



## mtcn77

Quote:


> Originally Posted by *MonarchX*
> 
> So MS designed DirectX 12 off AMD's specific technology, Async Shaders? That's quite biased, don't you think? Besides, how can NVidia even integrate Async Shader technology into their future cards if that technology is proprietary and belongs to AMD?
> 
> They say Vulkan is based off Mantle too, but *NVidia* decided to very much support it.


It isn't Nvidia's; it's Khronos'.


----------



## Cyber Locc

Quote:


> Originally Posted by *MonarchX*
> 
> So MS designed DirectX 12 off AMD's specific technology, Async Shaders? That's quite biased, don't you think? Besides, how can NVidia even integrate Async Shader technology into their future cards if that technology is proprietary and belongs to AMD?
> 
> They say Vulkan is based off Mantle too, but NVidia decided to very much support it.


Shakes head lol.

Okay, yes, async was AMD's idea, but it is not proprietary. AMD hasn't released anything proprietary pretty much ever; TressFX, Mantle, FreeSync, all of it was offered to NV for their use. The people that make proprietary tech are NV.

When NV was offered Mantle, they said get lost, we don't need it and we don't want it. As I said, their cards are good; as a company they are like spoiled children. They have been given everything they need from AMD to use async and any other AMD tech, and they refuse to do so.

Quote:


> Originally Posted by *mtcn77*
> 
> It isn't Nvidia's, Khronos'.


He was saying that NV decided to support Vulkan even though it was based on AMD tech/code. They will do the same for async once they're backed into a corner, and the same with FreeSync.


----------



## Forceman

Quote:


> Originally Posted by *sugarhell*
> 
> Should i choose your speculation or the graphs with real data?


Both the 960 and 390X spend more time waiting for the CPU in that chart than the much faster cards, which indicates to me that either the testing was wrong, or it isn't showing what Tom's/people think it is showing.
Quote:


> Originally Posted by *Cyber Locc*
> 
> Shakes head lol.
> 
> Okay, yes, async was AMD's idea, but it is not proprietary. AMD hasn't released anything proprietary pretty much ever; TressFX, Mantle, FreeSync, all of it was offered to NV for their use. The people that make proprietary tech are NV.
> 
> When NV was offered Mantle, they said get lost, we don't need it and we don't want it. As I said, their cards are good; as a company they are like spoiled children. They have been given everything they need from AMD to use async and any other AMD tech, and they refuse to do so.
> He was saying that NV decided to support Vulkan even though it was based on AMD tech/code. They will do the same for async once they're backed into a corner, and the same with FreeSync.


By the time Mantle came out Maxwell was already out, and it's not like they can graft async compute capabilities onto an existing chip.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> Both the 960 and 390X spend more time waiting for the CPU in that chart than the much faster cards, which indicates to me that either the testing was wrong, or it isn't showing what Tom's/people think it is showing.


Well, that's because the 970 is the clear winner, the, how do you say, God Card, if you will.


----------



## sugarhell

Quote:


> Originally Posted by *MonarchX*
> 
> So MS designed DirectX 12 off AMD's specific technology, Async Shaders? That's quite biased, don't you think? Besides, how can NVidia even integrate Async Shader technology into their future cards if that technology is proprietary and belongs to AMD?
> 
> They say Vulkan is based off Mantle too, but NVidia decided to very much support it.


Wait wait wait.

Async Shaders is the name that AMD uses.

The DX12 name is multi-engine (multiple command queues).

Also, the Xbox uses async shaders.

The common factor? Both use AMD.

If you think that Microsoft didn't develop DX12 with GCN (or the Xbox, call it whatever you want) in mind, you are crazy.

And why is it biased? AMD has supported it since the first GCN GPU. Nvidia had kind of the same pipeline with Fermi, but they decided to drop it for a simpler command processor with Kepler onwards. So is it biased that the software actually caught up with the hardware, and that Nvidia, because of strategy choices, can't support it?


----------



## Cyro999

Quote:


> Originally Posted by *mtcn77*
> 
> This isn't even the end of the road for Directx 11. Draw calls aren't as high as Directx 11 limit (which is around 20,000 per frame, afaik).


Ashes has scenes in the benchmark with 20k per frame, even at 1920x1080 and low settings. That runs at ~70fps for me with a 6700K and a 980 on DX11.


----------



## Cyber Locc

Quote:


> Originally Posted by *sugarhell*
> 
> WAit wait wait.
> 
> Async shaders is the name that AMD use.
> 
> Dx12 name is multi channel queue.
> 
> Also xbox use Async shaders.
> 
> Common place? Both use amd
> 
> If you think that microsoft didint develop dx12 with gcn(or xbox call it whatever you want) in mind you are crazy.
> 
> And why it is biased? Amd support it from the first gcn gpu. Nvidia had kinda the same pipeline with fermi but they decied to drop it for a more simple Command processor with kepler and after. So it is biased that software actually catch up with the hardware and nvidia because of strategy choices cant support it?


Well, like the video I posted, multi-threading and async are slightly different. As you stated, NV's way was different from async as it is now. Their way sucks and AMD's is good, but it isn't proprietary; I don't know where he came up with that.

It kind of shows the entire thing though: AMD chose async way back, when it caused issues with DX11, so they were losing; NV's approach won in DX11, but in DX12 the tables flip.

Completely agree on the point though, this is 100% NV's fault, no one else's. Their need to be Apple 2.0 is putting them in a very bad spot once again. They also knew full well this was coming, and did they add async? No, because they're a stuck-up child of a company.


----------



## cowie

Quote:


> Originally Posted by *sugarhell*
> 
> WAit wait wait.
> 
> Async shaders is the name that AMD use.
> 
> Dx12 name is multi channel queue.
> 
> Also xbox use Async shaders.
> 
> Common place? Both use amd
> 
> If you think that microsoft didint develop dx12 with gcn(or xbox call it whatever you want) in mind you are crazy.
> 
> And why it is biased? Amd support it from the first gcn gpu. Nvidia had kinda the same pipeline with fermi but they decied to drop it for a more simple Command processor with kepler and after. So it is biased that software actually catch up with the hardware and nvidia because of strategy choices cant support it?


You have valid points, but you had to know that with the AMD consoles, MS would be looking to take advantage by getting PC games to run better on that hardware. For all we know, it's MS that got DX12 running better with what was given to them; they knew to push DX12 because they had no chance in DX11, and if game makers continued to use that, it would be worse off for them.

But of course it's just me shooting the breeze, and we'll see if the game makers buck the W10 trend and rebel, or just suck it up and go with AMD/MS.

I still feel this might just be the reason you see more lower-powered cards and hardly any high-end cards this year on the new nodes.


----------



## sugarhell

Quote:


> Originally Posted by *Forceman*
> 
> Both the 960 and 390X spend more time waiting for the CPU in that chart than the much faster cards, which indicates to me that either the testing was wrong, or it isn't showing what Tom's/people think it is showing.
> By the time Mantle came out Maxwell was already out, and it's not like they can graft async compute capabilities onto an existing chip.


You got it wrong. With DX12 it's quite difficult for the driver thread or the rendering thread to be the bottleneck anymore.

A faster GPU will have lower CPU usage because it has more shaders (depending on the scheduling).

If the GPU has a lot of shaders, it can schedule all the big shaders from the CPU, or all the short shaders, together. With async it can do both together.

A smaller GPU with a lower shader/TMU count asks the CPU for a smaller amount of work, but more frequently. That causes a higher demand for CPU draw calls.
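That last point can be sketched with a hypothetical model (all numbers invented): if each submission to the GPU costs the CPU a fixed overhead, a GPU that consumes larger batches needs fewer submissions for the same total work.

```python
# Hypothetical model: fixed CPU cost per work submission to the GPU.
def cpu_submit_overhead(total_work, batch_size, cost_per_submit):
    submissions = -(-total_work // batch_size)  # ceiling division
    return submissions * cost_per_submit

# A bigger GPU drains larger batches, so it bothers the CPU less often.
big_gpu = cpu_submit_overhead(total_work=10_000, batch_size=500, cost_per_submit=1.0)
small_gpu = cpu_submit_overhead(total_work=10_000, batch_size=100, cost_per_submit=1.0)
print(big_gpu, small_gpu)
```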


----------



## Cyber Locc

Quote:


> Originally Posted by *cowie*
> 
> you have valid points but you had to know that with the amd consoles ms would be looking to take advantage of getting the pc games to run better on that hardware. for all we know its ms that got dx12 running better with what was given them they knew to push dx12 because they had no chance in dx11 and if game makers continued to use that it would be worse off for them.
> 
> but of coarse its just me shooting the breeze and *will see if the game makers buck the w10 trend and rebel or just suck it up and go with amd/ms.*
> 
> I still feel this is might just be the reason you see more lower powered cards and hardly any highend cards this year on the new nodes


See, this is the biggest problem in our industry. I love PC gaming and hate consoles. However, the reality is there are more console gamers than PC gamers; we are the minority, and Xbox pays the bills around here for developers.

Also, with their new ecosystem of Xbox games on PC, they only have to code the game once, which is easier on them. And games that were once limited to Xbox now have more people to purchase them, with the growing PC market. If you think this won't be embraced by devs, well then, you have a case of denial.

The future is here; the lines between consoles and PCs are blurring, and that is good for everyone: them, us, devs, everyone. The icing on the cake will be if Sony also jumps to using DX12, as they are an x86 console now as well. I said it before and I will say it again: everyone teased AMD about getting the console contracts and how that wouldn't keep them alive. I call bull; that was a sneaky, methodical move that will put them on top.

I also think that Xbox will see a large performance increase from this, which will result in PS wanting the same, in which case async gets pushed further, and MS wins by licensing DX12 to PS (or they use Vulkan and a Windows platform for a store). That just increases async's adoption and inches AMD that much higher.

All that said, their CPUs are still bad IMO, and I am glad they are finally breaking the two apart; I've been thinking they should do that for years.


----------



## Forceman

Quote:


> Originally Posted by *sugarhell*
> 
> You got it wrong. WIth dx12 its quite difficult the driver thread or the rendering thread to be the bottleneck anymore.
> 
> A faster gpu will have lower cpu usage because it has more shaders.(depends the sheduling)
> 
> If the gpu has a lot of shaders, it can shedule all the big shaders from the cpu or all the shorts shaders together. With async it can do both together
> 
> With a smaller gpu with fewer amount of shaders/tmu it asks all the time the cpu for a smaller amount of work but more frequently. That cause a higher demand for cpu draw calls.


That makes sense, but then you look at something like these charts and wonder why the 980 Ti and 390X are so different at 1440p (which may just be an anomaly or a bad data point, but it bugs me that they made no attempt to explain it). And why is Fury so much higher at 1080p?



And then here, which according to the description shows the 960 as GPU limited, even though the other chart seems to show it spending the most time in CPU (implying it is CPU limited).


----------



## sugarhell

Quote:


> Originally Posted by *Forceman*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> That makes sense, but then you look at something like these charts and wonder why the 980 Ti and 390X are so different at 1440p. And why the Fury is so much higher at 1080p.
> 
> 
> 
> And then here, which according to the description shows the 960 as GPU limited, even though the other chart seems to show it spending the most time in CPU (implying it is CPU limited).
> 
> 
> 
> 


I don't know about this, but I am sure that the drivers and the DX12 implementation are in an early phase.

Going up in resolution should increase the CPU usage. The 1440p results, together with the 4K ones, are kind of strange. Only the GCN 1.3 GPUs are performing as I expect.

Also, I am not sure whether the present time in the last graph is actually what they say it is, or whether the data is wrong. It doesn't make sense.

Two reasons: either AMD drivers are kind of inefficient for the 390X, or the present time means something else.


----------



## cowie

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well see this is the biggest problem in our industry, I love PC gaming, and hate consoles. However reality is there is more console gamers than PC, We are the minority Xbox pays the bills round here for developers.
> 
> Also with there new Ecosystem of Xbox games on PC they only have to code the game once, which is easier on them. And games that once were limited to Xbox now have more people to purchase them with the growing PC market. If you think that this wont be embraced by Devs well then, you have a case of Denial
> 
> 
> 
> 
> 
> 
> 
> .
> 
> The future is here the lines of consoles and PCs are blurring and that is good for everyone, them, us, devs, Everyone. The true candle on the cake will be if Sony also jumps to using DX12 as they are an X86 console now as well. I said it before I will say it again, everyone teased AMD about getting the console contracts and how that wont keep them alive, I call bull, that was a sneaky methodical move that will put them on top.
> 
> I also think that Xbox will see a large performance increase on this, which will result in PS wanting the same. In which Async gets further pushed and MS wins by leasing DX12 to PS (or they use Vulkan, and a windows platform for a store). Which just increases Asyncs adaption and inches AMD that much higher.
> 
> All that said there CPUs are still bad IMO, and I am glad they are finally breaking the 2 apart, I been thinking they should do that for years.


Oh no, I understand 100% what you are saying, but consoles, as always, are where the money is.

On the consoles, AMD does not get rich, because unlike NV they don't get paid to license the chips; they just sell them and are done, and they pretty much lowballed themselves. But no one really knows how much they get.


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> That makes sense, but then you look at something like these charts and wonder why the 980 Ti and 390X are so different at 1440p (which may just be an anomaly or a bad data point, but it bugs me that they made no attempt to explain it). And why is Fury so much higher at 1080p?
> 
> 
> 
> And then here, which according to the description shows the 960 as GPU limited, even though the other chart seems to show it spending the most time in CPU (implying it is CPU limited).


Fences.

Sometimes there are dependencies, meaning that the graphics and/or compute work waiting to be executed is blocked by a fence. This fence is put in place because the work cannot continue until the GPU is done processing previous work, work whose result is needed by the work being blocked by the fence.

You also have synchronization, where a graphics and a compute job must be synchronized (again controlled by a fence) in order to achieve the desired graphics effect(s).

Since Maxwell's pipeline is relatively straightforward, you generally don't see much of a difference between the various GPUs, meaning their behaviour under CPU- and/or GPU-bottlenecked situations is predictable.

With GCN, the wide variety of compute queues, ROPs etc. going from GCN 1.1 to 1.2 models produces rather unpredictable behaviours, and these behaviours are entirely dependent on the compute:graphics ratio as well as the compute queues.

Take Fiji: it has 4 ACEs (not 8) and two HWSs (hardware schedulers) while being GCN 1.2, whereas Tonga has only ACEs, but more compute queues than the ACEs in GCN 1.1.

So the end result is unpredictable behaviour.
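The fence behaviour described above can be sketched with a minimal stand-in (a toy model, not the actual D3D12 fence API): dependent work only proceeds once the fence has reached the value signalled by the producing queue.

```python
# Minimal fence model: a monotonically increasing counter that gates
# dependent work, mirroring how a compute job can wait on graphics output.
class Fence:
    def __init__(self):
        self.value = 0

    def signal(self, value):
        self.value = max(self.value, value)

    def completed(self, wait_value):
        return self.value >= wait_value

fence = Fence()
log = []

log.append("graphics: render pass done")   # producer finishes its work
fence.signal(1)                            # and signals the fence

if fence.completed(1):                     # consumer's dependency is met
    log.append("compute: post-process the result")

print(log)
```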


----------



## Cyber Locc

Quote:


> Originally Posted by *cowie*
> 
> oh no I understand 100% what you are saying but consoles as always has been where the money is.
> 
> on the consoles amd does not get rich on them because unlike nv they don't get paid to license the chips just sell them and done and pretty much low balled themselves but no one really knows how much they get.


Oh, I didn't mean they will make money off the consoles. I am sure they got one lump-sum price, or pennies for every console sold. What I meant is that they sold them console tech in order to sell them async, which in turn gives their GPUs a serious boost, and next it will be something to make their CPUs stronger, which MS will eat up if it helps XB sales. That's why I said that, at face value, putting the hardware in consoles meant nothing, but now we see they were able to push DX12 to their advantage with it. It was a sneaky move; they have more leverage in the community now. It was kind of underhanded, and I do not think it will end with DX12.

Most people look at it like you did: consoles don't make them much money, and that's true. However, you are not seeing the bigger picture of the leverage they now possess. I honestly believe that in the case of DX12 this is exactly what we are seeing.

AMD: "You should make a DX12 that benefits our hardware and not NV's." Microsoft: "We really didn't have any plans for a new DX." AMD: "You could also use it on consoles and outperform Sony, so it's a win-win." Microsoft: "Haha, that is a good idea..."

Seriously, I think DX12 was trickery by AMD; they are showing that they are playing for keeps. IMO DX12 = GameWorks on steroids: make it heavily benefit you, then make it a standard. That is a pro move right there.


----------



## NightAntilli

Quote:


> Originally Posted by *Mahigan*
> 
> Fences,
> 
> Sometimes there are dependencies, meaning that the graphics and/or compute work waiting to be executed is blocked by a fence. This fence is put in place because the work cannot continue until the GPU is done processing previous work, work whose result is needed by the work being blocked by the fence.
> 
> You also have synchronization, where a graphics and a compute job must be synchronized (again controlled by a fence) in order to achieve the desired graphics effect(s).
> 
> Since Maxwell's pipeline is relatively straightforward, you generally don't see much of a difference between various GPUs. Meaning their behaviour, under CPU and/or GPU bottlenecked situations, is predictable.
> 
> With GCN, the wide variety of compute queues, ROPs etc. going from GCN 1.1 to 1.2 models produces rather unpredictable behaviours, and these behaviours are entirely dependent on the compute:graphics ratio as well as the compute queues.
> 
> Take Fiji: it has 4 ACEs (not 8) plus two HWSs (hardware compute schedulers) while being GCN 1.2, whereas Tonga has only ACEs, but with more compute queues than the ACEs in GCN 1.1.
> 
> So the end result is unpredictable behaviour.


I still haven't found any info on Fiji actually having 4 ACEs rather than 8. Everywhere I looked it has 8. Did I miss something?


----------



## Neb9

This is apparently DX12, and it has no FPS cap for me?

https://www.youtube.com/watch?v=fJUqxNUMxus

Is there any way to check which DX version it is running?


----------



## MonarchX

OK, now it kind of makes sense...


----------



## mtcn77

Quote:


> Originally Posted by *NightAntilli*
> 
> I still haven't found any info on Fiji actually having 4 rather than 8. Any place I looked it has 8. Did I miss something?




You're not alone. The 4-ACE figure was associated with Nano, so I guess the earlier chart from the Fury X launch was flawed.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> Both the 960 and 390X spend more time waiting for the CPU in that chart than the much faster cards, which indicates to me that either the testing was wrong, or it isn't showing what Tom's/people think it is showing.
> By the time Mantle came out Maxwell was already out, and it's not like they can graft async compute capabilities onto an existing chip.


I just saw this in my email, so sorry for the late reply. Mantle was released in March of 2015, but NV long knew about it, as it was announced in 2013. They knew full well that Mantle would use async; they had no desire to use it. Intel did, however.

It was in NV's best interest to plan ahead of time: after AMD's announcement, they should have begun working on Mantle, as they knew full well about it. They didn't, as they did not want to support it.

In either case, I concur that once Mantle was released it was too late. As such it is also too late for Pascal, as it was taped out shortly after Mantle's release. So Volta is where to be looking for async on NV.

However, at that point AMD should have a firm grasp on async, and NV will be starting fresh with Volta; it will take them a few years to catch up, I would assume.


----------



## Kollock

Quote:


> Originally Posted by *Forceman*
> 
> That makes sense, but then you look at something like these charts and wonder why the 980 Ti and 390X are so different at 1440p (which may just be an anomaly or a bad data point, but it bugs me that they made no attempt to explain it). And why is Fury so much higher at 1080p?
> 
> 
> 
> And then here, which according to the description shows the 960 as GPU limited, even though the other chart seems to show it spending the most time in CPU (implying it is CPU limited).


Quote:


> Originally Posted by *Mahigan*
> 
> Fences,
> 
> Sometimes there are dependencies, meaning that the graphics and/or compute work waiting to be executed is blocked by a fence. This fence is put in place because the work cannot continue until the GPU is done processing previous work, work whose result is needed by the work being blocked by the fence.
> 
> You also have synchronization, where a graphics and a compute job must be synchronized (again controlled by a fence) in order to achieve the desired graphics effect(s).
> 
> Since Maxwell's pipeline is relatively straightforward, you generally don't see much of a difference between various GPUs. Meaning their behaviour, under CPU and/or GPU bottlenecked situations, is predictable.
> 
> With GCN, the wide variety of compute queues, ROPs etc. going from GCN 1.1 to 1.2 models produces rather unpredictable behaviours, and these behaviours are entirely dependent on the compute:graphics ratio as well as the compute queues.
> 
> Take Fiji: it has 4 ACEs (not 8) plus two HWSs (hardware compute schedulers) while being GCN 1.2, whereas Tonga has only ACEs, but with more compute queues than the ACEs in GCN 1.1.
> 
> So the end result is unpredictable behaviour.


Just FYI, I wouldn't put too much stock in the %CPU-bound indicators in the current benchmark - it looks to us that Windows is blocking where it says it isn't, which means we get false positives. We're trying to fix it, but not sure when that rolls out. The timings themselves should be accurate (e.g. the ms in CPU vs. driver, etc.), but you sort of see this weird pattern where our CPU time is way lower than the GPU time yet we are logging the frame as CPU bound. So the percent-GPU-bound number itself is likely inaccurate.


----------



## Cyber Locc

Quote:


> Originally Posted by *Kollock*
> 
> Just FYI, I wouldn't put too much stock in the %CPU-bound indicators in the current benchmark - it looks to us that Windows is blocking where it says it isn't, which means we get false positives. We're trying to fix it, but not sure when that rolls out. The timings themselves should be accurate (e.g. the ms in CPU vs. driver, etc.), but you sort of see this weird pattern where our CPU time is way lower than the GPU time yet we are logging the frame as CPU bound. So the percent-GPU-bound number itself is likely inaccurate.


OMG, word from the man himself! Such a rare treat these days to see people like you speak out. Super awesome sauce.


----------



## Kollock

Quote:


> Originally Posted by *Cyber Locc*
> 
> OMG, word from the man himself! Such a rare treat these days to see people like you speak out. Super awesome sauce.


We try.

Honestly, the present timing is sort of an interesting thing. What it usually indicates is some sort of GPU block happening on the refresh; we put tracking in to see if we could capture it. It doesn't mean anything's wrong, it's just information on where a block might be occurring.


----------



## Mahigan

Quote:


> Originally Posted by *NightAntilli*
> 
> I still haven't found any info on Fiji actually having 4 rather than 8. Any place I looked it has 8. Did I miss something?


I asked AMD this question and they wouldn't comment further. We know Fiji has HWSs, but we're not sure what they do differently from ACEs. They appear to be fused ACEs (128 queues), but there's no information on their execution capabilities.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> I just saw this in my email, so sorry for the late reply. Mantle was released in March of 2015, but NV long knew about it, as it was announced in 2013. They knew full well that Mantle would use async; they had no desire to use it. Intel did, however.
> 
> It was in NV's best interest to plan ahead of time: after AMD's announcement, they should have begun working on Mantle, as they knew full well about it. They didn't, as they did not want to support it.
> 
> In either case, I concur that once Mantle was released it was too late. As such it is also too late for Pascal, as it was taped out shortly after Mantle's release. So Volta is where to be looking for async on NV.
> 
> However, at that point AMD should have a firm grasp on async, and NV will be starting fresh with Volta; it will take them a few years to catch up, I would assume.


Not sure about the March 2015 date, but Mantle was announced in conjunction with Hawaii in Sept 2013, and that was the date I was referencing. I was thinking Kepler for some reason in my original comment, but the first Maxwell card (the 750 Ti) launched in February 2014, so it was still too late to make wholesale architecture changes.

As for Pascal, Nvidia certainly knew about async compute before March 2015 since they were in the DX12 working groups where it would have been discussed (MS publicly announced DX12 in March 2014). It might still have been too late even then, but I guess we'll see in a few months.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> Not sure about the March 2015 date, but Mantle was announced in conjunction with Hawaii in Sept 2013, and that was the date I was referencing. I was thinking Kepler for some reason in my original comment, but the first Maxwell card (the 750 Ti) launched in February 2014, so it was still too late to make wholesale architecture changes.
> 
> As for Pascal, Nvidia certainly knew about async compute before March 2015 since they were in the DX12 working groups where it would have been discussed (MS publicly announced DX12 in March 2014). It might still have been too late even then, but I guess we'll see in a few months.


The code for Mantle was publicly released in March 2015; before that they said "it's open, you can have it, just let us finish it." In March 2015 it was "finished", and conveniently I think it died around there as well.


----------



## pengs

Quote:


> Originally Posted by *Cyber Locc*
> 
> The code for Mantle was publicly released in March 2015; before that they said "it's open, you can have it, just let us finish it." In March 2015 it was "finished", and conveniently I think it died around there as well.


Yeah, you're spot on IMO.
One could imagine this was part of an agreement between Microsoft and AMD for the use of AMD's asset: it would benefit Microsoft to develop its own branch beyond what Mantle offered at that moment and to be rid of an API that might have competed with DirectX. It also benefits AMD, as it entrenches PC gaming in its architecture, along with the other obvious benefits.
Quote:


> Originally Posted by *Forceman*
> 
> As for Pascal, Nvidia certainly knew about async compute before March 2015 since they were in the DX12 working groups where it would have been discussed (MS publicly announced DX12 in March 2014). It might still have been too late even then, but I guess we'll see in a few months.


I remember it too, and specifically AMD in conferences with Microsoft in the first quarter of 2014 (one year before) about Mantle. Pascal would have been a near-finished product at that point - asynchronous shading is an architectural feature which would have been designed in at Pascal's conception. Pascal will not be the answer.
Quote:


> Quote:
> 
> 
> 
> Originally Posted by *cowie*
> 
> oh no I understand 100% what you are saying but consoles as always has been where the money is.
> 
> on the consoles amd does not get rich on them because unlike nv they don't get paid to license the chips just sell them and done and pretty much low balled themselves but no one really knows how much they get.
> 
> 
> 
> Quote:
> 
> 
> 
> Originally Posted by *cowie*
> 
> you have valid points but you had to know that with the amd consoles ms would be looking to take advantage of getting the pc games to run better on that hardware. for all we know its ms that got dx12 running better with what was given them they knew to push dx12 because they had no chance in dx11 and if game makers continued to use that it would be worse off for them.
> 

The basics are all you need for the situation to be self-explanatory:

Microsoft's console uses GCN.

It would benefit Microsoft to use AMD's API:
To help market Windows 10 - not primarily, but that's a great byproduct
For ease of cross-platforming the Xbox One with the PC
To bring its exclusives and partnerships into the PC space, and...
To maintain Windows as the gaming platform for the PC

It would benefit AMD to have Microsoft use its API:
To have a substantial graphical and performance edge in the dGPU market
To have triple-A games tailored from the ground up to its architecture, running on a favorable API, and...
To quietly entrench gaming itself in its own ecosystem

Quote:


> Originally Posted by *MonarchX*
> 
> So MS designed DirectX 12 off AMD's specific technology, Async Shaders? That's quite biased, don't you think? Besides, how can NVidia even integrate Async Shader technology into their future cards if that technology is proprietary and belongs to AMD?
> 
> They say Vulkan is based off Mantle too, but NVidia decided to very much support it.


The technology belongs to nobody; it's essentially a method of executing and transporting data. NVIDIA might have been able to squeeze it into Pascal at that time, but they didn't - granted, there was no evidence of its use at that moment, and it was probably too far into Pascal's planning.
There is a reason why NVIDIA is pushing Vulkan, no doubt, but it has nothing to do with ASC. It probably has something to do with rasterizer ordered views and conservative rasterization, the FL12_1 features.

AMD isn't really sneaky, as someone said earlier; they just made the correct moves. There is no way that a company as large or as smart as NVIDIA lacks the situational awareness in its field to make such wrong choices out of ignorance, so it's definitely by choice. NVIDIA knows that exclusivity is endgame, as does Apple, the only difference being that Apple pulled it off a lot more gracefully and were the only ones marketing technology in such a way.

These are very interesting times, with all of the massive moves being made by the vendors and within the industry itself.


----------



## Tobiman

Quote:


> Originally Posted by *Mahigan*
> 
> I asked this question to AMD and they wouldn't comment further. We know Fiji has HWSs but we're not sure about what they do differently than ACEs. They appear to be fused ACEs (128 queues) but there's no information on their execution capabilities.


According to the ExtremeTech article, one HWS equals two ACEs.


----------



## mav451

I would not characterize Apple as 'graceful,' only that the court of public opinion has held them in that regard. I remember Apple's G5 ads before their Intel switch.

Apple deliberately misled consumers then and they'll do it again and again if it's still working.


----------



## superstition222

Quote:


> Corporations deliberately mislead consumers and they'll do it again and again if it's still working.


fixed


----------



## Cyber Locc

Quote:


> Originally Posted by *mav451*
> 
> I would not characterize Apple as 'graceful,' only that court of public opinion has held them in that regard. I remember Apple's G5 ads before their Intel switch.
> 
> Apple deliberately misled consumers then and they'll do it again and again if it's still working.


Since the G5? Lol, try since the Lisa, hahaha. Steve Jobs was a straight con artist by his own admission; he stated himself, "People don't know what they want; they're stupid. I tell them what they want." The sad part is he was right about the greater part of society.

Don't get me wrong, Apple has made some tech breakthroughs that the world benefits from, but mostly it's just theft and lies. I do have to give him props though; he was a darn good hustler. Let's see if they can find another to fill his shoes.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> Since the G5 lol, try since LISA, hahaha.


Don't go there unless you want a smackdown. Lisa was amazing (although I will grant it was saddled with a few dumb moves, like in-house floppy drives and a 5 MHz clock). But, aside from the clockspeed, even the floppy drives were an attempt at innovation. They were the first variable rate drives in the market as far as I know and crammed a lot of capacity onto a disk, vastly more than IBM PC drives could. I think it was 860K vs. 360K for the IBM.

Yes, Jobs was a con artist. That's why he was a successful businessman.

It was probably taking Jobs off the Lisa project that made it a failure in the market.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> Don't go there unless you want a smackdown. Lisa was amazing (although I will grant it was saddled with a few dumb moves, like in-house floppy drives and a 5 MHz clock). But, aside from the clockspeed, even the floppy drives were an attempt at innovation. They were the first variable rate drives in the market as far as I know and crammed a lot of capacity onto a disk, vastly more than IBM PC drives could. I think it was 860K vs. 360K for the IBM.
> 
> *Yes, Jobs was a con artist. That's why he was a successful businessman.*
> 
> It was probably taking Jobs off the Lisa project that made it a failure in the market.


I agree 100% with the bold; that's why Apple thrives the way it does.

I also agree taking Jobs off of Lisa didn't help, as back then PCs had little use. No one really needed the Lisa; it was Jobs's job to make them want it anyway.

That is why I say NV tries to be Apple, but without a silver tongue it will never happen, and they simply lack that. Their marketing team is very good, no doubt, but it's no Jobs.


----------



## GorillaSceptre

...


----------



## NightAntilli

Quote:


> Originally Posted by *Kollock*
> 
> We try.
> 
> Honestly, the present timing is sort of an interesting thing. What it usually indicates is some sort of GPU block happening on the refresh; we put tracking in to see if we could capture it. It doesn't mean anything's wrong, it's just information on where a block might be occurring.


Last year you posted on here that you spent maybe 5 days optimizing for async. Have you spent more time optimizing it for AMD hardware between then and now?


----------



## Kollock

Quote:


> Originally Posted by *NightAntilli*
> 
> Last year you posted on here that you spent maybe 5 days optimizing for async. Have you spent more time optimizing it for AMD hardware between then and now?


We don't 'optimize' for it per se; we detangled dependencies in our scene so it can execute in parallel. Thus, I wouldn't say we optimized or built around it - we just moved some of the rendering work to compute and scheduled it to co-execute. Since we aren't a console title, we're not really tuning it like someone might on an Xbox One or PS4. However, console guys I've talked to think that a 20% increase in perf is about the range expected for good use on a console anyway.
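The arithmetic behind a figure like that ~20% is easy to sketch. The numbers below are made up for illustration, not Oxide's actual frame timings:

```python
# Toy frame-time model: on a single queue, graphics and compute work add up;
# with async compute, the compute work hides inside the graphics queue's
# idle gaps instead (perfect overlap assumed here, which real GPUs never hit).
graphics_ms = 20.0  # per-frame graphics-queue work (illustrative)
compute_ms = 4.0    # work detangled and moved to a compute queue (illustrative)

serial_frame = graphics_ms + compute_ms     # one queue: 24 ms per frame
async_frame = max(graphics_ms, compute_ms)  # co-executing: 20 ms per frame

gain = serial_frame / async_frame - 1.0
print(f"ideal async gain: {gain:.0%}")      # upper bound, not a measurement
```

Real gains are lower than this ideal because the two workloads contend for the same ALUs and bandwidth, and dependencies (the fences discussed above) limit how much can actually overlap.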


----------



## gamervivek

Quote:


> Originally Posted by *Mahigan*
> 
> I asked this question to AMD and they wouldn't comment further. We know Fiji has HWSs but we're not sure about what they do differently than ACEs. They appear to be fused ACEs (128 queues) but there's no information on their execution capabilities.


The HWSs are supposedly for hardware virtualization features, something Tonga-based FirePro cards are the first to have.

https://forum.beyond3d.com/posts/1896122/


----------



## airfathaaaaa

Quote:


> Originally Posted by *Kollock*
> 
> We don't 'optimize' for it per se; we detangled dependencies in our scene so it can execute in parallel. Thus, I wouldn't say we optimized or built around it - we just moved some of the rendering work to compute and scheduled it to co-execute. Since we aren't a console title, we're not really tuning it like someone might on an Xbox One or PS4. However, console guys I've talked to think that a 20% increase in perf is about the range expected for good use on a console anyway.


Have you seen this?
https://twitter.com/PellyNV/status/702556025816125440

I guess their PR team has once again started a dirt rally?


----------



## looniam

Quote:


> Published on Mar 1, 2016
> Wir haben die Grafikqualität in DirectX 12 zwischen einer AMD Radeon R9 Fury X und einer NVIDIA GeForce GTX 980 Ti verglichen. Auflösung war jeweils 1.920 x 1.080 Pixel im Extreme-Preset. Die Asynchronous Shaders sollten die Möglichkeit bieten, zusätzliche Effekte anzuwenden.


*translated*
We compared the graphic quality in DirectX 12 between an AMD Radeon R9 Fury X and an NVIDIA GeForce GTX 980 Ti. Resolution was respectively 1,920 x 1,080 pixels in the Extreme preset. The Asynchronous shaders should provide the opportunity to apply additional effects.


----------



## ku4eto

Quote:


> Originally Posted by *airfathaaaaa*
> 
> have you seen this?
> https://twitter.com/PellyNV/status/702556025816125440
> 
> i guess their pr team has once again started a dirt rally?


Heh, a nice way to get themselves sued. This is slander, I believe...


----------



## Tivan

Quote:


> Originally Posted by *looniam*
> 
> 
> 
> 
> 
> *translated*
> We compared the graphic quality in DirectX 12 between an AMD Radeon R9 Fury X and an NVIDIA GeForce GTX 980 Ti. Resolution was respectively 1,920 x 1,080 pixels in the Extreme preset. The Asynchronous shaders should provide the opportunity to apply additional effects.


As the Oxide guy stated on the forum, they have since disabled that effect, which previously was only on one of the two cards, as they were actually working on a better version of it. It seems it's not in yet, as there are no trails on aircraft units and only a little illumination from rockets.


----------



## MonarchX

It seems Maxwell will lose in DirectX 12, regardless of NVidia's exclusivity deals, due to async shader popularity in early games. Maybe if they release both DirectX 11 and DirectX 12 versions...

I didn't want to start a new thread, but will DirectX 12 and/or Vulkan improve VRAM utilization compared to DirectX 11? My 4GB GTX 980 is done as far as using Ultra settings at 1080p without any AA goes. Rise of the Tomb Raider FPS goes down as soon as I select Ultra textures, which take 6GB+ VRAM on GTX 980 Ti and GTX Titan X cards. I was hoping DirectX 12 would improve buffering or compression or whatever to make it easier on 4GB cards...


----------



## Mahigan

Quote:


> Originally Posted by *MonarchX*
> 
> It seems Maxwell will lose in DirectX 12, regardless of NVidia's exclusivity due to Async Shader popularity in early games. Maybe if they release DirectX 11 and DirectX 12 versions...
> 
> I didn't want to start a new thread, but *will DirectX 12 and/or Vulkan improve VRAM utilization compared to DirectX 11? My 4GB GTX 980 is done as far using Ultra settings at 1080p without any AA*. Rise of the Tomb Raider FPS goes down as soon as I select Ultra textures, which take 6GB+ VRAM on GTX 980 Ti and GTX Titan X cards. I was hoping DirectX 12 will improve buffering or compression or whatever to make it easier on 4GB cards...


That's all on NVIDIA and their driver team. AMD has two engineers dedicated to that task for their Fiji series; NVIDIA appears not to have any engineers dedicated to it. 4GB is enough if you utilize it wisely via the drivers.


----------



## MonarchX

Quote:


> Originally Posted by *Mahigan*
> 
> That's all on NVIDIA and their driver team. AMD has two engineers dedicated to the task for their Fiji series. NVIDIA appear to not have any dedicated engineers to that task. 4GB is enough if you utilize it wisely via the drivers.


It's not about drivers but about game programming. There is not much NVidia or AMD can do when a game is un-optimized. I was hoping DirectX 12 would introduce a very lean approach where only the exact pixels and textures shown are being processed, while another lean background process buffers the rest of the graphics to provide very smooth streaming without texture pop-ins, but also without heavy VRAM utilization. It would be like a highly optimized JIT (Just In Time) setting. They say even DirectX 9 had that, let alone DirectX 11... Maybe it can be optimized more in DirectX 12 and Vulkan?

Let's take Rise of the Tomb Raider, for example, with the Very High Textures setting (an in-game message strongly suggests using Very High textures only on 6GB+ VRAM cards):
- Case 1: 6-7GB VRAM utilization on GTX 980 Ti and GTX Titan X. AFAIK there isn't much difference in performance between the High and Very High Textures settings.
- Case 2: 4GB VRAM utilization on GTX 980 (obviously due to the 4GB VRAM limit). There's a HUGE difference (up to 15fps) in performance between the High and Very High Textures settings.

In both cases the same textures are shown (no pop-ins), and although the GTX 980 greatly suffers from the Very High Textures setting, it still manages to show the exact same textures as the GTX 980 Ti and GTX Titan X per frame. That means 4GB is plenty to display all the necessary textures on screen at the settings tested. Considering the severe FPS hit the 4GB GTX 980 takes from the Very High Textures setting, there is obviously intense background processing occurring on 4GB cards because they cannot buffer as much as 6GB+ cards, which do not take such a severe FPS hit from the same setting.

I am not a very knowledgeable person on all this, but that is what makes sense to me. I think in Rise of the Tomb Raider, 4GB cards cannot store the necessary amount of textures to be processed per single cycle... Or I am completely wrong.
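That thrashing effect can be sketched with a crude LRU model - a toy, nothing like a real driver's residency heuristics, with made-up texture sizes. When the per-frame working set is bigger than the VRAM budget, every frame re-streams everything; when it fits, textures are uploaded once:

```python
from collections import OrderedDict

def frame_uploads(requests, budget_mb, cache):
    """Touch each texture once; count how many had to be (re)uploaded over PCIe."""
    uploads = 0
    for tex, size in requests:
        if tex in cache:
            cache.move_to_end(tex)      # LRU refresh: mark as recently used
            continue
        uploads += 1                    # cache miss: stream the texture back in
        cache[tex] = size
        while sum(cache.values()) > budget_mb:
            cache.popitem(last=False)   # evict the least recently used texture
    return uploads

# a 5 GB working set of "Very High" textures, touched in the same order every frame
working_set = [(f"tex{i}", 500) for i in range(10)]

big_cache, small_cache = OrderedDict(), OrderedDict()
big_total = small_total = 0
for frame in range(3):
    big_total += frame_uploads(working_set, 6000, big_cache)      # "6 GB card"
    small_total += frame_uploads(working_set, 4000, small_cache)  # "4 GB card"

print(big_total, small_total)  # 10 vs 30: the 4 GB card re-streams everything, every frame
```

The "6 GB card" pays for the uploads once; the "4 GB card" misses on every request because sequentially scanning a set larger than the cache defeats LRU, which is one plausible mechanism for the steady FPS hit rather than occasional pop-in.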


----------



## Olivon

Quote:


> Originally Posted by *Mahigan*
> 
> That's all on NVIDIA and their driver team. *AMD has two engineers dedicated to the task for their Fiji series*. NVIDIA appear to not have any dedicated engineers to that task. 4GB is enough if you utilize it wisely via the drivers.


Waow, really impressive! What a strike force!
Don't worry about NVIDIA's driver team, dude; their DX11 domination all along shows they're quite talented.


----------



## inedenimadam

Quote:


> Originally Posted by *MonarchX*
> 
> It seems Maxwell will lose in DirectX 12, regardless of NVidia's exclusivity due to Async Shader popularity in early games. Maybe if they release DirectX 11 and DirectX 12 versions...
> 
> I didn't want to start a new thread, but will DirectX 12 and/or Vulkan improve VRAM utilization compared to DirectX 11? My 4GB GTX 980 is done as far using Ultra settings at 1080p without any AA. Rise of the Tomb Raider FPS goes down as soon as I select Ultra textures, which take 6GB+ VRAM on GTX 980 Ti and GTX Titan X cards. I was hoping DirectX 12 will improve buffering or compression or whatever to make it easier on 4GB cards...


Welcome to ports from consoles that have an 8GB buffer to work with.


----------



## KyadCK

Quote:


> Originally Posted by *Olivon*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Mahigan*
> 
> That's all on NVIDIA and their driver team. *AMD has two engineers dedicated to the task for their Fiji series*. NVIDIA appear to not have any dedicated engineers to that task. 4GB is enough if you utilize it wisely via the drivers.
> 
> 
> 
> Waow, really impressive ! What a strike force !
> Don't worry about nVidia drivers team dude, DX11 domination all along show they're quite talented

AMD put two people on the team exclusively for memory management and cleanup, resulting in them using less memory than nVidia at the same settings. At this point in time, nVidia has not dedicated anyone to the task that we are aware of, but the results speak for themselves and they probably should.

Woo, I'm twitter famous.

Speaking of, @Kollock, I was speaking in the theoretical environment where "support" simply means "doesn't die because you told it to do something". I do not obviously have information specific to the inner workings of your game nor how nVidia is handling it in their drivers aside from information available to us by yourself or testing. Just that software support for something is vastly inferior to hardware support.


----------



## Cyber Locc

Well, I sincerely hope that 980 Tis get async.

I know I have been saying they won't, and they probably won't. However, my Rampage died a few weeks ago, RIP. I was planning to replace it with a RVBE, and I still am, so I decided to update my office rig for now; I sold 2 of my 290s and bought a 980 Ti. That will be my gaming rig until after they give me a sweet decacore and RVBE.

Before I get jumped with "why did I do that" or grats by either camp: for my office build, gaming is more of an afterthought compared to Adobe supporting CUDA, plus CAD/rendering/transcoding etc. Also, the office build is mATX, so it will be rad-limited; one card is the most I can cool. My feelings on the gaming front are still the same.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well, I sincerely hope that 980 Tis get async.
> 
> I know I have been saying they won't, and they probably won't. However, my Rampage died a few weeks ago, RIP. I was planning to replace it with a RVBE, and I still am, so I decided to update my office rig for now; I sold 2 of my 290s and bought a 980 Ti. That will be my gaming rig until after they give me a sweet decacore and RVBE.
> 
> Before I get jumped with "why did I do that" or grats by either camp: for my office build, gaming is more of an afterthought compared to Adobe supporting CUDA, plus CAD/rendering/transcoding etc. Also, the office build is mATX, so it will be rad-limited; one card is the most I can cool. My feelings on the gaming front are still the same.


I was going to do the same thing. As much as async will make a difference, the GTX 980 Ti is backed by Nvidia 100% until Pascal. After that we do not know, but expect at least another 6-12 months of full support. I think it has more to do with the Fury X sucking and wanting to go single-GPU. If an Nvidia card performs the same as or better than an AMD card while costing the same, there is no reason to go AMD. There are cards so close to each other that brand plays a part. Nvidia has baked in a lot of features to keep their users from switching over. Right now the share is 80/20, but in reality 40 of those are on the fence: 20 are AMD diehards, 40 are Nvidia diehards, and 40 can go either way. Those 40 have chosen Nvidia for the Maxwell round due to many factors that have nothing to do with performance.


----------



## Forceman

Quote:


> Originally Posted by *KyadCK*
> 
> AMD put two people on the team exclusively for memory management and cleanup, resulting in them using less memory than nVidia at the same settings. At this point in time, nVidia has not dedicated anyone to the task that we are aware of, but the results speak for themselves and they probably should.


Given that the 970 doesn't seem to have any problems with games where the 980 also doesn't, I'd say they already have people working on it. Otherwise you'd see more situations where the 970 tanks before the 980 does. Plus Nvidia has more experience (because of their design choices) working with more VRAM constrained cards, so their memory management may be better just in general.


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> Given that the 970 doesn't seem to have any problems with games where the 980 also doesn't, I'd say they already have people working on it. Otherwise you'd see more situations where the 970 tanks before the 980 does. Plus Nvidia has more experience (because of their design choices) working with more VRAM constrained cards, so their memory management may be better just in general.


"Better" as in more copies for redundancy? Because I see higher memory usage in every gamegpu review.


----------



## Forceman

Quote:


> Originally Posted by *mtcn77*
> 
> "Better" as in more copies for redundancy? Because I see higher memory usage in every gamegpu review.


Generally speaking, empty VRAM is a waste, so using more VRAM isn't a problem as long as you aren't going over. But I'm talking about their experience with the 680 and 780, where they had less VRAM to use than AMD.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Forceman*
> 
> Given that the 970 doesn't seem to have any problems with games where the 980 also doesn't, I'd say they already have people working on it. Otherwise you'd see more situations where the 970 tanks before the 980 does. Plus Nvidia has more experience (because of their design choices) working with more VRAM constrained cards, so their memory management may be better just in general.


Yes but how long will GTX970 remain like that? I have a feeling GTX970 will tank if Nvidia does not optimize for it.


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> Generally speaking, empty VRAM is a waste, so using more VRAM isn't a problem as long as you aren't going over. But I'm talking about their experience with the 680 and 780, where they had less VRAM to use than AMD.


So you are saying Nvidia did it because they had to (in some shape or form) - that is a naturalistic fallacy.


----------



## Forceman

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Yes but how long will GTX970 remain like that? I have a feeling GTX970 will tank if Nvidia does not optimize for it.


The same question can probably be asked about the Fury cards. Hard to know how much is game-by-game and how much is universal. Nvidia did mention heuristics when they talked about the 970; I don't recall hearing anything about how AMD was handling it.
Quote:


> Originally Posted by *mtcn77*
> 
> So you are saying Nvidia did it because they had to (in some shape or form) - that is a naturalistic fallacy.


Oookkkaayy?


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> The same question can probably be asked about the Fury cards. Hard to know how much is game-by-game and how much is universal. *Nvidia did mention heuristics* when they talked about the 970; I don't recall hearing anything about how AMD was handling it.


You're 100% wrong on this. They specifically mentioned heuristics are not their problem, but the operating system's issue.
Quote:


> The problem and risk is that this performance difference essentially depends on the heuristics of the OS and its ability to balance the pools effectively, putting data that needs to be used less frequently or in a less latency-dependent fashion in the 0.5GB portion.


----------



## Forceman

Quote:


> Originally Posted by *mtcn77*
> 
> You're 100% wrong on this. They specifically mentioned heuristics are not their problem, but the operating system's issue.


What? I'm saying Nvidia said they are using heuristics to help them optimize for VRAM use, instead of it all being individually tuned for each game. Not that heuristics are a problem.
Quote:


> Also, Alben noted, with "good heursitics," Nvidia is able to put data that's not likely to be used as often into this half-gig segment. In other words, Nvidia's driver developers may already be optimizing the way their software stores data in the GTX 970's upper memory segment.


Quote:


> Alben told us Nvidia continues to look into possible situations where the performance drop-offs are larger on the GTX 970, and he suggested that in those cases, the company will "see if we can improve the heuristics."


http://techreport.com/review/27724/nvidia-the-geforce-gtx-970-works-exactly-as-intended


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> What? I'm saying Nvidia said they are using heuristics to help them optimize for VRAM use, instead of it all being individually tuned for each game. Not that heuristics are a problem.


Quote:


> The goal for NVIDIA then is that *the operating system* would utilize the 3.5GB of memory capacity first, then access the 0.5GB and then finally move to the system memory if necessary.
> ...
> NVIDIA claims that the architecture is working exactly as intended and that with *competent OS heuristics* the performance difference should be negligible in real-world gaming scenarios.


The later response: *my bad*, that one was PcPer's recap of the former evaluation:
Quote:


> ...it was stated that *the operating system* had a much stronger role in the allocation of memory from a game's request *than the driver*.


Still, if we look at the former reply, it is clearly stated that the initiative is ours, not theirs:
Quote:


> The question then is, what is the real-world performance penalty of the GTX 970's dual memory pool configuration? Though Alben didn't have a specific number he wanted to discuss _he encouraged us to continue doing our own testing to find cases where you can test games requesting less than 3.5GB of memory and then between 3.5GB and 4.0GB. By comparing the results on the GTX 980 and the GTX 970 in these specific scenarios you should be able to gauge the impact that the slower pool of memory has on the total memory configuration and gaming experience._ The problem and risk is that this performance difference essentially depends on the heuristics of the OS and its ability to balance the pools effectively, putting data that needs to be used less frequently or in a less latency-dependent fashion in the 0.5GB portion.
> 
> NVIDIA's performance labs continue to work away at finding examples of this occurring and the consensus seems to be something in the 4-6% range.


That should give some idea, imo.


----------



## Tivan

Quote:


> Originally Posted by *Forceman*
> 
> Oookkkaayy?


He's kinda mixing 2 arguments there.

1) Nvidia is _not_ more experienced with low memory situations than AMD, because AMD too has low memory models.
2) Nvidia had to compete with worse memory volume in the top end segment, but this doesn't mean they have to have learned anything special from that.

edit: but yeah, personally, I have no clue how exactly the situation looks like for either camp, on the memory side of things.


----------



## mtcn77

Quote:


> Originally Posted by *Tivan*
> 
> He's kinda mixing 2 arguments there.
> 
> 1) Nvidia is _not_ more experienced with low memory situations than AMD, because AMD too has low memory models.
> 2) Nvidia had to compete with worse memory volume in the top end segment, but this doesn't mean they have to have learned anything special from that.


Agree with the first part, despite not having thought of that to be honest. GTX 960 seems to be in less agony than R9 380 with 2GB.


----------



## Forceman

Quote:


> Originally Posted by *mtcn77*
> 
> The later response:


Where's that from? This is from Anandtech's article:
Quote:


> The complex part of this process occurs once both memory segments are in use, at which point NVIDIA's heuristics come into play to try to best determine which resources to allocate to which segments. How NVIDIA does this is very much a "secret sauce" scenario for the company, but from a high level identifying the type of resource and when it was last used are good ways to figure out where to send a resource.... The way NVIDIA describes the process we suspect there are even per-application optimizations in use, though NVIDIA can clearly handle generic cases as well.


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> Where's that from? This is from Anandtech's article:


[Pcper - 3][PcPer - 2][PcPer - 1]
The third is specifically from their representative disavowing the responsibility of the driver.


----------



## Forceman

Quote:


> Originally Posted by *mtcn77*
> 
> [Pcper - 3][PcPer - 2][PcPer - 1]
> The third is specifically from their representative disavowing the responsibility of the driver.


Hmm. Anandtech and Tech Report are where I got my quotes, and they both talk about Nvidia's heuristics. Those articles explain that the OS allocates the memory (obviously) but that the driver can influence where and how it is allocated, and that would be where the heuristics come in. But even the PCPer article says that the driver does have some influence, even if the OS has a stronger one.

I guess we'll have to wait and see what happens when Pascal comes out and they move focus to those cards. Likewise, what happens with Fury cards when Polaris releases.


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> Hmm. Anandtech and Tech Report are where I got my quotes, and they both talk about Nvidia's heuristics. Those articles explain that the OS allocates the memory (obviously) but that the driver can influence where and how it is allocated, and that would be where the heuristics come in.
> 
> I guess we'll have to wait and see what happens when Pascal comes out and they move focus to those cards. Likewise, what happens with Fury cards when Polaris releases.


I still cannot find the actual response from Nvidia. Maybe it is on their site... All I have is PcPer's evaluative tense, which makes it especially difficult to differentiate their evaluation from Nvidia's actual reply (though you can keep reading to where they have made their admission).


----------



## Forceman

Quote:


> Originally Posted by *mtcn77*
> 
> I still cannot find the actual response from Nvidia. Maybe it is on their site... All I have is PcPer's evaluative tense, which makes it especially difficult to differentiate their evaluation from Nvidia's actual reply (though you can keep reading to where they have made their admission).


I think it all came from the same conference call with Alben, so there probably isn't an actual statement, just the different tech writers recollection and analysis of what they said. Jen-Hsun's blog post is the only official word I could find, and all it does is point to the Tech Report article.


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> I think it all came from the same conference call with Alben, so there probably isn't an actual statement, just the different tech writers recollection and analysis of what they said.


I'm sure the word "suspect" disclaims any adherence of Anandtech's evaluation to the statement, then and there.
Quote:


> The way NVIDIA describes the process *we suspect* there are even per-application optimizations in use, though NVIDIA can clearly handle generic cases as well.


----------



## Forceman

Quote:


> Originally Posted by *mtcn77*
> 
> I'm sure the word "suspect" disclaims any adherence of Anandtech's evaluation to the statement, then and there.


I read that as they have a generic solution in place, the heuristics, and then possibly (the "suspect" part) per-game optimizations as well. The "can clearly handle generic cases" is what I'm talking about with the heuristics - they don't have to tune for each game because they have a system in place to handle it generically.

But without a definitive statement from Nvidia it's all just speculation and hair-splitting anyway.
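For what it's worth, a generic recency-based placement scheme is easy to sketch. The following is purely hypothetical Python, not Nvidia's actual algorithm: the pool sizes mirror the 970's 3.5GB/0.5GB split, and the `place_resources` function, its recency rule, and the resource names are all my own illustration.

```python
# Toy model of a split-pool VRAM placement heuristic. Purely
# illustrative: resources used most recently claim the fast 3.5 GB
# segment, colder ones fall back to the slow 0.5 GB segment, and
# anything that fits in neither spills to system memory.

FAST_POOL_MB = 3584  # 3.5 GB segment
SLOW_POOL_MB = 512   # 0.5 GB segment

def place_resources(resources, now):
    """resources: list of (name, size_mb, last_used_tick) tuples.
    Returns a dict mapping name -> "fast", "slow", or "evicted"."""
    # Hottest (most recently used) resources get placed first.
    ordered = sorted(resources, key=lambda r: now - r[2])
    placement = {}
    fast_free, slow_free = FAST_POOL_MB, SLOW_POOL_MB
    for name, size_mb, _ in ordered:
        if size_mb <= fast_free:
            placement[name] = "fast"
            fast_free -= size_mb
        elif size_mb <= slow_free:
            placement[name] = "slow"
            slow_free -= size_mb
        else:
            placement[name] = "evicted"  # would spill to system RAM
    return placement
```

Under a rule like this, data touched every frame stays in the fast segment while a rarely-read resource gets demoted, which is roughly the behavior the tech press attributed to the 970's driver/OS combination. Again, just a sketch of the idea being argued about, nothing more.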


----------



## airfathaaaaa

Quote:


> Originally Posted by *Forceman*
> 
> I read that as they have a generic solution in place, the heuristics, and then possibly (the "suspect" part) per-game optimizations as well. The "can clearly handle generic cases" is what I'm talking about with the heuristics - they don't have to tune for each game because they have a system in place to handle it generically.
> 
> But without a definitive statement from Nvidia it's all just speculation and hair-splitting anyway.


they admitted being wrong one time this decade (the 970); you expect them to say it twice?


----------



## SpeedyVT

Quote:


> Originally Posted by *airfathaaaaa*
> 
> they admitted being wrong one time this decade (the 970); you expect them to say it twice?


They did with that new version of HBAO+ in GoW.


----------



## airfathaaaaa

Quote:


> Originally Posted by *SpeedyVT*
> 
> They did with that new version of HBAO+ in GoW.


Two times? Well, we have four more years till 2020, let's see.


----------



## magnek

Quote:


> Originally Posted by *airfathaaaaa*
> 
> have you seen this?
> https://twitter.com/PellyNV/status/702556025816125440
> 
> i guess their pr team has once again started a dirt rally?


Grown men bickering like teenage girls on Twitter, color me shocked.


----------



## superstition222

Quote:


> Originally Posted by *airfathaaaaa*
> 
> they admitted being wrong one time this decade (the 970); you expect them to say it twice?


If you're talking about Nvidia, I think their admission was more like the Vatican's "apology" over the Inquisition and the like (a non-apology, because they said the things they did wrong were done "in service of the truth" instead of saying they were done out of greed).

That Nvidia still doesn't list the VRAM partitioning on its website for the 970 is... I have no words for it, really. A legitimate business would list the card as 3.5/.5 after being exposed for outright fraud.


----------



## superstition222

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Yes but how long will GTX970 remain like that? I have a feeling GTX970 will tank if Nvidia does not optimize for it.


Eurogamer found two cases back when they looked into it.
Quote:


> Originally Posted by *eurogamer*
> We ran two cards in SLI - in order to remove the compute bottleneck as much as possible - then we ran Assassin's Creed Unity on ultra high settings at 1440p, with 4x MSAA. As you can see in the video at the top of this page, this produces very noticeable stutter that isn't as pronounced on the GTX 980.
> 
> Running Shadows of Mordor at ultra settings at 1440p, while downscaling from an internal 4K resolution with ultra textures engaged, showed a clear difference between the GTX 970 and the GTX 980 - something which must be caused by the different memory set-ups. To be honest, this produced a sub-optimal experience on both cards, but there were areas that saw noticeable stutter on the 970 that were far less of an issue on the 980.


Quote:


> Originally Posted by *guru3D*
> It has been an interesting week with all the hype on the GeForce GTX 970 with what now is known as a crippled memory sub-system. Three weeks ago here on Guru3D.com users in our forums started to report oddities with their GeForce GTX 970 graphics card: the card ran out of memory at 3.5 GB, not utilizing anything higher, with reports of stutters under high graphics memory usage.
> 
> Nvidia could have and probably should have marketed the card as 3.5GB, or they probably could even have deactivated an entire right side quad and go for a 192-bit memory interface tied to just 3GB of memory.
> 
> I do believe Nvidia has been lying. There are dozens if not hundreds of GeForce GTX 970 reviews out on the web. You can not explain to me that with all the engineers that Nvidia has, not one of them noticed the wrong specs in one of these reviews?
> 
> The solution Nvidia pursued is complex and not rather graceful, IMHO. Nvidia needed to slow down the performance of the GeForce GTX 970, and the root cause of all this discussion was disabling that one L2 cluster with its ROPs.
> 
> Nvidia also could have opted other solutions:
> 
> Release a 3GB card and disable the entire ROP/L2 and two 32-bit memory controller block.
> You'd have had a very nice 3GB card and people would have known what they actually purchased.
> 
> Even better, to divert the L2 cache issue, leave it enabled, leave the ROPS intact and if you need your product to perform worse to say the GTX 980, disable an extra cluster of shader processors, twelve instead of thirteen.
> 
> Simply enable twelve or thirteen shader clusters, lower voltages, and core/boost clock frequencies. Set a cap on voltage to limit overclocking. Good for power efficiency as well.
> 
> We do hope to never ever see a graphics card being configured like this ever again as it would get toasted by the media, for what Nvidia did here. It's simply not the right thing to do.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well I sincerely hope that, 980TIs get Async.


I don't see how the cards are going to get hardware that isn't there.
Quote:


> Originally Posted by *Hruska*
> Ashes of the Singularity isn't just the first DirectX 12 game - it's also the first PC title to make extensive use of asynchronous computing. Support for this capability is a major difference between AMD and Nvidia hardware, and it has a significant impact on game performance.
> 
> A GPU that supports asynchronous compute can use multiple command queues and execute these queues simultaneously, rather than switching between graphics and compute workloads.
> 
> Asynchronous computing is, in a very real sense, GCN's secret weapon. While every GCN-class GPU since the original HD 7970 can use it, AMD quadrupled the number of ACEs per GPU when it built Hawaii, then modified the design again with Fiji. Where the R9 290 and 290X use eight ACEs, Fiji has four ACEs and two HWS units.
> 
> The exact state and nature of Nvidia's asynchronous compute capabilities is still unclear. We know that Nvidia's Maxwell can't perform anything like the concurrent execution that AMD GPUs can manage. Maxwell can benefit from some light asynchronous compute workloads, as it does in Fable, but the benefits on Team Green hardware are small.
> 
> The Nitrous Engine that powers Ashes of the Singularity makes extensive use of asynchronous compute and uses it for up to 30% of a given frame's workload. Oxide has stated that they believe this will be a common approach in future games and game engines, since DirectX 12 encourages the use of multiple engines to execute commands from separate queues in parallel.
> 
> Nvidia cards cannot handle asynchronous workloads the way that AMD's can, and the differences between how the two cards function when presented with these tasks can't be bridged with a few quick driver optimizations or code tweaks.
> 
> Fifth and finally, we know that AMD GPUs have always had enormous GPU compute capabilities. Those capabilities haven't always been displayed to their best advantage for a variety of reasons, but they've always existed, waiting to be tapped. When Nvidia designed Maxwell, it prioritized rendering performance.
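
The overlap Hruska describes can be reduced to a toy timing model. This is illustrative Python with made-up millisecond figures, not measured data; the function names `frame_time_serial` and `frame_time_async` are my own and this is the idealized best case, not a claim about either vendor's hardware:

```python
# Toy frame-time model of async compute. Not a GPU simulator, just
# arithmetic: a GPU that switches between queues runs graphics and
# compute back to back, while one that executes the queues
# concurrently overlaps them, so the frame is bounded by whichever
# queue is longer (the idealized best case).

def frame_time_serial(gfx_ms, compute_ms):
    # Queue switching: compute work waits for graphics to finish.
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms, compute_ms):
    # Concurrent queues: compute fills shader idle time "for free".
    return max(gfx_ms, compute_ms)

# If up to ~30% of a frame's work lives on the compute queue, a
# 20 ms frame might split roughly like this:
gfx_ms, compute_ms = 14.0, 6.0
serial = frame_time_serial(gfx_ms, compute_ms)     # back-to-back
overlapped = frame_time_async(gfx_ms, compute_ms)  # concurrent
```

With those assumed numbers, overlapping shaves the whole compute portion off the frame, which is why a workload that dedicates a large slice of the frame to compute queues rewards hardware that can actually execute them concurrently.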


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> I don't see how the cards are going to get hardware that isn't there.


Well, I also said "*I know I have been saying they won't, and they probably won't,*" so yeah. Thing is, they are sticking to their claim that their cards do have the hardware; we will just have to wait and see if that is true or not.

So until someone shows without a shadow of a doubt that NV does not have async hardware, or they say so themselves, it is all conjecture. We have assumed that current cards do not have async support, although NV says otherwise. While those assumptions are most likely correct, at the end of the day they are still just that: assumptions.

However, as I have pointed out numerous times in this thread, I do agree the most likely case is that the hardware support is not there. This has yet to be proven one way or the other, at least that I have seen.

Hence my statement in its full form; nice way to pull it out of context though.

From NV: "Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchronous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled."
http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/6

Now as I have said whether this is true or not, well that has yet to be seen.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> nice way to pull it out of context though


As for relying on Nvidia's word... Been there, seen the result.

If the company is so slow that it couldn't have released even an alpha by now...


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> As for relying on Nvidia's word... Been there, seen the result.
> 
> If the company is so slow that it couldn't have released even an alpha by now...


Well honestly, I do not trust any of them. However, as to the alpha: for what? For a game that isn't even released, or even in final beta? There are released games to be concerned with first, and if the driver still isn't out when those arrive, that will be proof. These assumptions are educated guesses at the end of the day. I am not just taking NV's word; we have no hard facts, so whose word are we supposed to take?

We have the guess that it won't work and them saying it will. Neither is fact at this point, period.

NV is winning; they have nothing to prove yet. When the game comes out and AMD is winning, and more games come out showing AMD winning, then it will be important. But for a game that is in beta, and that hardly anyone even cares about? Yeah, not top priority.

AMD is doing what they need to by hyping DX12, as they are losing in DX11 with no hope of taking the lead. NV is acting like the leader they are, following their same pattern: right now matters, not tomorrow.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> However, as to the alpha: for what? For a game that isn't even released, or even in final beta? There are released games to be concerned with first.
> 
> NV is winning; they have nothing to prove yet. When the game comes out and AMD is winning, and more games come out showing AMD winning, then it will be important. But for a game that is in beta, and that hardly anyone even cares about? Yeah, not top priority.
> 
> AMD is doing what they need to by hyping DX12, as they are losing in DX11 with no hope of taking the lead. NV is acting like the leader they are, following their same pattern: right now matters, not tomorrow.


So much spin.

The alpha would obviously be something that demonstrates good performance with async. No, Nvidia is not winning with async. Of course AMD is hyping async since it looks pretty obvious that Maxwell falls on its face when it is utilized.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> So much spin.
> 
> The alpha would obviously be something that demonstrates good performance with async. No, Nvidia is not winning with async. Of course AMD is hyping async since it looks pretty obvious that Maxwell falls on its face when it is utilized.


Agreed, but NV doesn't need to win at async; async isn't here yet, which is the point you are not getting.

I have no dog in this fight. For right now I needed a 980 Ti; it would be nice if it supported async, and if it doesn't, Volta will and I can upgrade without any issue. Also, as I stated, this isn't even my main gaming rig; async would be nice but I don't need it.

Again, I agree with you; I don't think Maxwell 2.0 supports it either. I am simply playing devil's advocate now and saying we have no proof one way or the other.

I didn't say NV is winning with async; I do not know where that came from. I said NV is winning, period. You can reference one game that is in alpha/beta all day, but it is 100% irrelevant.

The game isn't out and it does not yet matter; that is what you are missing. When the game is out and we see the same pattern and they still haven't released their "driver," that will be a different story.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> Async isn't here yet, which is the point you are not getting.


Red herring at best, and not really true either. I'm not sure why there has been so much foot-dragging on releasing Ashes as a consumer product, since the developer said the game was more complete months ago than many shipping titles. Regardless, the benchmark exists, so it is here, although not as here as most would like.
Quote:


> Originally Posted by *Cyber Locc*
> 
> I am simply playing devils advocate now and saying we have no proof one way or the other.


We have proof so far that Maxwell doesn't have it. Nvidia's word that it does is much weaker than the Ashes results.
Quote:


> Originally Posted by *Cyber Locc*
> 
> I didn't say NV is winning with Async? I do not know where that came from.


Your post, with statements like "Nvidia is winning; they have nothing to prove", obviously.
Quote:


> Originally Posted by *Cyber Locc*
> 
> The game isn't out and it does not yet matter, that is what you are missing.


I really have no idea why they're taking so long to release it commercially but the benchmark has been available for a very long time now and the results have always shown AMD's cards, especially Hawaii, with a big advantage over their performance in DX11, unlike Nvidia's cards.

*The benchmark is out.*

Few people use Maxon for rendering and yet there is a community obsession over Cinebench numbers, including professional review sites using just Cinebench and a single game to test CPUs.


----------



## superstition222

Actually, they said it is more debugged than most consumer titles (months ago) now that I remember. It could be that they're still developing a lot of the content rather than perfecting the engine.


----------



## Charcharo

Again with the "Nvidia is winning DX11" talk...
Apart from the 950 and 980 Ti... no, they are losing on all other fronts. *Looks at old Kepler card* They also lost other things as well.

They are *winning marketing*. That, it seems, is more important than either DX12 or DX11. It is not the same as winning DX11 or DX12, even if it is effectively more important than anything else.


----------



## ku4eto

Quote:


> Originally Posted by *Charcharo*
> 
> Again with the "Nvidia is winning DX11" talk...
> Apart from the 950 and 980 Ti... no, they are losing on all other fronts. *Looks at old Kepler card* They also lost other things as well.
> 
> They are *winning marketing*. That, it seems, is more important than either DX12 or DX11. It is not the same as winning DX11 or DX12, even if it is effectively more important than anything else.


Unfortunately, people are a bit too dumb to know or acknowledge this. And AMD was really bad at marketing till a while ago.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Charcharo*
> 
> Again with the "Nvidia is winning DX11" talk...
> Apart from the 950 and 980 Ti... no, they are losing on all other fronts. *Looks at old Kepler card* They also lost other things as well.
> 
> They are *winning marketing*. That, it seems, is more important than either DX12 or DX11. It is not the same as winning DX11 or DX12, even if it is effectively more important than anything else.


Depends. Most people play at 1080p, and there, in most games, Nvidia wins because of DX11. Yes, if you take DX11 at 1080p, 1440p, and 4K, then AMD does not lose across all its cards. The GTX 980 Ti wins 1080p/1440p no problem. Not sure about the GTX 950. The GTX 950 can be had for ~$140 USD, while the 380 can be had for $165 and is faster than the 960. The GTX 950 is a terrible card, like the R7 370.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> Red herring at best and not really true, either. I'm not sure why there has been so much foot dragging on releasing Ashes as a consumer produce since the developer said the game was more complete months ago than many shipping titles. Regardless, the benchmark exists so it is here, although not as here as most would like.
> We have proof so far that Maxwell doesn't have it. Nvidia's word that it does is much weaker than the Ashes results.
> Your post, with statements like "Nvidia is winning; they have nothing to prove", obviously.
> I really have no idea why they're taking so long to release it commercially but the benchmark has been available for a very long time now and the results have always shown AMD's cards, especially Hawaii, with a big advantage over their performance in DX11, unlike Nvidia's cards.
> 
> *The benchmark is out.*
> 
> Few people use Maxon for rendering and yet there is a community obsession over Cinebench numbers, including professional review sites using just Cinebench and a single game to test CPUs.


Umm, yeah, the BETA benchmark is out; enough said. It is a BETA.....

I hope AMD does win; they need it, and I will happily buy 3x AMD cards to replace the 3x 290s I just sold, lol. For my gaming rig.

However, about the 980 Ti: that rig had other concerns. CUDA with Adobe and rendering was more important than gaming, but since that rig does game, I hope the fps is decent at 1440p in DX12 titles; if not, I will have to go Pascal.


----------



## Charcharo

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Depends. Most people play at 1080p, and there, in most games, Nvidia wins because of DX11. Yes, if you take DX11 at 1080p, 1440p, and 4K, then AMD does not lose across all its cards. The GTX 980 Ti wins 1080p/1440p no problem. Not sure about the GTX 950. The GTX 950 can be had for ~$140 USD, while the 380 can be had for $165 and is faster than the 960. The GTX 950 is a terrible card, like the R7 370.


The GTX 950 and R7 370 are a market segment MUCH (and I do mean MANY TIMES) more important than the Fury X and 980 Ti. It is one of those segments that sells a LOT.
Nvidia having a great 1080P card in the 950 is a key victory.

The R9 380 and 380X are another very important segment. Again, simply MUCH more important than the high end. It is how it is. Excellent 1080P and entry-level 1440P cards IMHO.

As for that... it depends on your definitions, I guess. I consider the 390/390X and 290/290X to be 1440P GPUs, and great for 1080P I guess. Some people consider them only 1080P cards; for me that is a waste. Others say the 980 Ti is only a 1440P and not a 4K card. Fair enough, but by the same logic I could stretch it to say no card is even 1080P worthy (some games can't do 144 fps minimum on Ultra 1080P even with OCed 980 Tis).


----------



## doritos93

Quote:


> Originally Posted by *Cyber Locc*
> 
> Umm ya the BETA benchmark is out, enough said. It is a BETA.....


It's actually been out since August of last year. Dunno which beta stage it's at, but I believe they're pretty far along, since they intend to release the game this year.


----------



## Cyber Locc

Quote:


> Originally Posted by *doritos93*
> 
> It's actually been out since August of last year.. dunno which BETA stage it's at but I believe they're pretty far along since they intend to release the game this year


Lol, since August? Is that supposed to mean something? Most games on the market are still in beta for the first 6 months after their release; beta is beta. Also, it was in alpha in August. A game I am waiting for impatiently has been in beta since January of 2015... It is still in beta.


----------



## Mahigan

Quote:


> Originally Posted by *Cyber Locc*
> 
> Lol, since August? Is that supposed to mean something? Most games on the market are still in beta for the first 6 months after their release; beta is beta. Also, it was in alpha in August. A game I am waiting for impatiently has been in beta since January of 2015... It is still in beta.


Ashes of the Singularity has undergone more QA than GoW Ultimate Edition and the latter was officially released.

You can play AotS right now if you buy it (Early access).


----------



## Cyber Locc

Quote:


> Originally Posted by *Mahigan*
> 
> Ashes of the Singularity has undergone more QA than GoW Ultimate Edition and the latter was officially released.
> 
> You can play AotS right now if you buy it (Early access).


Read the above again. So now you want to reference bad devs that release games that should still be in beta, versus devs that want to release a game right? Too bad I already covered that, lol.

Again, none of this is really relevant. Show me a shred of actual evidence that NV doesn't have async. Till then it's simply assumptions. Are they well-based assumptions? Yes. Does that make them fact? No.


----------



## Kollock

Quote:


> Originally Posted by *Cyber Locc*
> 
> Lol, since August? Is that supposed to mean something? Most games on the market are still in beta for the first 6 months after their release; beta is beta. Also, it was in alpha in August. A game I am waiting for impatiently has been in beta since January of 2015... It is still in beta.


You do realize that there is no government body sanctioning what BETA means? It's strictly a term the developer uses. For PC games these days, it's just a marketing term. If we wanted, we could call Ashes 'done' right now, but there are marketing things that happen on 'launch' day. The main thing that's going to happen on launch day is the addition of the campaign. That won't impact the benchmark mode in any meaningful way, however.

I guess I'm pretty confused about your logic. If the Beta designation is completely an arbitrary marketing term by the developer, and the developer is saying that the graphics/engine portion is effectively done, then why does the Beta label invalidate benchmarking results? That is, the game may be in Beta, but the benchmark portion is NOT. Graphics and engine are basically on lockdown at this point; if results change at launch it will be due to driver updates, not game ones.


----------



## Cyber Locc

Quote:


> Originally Posted by *Kollock*
> 
> You do realize that there is no government body sanctioning what BETA means? It's strictly a term the developer uses. For PC games these days, it's just a marketing term. If we wanted, we could call Ashes 'done' right now, but there are marketing things that happen on 'launch' day. The main thing that's going to happen on launch day is the addition of the campaign. That won't impact the benchmark mode in any meaningful way, however.
> 
> I guess I'm pretty confused about your logic. If the Beta designation is completely an arbitrary marketing term by the developer, and the developer is saying that the graphics/engine portion is effectively done, then why does the Beta label invalidate benchmarking results? That is, the game may be in Beta, but the benchmark portion is NOT. Graphics and engine are basically on lockdown at this point; if results change at launch it will be due to driver updates, not game ones.


Well in the case of your game that is probably true.

However, that is still back to my point. This entire discussion is revolving around NV drivers, not your game.

The statement being made is that if NV had asynchronous drivers they would have released them by now. I am saying this is possibly not true. That is all I am saying.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> The statement being made is that if NV had asynchronous drivers they would have released them by now. I am saying this is possibly not true. That is all I am saying.


But it's more probable that they're stalling, just as they stalled with the truth about the 970's memory scheme for as long as they could.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> But it's more probable that they're stalling, just as they stalled with the truth about the 970's memory scheme for as long as they could.


I agree and said that in the very first post that you quoted to start this debate....

I agree it is very probable, but not guaranteed.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> I agree and said that in the very first post that you quoted to start this debate....
> 
> I agree it is very probable, but not guaranteed.


----------



## Greenland

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well in the case of your game that is probably true.
> 
> However, that is still back to my point. This entire discussion is revolving around NV drivers, not your game.
> 
> The statement being made is that if NV had asynchronous drivers they would have released them by now. I am saying this is possibly not true. That is all I am saying.


If Nvidia had shown that Maxwell 2 indeed had Async Compute capability, they would have shown it in the very beginning, not 7 months after the first AotS benchmark debut. Nvidia is all about PR/marketing power; you would think they would have tried to avoid such a PR disaster.


----------



## PostalTwinkie

Quote:


> Originally Posted by *superstition222*
> 
> Actually, they said it is more debugged than most consumer titles (months ago) now that I remember. It could be that they're still developing a lot of the content rather than perfecting the engine.


Just wanted to check with you on this, as you are committing some energy to your side of the discussion.

Are you aware that ASC in the title is still disabled by Nvidia? At this point, Nvidia's performance is without leveraging ASC in any way.

Quote:


> Originally Posted by *superstition222*
> 
> But it's more probable that they're stalling, just as they stalled with the truth about the 970's memory scheme for as long as they could.


How exactly did Nvidia "stall" on the non-issue 970 memory configuration?


----------



## superstition222

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Are you aware that ASC in the title is still disabled by Nvidia? At this point, Nvidia's performance is without leveraging ASC in any way.


So?


----------



## PostalTwinkie

Quote:


> Originally Posted by *superstition222*
> 
> So?


Your argument pertains to ASC with Nvidia, in fact you have made specific negative/sarcastic comments about it two pages ago. You are making an argument against Nvidia, based on something we know they haven't enabled yet. In other words, you are making an argument out of nothing. To what end?

If you want to hate on Nvidia, for ASC, at least wait until we know what they are going to do with ASC.

EDIT:

Further, as has been cautioned in the past, people might want to hold off on making definitive statements about ASC's use. A lot of people are running around preaching it is the Second Coming, based off one pre-release example. Sort of putting the cart before the horse.


----------



## superstition222

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Your argument pertains to ASC with Nvidia, in fact you have made specific negative/sarcastic comments about it two pages ago. You are making an argument against Nvidia, based on something we know they haven't enabled yet. In other words, you are making an argument out of nothing. To what end?
> 
> If you want to hate on Nvidia, for ASC, at least wait until we know what they are going to do with ASC.
> 
> EDIT:
> 
> Further, as has been cautioned in the past, people might want to hold off on making definitive statements about ASC's use. A lot of people are running around preaching it is the Second Coming, based off one pre-release example. Sort of putting the cart before the horse.


So, the point is that you're upset about the fact that Nvidia has given us no reason to believe that Maxwell is capable of leveraging async compute nearly as well as AMD's cards. Ok.
Quote:


> Originally Posted by *Extreme Tech*
> Ashes of the Singularity isn't just the first DirectX 12 game - it's also the first PC title to make extensive use of asynchronous computing. Support for this capability is a major difference between AMD and Nvidia hardware, and it has a significant impact on game performance.
> 
> A GPU that supports asynchronous compute can use multiple command queues and execute these queues simultaneously, rather than switching between graphics and compute workloads.
> 
> Asynchronous computing is, in a very real sense, GCN's secret weapon. While every GCN-class GPU since the original HD 7970 can use it, AMD quadrupled the number of ACEs per GPU when it built Hawaii, then modified the design again with Fiji. Where the R9 290 and 290X use eight ACEs, Fiji has four ACEs and two HWS units.
> 
> The exact state and nature of Nvidia's asynchronous compute capabilities is still unclear. We know that Nvidia's Maxwell can't perform anything like the concurrent execution that AMD GPUs can manage. Maxwell can benefit from some light asynchronous compute workloads, as it does in Fable, but the benefits on Team Green hardware are small.
> 
> The Nitrous Engine that powers Ashes of the Singularity makes extensive use of asynchronous compute and uses it for up to 30% of a given frame's workload. Oxide has stated that they believe this will be a common approach in future games and game engines, since DirectX 12 encourages the use of multiple engines to execute commands from separate queues in parallel.
> 
> Nvidia cards cannot handle asynchronous workloads the way that AMD's can, and the differences between how the two cards function when presented with these tasks can't be bridged with a few quick driver optimizations or code tweaks.
> 
> Fifth and finally, we know that AMD GPUs have always had enormous GPU compute capabilities. Those capabilities haven't always been displayed to their best advantage for a variety of reasons, but they've always existed, waiting to be tapped. When Nvidia designed Maxwell, it prioritized rendering performance.


So much HATE!
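[Editor's note: the scheduling difference the ExtremeTech quote describes, switching between graphics and compute versus executing the queues simultaneously, can be sketched with a toy timing model. The numbers and function names below are invented for illustration; this is not real GPU behavior or D3D12 code.]

```python
# Toy model of the difference described above: one hardware queue that
# context-switches between graphics and compute vs. independent queues
# that overlap. All costs are invented for illustration.

def serialized_frame_ms(graphics_ms, compute_ms, switch_ms):
    """One queue: graphics, then a context switch, then compute."""
    return graphics_ms + switch_ms + compute_ms

def concurrent_frame_ms(graphics_ms, compute_ms):
    """Independent queues: compute overlaps graphics, so the frame
    takes as long as the slower of the two workloads."""
    return max(graphics_ms, compute_ms)

g, c, switch = 10.0, 3.0, 0.5  # hypothetical per-frame costs in ms
print(serialized_frame_ms(g, c, switch))  # 13.5
print(concurrent_frame_ms(g, c))          # 10.0
```

In this toy model the async-capable GPU hides the compute cost entirely; in practice the overlap depends on how much idle capacity the graphics workload leaves.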


----------



## PostalTwinkie

Quote:


> Originally Posted by *superstition222*
> 
> So, the point is that you're upset about the fact that Nvidia has given us no reason to believe that Maxwell is capable of leveraging async compute nearly as well as AMD's cards. Ok.
> So much HATE!


No hate, I asked you a legitimate question. Of which, it is clear, you are being extremely defensive about, and extremely emotionally invested into it.

The 3rd party developer has confirmed it isn't enabled for Nvidia yet, and that they are working on it. What legitimate reason should we have to not believe that, aside from your own emotional investment into the subject?


----------



## superstition222

Quote:


> Originally Posted by *PostalTwinkie*
> 
> you are being extremely defensive about, and extremely emotionally invested


Quote:


> Originally Posted by *PostalTwinkie*
> 
> The 3rd party developer has confirmed it isn't enabled for Nvidia yet, and that they are working on it.


No one is saying the cards are completely incapable of async compute. The problem is that they are not nearly as good at it. So, even enabling it isn't going to increase performance. In fact, it could potentially decrease it, based on what I recall reading. If you have something substantive to post instead of _ad hominem_ nonsense then do so. Otherwise, I prefer to go to a psychic to get my palms read.

Given that Ashes results were first posted in August of 2015, Nvidia has had plenty of time to give people a clear picture about what async means to its current cards.


----------



## KyadCK

Quote:


> Originally Posted by *Cyber Locc*
> 
> Quote:
> 
> 
> 
> Originally Posted by *superstition222*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> As for relying on Nvidia's word... Been there, seen the result.
> 
> If the company is so slow that it couldn't have released even an alpha by now...
> 
> 
> 
> Well honestly, I do not trust any of them. However, as to the alpha: for what? For a game that isn't even released, or even in a final beta? When there are games that are released and need to be the concern, and it still isn't out, that is proof. These assumptions are educated guesses at the end of the day. I am not just taking NV's word; we have no hard facts, so who else's word are we supposed to take?
> 
> We have the guess that it won't work and them saying it will; neither is fact at this point, period.
> 
> NV is winning; they have nothing to prove, yet. When the game comes out and AMD is winning, and more games come out showing AMD winning, then it will be important. But for a game that is in beta, and that hardly anyone even cares about? Yeah, not top priority.
> 
> AMD is doing what they need to by hyping DX12, as they are losing in DX11 with no hope of taking the lead. NV is acting like the leader that they are, following their same pattern of "right now matters, not tomorrow."

We actually do. We developed a benchmark to test it months ago, when this was first a thing. Not AOTS; a test built expressly to use async and see what happens. nVidia failed horribly, showing zero ability to do simultaneous workloads.


----------



## GoLDii3

Quote:


> Originally Posted by *PostalTwinkie*
> 
> No hate, I asked you a legitimate question. Of which, it is clear, you are being extremely defensive about, and extremely emotionally invested into it.
> 
> The 3rd party developer has confirmed it isn't enabled for Nvidia yet, and that they are working on it. What legitimate reason should we have to not believe that, aside from your own emotional investment into the subject?


nVidia can't do the same async compute as AMD hardware. They have only one queue, while AMD's utilization of async uses multiple queues (3).

So as of now, they can forget about getting the same results as AMD, if any.


----------



## Randomdude

Is it possible that if Pascal lacks ASC nVidia releases Volta on a node not designed for it, like how they released Maxwell V2 on 28nm instead?


----------



## PostalTwinkie

Quote:


> Originally Posted by *superstition222*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> No one is saying the cards are completely incapable of async compute. The problem is that they are not nearly as good at it. So, even enabling it isn't going to increase performance. In fact, it could potentially decrease it, based on what I recall reading. If you have something substantive to post instead of _ad hominem_ nonsense then do so. Otherwise, I prefer to go to a psychic to get my palms read.
> 
> *Given that Ashes results were first posted in August of 2015, Nvidia has had plenty of time to give people a clear picture about what async means to its current cards.*


You are aware of their workflow, queue, and how Nvidia is run as a business? Do you honestly, even for a moment, think that this one early look at a single title is enough motivation for Nvidia to dedicate their engineers' time? No; the fact it is getting attention at all in that regard is pretty amazing.
Quote:


> Originally Posted by *GoLDii3*
> 
> nVidia can't do the same async compute as AMD hardware. They have only one queue meanwhile AMD's utilization of async uses multiple queues (3)
> 
> So as of now,they can forget of getting the same results of AMD if any.


What you aren't realizing is that Nvidia doesn't need to do ASC in Maxwell/Kepler as well as AMD does now. AMD essentially used ASC, as we have seen it, to catch up to Nvidia, slightly surpassing them. In other words, ASC closed a gap between AMD and Nvidia, but only just. Even if Nvidia were only 40% as efficient as AMD at it, that would still put them back in the lead. Basically, Nvidia doesn't need to do ASC as well as AMD to overtake AMD again.


----------



## Cyber Locc

Quote:


> Originally Posted by *KyadCK*
> 
> We actually do. We developed a benchmark to test it months ago, when this was first a thing. Not AOTS; a test built expressly to use async and see what happens. nVidia failed horribly, showing zero ability to do simultaneous workloads.


Right, well, once again you are missing what I am saying. It doesn't matter if NV cards have hardware support for async if it's not supported in their software.

They have said quite a few times now that they have not yet added software support for asynchronous compute, but the hardware is capable of it.

No one is disputing that async is currently not working on Maxwell. The question is whether that is because of a lack of hardware or software. NV is saying it's software and that they will add support; they told AnandTech that 2 weeks ago in a quote I linked earlier.

However, it is being said that the cards do not have the hardware to support async; this has yet to be proven and is an assumption.

The debate here is why NV cards do not support async, and they are saying software. If that isn't true then they are lying, as are the boxes of the cards that say they fully support DX12, so if they are lying then they are open to a class action.

I do not see how a benchmark from months ago proves anything, because it doesn't in any way; just because NV has not released software/firmware to support async yet does not prove the cards are not capable.


----------



## superstition222

Quote:


> Do you honestly, even for a moment, think that this one early look at a single title is enough motivation for Nvidia to dedicate their engineer's time? No, the fact it is getting attention at all in that regard is pretty amazing.


Because we all know async compute is just a "single title"


----------



## GorillaSceptre

Async hasn't closed the gap. DX12 just eliminates the multi-thread disadvantage AMD have under DX11 drivers. Async doesn't even enter the equation with the current games we've seen; it's merely the cherry on top.

I don't know what graphs you're looking at, Postal, but AMD don't need "catching up". They already match/beat Nvidia in every tier except the 980 Ti, even under DX11. DX12/Vulkan + async is just going to make them even better and take advantage of GCN.


----------



## superstition222

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Async hasn't closed the gap. DX12 just eliminates the multi-thread disadvantage AMD have under DX11 drivers.


The old cheap Hawaii GPU outperforms a much newer and vastly more expensive 980 Ti in "crazy" mode and outperforms a 980 in the lower setting. If that isn't a gap closing nothing is.

Why is there such a need to spin this? Async, according to Hruska, is a performance-enhancing feature of DX12. This is like people arguing that gaming should stick with DX11 forever because who needs new DX features?

_640K should be enough for anyone_


----------



## sugarhell

Quote:


> Originally Posted by *KyadCK*
> 
> We actually do. We developed a benchmark to test it months ago, when this was first a thing. Not AOTS; a test built expressly to use async and see what happens. nVidia failed horribly, showing zero ability to do simultaneous workloads.


If you check the Vulkan hardware validation and features, Nvidia GPUs from Kepler onward support a single queue of work. Nothing else. They can't do async.


----------



## superstition222

Quote:


> Originally Posted by *sugarhell*
> 
> If you check the Vulkan hardware validation and features, Nvidia GPUs from Kepler onward support a single queue of work. Nothing else. They can't do async.


They can do it via emulation, right?


----------



## GoLDii3

Quote:


> Originally Posted by *PostalTwinkie*
> 
> What you aren't realizing is that Nvidia doesn't need to do ASC in Maxwell/Kepler as well as AMD does now. AMD essentially used ASC, as we have seen it, to catch up to Nvidia, slightly surpassing them. In other words, ASC closed a gap between AMD and Nvidia, but only just. Even if Nvidia were only 40% as efficient as AMD at it, that would still put them back in the lead. Basically, Nvidia doesn't need to do ASC as well as AMD to overtake AMD again.


Ok....and?

Good for them if they don't need it. AMD will make good use of async.

By the way, nVidia only has a few products overtaking AMD; the rest of AMD's products have equal or greater performance than the nVidia equivalent. Just because they have a higher market share does not mean they are better.

Oh, and you have the info right in front of your face and yet you say async shaders have only closed a gap for AMD.

The Fury X gained 10 FPS at 1440p and 4 at 1080p. That's 17% over the 980 Ti at 1440p and 6% at 1080p. So not only did they close the gap they had (in a few games), they also went ahead.

You're being optimistic to think nVidia can be 40% as efficient as AMD at async shaders as of now.

This is AMD's representation of how async shaders, the thing they're using to speed up DX12, work for them:

http://www.extremetech.com/wp-content/uploads/2015/08/GPU-Pipelines.jpg

As you can see, three queues.

How can nVidia even get close to 40%, let alone do async shaders at all, when they only have one queue like the first example?

Software-wise? We'll see if that's a possibility.
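[Editor's note: the three-queue diagram linked above can be mimicked with a small sketch: a graphics timeline with idle "bubbles", and compute work that either waits behind it (one queue) or drops into the bubbles (separate async queues). Slot counts and task names are made up purely for illustration.]

```python
# Toy illustration of the diagram above: a graphics timeline with idle
# slots (None = bubble), and compute tasks that either queue behind it
# or fill the bubbles when issued from separate async queues.

def single_queue_slots(graphics, compute):
    """Everything in one queue: compute runs only after graphics,
    and graphics bubbles (None) stay empty."""
    return graphics + compute

def async_queue_slots(graphics, compute):
    """Separate queues: compute tasks drop into graphics bubbles."""
    filler = list(compute)
    merged = []
    for slot in graphics:
        if slot is None and filler:
            merged.append(filler.pop(0))  # bubble filled by async work
        else:
            merged.append(slot)
    return merged + filler                # leftovers run at the end

graphics = ["draw", None, "draw", None, "draw"]  # None = idle bubble
compute = ["light", "physics"]

print(len(single_queue_slots(graphics, compute)))  # 7 slots total
print(len(async_queue_slots(graphics, compute)))   # 5 slots total
```

The point of the sketch is that the async case does the same amount of work in fewer total slots because the otherwise-idle bubbles are put to use.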


----------



## sugarhell

Quote:


> Originally Posted by *superstition222*
> 
> They can do it via emulation, right?


I don't think there is any benefit to emulating async. It's like you have 3 trains and all of them want to pass through 1 tunnel. Whatever you do, you still have 1 tunnel.


----------



## GorillaSceptre

Quote:


> Originally Posted by *superstition222*
> 
> The old cheap Hawaii GPU outperforms a much newer and vastly more expensive 980 Ti in "crazy" mode and outperforms a 980 in the lower setting. If that isn't a gap closing nothing is.
> 
> Why is there such a need to spin this? Async, according to Hruska, is a performance-enhancing feature of DX12. This is like people arguing that gaming should stick with DX11 forever because who needs new DX features?
> 
> _640K should be enough for anyone_


I'm not spinning anything.

AMD is doing well simply because GCN can actually stretch its legs under DX12. As many have said for years, GCN is the superior hardware, but it is hamstrung by DX11. Async is hardly used in Ashes; that's not the reason for the performance boost.

But down the line, when titles actually move a lot of their engines into compute and heavily use async, then we'll see a massive advantage for AMD. From what we've been told, we could be looking at 30-50%.


I should say, an advantage for AMD's current architectures vs Nvidia's current ones; by the time async is heavily used, Nvidia will probably have an architecture that supports it, so it will be great for everybody. But for those that don't upgrade often, the AMD users are going to get a lot more life out of their cards.


----------



## SuperZan

Quote:


> Originally Posted by *sugarhell*
> 
> I don't think there is any benefit to emulating async. It's like you have 3 trains and all of them want to pass through 1 tunnel. Whatever you do, you still have 1 tunnel.


That's exactly what I've assumed.


----------



## Greenland

Quote:


> Originally Posted by *PostalTwinkie*
> 
> You are aware of their workflow, queue, and how Nvidia is run as a business? Do you honestly, even for a moment, think that this one early look at a single title is enough motivation for Nvidia to dedicate their engineers' time? No; the fact it is getting attention at all in that regard is pretty amazing.
> What you aren't realizing is that Nvidia doesn't need to do ASC in Maxwell/Kepler as well as AMD does now. AMD essentially used ASC, as we have seen it, to catch up to Nvidia, slightly surpassing them. In other words, ASC closed a gap between AMD and Nvidia, but only just. Even if Nvidia were only 40% as efficient as AMD at it, that would still put them back in the lead. Basically, Nvidia doesn't need to do ASC as well as AMD to overtake AMD again.


Yes, I honestly think that the first DX12 title is enough motivation for Nvidia to dedicate their engineers' time. Also, the main reason AMD gets a massive leap going from DX11 to DX12 is the removal of CPU overhead; Async Compute is part of D3D12, yes, but it's merely the cherry on top.

Remember when Nvidia essentially said only 40% of AMD GPUs support DX12 and Mantle while 100% of Nvidia GPUs support it? At one point Mantle became *THE* API, and I doubt Nvidia expected this move from AMD.


----------



## PostalTwinkie

Quote:


> Originally Posted by *sugarhell*
> 
> I don't think there is any benefit to emulating async. It's like you have 3 trains and all of them want to pass through 1 tunnel. Whatever you do, you still have 1 tunnel.


Again, this has yet to be seen.

People keep claiming Nvidia can't do ASC from a hardware perspective, which isn't even being argued; we know what hardware is there. However, that doesn't mean they can't emulate it, or otherwise tackle it via software, and see gains where ASC is being used.

Anyone that says Nvidia isn't capable of handling ASC at all, is, at this point, talking out their ass. As we even have a 3rd party developer confirming it is being worked on, just not yet enabled.

EDIT:

For those that missed it; the above point is that we don't need to listen to Nvidia and their statements, or AMD and their statements. We can eliminate the bias potential, and look at a 3rd party not tied to Nvidia - actually an AMD backed title developer.

So, again, people just need to stop putting the cart before the horse and wait until we know what they are going to do and have more 3rd-party verification of its potential performance. It is that simple.


----------



## Greenland

That's like saying a fish can survive without water, it just needs a bunch of minerals and H2O molecules pouring into its gills.


----------



## sugarhell

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Again, this has yet to be seen.
> 
> People keep claiming Nvidia can't do ASC from a hardware perspective, which isn't even being argued, we know what hardware is there. However, that doesn't mean they can't emulate it, or otherwise tackle it via software, and see gain where ASC is being used.
> 
> Anyone that says Nvidia isn't capable of handling ASC at all, is, at this point, talking out their ass. As we even have a 3rd party developer confirming it is being worked on, just not yet enabled.


I am pretty sure that nvidia can't support async in AMD's terms. I did an app for my company ~3 months ago, and if I want to send a compute task together with a rendering task, I need to wait for the command processor to switch between graphics and compute.


----------



## GorillaSceptre

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Again, this has yet to be seen.
> 
> People keep claiming Nvidia can't do ASC from a hardware perspective, which isn't even being argued, we know what hardware is there. However, that doesn't mean they can't emulate it, or otherwise tackle it via software, and see gain where ASC is being used.
> 
> Anyone that says Nvidia isn't capable of handling ASC at all, is, at this point, talking out their ass. As we even have a 3rd party developer confirming it is being worked on, just not yet enabled.
> 
> EDIT:
> 
> For those that missed it; the above point is that we don't need to listen to Nvidia and their statements, or AMD and their statements. We can eliminate the bias potential, and look at a 3rd party not tied to Nvidia - actually an AMD backed title developer.
> 
> So, again, people just need to stop putting the cart before the horse, and wait. Until we know what they are going to do, and have more 3rd party verification of its potential performance. It is that simple.


Nvidia can emulate async compute + graphics in software with preemption, but it still comes with a context switch cost that AMD doesn't have.

Nvidia have a one-lane road that they can drive in both directions on, but only one way at a time. They can use traffic lights to make it more efficient, but it can't match a GPU with 2 lanes no matter what it does, because the Nvidia lane has to empty first.

So yes, on paper Nvidia will support it (that will come in handy for marketing), and they will support their own "version" of async, but it will have nowhere near the benefit of GCN, and maybe no benefit at all.
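[Editor's note: the one-lane analogy can be put in toy-model terms: software preemption pays a drain/switch cost every time the direction flips, while two independent lanes simply overlap. All costs below are invented for illustration.]

```python
# Rough sketch of the "one lane with traffic lights" analogy: a single
# lane pays a switch penalty whenever the workload kind flips, while
# two independent lanes run each kind concurrently.

def preemptive_total_ms(tasks, switch_ms):
    """tasks: list of (kind, ms). One lane: serial execution plus a
    switch penalty whenever the task kind changes."""
    total, prev_kind = 0.0, None
    for kind, ms in tasks:
        if prev_kind is not None and kind != prev_kind:
            total += switch_ms  # drain the lane, flip direction
        total += ms
        prev_kind = kind
    return total

def two_lane_total_ms(tasks):
    """Two lanes: each kind runs on its own lane concurrently, so the
    total is the longer of the two lanes."""
    gfx = sum(ms for kind, ms in tasks if kind == "gfx")
    comp = sum(ms for kind, ms in tasks if kind == "compute")
    return max(gfx, comp)

tasks = [("gfx", 4), ("compute", 1), ("gfx", 4), ("compute", 1)]
print(preemptive_total_ms(tasks, switch_ms=0.5))  # 11.5
print(two_lane_total_ms(tasks))                   # 8
```

Note that even with a zero switch cost the single lane still totals 10 ms here, because nothing overlaps; the switch penalty only widens a gap that serialization already creates.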


----------



## PostalTwinkie

Quote:


> Originally Posted by *sugarhell*
> 
> I am pretty sure that nvidia can't support async in AMD's terms. I did an app for my company ~3 months ago, and if I want to send a compute task together with a rendering task, I need to wait for the command processor to switch between graphics and compute.


The speculation is that Nvidia is going to attempt ASC emulation of a sort, via a new driver. How well it works, no one outside of Nvidia knows right now, if that is how they are doing it. About the only thing any of us can do is wait.

No one is saying that Nvidia is going to magically pull hardware out (www.downloadASChardware.com); just that "emulation" is surely a possibility.


----------



## sugarhell

Quote:


> Originally Posted by *PostalTwinkie*
> 
> The speculation is that Nvidia is going to attempt ASC emulation of a sort, via a new driver. How well it works, no one outside of Nvidia knows right now, if that is how they are doing it. About the only thing any of us can do is wait.
> 
> No one is saying that Nvidia is going to magically pull hardware out (www.downloadASChardware.com); just that "emulation" is surely a possibility.


I am not that sold on the emulation thing. Even if they try with preemption, they still have the context switch that needs to happen. In the end the graphics tasks will be grouped with the graphics tasks and the compute ones with the compute, just with a reordered pipeline(?)

IIRC GCN works this way: the main command processor can issue only graphics or only compute, and the ACEs can issue compute; that's why the whole async compute thing is possible.
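[Editor's note: the issue model described above can be sketched as follows. The class names, task lists, and dispatch behavior are assumptions for illustration only, not real hardware interfaces.]

```python
# Minimal sketch of the issue model described above: a main command
# processor that feeds one kind of work at a time, plus ACE-like units
# that can feed compute work independently and concurrently.

class CommandProcessor:
    """Issues one stream at a time: graphics OR compute, not both."""
    def issue(self, graphics):
        # Sticks to graphics; issuing compute here would require a
        # mode switch of the whole front end.
        return [("gfx", t) for t in graphics]

class ACE:
    """Asynchronous Compute Engine: issues compute work only."""
    def issue(self, compute):
        return [("compute", t) for t in compute]

graphics = ["shadow_pass", "gbuffer", "lighting"]
compute = ["ssao", "particles"]

# With ACEs present, compute is dispatched alongside graphics instead
# of waiting for the main processor to change modes.
in_flight = CommandProcessor().issue(graphics) + ACE().issue(compute)
kinds = {kind for kind, _ in in_flight}
print(sorted(kinds))  # ['compute', 'gfx'] -- both kinds in flight
```

Without the ACE-like units, the same model would only ever have one kind of work in flight at a time, which is the single-queue limitation discussed throughout the thread.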


----------



## PontiacGTX

Source

Finally PCWorld realized that DX12 won't improve performance with more cores just because.


----------



## Mahigan

Quote:


> Originally Posted by *PostalTwinkie*
> 
> No hate, I asked you a legitimate question. Of which, it is clear, you are being extremely defensive about, and extremely emotionally invested into it.
> 
> The 3rd party developer has confirmed it isn't enabled for Nvidia yet, and that they are working on it. What legitimate reason should we have to not believe that, aside from your own emotional investment into the subject?


The third party developer did not state this. Ask Kollock; he's here. Kollock stated that Async Compute is enabled for NVIDIA and AMD, and that previously, in the Alpha version, they had it disabled for NVIDIA. Kollock also stated that as far as he knows, Async Compute is enabled in the NVIDIA driver. He went on to mention that we should forward our questions to NVIDIA about their specific implementation.

NVIDIA hit back saying that it was not enabled in their driver. Which means one thing: the driver is exposing the feature but the feature doesn't work (yet?). That's why Kollock sees the feature as available and NVIDIA claims it is not.

Most probably, as of right now, NVIDIA's driver reroutes async compute + graphics commands to the 3D queue when async is enabled. Most likely, this is why we see a small drop in performance for NVIDIA when the feature is turned on.


----------



## Mahigan

Quote:


> Originally Posted by *PontiacGTX*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Source
> 
> Finally PCWorld realized that DX12 won't improve performance with more cores just because.


A DX9 engine and code ported to DX12, so GoW is not representative of a DX12 title.

We'll get to see Hitman very soon.


----------



## PontiacGTX

Quote:


> Originally Posted by *Mahigan*
> 
> DX 9 engine and code ported to DX12. So GoW is not representative of a DX12 title.
> 
> We'll get to see Hitman very soon


We need Crytek to release a ported/redone Crysis on CE/CE3 using DX12 and bring the multiplayer back like it was, except with better performance and graphics.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Mahigan*
> 
> The third party developer did not state this. Ask Kollock; he's here. Kollock stated that Async Compute is enabled for NVIDIA and AMD, and that previously, in the Alpha version, they had it disabled for NVIDIA. Kollock also stated that as far as he knows, Async Compute is enabled in the NVIDIA driver. He went on to mention that we should forward our questions to NVIDIA about their specific implementation.
> 
> NVIDIA hit back saying that it was not enabled in their driver. Which means one thing... The driver is exposing the feature but the feature doesn't work (yet?). That's why Kollock sees the feature as available and NVIDIA claim it is not.
> 
> Most probably, as of right now, NVIDIAs driver reroutes Async compute + graphics commands to the 3D queue when Async is enabled. Most likely, this is why we see a small drop in performance for NVIDIA when the feature is turned on.


He didn't say it?

Quote:


> Originally Posted by *Kollock*
> 
> Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.
> I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.


The above is in the Hitman thread that is going, from 3 weeks ago. Has there been a change?

EDIT:

You even replied to the very quote.


Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *Mahigan*
> 
> Now this I've got to see... Will Maxwell execute Graphics + Compute commands concurrently or will Asynchronous Compute simply mean that there is no defined order by which compute commands are executed.
> 
> AMD appear to stress that performance gains are best achieved through concurrent execution of Graphics and Compute commands whereas asynchronous compute doesn't really mean that in the computer science world.
> 
> Currently, the prevailing conclusion has been (sourced from across the web):
> 
> Nvidia executes Asynchronous compute + graphics code synchronously under DX12. Nvidia supports Async Compute through Hyper-Q, CUDA, but Hyper-Q doesn't support the additional wait conditions of barriers (a DX12 requirement). So no, there is no Async Compute + Graphics for Fermi, Kepler or Maxwell under DX12 currently.
> 
> Let me explain, Microsoft have introduced additional compute queues into 3D apps with their DX12 API:
> 
> Graphics queue for primary rendering tasks
> Compute queue for supporting GPU tasks (lighting, post processing, physics etc)
> Copy queue for simple data transfers
> 
> Command lists from a specific queue are still executed synchronously, while those in different queues can execute asynchronously (e.g., concurrently and in parallel). What does asynchronous mean? Asynchronous means that the order of execution of each queue in relation to another is not defined. Workloads submitted to these queues may start or complete in a different order than they were issued. Fences and barriers only apply to their respective queue. When the workload in a queue is blocked by a fence, the other queues can still be running and submitting work for execution. If synchronization points between two or more queues are required, they can be defined and enforced using fences.
> 
> Similar features have been available under OpenCL and CUDA for some time. The fences and signals, under DX12 map directly to a subset of the event system under OpenCL and CUDA. Under DX12, however, Barriers have additional wait conditions. These wait conditions are not supported by either OpenCL or CUDA. Instead, a write through of dirty buffers needs to be explicitly requested. Therefore Asynchronous compute + Graphics under DX12, though similar to Asynchronous compute under OpenCL and CUDA, requires explicit feature support for compatibility with the Asynchronous Compute + Graphics feature.
> 
> These new queues are also different than the classic Graphics queue. While the classic Graphics queue can be fed with compute commands, copy commands and graphics commands (draw calls), the new compute and copy queues can only accept compute and copy commands respectively. Hence their names.
> 
> For Maxwell, compute and graphics can't be active at the same time under DX12 currently; it is theorized that this is because there is only a single function unit (Command Processor) rather than access to ACEs as well. Copy commands, however, can run concurrently to graphics and compute commands due to the inclusion of more than one DMA engine in Maxwell. We see this when looking at how Fable Legends executes the various queues. What Nvidia would need, in order to execute graphics and compute commands asynchronously, is to add support for the additional barrier wait conditions to their Hyper-Q implementation. Why? This would expose the additional execution unit under Hyper-Q. The Hyper-Q interface used for CUDA's concurrent execution supports asynchronous compute, as we see in DX11 + PhysX titles (the Batman Arkham series, for example). Hyper-Q is, however, not compatible with the DX12 API as of the time of writing (for reasons mentioned above). If it were compatible, there would be a hardware limit of 31 asynchronous compute queues and 1 graphics queue (as Anandtech reported).
> 
> So all that to say that if you fence often, you can get nvidia hardware to run the Asynchronous + Graphics code synchronously. You also have to make sure you use large batches of short running shaders, long running shaders would complicate scheduling on nvidia hardware and introduce latency. Oxide, because they were using AMD supplied code, ran into this problem in Ashes of the Singularity (according to posts over at overclock.net).
> 
> Since AMD are working with IO for the Hitman DX12 path, then you can be sure that the DX12 path will be AMD optimized. That means less fencing and longer running shaders.
> 
> For Hitman, Nvidia basically have to work with IO as well, in order to add a vendor-ID-specific DX12 path (like we saw Oxide do). It's probably not worth it, seeing as Nvidia have little to gain from DX12 over DX11. AMD, however, will likely suffer from a CPU bottleneck under Hitman DX11 (as they do under Rise of the Tomb Raider DX11). AMD have a lot to gain from working with developers on coding and optimizing a DX12 path.
> 
> So to summarize,
> 
> Nvidia do not support Async compute + Graphics under DX12 at this time or perhaps ever. Hitman's DX12 path may run like crap on nvidia hardware unless nvidia convince IO Interactive to code a vendor ID specific path and supply IO with optimized short running shaders. Basically, same thing that nvidia did with Oxide for Ashes of the Singularity (if memory serves me right). Since nvidia have little to gain from moving from DX11 to DX12, best for them to not waste time and money helping IO code a vendor ID specific path.
> 
> AMD will suffer performance issues due to a CPU bottleneck, brought on by the lack of support for DX11 multi-threaded command lists, when running the Hitman DX11 path. AMD has everything to gain from assisting IO Interactive in the implementation of a DX12 path. Asynchronous compute is just an added bonus on top of the removal of the CPU bottleneck which plagues AMD GCN in DX11 titles.






So, again, I can safely ask: has this changed? I think that would have been a pretty big deal, as tech writers are champing at the bit to run an Nvidia w/ ASC vs. AMD w/ ASC test.
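As a concrete illustration of the multi-engine model described in the quoted post (in-order execution within a queue, concurrency across queues, fences for cross-queue synchronization), here is a toy scheduler in Python. It is not D3D12 API code, and every task name and duration below is invented for the example.

```python
# Toy model of D3D12 multi-engine scheduling: work within a queue runs in
# submission order, separate queues can overlap in time, and a fence makes
# one queue wait for a signal from another. Illustrative only; not the
# D3D12 API.

def schedule(queues, fences):
    """queues: {name: [(task, duration)]}.
    fences: {(waiting_queue, task): (signaling_queue, task)}.
    Returns {(queue, task): (start, end)} on a simulated timeline."""
    done = {}                           # (queue, task) -> (start, end)
    cursor = {q: 0 for q in queues}     # next task index per queue
    clock = {q: 0 for q in queues}      # per-queue local time
    progressed = True
    while progressed:
        progressed = False
        for q, tasks in queues.items():
            i = cursor[q]
            if i >= len(tasks):
                continue
            task, dur = tasks[i]
            dep = fences.get((q, task))
            if dep is not None and dep not in done:
                continue                # blocked on a fence not yet signaled
            start = clock[q] if dep is None else max(clock[q], done[dep][1])
            done[(q, task)] = (start, start + dur)
            clock[q] = start + dur
            cursor[q] += 1
            progressed = True
    return done

queues = {
    "graphics": [("shadow_pass", 4), ("main_pass", 6)],
    "compute":  [("light_culling", 3), ("post_fx", 2)],
}
# post_fx may only start after the main pass has finished.
fences = {("compute", "post_fx"): ("graphics", "main_pass")}

timeline = schedule(queues, fences)
for key in sorted(timeline):
    print(key, "->", timeline[key])
# light_culling overlaps the graphics queue instead of extending the frame:
# the frame ends at t=12 rather than the fully serial 4+6+3+2 = 15.
```

The same model shows why serializing everything onto one queue forfeits the overlap: move both compute tasks into the graphics list and the frame ends at 15 instead of 12.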


----------



## Mahigan

Quote:


> Originally Posted by *PostalTwinkie*
> 
> He didn't say it?
> The above is in the Hitman thread that is going, from 3 weeks ago. Has there been a change?
> 
> EDIT:
> 
> You even replied to the very quote.
> 
> So, again, I can safely ask - has this changed? I think that would have been a pretty big deal, as tech writers are chomping at the bit to run an Nvidia /w ASC v.s. AMD /w ASC test.


That quote from Kollock predates the Beta build (i.e., it applies to the Alpha builds). Look at the date: we were discussing the graphical anomalies between NVIDIA and AMD in the recent Alpha builds, 21 days ago. The Beta launched on Feb 23/24, 2016.

For the Beta, Asynchronous compute + graphics is not forcibly disabled. It's on by default and you can switch it off in the .ini file.


Spoiler: Warning: Spoiler!








That's from Extremetech here: http://www.extremetech.com/gaming/223567-amd-clobbers-nvidia-in-updated-ashes-of-the-singularity-directx-12-benchmark

This is why NVIDIA loses performance with async turned on. It seems they may be looping the commands back to the 3D queue or something similar, causing a bit of overhead.


----------



## MonarchX

Quote:


> Originally Posted by *inedenimadam*
> 
> Welcome to ports from consoles that have a 8GB buffer to work with.


No, that isn't it. Console versions of these games use worse textures and other settings and those 8GB of RAM they have still need to be used for the OS and non-video related gameplay data.


----------



## MonarchX

Can't Nvidia use other technologies at their disposal to, as some say, re-route AC functions? Maybe they can force other parts of the card to do AC instead, fully utilizing the hardware rather than just relying on software emulation. Human brains can do it... but video cards aren't there yet.


----------



## Kriant

While I am a "red team" supporter at heart, and while the R9 290X still manages to squeeze out good framerates while the "almighty" 780 Ti lags behind in more recent tests across various sources, Nvidia is better at delivering working software support (i.e., drivers) on day one, while AMD takes its sweet time pushing out relevant drivers for its arguably superior tech. And yes, I think AMD's marketing department needs to be fired or retrained, because it seriously falls flat at building up the hype train when a new product is released.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Mahigan*
> 
> That quote, from Kollock, predates the Beta build (ie Alpha builds). Look at the date. We were discussing the graphic anomalies between NVIDIA and AMD in the recent Alpha builds. 21 days ago. The Beta launched on Feb 23/24 2016.
> 
> For the Beta, Asynchronous compute + graphics is not forcibly disabled. It's on by default and you can switch it off in the .ini file.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> That's from Extremetech here: http://www.extremetech.com/gaming/223567-amd-clobbers-nvidia-in-updated-ashes-of-the-singularity-directx-12-benchmark
> 
> This is why NVIDIA lose performance with Async Turned on. It seems like they're maybe looping the commands back to the 3D queue or something causing a bit of overhead.


Do you even read the things you post?

Bottom of your own source
Quote:


> Update: (2/24/2016) Nvidia reached out to us this evening to confirm that while the GTX 9xx series does support asynchronous compute, it does not currently have the feature enabled in-driver. Given that Oxide has pledged to ship the game with defaults that maximize performance, Nvidia fans should treat the asynchronous compute-disabled benchmarks as representative at this time. We'll revisit performance between Teams Red and Green if Nvidia releases new drivers that substantially change performance between now and launch day.


So, once again, Nvidia's implementation of ASC isn't enabled yet, which means we can't comment on how they are going to perform.


----------



## drufause

Asynchronous compute is part of Vulkan and DirectX 12. Any game using either standard can ship with it turned on, and it's Nvidia's choice not to implement the approved standard in order to maintain performance; but their owners should understand that the games they are playing are not fully utilizing Vulkan and DirectX 12.


----------



## Mahigan

Quote:


> Originally Posted by *Kollock*
> 
> You do realize that there is no government body sanctioning what BETA means? It's strictly a term the developer uses. For PC games these days, it's just a marketing term. If we wanted, we could call Ashes 'done' right now, but there are marketing things that happen on 'launch' day. The main thing that's going to happen on launch day is the addition of the campaign. That won't impact the benchmark mode in any meaningful way, however.
> 
> I guess I'm pretty confused about your logic. If the Beta designation is a completely arbitrary marketing term used by the developer, and the developer is saying that the graphics/engine portion is effectively done, then why does the Beta label invalidate benchmarking results? That is, the game may be in Beta, but the benchmark portion is NOT. The graphics and engine are basically on lockdown at this point; if results change at launch it will be due to driver updates, not game ones.


Hmm,

I have a question and it has to do with something you stated in October of last year.
Quote:


> I would say that Oxide does almost everything in the recommended list and virtually nothing on the don't list. The list is very good advice and pretty vendor independent; It's just good general advice for using D3D12. Many of them are really good to do with D3D11 as well. *Ironically, if you refactor your engine for D3D12, you will typically end up getting much better perf in D3D11*. Our D3D11 performance is very good for many of the same reasons.


Say you're using an engine designed around D3D9 and you refactor it for D3D12... might you end up performing worse than you would under D3D9? I mean, multi-threaded rendering wise, how hard would you say such a conversion from D3D9 to D3D12 would be?

I'm not sure if you're familiar with the Unreal 3 engine but that's the one I'm curious about here.


----------



## PostalTwinkie

Quote:


> Originally Posted by *drufause*
> 
> asynchronous compute is a part of Vulcan and Direct X 12. Any Game using either standard can ship with these turned on and its Nvidias choice not to implement the approved standard to maintain performance but their owners should understand that games they are playing are not fully utilizing Vulcan and Direct X 12.


ASC support isn't a requirement under DX 12. Further, as has been stated several times now, Nvidia's implementation of it is still pending.


----------



## Kollock

Quote:


> Originally Posted by *Mahigan*
> 
> Hmm,
> 
> I have a question and it has to do with something you stated in October of last year.
> Say you're using an engine designed around D3D9 and you refactor it for D3D12... Would you end up perhaps performing worse off than you would under D3D9? I mean, multi-threaded rendering wise, how hard would you say it would be to perform such a conversion from D3D9 to D3D12?
> 
> I'm not sure if you're familiar with the Unreal 3 engine but that's the one I'm curious about here.


I think it would take a fairly massive rewrite for UE3 to exploit D3D12. I don't think it would be worth it for UE3.


----------



## Mahigan

Quote:


> Originally Posted by *Kollock*
> 
> I think it would take a fairly massive rewrite for UE3 to exploit D3D12. I don't think it would be worth it for UE3.


Thank you for taking the time to respond.









When I compare your engine:


To UE3 running DX12:


I see what appears to be a lack of multi-threaded rendering. Unless a GPU bottleneck is present...

http://www.pcworld.com/article/3039552/hardware/tested-how-many-cpu-cores-you-really-need-for-directx-12-gaming.html


----------



## ZealotKi11er

Quote:


> Originally Posted by *PostalTwinkie*
> 
> ASC support isn't a requirement under DX 12. Further, as has been stated several times now, Nvidia's implementation of it is still pending.


Do you really believe that?


----------



## PostalTwinkie

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Do you really believe that?












https://msdn.microsoft.com/en-us/library/windows/desktop/ff476876(v=vs.85).aspx


----------



## Mahigan

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Do you even read the things you post?
> 
> Bottom of your own source
> So, once again, Nvidia's implementation of ASC isn't enabled yet. So we can't comment on how they are going to perform.


I am aware of that, which is why my explanation makes sense. If async compute wasn't enabled in the drivers, then:

1. Why do the Maxwell 900 series take a performance hit when the feature is turned on?
2. Why is the Async compute feature reporting as being available in NVIDIAs drivers as per Kollock?

Possible answers:
1. Maxwell takes a performance hit because NVIDIA enabled an async compute loop in their driver, meaning that async compute + graphics commands are intercepted and thrown into the 3D queue.
2. Async compute is exposed in the driver, as per Kollock, because it would have to be in order for answer 1 to be possible.

Basically, for the time being, NVIDIA have provided a fix which allows them to at least run the async code synchronously.

As for NVIDIA having the asynchronous compute + graphics feature working in their drivers, NVIDIA claims it's not operational yet. Seven months and counting, with AotS and Hitman both releasing soon.
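The reroute theory above can be put into toy numbers. All figures below are invented for illustration, not measured:

```python
# Toy numbers for the reroute theory (all figures invented): if a driver
# reroutes "async" compute into the single 3D queue, the workload
# serializes and the interception itself adds a little overhead, so
# turning async ON yields a small LOSS instead of a gain.

GRAPHICS_MS = 20.0         # graphics work per frame
COMPUTE_MS = 5.0           # compute work per frame
OVERLAP = 0.8              # fraction of compute hidden under graphics when truly concurrent
REROUTE_OVERHEAD_MS = 0.7  # hypothetical cost of intercepting/merging the queues

async_off = GRAPHICS_MS + COMPUTE_MS                       # plain serial frame
true_async = GRAPHICS_MS + COMPUTE_MS * (1 - OVERLAP)      # concurrent hardware
rerouted = GRAPHICS_MS + COMPUTE_MS + REROUTE_OVERHEAD_MS  # serialized + overhead

print(f"async off:            {async_off:.1f} ms -> {1000 / async_off:.1f} fps")
print(f"true concurrency:     {true_async:.1f} ms -> {1000 / true_async:.1f} fps")
print(f"rerouted to 3D queue: {rerouted:.1f} ms -> {1000 / rerouted:.1f} fps")
```

With these made-up numbers, hardware that truly overlaps gains several fps from async, while a driver that merely reroutes the commands ends up slightly slower than with async off, matching the small drop seen on Maxwell.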


----------



## Remij

Quote:


> Originally Posted by *Kollock*
> 
> I think it would take a fairly massive rewrite for UE3 to exploit D3D12. I don't think it would be worth it for UE3.


I think he's asking whether, with little to marginal code changes at best, it would hinder performance in any way, given that the game's rendering pipeline isn't properly coded for multi-threaded operation.


----------



## Mahigan

Quote:


> Originally Posted by *PostalTwinkie*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> https://msdn.microsoft.com/en-us/library/windows/desktop/ff476876(v=vs.85).aspx


Maybe it's just me, but I can't find anything about multi-engine support on that page. FYI, just in case you weren't aware, asynchronous compute and asynchronous compute + graphics are two separate things.

Asynchronous compute just means executing compute workloads without a defined order.

Asynchronous compute + graphics means the same as Asynchronous compute but also executing graphics tasks in parallel.

Asynchronous compute is mandatory for DX12 FL12_0, but async compute + graphics is not.
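The distinction can be illustrated with a small Python sketch of the first notion only: "asynchronous" merely means submission order does not determine completion order. This is purely illustrative; thread pool workers here stand in for GPU queues, and all job names and durations are invented.

```python
# "Asynchronous compute" only means compute jobs have no defined completion
# order relative to each other; "async compute + graphics" additionally runs
# them concurrently with graphics work. This sketch shows the first notion:
# submission order != completion order. (Thread pools stand in for GPU
# queues; purely illustrative.)
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def compute_job(name, duration):
    time.sleep(duration)  # stand-in for a compute shader's run time
    return name

submitted = ["job_a", "job_b", "job_c"]  # submission order
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(compute_job, n, d)
               for n, d in [("job_a", 0.30), ("job_b", 0.10), ("job_c", 0.20)]]
    completed = [f.result() for f in as_completed(futures)]

print("submitted:", submitted)
print("completed:", completed)  # e.g. ['job_b', 'job_c', 'job_a']
```

All three jobs finish either way; only the ordering guarantee differs, which is why async compute by itself says nothing about overlapping graphics work.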


----------



## Dudewitbow

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Do you really believe that?


Async compute isn't mandatory for DX12; it's just really useful because, unlike other features, it's used to improve performance rather than to improve visual quality at the cost of performance.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Dudewitbow*
> 
> async compute isn't mandatory for DX12, its just really useful as unlike other features, its a feature used to improve performance rather than improve visual quality at the cost of performance.


I know that. Was mostly questioning the fact that he is waiting for Nvidia to implement it.


----------



## Olivon

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Do you really believe that?


Async Compute is a marketing term coming from AMD's marketing (thanks, Mahigan, for spreading it everywhere).
The only difference is having parallelized tasks for compute and graphics (AMD) or serial tasks (Nvidia).
AMD pushes AC because they think they have an advantage here and want to play on it.
But doing parallelized compute and graphics tasks is not mandatory to be DX12 compliant.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Mahigan*
> 
> 1. Why do the Maxwell 900 series take a performance hit when the feature is turned on?


Because it doesn't support that feature yet, it is disabled in the drivers.

Quote:


> Originally Posted by *Mahigan*
> 
> 2. Why is the Async compute feature reporting as being available in NVIDIAs drivers as per Kollock?


Except Nvidia has already stated it is disabled at the driver level. You of all people should know that just because something is listed in the driver, but disabled, doesn't mean (a) they can't do it, or (b) that it's all there yet. It is a very common practice to patch in features over a series of updates when it comes to software/drivers, with each patch bringing part of what is needed later to take advantage of whatever that content/feature might be.

Why is it that so many people, in this thread, simply can't take it for what it is? Nvidia has said, several times, they are bringing it via drivers. They have said several times that it is currently disabled, pending launch. Why do you, and others, keep pushing it as some conspiracy that they can't do it, and are "stalling" (per another user)?

EDIT:

Further, on the driver content issue: Pascal has shown up in drivers. It isn't even out yet, but there are references to it in the drivers. I guess we shouldn't expect Pascal either? I mean, it is in the driver, but where is it? They must be stalling.

Do we see how silly that can be?

EDIT 2:

Just a quick reminder; the very source you provided even states they will come back and retest it when Nvidia launches their support. They won't be the only ones doing it either.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Olivon*
> 
> Async Compute is a marketing term coming from AMD marketing (thanks Mahigan for speading it everywhere).
> Only difference is having parallelized tasks for compute and graphics (AMD) or serial tasks (nVidia).
> AMD push on AC because they think they have an advantage here and want to play on it.
> But doing parallelized tasks for compute and graphics is not mandatory to be DX12 compliant.


We've known about async compute for a year now. Async compute helps AMD, so yes, it's a good thing. I don't see why developers can't use this feature. What I don't get is the salt from Nvidia users because they don't have it.


----------



## Olivon

Because AMD has no money to give to studios and has only a 20% market share.
That's why you'll see few games with AC at the start.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Olivon*
> 
> Because AMD has no money to give to studios and is only 20% market share.
> That's why you'll see few games with AC at start.


Assuming Pascal has async, then how do you feel about that? Current AMD cards will beat all pre-Pascal cards, and Nvidia will use Pascal to convince the loyal 80% to upgrade because they need to.
Another thing to consider is the GCN consoles. They would gladly use all the GCN features they can get access to.


----------



## Mahigan

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Because it doesn't support that feature yet, it is disabled in the drivers.
> Except Nvidia has already stated it is disabled at the driver level. You of all people should know that just because it is listed in driver, but disabled, doesn't mean they a) can't do it or b) it is all there yet. It is a very common practice to patch in features over a period of patches when it comes to software/drivers. Each patch bringing part of what is needed later to take advantage of whatever that content/feature might be.
> 
> Why is it that so many people, in this thread, simply can't take it for what it is? Nvidia has said, several times, they are bringing it via drivers. They have said several times that it is currently disabled, pending launch. Why do you, and others, keep pushing it as some conspiracy that they can't do it, and are "stalling" (per another user)?
> 
> EDIT:
> 
> Further on the driver content issue; Pascal has shown up in drivers. It isn't even out yet, but there are remarks to it in drivers. I guess we shouldn't expect Pascal either? I mean, it is in driver, but where is it? They must be stalling.
> 
> Do we see how silly that can be?
> 
> EDIT 2:
> 
> Just a quick reminder; the very source you provided even states they will come back and retest it when Nvidia launches their support. They won't be the only ones doing it either.


You misunderstood my comment,

You wouldn't get a performance hit if the driver wasn't doing something when it detects Asynchronous compute. You'd get no difference at all from the feature being turned on/off.

Kollock stated that all he does is insert "markers" on compute shaders he wants to run in parallel to graphics tasks. If a piece of hardware supports Asynchronous compute + graphics, then it will execute those "marked" compute tasks in parallel. If the hardware doesn't support the feature then it will simply act as if those markers were never there.

Technically, turning Async on/off should have zero effect on the frame rate. So evidently, turning it on has the NVIDIA driver do something which affects the frame rate.

Clearly, since the driver exposes the feature as being available, when Async is turned on, the driver does something. I suggested that the driver simply "loops" the commands to the 3D queue.

So NVIDIA have implemented a temporary fix of sorts which gives you the illusion that it's running Async compute (at a performance hit). That's what I wrote above. The whole Async software solution, however, is not yet implemented... 7 months and counting with two titles due out soon.

That's what I'm saying.
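The "marker" behavior described above can be sketched in a few lines. All names here are invented; this is engine-side pseudologic, not a real API:

```python
# Sketch of the "marker" idea: the engine tags compute work it *wants* to
# run concurrently; on hardware without async compute + graphics support
# the tag is simply ignored and the work lands in the graphics queue as
# usual. Function and queue names are invented for illustration.

def submit(commands, supports_async):
    """Route each (kind, marked) command to a queue based on capability."""
    routed = {"graphics": [], "compute": []}
    for kind, marked in commands:
        if kind == "compute" and marked and supports_async:
            routed["compute"].append(kind)   # may overlap graphics work
        else:
            routed["graphics"].append(kind)  # marker behaves as if absent
    return routed

frame = [("draw", False), ("compute", True), ("draw", False), ("compute", True)]

print(submit(frame, supports_async=True))   # marked work goes to the compute queue
print(submit(frame, supports_async=False))  # everything serializes on graphics
```

In this model, toggling the flag off should cost nothing, which is exactly why a measurable hit with it on suggests the driver is doing *something* with the marked commands.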


----------



## mtcn77

Quote:


> Originally Posted by *Olivon*
> 
> Because AMD has no money to give to studios and is only 20% market share.
> That's why you'll see few games with AC at start.


Just because AMD supposedly doesn't have the money does not mean fewer games will launch with AC at the start, because studios don't just look for handouts: Microsoft has embraced PC gaming once again, so expect the grip to tighten with both arms as time goes by. _Choke tagarous!_ -Spiritbreaker.


----------



## Olivon

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Assuming Pascal has Async then how do you feel about that? Current AMD cards will beat all pre Pascal and Nvidia will use Pascal as a demonstration of the loyal 80% to upgrade because they need too.
> Another thing to consider is GCN consoles. They would gladly use all GCN features they can have access too.


We're looking at these more as DX11 cards than anything else.
I didn't buy a 780 Ti to do DX12; I bought it with DX11 in mind.
I'm on a two-year cycle, and I'm fairly sure that Polaris or Pascal will give me what I need for DX12.

Graphics cards have a limited life, and we PC gamers are not on consoles with a 4-5 year life cycle.
And no, I don't want to wait two years for the drivers to be ready either, because by then I will have moved on to another card.


----------



## mtcn77

Quote:


> Originally Posted by *Olivon*
> 
> We're more looking at DX11 cards than other things.
> I don't took a 780Ti to do DX12 but with DX11 in mind.
> I got a 2 years cycle and I'm kinda sure that Polaris or Pascal will give me what I need for DX12.
> 
> Graphic card have limited life and we, PC gamers, are not on consoles with 4-5 years cycle life.
> And yes, I don't want to wait 2 years the drivers to be ready too, because I will go another card.


You won't have to wait two years, as drivers play a smaller role under DirectX 12.


----------



## gamervivek

Quote:


> Originally Posted by *Olivon*
> 
> Async Compute is a marketing term coming from AMD marketing (thanks Mahigan for speading it everywhere).
> Only difference is having parallelized tasks for compute and graphics (AMD) or serial tasks (nVidia).
> AMD push on AC because they think they have an advantage here and want to play on it.
> But doing parallelized tasks for compute and graphics is not mandatory to be DX12 compliant.


It was mentioned long before by the PS4's lead architect, and it will be heavily used on consoles.
Quote:


> "Our belief is that by the middle of the PlayStation 4 console lifetime, asynchronous compute is a very large and important part of games technology."
> 
> Cerny envisions "a dozen programs running simultaneously on that GPU" -- using it to "perform physics computations, to perform collision calculations, to do ray tracing for audio."
> 
> But that vision created a major challenge: "Once we have this vision of asynchronous compute in the middle of the console lifecycle, the question then becomes, 'How do we create hardware to support it?'"


http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?print=1


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> "The old cheap Hawaii GPU outperforms a much newer and vastly more expensive 980 Ti in "crazy" mode and outperforms a 980 in the lower setting. If that isn't a gap closing nothing is."
> 
> _640K should be enough for anyone_


Umm, this is a joke, right? I just replaced three 290s with one 980 Ti, and the Ti performs better than two of them did in CrossFire...

Quote:


> Originally Posted by *Greenland*
> 
> Yes, I honestly think that the first DX12 is enough motivation for Nvidia to dedicate their engineer's time; and also, the main reason why AMD getting a massive leap going from DX11 to DX12 is the removal of CPU overhead, Async Compute is part of the D3D12, yes but it's merely the cherry on top.
> 
> Remember when Nvidia essentially said only 40% AMD GPUs support Dx12 and Mantle while 100% Nvidia GPUs support 100%? At one point Mantle became *THE* API, and I doubt Nvidia expect this move from AMD.


May I please get a source for this? If it doesn't support ASC, we can start a class action. Free 980 Tis for everyone!









----------



## Kollock

Quote:


> Originally Posted by *Mahigan*
> 
> You misunderstood my comment,
> 
> You wouldn't get a performance hit if the driver wasn't doing something when it detects Asynchronous compute. You'd get no difference at all from the feature being turned on/off.
> 
> Kollock stated that all he does is insert "markers" on compute shaders he wants to run in parallel to graphics tasks. If a piece of hardware supports Asynchronous compute + graphics, then it will execute those "marked" compute tasks in parallel. If the hardware doesn't support the feature then it will simply act as if those markers were never there.
> 
> Technically, turning Async on/off should have zero effect on the frame rate. So evidently, turning it on has the NVIDIA driver do something which affects the frame rate.
> 
> Clearly, since the driver exposes the feature as being available, when Async is turned on, the driver does something. I suggested that the driver simply "loops" the commands to the 3D queue.
> 
> So NVIDIA have implemented a temporary fix of sorts which gives you the illusion that it's running Async compute (at a performance hit). That's what I wrote above. The whole Async software solution, however, is not yet implemented... 7 months and counting with two titles due out soon.
> 
> That's what I'm saying.


I suspect it's not quite that simple. I believe the Windows 10 scheduler can dispatch jobs from virtual queues to actual hardware queues, sort of like multi-tasking on a single CPU, though I don't believe any current GPUs have pre-emption. The tasks have fences and signals on them, and I believe that as part of the D3D12 specification a fence ends up flushing the GPU, which could mean a 100 us stall or so. TL;DR: adding a command to a compute queue (or any queue), or rather the act of synchronizing it, will have a tiny bit of overhead. Thus, if the hardware doesn't have some intrinsic gain from doing it in parallel, you'll likely end up with a tiny loss. Even an architecture that can do them in parallel will likely lose a little bit; it's just that the net gain is more than the loss.

Unfortunately, fences in D3D12 are a bit expensive because they operate at an OS level. I don't believe Vulkan would have this limitation. Fine grained synchronization probably won't be expensive if the hardware supports it.
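Kollock's trade-off can be put into back-of-envelope numbers. Everything here except the ~100 us fence figure he cites is invented for illustration:

```python
# Back-of-envelope version of the fence trade-off (all numbers invented
# except the ~100 us fence cost cited in the post): every cross-queue sync
# costs something, so async pays off only when the overlap it buys exceeds
# the accumulated fence overhead.

FENCE_COST_MS = 0.1  # ~100 us per fence, per the post

def net_gain_ms(overlap_saved_ms, fences_per_frame):
    """Frame time saved by concurrency minus what the fences cost."""
    return overlap_saved_ms - fences_per_frame * FENCE_COST_MS

# Hardware that truly overlaps: saves 3 ms, pays for 8 fences -> net win.
win = net_gain_ms(overlap_saved_ms=3.0, fences_per_frame=8)
# Hardware with no intrinsic overlap: saves 0 ms, still pays -> tiny loss.
loss = net_gain_ms(overlap_saved_ms=0.0, fences_per_frame=8)

print(f"hardware that overlaps: {win:+.1f} ms per frame")
print(f"hardware that can't:    {loss:+.1f} ms per frame")
```

This matches the benchmark behavior in the thread: architectures that gain nothing from concurrency still pay the synchronization cost, so enabling async produces a small net loss rather than a flat zero.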


----------



## Mahigan

Quote:


> Originally Posted by *Kollock*
> 
> I suspect it's not quite that simple. I believe the windows 10 scheduler can dispatch jobs from virtual queues to actual hardware queues sort of like multi-tasking on a single CPU, though *I don't believe any current GPUs have pre-emption*. The tasks have fences and signals on them, and I believe as part of the D3D12 specification a fence ends up flushing the GPU, which could mean a 100 us stall or so. TLDR is that adding a command to a compute queue or any queue , or rather the act of synchronizing it will have a tiny bit of overhead. Thus, if the hardware doesn't have some intrinsic gain from doing it parallel, you'll likely end up with a tiny loss. Even an architecture that can do them in parrelel will likely lose a little bit. It's just that the net gain is more then the loss.
> 
> Unfortunately, fences in D3D12 are a bit expensive because they operate at an OS level. I don't believe Vulkan would have this limitation. Fine grained synchronization probably won't be expensive if the hardware supports it.


From my understanding of what David Kanter and AMD have been saying, GCN (at least Hawaii and newer) has finer-grained preemption, whereas Maxwell has coarse-grained preemption. Apparently GCN 2/3 take a one-cycle performance hit, whereas, under extreme circumstances, a flush on Maxwell can lead to up to 1,000ms (ns?). Would that be a fair characterization?

The preemption you're talking about here is true zero-cost, fine-grained preemption, if I'm following correctly?

So if I understand correctly, the NVIDIA driver, when Async is turned on in AotS, is hampered by the fences (used to synchronize workloads) but once Async is turned off the driver ignores said fences?

Thank you for taking the time to respond, much appreciated


----------



## Cyber Locc

So, anyone got an Ashes bench with the new driver that was just released for Ashes?


----------



## Serios

Quote:


> Originally Posted by *PostalTwinkie*
> 
> You are aware of their workflow, their queue, and how Nvidia is run as a business? Do you honestly, even for a moment, think that this one early look at a single title is enough motivation for Nvidia to dedicate their engineers' time? No, the fact it is getting attention at all in that regard is pretty amazing.


Your attempts to defend Nvidia are funny.
Since August 2015, Nvidia has released more than one update for AotS, and none of them activates Async Compute.
People also keep pointing out that Nvidia can't do Async Compute the way AMD can, yet Nvidia hasn't done anything to clarify the situation. They haven't really discussed their Async Compute capabilities at all.
So people who are skeptical about Nvidia's Async Compute capabilities have good reason to be.


----------



## Mahigan

Quote:


> Originally Posted by *Cyber Locc*
> 
> So anyone got an Ashes bench with the new driver that was just released for ashes?


Yeah, on the Anandtech forums a GTX 980 Ti user lost a few FPS with the new driver.
http://forums.anandtech.com/showpost.php?p=38080728&postcount=1189

And I wouldn't update to the new driver if I were you... many issues have been reported, BSODs among them.
http://www.tweaktown.com/news/50910/geforce-driver-version-364-47-seem-cause-massive-problems/index.html?utm_campaign=trueAnthem:+Trending+Content&utm_content=56ddee6804d3012e570054b4&utm_medium=trueAnthem&utm_source=facebook

And: https://forums.geforce.com/default/topic/921332/geforce-drivers/official-364-47-game-ready-whql-display-driver-feedback-thread-released-3-7-16-/

BSODs seem to be caused by having more than one display connected.


----------



## Charcharo

Quote:


> Originally Posted by *Olivon*
> 
> Because AMD has no money to give to studios and is only 20% market share.
> That's why you'll see few games with AC at start.


And 100% of current gen consoles... so yeah...

Also, I am PCMR to the core and I upgrade every 5-6 years...


----------



## kzone75

Quote:


> Originally Posted by *Cyber Locc*


Quote:


> May I please get a source for this? As, if it doesn't support ASC, we can start a class action. Free 980 Tis for everyone!


Almost 2 years ago.. http://www.legitreviews.com/nvidia-highlights-directx-12-strengths-amd_138178


----------



## Nedooo

Quote:


> Originally Posted by *Mahigan*
> 
> Yeah, on the Anandtech forums a GTX 980 Ti user lost a few FPS with the new driver.
> http://forums.anandtech.com/showpost.php?p=38080728&postcount=1189
> 
> And I wouldn't update to the new driver if I were you... Many issues have been reported BSODs etc.
> http://www.tweaktown.com/news/50910/geforce-driver-version-364-47-seem-cause-massive-problems/index.html?utm_campaign=trueAnthem:+Trending+Content&utm_content=56ddee6804d3012e570054b4&utm_medium=trueAnthem&utm_source=facebook
> 
> And: https://forums.geforce.com/default/topic/921332/geforce-drivers/official-364-47-game-ready-whql-display-driver-feedback-thread-released-3-7-16-/
> 
> BSODs seem to be caused by having more than one display connected.


Wow, and in every debate the green team brags about drivers... and this is not a lone case...


----------



## spyshagg

Quote:


> Originally Posted by *Kriant*
> 
> While I am a "red team" supporter at heart, and while the R9 290X still manages to squeeze out good framerates while the "almighty" 780 Ti lags behind in more recent tests that I've seen across various sources, Nvidia is better at delivering working software support, aka drivers, on day 1, while AMD takes its sweet time to push out relevant drivers for its arguably superior tech. And yes, I think AMD's marketing department needs to be fired or re-trained, because it seriously falls flat on bringing up the hype train when a new product is being released.


The underlying issue is that AMD cannot touch GameWorks games until they are released to the market, and/or until AMD receives permission from Nvidia. There really isn't anything they can do better. It also seems to be the reason they pushed Mantle (to bypass the entire DX11 "Nvidia-optimized" render path of GameWorks games).


----------



## NightAntilli

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well, in the case of your game that is probably true.
> 
> However, that is still back to my point. This entire discussion is revolving around NV drivers, not your game.
> 
> The statement being made is that if NV had asynchronous drivers, they would have released them by now. I am saying this is possibly not true. That is all I am saying.


Quote:


> Originally Posted by *Cyber Locc*
> 
> Right, well, once again you are missing what I am saying. It doesn't matter if NV cards have hardware support for async if it's not supported in their software.
> 
> They have said quite a few times now that they have not yet added software support for asynchronous compute but that the hardware is capable of it.
> 
> No one disputes that async is currently not working on Maxwell. The question is whether that is because of a lack of hardware or of software. NV is saying it's software and that they will add support; they told AnandTech that 2 weeks ago in a quote I linked earlier.
> 
> However, it is being said that the cards do not have the hardware to support async; this has yet to be proven and is an assumption.
> 
> The debate here is why NV cards do not support async; they are saying software. If that isn't true then they are lying, as are the boxes of the cards that say they fully support DX12, so if they are lying then they are open to a class action.
> 
> I do not see how a benchmark from months ago proves anything, because it doesn't in any way; just because NV has not released software/firmware to support async yet does not prove the cards are not capable.


Re-read this post of mine, and you'll understand why drivers won't fix anything.

http://www.overclock.net/t/1592431/anand-ashes-of-the-singularity-revisited-a-beta-look-at-directx-12-asynchronous-shading/620_20#post_24957495


----------



## Fyrwulf

@PostalTwinkie

You ask why everybody thinks nVidia is lying. Maybe because nVidia has a history of lying? You said we should look at the facts, so let's do that thing.

1) nVidia's cards can do out of order compute tasks or graphics tasks, but not both at the same time, because there is only one lane for those commands to go down.
2) AMD's cards _can_ do both at the same time because there is a dedicated lane for compute tasks, a dedicated lane for graphics tasks, and an additional spillover lane for either one depending on work load.
3) If you think of those lanes like a road, then a three lane road will obviously see more cars from Point A to Point B than a single lane road, given the same speed limit, simply because it can fit more cars. Applied to the present situation with regards to graphics cards, that's where AMD's advantage comes from.
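The lane analogy above can be toy-modeled as a simple greedy scheduler. This is of course an idealization (real GPU queues are nothing this simple, and the task list below is arbitrary), but it shows where the raw throughput advantage of extra lanes comes from:

```python
import heapq

def makespan(task_durations, lanes):
    # Greedy assignment: each task goes to the lane that frees up first,
    # like cars picking the emptiest lane on a multi-lane road.
    free_at = [0.0] * lanes
    heapq.heapify(free_at)
    finish = 0.0
    for d in task_durations:
        start = heapq.heappop(free_at)   # earliest-available lane
        end = start + d
        finish = max(finish, end)
        heapq.heappush(free_at, end)
    return finish

# A mix of graphics/compute work items, arbitrary time units.
tasks = [2, 2, 2, 1, 1, 1, 3, 3, 3]
print(makespan(tasks, 1))  # single lane: 18 units
print(makespan(tasks, 3))  # three lanes: 6 units
```

Same total work, same "speed limit" per lane; the three-lane version simply finishes sooner because independent work runs side by side.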

We all acknowledge that nVidia has an excellent software team that has a demonstrated ability to rapidly put out updates to fix problems, and they have asserted that this is a software issue, so why has it taken them so long to solve this issue? Many people, including myself, think this is an issue of the soul being willing but the body being frail. Now, will Pascal resolve this hardware issue? Who knows. Pascal reportedly taped out in June of last year. Unless nVidia made the decision to go back and redo Pascal, which they can certainly afford, I don't think so. There are two significant problems with doing that. One, it's a complete 180 from their present architectures; for them to nail a completely new arch from the get go _and_ do it on a new node is asking a bit much from anybody, unless your name is Jim Keller (aka Silicon Jesus). Two, being first to market is very important for mindshare.

Personally speaking, I find this all very fascinating. For the first time in years we have revolutionary new API(s), new nodes, and new architectures coming down the pipe at the same time. To paraphrase Homeland, the graphics industry is hitting the reset button.


----------



## NightAntilli

Quote:


> Originally Posted by *Olivon*
> 
> Async Compute is a marketing term coming from AMD marketing (thanks, Mahigan, for spreading it everywhere).
> The only difference is having parallelized tasks for compute and graphics (AMD) or serial tasks (nVidia).
> AMD pushes AC because they think they have an advantage here and want to play on it.
> But doing parallelized tasks for compute and graphics is not mandatory to be DX12 compliant.


But it does make hardware more efficient. It's basically getting more for your money. Why would anyone be against using it, or downplaying it, other than being incapable of doing it themselves?


----------



## Fyrwulf

Quote:


> Originally Posted by *NightAntilli*
> 
> But it does make hardware more efficient. It's basically getting more for your money. Why would anyone be against using it, or downplaying it, other than being incapable of doing it themselves?


Because the nVidia fanboys suffer from a bizarre version of Stockholm Syndrome. They don't have a dog in this fight, so if Pascal can't do Async+Graphics they'll wait until Volta comes out and buy that. Doesn't make much sense, does it?


----------



## EightDee8D

IMO, Nvidia will enable ASC when Pascal launches with actual hardware capability, and it will show improvements because of that, but Maxwell/Kepler won't show any performance increase because they don't have the hardware for it. And since Nvidia is the favorite of 80% of the market, everyone will forget about it and jump to Pascal. Just like DX11.2, which Kepler still doesn't support even though it's written on the 670's box. Heck, it doesn't even support DX11.1, which BF4 uses.


----------



## PontiacGTX

Quote:


> Originally Posted by *Olivon*
> 
> We're more looking at DX11 cards than other things.


----------



## Lass3

Quote:


> Originally Posted by *EightDee8D*
> 
> IMO, Nvidia will enable ASC when Pascal launches with actual hardware capability, and it will show improvements because of that, but Maxwell/Kepler won't show any performance increase because they don't have the hardware for it. And since Nvidia is the favorite of 80% of the market, everyone will forget about it and jump to Pascal. Just like DX11.2, which Kepler still doesn't support even though it's written on the 670's box. Heck, it doesn't even support DX11.1, which BF4 uses.


Maybe because it doesn't matter yet. DX11 will be here for years to come no matter what; lots of planned AAA games will be using it throughout 2016 and deep into 2017.

DX12 so far:
Gears of War: bugged release..
Fable Legends: cancelled..
Ashes of the Singularity: nothing but a benchmark..

Rise of the Tomb Raider had hints of DX12 in the menu.. have not heard anything since. Great game, though.

Will Hitman be the first retail DX12 title? I doubt it, probably just a few DX12 features if we're lucky.


----------



## Devnant

Quote:


> Originally Posted by *Cyber Locc*
> 
> So anyone got an Ashes bench with the new driver that was just released for ashes?


I've tested it. There were small gains. Settings on Crazy, 980 Ti SLI @ 3440x1440.

DX12

ASYNC ON

Before: 66.228989 AVG FPS

After: 67.325569 AVG FPS

ASYNC OFF

Before: 67.810295 AVG FPS

After: 68.654854 AVG FPS
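For context, Devnant's averages work out to roughly the following relative changes (quick Python arithmetic on the numbers above):

```python
# Devnant's reported averages (beta build, avg FPS), reduced to % changes.
before_on, after_on = 66.228989, 67.325569    # async ON, old vs new driver
before_off, after_off = 67.810295, 68.654854  # async OFF, old vs new driver

driver_gain_on = 100 * (after_on / before_on - 1)     # ~1.7% from the driver
driver_gain_off = 100 * (after_off / before_off - 1)  # ~1.2% from the driver
async_cost_after = 100 * (1 - after_on / after_off)   # ~1.9% slower with async ON

print(round(driver_gain_on, 1), round(driver_gain_off, 1), round(async_cost_after, 1))
```

So the new driver helps slightly either way, but enabling async still costs this setup a couple of percent rather than gaining anything.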


----------



## caswow

AotS is a game...


----------



## PontiacGTX

Quote:


> Originally Posted by *Lass3*
> 
> Maybe because it doesn't matter yet. DX11 will be here for years to come no matter what; lots of planned AAA games will be using it throughout 2016 and deep into 2017.
> 
> DX12 so far:
> Gears of War: bugged release..
> Fable Legends: cancelled..
> Ashes of the Singularity: nothing but a benchmark..
> 
> Rise of the Tomb Raider had hints of DX12 in the menu.. have not heard anything since. Great game, though.
> 
> Will Hitman be the first retail DX12 title? I doubt it, probably just a few DX12 features if we're lucky.


Since when do benchmarks have multiplayer?


----------



## DjXbeat

Ashes of The Singularity BETA version 0.91.17797
AMD - 16.2.1
NV - 362.00


----------



## Lass3

Quote:


> Originally Posted by *caswow*
> 
> AotS is a game...


Yep, a very boring one. People talk about the benchmark part only, not the gameplay, because it's terrible..


----------



## Charcharo

Quote:


> Originally Posted by *caswow*
> 
> AotS is a game...


Yes, and it is the most PC Master Race genre of all: a strategy game. And so far... one that seems to be quite good!

I don't understand what kind of PC gamer would hate on its genre. Hating it as a game is OK I guess, but at least to an elitist like me it seems quite good so far.


----------



## PontiacGTX

Quote:


> Originally Posted by *Charcharo*
> 
> Yes, and it is the most PC Master Race genre of all: a strategy game. And so far... one that seems to be quite good!
> 
> I don't understand what kind of PC gamer would hate on its genre. Hating it as a game is OK I guess, but at least to an elitist like me it seems quite good so far.


I was looking for a game similar to SupCom 2, and this fits the description perfectly (mainly for MP).


----------



## Charcharo

The Fury X and 980 Ti... are kind of irrelevant in the grand scheme of things. And it is one of the two times Nvidia REALLY has a better card.

Fiji is imbalanced. It is... not well made. Such is life.


----------



## NightAntilli

Fiji's only real weakness is its front end, which limits it mainly under DX11. Other than that, it's pretty much superior to the Nvidia equivalents.


----------



## BradleyW

Quote:


> Originally Posted by *Charcharo*
> 
> The Fury X and 980 Ti... are kind of irrelevant in the grand scheme of things. And it is one of the two times Nvidia REALLY has a better card.
> 
> Fiji is imbalanced. It is... not well made. Such is life.


Not well made for the purposes of DirectX 11, yes. That's because GCN is designed around low-level, multi-threaded APIs, which will be the future standard on PC and are the current standard on consoles.


----------



## Charcharo

Honestly, the first DX12 async benches from AotS look amazingly good for the 390s, 380s and 280s. But Fiji, whilst better, still doesn't look that amazing.

The number of ROPs, the low memory capacity and its speed seem to be limiting; the not-so-great L2 cache seems to play a part as well. That is why Polaris redoes all those elements (with HBM 2.0 too): to address the issues in Fiji and Hawaii and improve on them.


----------



## Kollock

Quote:


> Originally Posted by *Mahigan*
> 
> From my understanding, what David Kanter and AMD have been saying is that GCN (at least Hawaii and newer) has finer-grained preemption while Maxwell has coarse-grained preemption. Apparently GCN2/3 take a 1-cycle performance hit, whereas, under extreme circumstances, a flush on Maxwell can lead to a stall of up to 1,000ms (ns?). Would that be a fair characterization?
> 
> The preemption you're talking about here is true zero-cost, fine-grained preemption, if I'm following correctly?
> 
> So if I understand correctly, the NVIDIA driver, when Async is turned on in AotS, is hampered by the fences (used to synchronize workloads), but once Async is turned off the driver ignores said fences?
> 
> Thank you for taking the time to respond, much appreciated


I'm talking OS-level pre-emption. What you are talking about on GCN is more like hyperthreading. You have the equivalent of multiple threads being executed at the hardware level, but you can't really pre-empt a specific thread. That is, the OS can't stop a task on a specific GPU queue, switch something in, then switch the old job out; it has to let the current GPU job continue on that queue. For multi-GPU, we actually submit our command lists in sections so that the OS has a chance to swap in our present during the middle of a frame so we can flip the backbuffer. I suspect the way D3D12 works is that the GPU creates a CPU interrupt when a queue signals something, then the Windows kernel submits the next task to the GPU for completion if there is a ->Wait sitting on the queue. You can see that this round trip from the GPU to the CPU and back could slow things down a tiny bit, because AFAIK the GPU is required to flush before continuing. You can also see why MS made this setup: because now dozens of applications could actually be giving work to the GPU, and theoretically the OS could schedule them all. Windows is more than a game OS, after all.

Yes, you are correct about what happens with async off. We submit the tasks to the universal queue and then don't bother submitting fences and signals because we don't need to.

GCN has multiple queues, which can execute in parallel, but that's not the same thing as pre-emption. Time slicing on GCN occurs on a hardware scheduler, and GCN can actually synchronize at a very fine grain. The best way to think about it is that the ACEs basically look like extra GPUs. However, D3D12 synchronization primitives are at an OS level; the hardware doesn't actually see them. I think they did it this way so that it's more unified across different hardware types. AFAIK, fences and signals aren't actually directly visible to the driver. TL;DR: GCN can actually do much more fine-grained synchronization than D3D12 allows right now.

All of this can be seen in an ETW trace of Ashes; you can actually see tiny GPU stalls on GCN where the signals happen. I'd guess we are losing 2-3% perf because of it, but that's the price you pay for being on a multi-tasking OS.
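The arithmetic behind a 2-3% estimate like that is easy to reproduce. The stall count and frame time below are hypothetical; only the ~100 us stall figure comes from Kollock's earlier post:

```python
# Back-of-the-envelope: each fence signal costs a CPU round trip plus a
# small GPU flush/stall. How much of a frame does that eat?
# (Frame time and stall count are illustrative assumptions.)

def sync_overhead_pct(frame_ms, stalls_per_frame, stall_us):
    overhead_ms = stalls_per_frame * stall_us / 1000.0
    return 100.0 * overhead_ms / frame_ms

# e.g. a 16.7 ms frame (60 fps) with 4 sync points at ~100 us each:
pct = sync_overhead_pct(16.7, 4, 100)
print(round(pct, 1))  # ~2.4%, in the ballpark of the estimate above
```

A handful of OS-level synchronization points per frame is enough to account for a low-single-digit percentage loss, which is consistent with the ETW-trace observation.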


----------



## GorillaSceptre

Quote:


> Originally Posted by *Kollock*
> 
> I'm talking OS-level pre-emption. What you are talking about on GCN is more like hyperthreading. You have the equivalent of multiple threads being executed at the hardware level, but you can't really pre-empt a specific thread. That is, the OS can't stop a task on a specific GPU queue, switch something in, then switch the old job out; it has to let the current GPU job continue on that queue. For multi-GPU, we actually submit our command lists in sections so that the OS has a chance to swap in our present during the middle of a frame so we can flip the backbuffer. I suspect the way D3D12 works is that the GPU creates a CPU interrupt when a queue signals something, then the Windows kernel submits the next task to the GPU for completion if there is a ->Wait sitting on the queue. You can see that this round trip from the GPU to the CPU and back could slow things down a tiny bit, because AFAIK the GPU is required to flush before continuing. You can also see why MS made this setup: because now dozens of applications could actually be giving work to the GPU, and theoretically the OS could schedule them all. Windows is more than a game OS, after all.
> 
> Yes, you are correct about what happens with async off. We submit the tasks to the universal queue and then don't bother submitting fences and signals because we don't need to.
> 
> GCN has multiple queues, which can execute in parallel, but that's not the same thing as pre-emption. Time slicing on GCN occurs on a hardware scheduler, and GCN can actually synchronize at a very fine grain. The best way to think about it is that the ACEs basically look like extra GPUs. However, D3D12 synchronization primitives are at an OS level; the hardware doesn't actually see them. I think they did it this way so that it's more unified across different hardware types. AFAIK, fences and signals aren't actually directly visible to the driver. TL;DR: GCN can actually do much more fine-grained synchronization than D3D12 allows right now.
> 
> All of this can be seen in an ETW trace of Ashes; you can actually see tiny GPU stalls on GCN where the signals happen. I'd guess we are losing 2-3% perf because of it, but that's the price you pay for being on a multi-tasking OS.


Great info, Kollock.


----------



## 7850K

Quote:


> Originally Posted by *iLeakStuff*
> 
> Almost too funny watching people brag about the 300 series and call people dumb because they own a much more efficient Maxwell card.
> 
> Because you are so smart to support a company that rebrands a ton of cards rather than spend money to innovate on a better architecture.
> Yes, let's pay money for a card that's from 2012 and signal to AMD that it's perfectly OK not to give a damn about innovating. And at the same time sit and white-knight a freaking rebrand and call it amazing.
> 
> I don't know what's more stupid than that. The only card AMD should be applauded for is Fiji, because it's new and comes with HBM. The rest is outdated, old and garbage.


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I would rather have GTX980 Ti instead of 290X lol.


I am with him, to the point where I just replaced my 290Xs with a 980 Ti.

I think a lot of that decision has to do with the games you play.

So for me, perspective-wise, going NV I got CUDA (important to me, as a lot of the software I use supports CUDA and not AMD). Then games. So let's discuss games: who's got that list? I don't have the list on hand, but I know for a fact that one game on it interests me, Fable Legends, and for me the list ends there.

Isn't that what it has always come down to, though? If the games you play better support one or the other, then that is the better card, right? So I don't play FPS games; I buy them sometimes, play them a few hours, then never again. I am a huge RPG fan, always have been, so if FFV comes to PC with DX12 support then that will matter to me; aside from that and Fable, I don't see any possible DX12 RPGs.

Oh yeah, Fable got canned, so that DX12 list for me just became 100% irrelevant.

EDIT: Got the list from the wiki, let's go over it.

Ashes of the Singularity RTS
Descent: Underground FPS
Squad FPS
Survarium FPS
Caffeine FPS
The Elder Scrolls Online MMO (People still play that game, LOL)
Rise of the Tomb Raider Action/FPS
Gears of War Ultimate Edition FPS
Deus Ex: Mankind Divided FPS
Hitman FPS
Star Citizen FPS MMO
Ark: Survival Evolved Action/FPS
Quantum Break FPS
W.N.C: Infantry RTS
Forza Motorsport 6: Apex Racing

So that is 8 FPS games, 2 action/FPS games, 1 FPS MMO, 2 RTS games, 1 racing game, and 1 MMORPG that real MMO players left long, long ago (myself included).

So one game on the list actually interests me enough that I would spend any length of time playing it, and that is Forza. I may grab Tomb Raider and Ark, but I won't be devoting a lot of time to them. As for the RTS titles, eh, those are not the true PC Master Race titles as has been suggested, IMO; those are the true Android Master Race titles. (Sadly, the true PC Master Race genre, as dubbed by the community, which is also showing here, is FPS, no matter how much I despise them or how stupid they are.)

So I guess if NV is "The Way It's Meant to Be Played", then AMD is "Call of Duty Kiddie's Best Friend", hahaha.


----------



## Mahigan

Quote:


> Originally Posted by *Olivon*
> 
> Async Compute is a marketing term coming from AMD marketing (thanks, Mahigan, for spreading it everywhere).
> The only difference is having parallelized tasks for compute and graphics (AMD) or serial tasks (nVidia).
> AMD pushes AC because they think they have an advantage here and want to play on it.
> But doing parallelized tasks for compute and graphics is not mandatory to be DX12 compliant.


Asynchronous compute is a computer science term, not an AMD marketing term. It means executing compute workloads without a defined order. "Asynchronous Shading" is the term AMD's marketing devised. The performance benefits we've been seeing are not due to async compute alone but rather async compute + graphics: executing compute workloads in parallel with graphics workloads, and without a defined order. We could simply use the term "Async Shading", but I don't use the AMD marketing term.
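A CPU-side analogue of "without a defined order", using a thread pool in place of GPU queues (purely illustrative; the task names and sleep durations are arbitrary):

```python
# "Asynchronous" in the computer-science sense described above: work is
# submitted in one order, but completes in whatever order it finishes.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def task(name, duration):
    time.sleep(duration)  # stand-in for a compute workload
    return name

with ThreadPoolExecutor(max_workers=3) as pool:
    # Submission order: slow, fast, medium.
    futures = [pool.submit(task, "slow", 0.2),
               pool.submit(task, "fast", 0.05),
               pool.submit(task, "medium", 0.1)]
    # Completion order is determined by runtime, not by submission.
    order = [f.result() for f in as_completed(futures)]

print(order)  # e.g. ['fast', 'medium', 'slow']
```

"Async compute + graphics" is the further step of letting such unordered compute work run concurrently with the graphics workload, which is where the measured gains come from.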


----------



## sugarhell

Quote:


> Originally Posted by *Mahigan*
> 
> Asynchronous compute is a computer science term, not an AMD marketing term. It means executing compute workloads without a defined order. "Asynchronous Shading" is the term AMD's marketing devised. The performance benefits we've been seeing are not due to async compute alone but rather async compute + graphics: executing compute workloads in parallel with graphics workloads, and without a defined order. We could simply use the term "Async Shading", but I don't use the AMD marketing term.


The official feature name of async shading/compute is multiple channel queue or something like that iirc


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> I am with him, to the point where I just replaced my 290Xs with a 980 Ti.
> 
> I think a lot of that decision has to do with the games you play.
> 
> So for me perspective wise, going NV I got cuda (important to me as alot of the software I use supports cuda and not AMD). Then games, so lets discuss games, who got that list? I dont have the list off hand but I know for a fact of it 1 game interests me, Fable Legends, and for me the list ends there.
> 
> Isnt that what it has always came down to though? If the games you play better support one or the other than that is the better card right? So I dont play FPS games, I buy them sometimes and play them a few hours then never again. I am a huge RPG fan, always have been, so if FFV comes to PC with DX12 support than that will matter to me, aside from that and Fable I dont see any possible DX12 RPGS.
> 
> Oh yeah, Fable got canned, so that DX12 list for me just became 100% irrelevant.
> 
> EDIT: Got the list from the wiki, let's go over it.
> 
> Ashes of the Singularity RTS
> Descent: Underground FPS
> Squad FPS
> Survarium FPS
> Caffeine FPS
> The Elder Scrolls Online MMO (People still play that game, LOL)
> Rise of the Tomb Raider Action/FPS
> Gears of War Ultimate Edition FPS
> Deus Ex: Mankind Divided FPS
> Hitman FPS
> Star Citizen FPS MMO
> Ark: Survival Evolved Action/FPS
> Quantum Break FPS
> W.N.C: Infantry RTS
> Forza Motorsport 6: Apex Racing
> 
> So that is 8 FPS games, 2 Action/FPS games, 1 FPS MMO, 2 RTS, 1 Racing, and 1 MMORPG that real MMO players left long long ago (Myself Included).
> 
> So 1 game on the list actually interests me that I would actually spend any length of time playing and that is forza, I may grab tomb raider and Ark but I wont be devoting a lot of time into them. As far as the RTS titles, Eh those are not the true PC master race titles as has been suggested imo, those are the true Android Master race titles. (sadly the true PC Master race titles dubbed by the community which is also showing here, is FPS, no matter how much I despise them or how stupid they are.)
> 
> So I guess if NV is "The Way its Meant to be Played", Then AMD is "Call of Duty Kiddy's Best Friend" hahaha.


You just bought 980Ti? That's a bad move IMHO. With Polaris and Pascal around the corner, there really is no reason to drop $650 on a 980Ti right now. That performance will be $400 in 3-4 months.

Anyway, I am going Polaris no matter what this time unless AMD gets stupid and prices it wrong. Otherwise I am hanging onto this 980 until next generation.


----------



## raghu78

Quote:


> Originally Posted by *criminal*
> 
> You just bought 980Ti? That's a bad move IMHO. With Polaris and Pascal around the corner, there really is no reason to drop $650 on a 980Ti right now. That performance will be $400 in 3-4 months.
> 
> Anyway, I am going Polaris no matter what this time unless AMD gets stupid and prices it wrong. Otherwise I am hanging onto this 980 until next generation.


I think AMD is going to be forced to price aggressively if they are serious about gaining back lost market share. So you might just find yourself getting some pretty good performance for USD 349 - USD 399.


----------



## Cyro999

Quote:


> Originally Posted by *Cyber Locc*
> 
> So anyone got an Ashes bench with the new driver that was just released for ashes?


I'm getting less performance than a couple days ago, not sure why.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> You just bought 980Ti? That's a bad move IMHO. With Polaris and Pascal around the corner, there really is no reason to drop $650 on a 980Ti right now. That performance will be $400 in 3-4 months.
> 
> Anyway, I am going Polaris no matter what this time unless AMD gets stupid and prices it wrong. Otherwise I am hanging onto this 980 until next generation.


Well, there are a few reasons, though. My main gaming PC died; my new setup is my temporary gaming rig and will become my new office rig. My 290s would drop in price even more when Polaris launches, so they needed to be sold now; my office build's board is mATX, so 3 cards were not going to work; and the new case doesn't hold much rad, so it will barely cool the 980 Ti and 5820K that are in it, much less 2 290s.

Everything I have seen points to about a June date, so hopefully they release it soon enough to be within 90 days. If it is, then I can step up (it's an EVGA card), which I will do, and sell the 1080 later for a Titan P.

If not, really no biggie; I was going to go Titan X but decided to wait for Titan P. Now as to the bad move: well, slightly, sure. However, what if the cards don't come out in 3-4 months? Also, what would I do for 4 months? As to the 400 dollar price range, you, my friend, are dreaming. The 1080 will be equal to or slightly better than a 980 Ti; when new cards come out, the new tier 2 is the old flagship. It doesn't drop 3 tiers, it drops 1. The 1080 will most likely be around 600 until the 1080 Ti launches, then it will drop to 500. GP104 will barely outperform GM200, if at all; the replacement for the 980 Ti that is actually better is still a ways out.

I know people love to not realize the true gains we will see. GP104 will see maybe a 5-10% increase over a 980 Ti at the same price or slightly less. Then the Titan will come, showing around 15-20%. I know the "40% gain" from Pascal, but that won't be seen over a 980 Ti, and definitely not in June. The true 40% gains are from a more refined 16nm, so that will be Volta. The gains are not as insane as people keep thinking they will be; this happens every time, and everyone always says "wait for XXX", XXX comes, and it isn't that great and definitely wasn't worth waiting for.

How do I know this? Well, we have 20 years of history of the same thing happening to show us, lol.

Also, HBM2 isn't coming until the Titan, so what gain do you really get from waiting? Async, maybe; a small bump in performance; a 50-100 dollar price difference if you are lucky; a new unrefined architecture; and a whole lot of wasted time, lol.
Quote:


> Originally Posted by *raghu78*
> 
> I think AMD is going to be forced to price aggressively if they are serious about gaining back lost market share. So you might just find yourself getting some pretty good performance for USD 349 - USD 399.


I agree on the AMD side there will be some good deals, however read my post again. This is an office PC first gaming second, Cuda is a huge boost over AMD in that regard, I know Open CL, that nothing supports, none of the programs I use support Open CL and they all support Cuda.

To the next thing that will come up that open CL is better, it may be but it isn't supported on most programs which renders it 100% worthless. Adobe is the big one, Adobe supports Cuda across the board Open CL nope zelch.

And we find ourselves back in the wait train as well, Wait for the cards, then wait for Blocks (I dont air cool period!, so I would have to wait for blocks) all the while I have no card.

All that said, I got a great deal on my Ti anyway. Brand new, 600 shipped for an SC+; couldn't pass that up.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well, there are a few reasons, though. My main gaming PC died; my new setup is my temporary gaming rig and will become my new office rig. My 290s would drop in price even more when Polaris launched, so they needed to be sold now. My office build's board is mATX, so 3 cards were not going to work, and the new case doesn't hold much rad; it will barely cool a 980 Ti and the 5820K that is in it, much less 2 290s.
> 
> Everything I have seen shows about a June date, so hopefully they release it soon enough to be within 90 days. If it is, then I can step up (it's an EVGA card), which I will do, to sell the 1080 later for a Titan P.
> 
> If not, really no biggie. I was going to go Titan X but decided to wait for the Titan P. Now as to the bad move, well, slightly, sure; however, what if the cards don't come out in 3-4 months? Also, what would I do for 4 months? As to the 400 dollar price range, you my friend are dreaming
> 
> 
> 
> 
> 
> 
> 
> , the 1080 will be equal to or slightly better than a 980 Ti. When new cards come out, the new tier 2 is the old flagship; it doesn't drop 3 tiers, it drops 1. The 1080 will most likely be around 600 until the 1080 Ti is launched, then it will drop to 500. GP104 will barely outperform GM200 if at all; the replacement for the 980 Ti that is actually better is still a ways out.
> 
> I know people love to not realize the true gains we will see. GP104 will see maybe a 5-10% increase over a 980 Ti at the same price or slightly less. Then the Titan will come showing around 15-20%. I know about the "40% gain" from Pascal, but that won't be seen over a 980 Ti and definitely not in June. The true 40% gains will come from a more refined 16nm, so that will be Volta. The gains are not as insane as people keep thinking they will be. This happens every time: everyone says "wait for XXX", XXX comes, and it isn't that great and definitely wasn't worth waiting for.
> 
> How do I know this? Well, we have 20 years of history of the same thing happening to show us that lol.
> 
> Also, HBM2 isn't coming till the Titan, so what gain do you really get from waiting? Async maybe, a small bump in performance, a 50-100 dollar price difference if you are lucky, a new unrefined architecture, and a whole lot of wasted time lol.
> I agree that on the AMD side there will be some good deals; however, read my post again. This is an office PC first, gaming second. CUDA is a huge boost over AMD in that regard. I know about OpenCL, which nothing supports; none of the programs I use support OpenCL, and they all support CUDA.
> 
> To the next thing that will come up, that OpenCL is better: it may be, but it isn't supported in most programs, which renders it 100% worthless. Adobe is the big one. Adobe supports CUDA across the board; OpenCL, nope, zilch.
> 
> And we find ourselves back on the wait train as well: wait for the cards, then wait for blocks (I don't air cool, period! So I would have to wait for blocks), all the while I have no card.
> 
> All that said, I got a great deal on my Ti anyway. Brand new, 600 shipped for an SC+; couldn't pass that up.


You not having a card is reason enough to buy a card. But the way you made it sound was that you recently sold your 290s just to get a 980Ti. Anyway, it is your money. I don't really care.

20 years of history... you remember it a little differently than I do. We used to get last-generation flagship performance for $250 or less. $400 is no stretch by any means.


----------



## SuperZan

Quote:


> Originally Posted by *criminal*
> 
> You not having a card is reason enough to buy a card. But the way you made it sound was that you recently sold your 290s just to get a 980Ti. Anyway, it is your money. I don't really care.
> 
> 20 years of history... you remember it a little different than I do. We use to get last generation flagship performance for $250 or less. $400 is no stretch by any means.


And not for nothing but I think that both makers putting out new product more or less simultaneously will have a stronger effect on prices than the last few staggered releases.


----------



## ZealotKi11er

Quote:


> Originally Posted by *criminal*
> 
> You not having a card is reason enough to buy a card. But the way you made it sound was that you recently sold your 290s just to get a 980Ti. Anyway, it is your money. I don't really care.
> 
> 20 years of history... you remember it a little different than I do. We use to get last generation flagship performance for $250 or less. $400 is no stretch by any means.


The GTX 970 did that. We got last generation's flagship, which was $700, for $330, but looking back there was always the R9 290 @ $400 lurking there.


----------



## criminal

Quote:


> Originally Posted by *SuperZan*
> 
> And not for nothing but I think that both makers putting out new product more or less simultaneously will have a stronger effect on prices than the last few staggered releases.


Yep

Quote:


> Originally Posted by *ZealotKi11er*
> 
> GTX970 did that. We got Last generation flagship which was $700 for $330 but now if you look back there was always R9 290 @ $400 lurking there.


Exactly.


----------



## Themisseble

Quote:


> Originally Posted by *criminal*
> 
> You just bought 980Ti? That's a bad move IMHO. With Polaris and Pascal around the corner, there really is no reason to drop $650 on a 980Ti right now. That performance will be $400 in 3-4 months.
> 
> Anyway, I am going Polaris no matter what this time unless AMD gets stupid and prices it wrong. Otherwise I am hanging onto this 980 until next generation.


R9 Nano for around 450$ is still quite a deal right now.
http://www.newegg.com/Product/Product.aspx?Item=N82E16814131679&nm_mc=AFC-C8Junction&cm_mmc=AFC-C8Junction-PCPartPicker,%20LLC-_-na-_-na-_-na&cm_sp=&AID=10446076&PID=3938566&SID=


----------



## PostalTwinkie

Quote:


> Originally Posted by *Serios*
> 
> Your attempts to defend Nvidia are funny.
> Since August 2015 Nvidia released more than one update for AotS and none which activate Async-Compute.
> Also, people keep insisting on the fact that Nvidia can't do Async Compute the same way AMD can, and Nvidia hasn't done anything to clarify the situation. They haven't really discussed their Async Compute capabilities at all.
> So people that are skeptical about Nvidia's Async-Compute capabilities are right.












I am not defending Nvidia. I am just calling out the stupidity in saying they can't emulate ASC via software in some fashion. We already know they lack the hardware in Maxwell and Kepler; that isn't even an argument. Hell, some of us are even speculating that Pascal might lack the hardware - time will tell.

That doesn't change the point that Nvidia has stated, and a 3rd-party developer confirmed, that they are working on it. We are already seeing parts of their implementation in drivers; it just hasn't been enabled yet. To what end? We have to wait and see, but many people seem incapable of doing that, instead running off on tangents and conspiracies about how the big bad green monster is going to lie to us all and take our money.


----------



## Charcharo

Emulating a feature designed to improve performance... is somewhat counter-intuitive IMHO...


----------



## PostalTwinkie

Quote:


> Originally Posted by *Charcharo*
> 
> Emulating a feature designed to improve performance... is somewhat counter-intuitive IMHO...


Sure, I agree that it isn't going to be as efficient as having the hardware to do it; it never is. That doesn't mean it can't be done, with some gains to be had. I even have my doubts about their ability to pull it off, but that doesn't mean I am going to call them liars and say they can't do it at all.

It is about waiting and seeing what they end up doing. If they fall on their face at that time, grab the torches.


----------



## EightDee8D

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Sure, I agree that it isn't going to be as efficient as having the hardware to do it; it never is. That doesn't mean it can't be done, and some gains had. I even have my doubts about their ability to pull it off, but that doesn't mean I am going to call them a liar and say they can't do it at all.
> 
> It is about waiting and seeing what they end up doing. *If they fall on their face at that time, grab the torches*.


by upgrading to pascal/volta.









/s


----------



## criminal

Quote:


> Originally Posted by *EightDee8D*
> 
> by upgrading to pascal/volta.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> /s


You know that's happening for real. Nvidia's performance could suck from here on to eternity and some people will continue to upgrade to their cards no matter what. Same could be said about some AMD users as well though. No room for brand whores as far as I am concerned.


----------



## EightDee8D

Quote:


> Originally Posted by *criminal*
> 
> You know that's happening for real. Nvidia's performance could suck from here on to eternity and some people will continue to upgrade to their cards no matter what. Same could be said about some AMD users as well though. No room for brand whores as far as I am concerned.


yep.


----------



## magnek

Quote:


> Originally Posted by *criminal*
> 
> You know that's happening for real. Nvidia's performance could suck from here on to eternity and some people will continue to upgrade to their cards no matter what. Same could be said about some AMD users as well though. No room for brand whores as far as I am concerned.


You mean like those who said to wait for Fury X to release before buying a 980 Ti, then went ahead and bought a Fury X anyway after reviews came out? Yeah that was bad.


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> GTX970 did that. We got Last generation flagship which was $700 for $330 but now if you look back there was always R9 290 @ $400 lurking there.


Umm no, you got a 970 for 330 that is a slightly improved 780. The 780 Ti and Titan were the flagships. A 780 Ti beats a 970 by about 3-5%, which is not far off what it beat the 780 by when it was released. The 980 is the new 780 Ti, with some optimizations to close the gap; however, 780 Ti vs 970, the 780 Ti wins.

Again, the same thing will happen with Pascal: the GTX 1080 will be +5%ish over the 980 Ti, and the 1070 will be -5%ish to the 980 Ti.

Where we will see a major gain is the titan and the 1080ti. Which are still a year out (the 1080ti).

I love the idea of what you're saying being true, and a 970 beating or matching a 780 Ti, but it doesn't.


----------



## magnek

See, the thing about the 970 is that its $330 price was only possible because of the segmented memory. I quote AnandTech on this:
Quote:


> Originally Posted by *AnandTech*
> As for why NVIDIA is using such a configuration here, the crux of the matter is money and yields. *Without the ability to partially disable a ROP/MC partition, NVIDIA would either have to spec a card to use a fully enabled partition - essentially reducing yields for that card and driving up costs - or disable the entire partition and lose all of the benefits of the additional ROPs, memory, and the memory controller.* This finer granularity allows NVIDIA to better control how they harvest bad chips and what resulting configurations they bring to market, along with making a single ROP/L2 defect less harmful to overall performance by keeping the rest of a partition online. *Otherwise, to stick with a "balanced" configuration with as many crossbar ports as DRAM modules would result in either a higher spec GTX 970, or a lower spec card with a 192-bit memory bus.*


I know people are tired of the 3.5GB joke, but I think it's important to put that $330 price in context, because without the segmented VRAM, the 970 would likely have either been a $450 card (with an intact memory subsystem), or it would've been a 3GB 192-bit card and gone for less than $300, probably.


----------



## diggiddi

Quote:


> Originally Posted by *criminal*
> 
> Yep
> Exactly.


Love your sig lol








BTW where did building materials come from?


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Umm no, you got a 970 for 330 that is a slightly improved 780. The 780 Ti and Titan were the flagships. A 780 Ti beats a 970 by about 3-5%, which is not far off what it beat the 780 by when it was released. The 980 is the new 780 Ti, with some optimizations to close the gap; however, 780 Ti vs 970, the 780 Ti wins.
> 
> Again, the same thing will happen with Pascal: the GTX 1080 will be +5%ish over the 980 Ti, and the 1070 will be -5%ish to the 980 Ti.
> 
> Where we will see a major gain is the titan and the 1080ti. Which are still a year out (the 1080ti).
> 
> I love the idea of what you're saying being true, and a 970 beating or matching a 780 Ti, but it doesn't.


But you are only going back one generation. Look at the GTX670. $399 and faster than the GTX580. The GTX570 was basically a $349 GTX480. The GTX470 crushed the GTX285 for only $349. Never mind on that one.









Quote:


> Originally Posted by *diggiddi*
> 
> Love your sig lol
> 
> 
> 
> 
> 
> 
> 
> 
> BTW where did building materials come from?


LOL... someone had to create them right?


----------



## superstition222

Quote:


> Originally Posted by *criminal*
> 
> But you are only going back one generation. Look at the GTX670. $399 and faster than the GTX580. The GTX570 was basically a $349 GTX480. The GTX470 crushed the GTX285 for only $349.


Nothing tops the original Titan for worst value ever, right?



(Cut out the middle GPUs to show the worst and best)

And, of course, this is with just DirectX 11, before AMD improved its driver, and before the 390 series made the 290 series even cheaper.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> But you are only going back one generation. Look at the GTX670. $399 and faster than the GTX580. The GTX570 was basically a $349 GTX480. The GTX470 crushed the GTX285 for only $349.


Umm here, http://www.anandtech.com/bench/Product/598?vs=517.

Also, back then flagships were 500; the 580 was priced by NV at 499. It trades blows with the 670 (proving exactly what I said) and was 1 price bracket below. Since that time the price tiers have changed.

So still we find ourselves seeing the pattern, the 1080 will trade blows with the 980ti, the 1080 will not cost 300 dollars.


----------



## criminal

Quote:


> Originally Posted by *superstition222*
> 
> Nothing tops the original Titan for worst value ever, right?


I don't know, that TitanZ was pretty bad!








Quote:


> Originally Posted by *Cyber Locc*
> 
> Umm here, http://www.anandtech.com/bench/Product/598?vs=517.
> 
> Also, back then flagships were 500; the 580 was priced by NV at 499. It trades blows with the 670 (proving exactly what I said) and was 1 price bracket below. Since that time the price tiers have changed.


Yeah, there are reasons for that change though. The key is there being better competition this time around. If Polaris is priced right and performs as expected, Nvidia will have no choice but to price aggressively. That's why I said earlier in the thread AMD can't get stupid with pricing if they expect to gain back market share.

Plus you still keep talking just Nvidia. AMD can price a gpu that is similar in performance to the 980Ti for $400 or less. Nvidia will possibly follow if that happens.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> I don't know, that TitanZ was pretty bad!
> 
> 
> 
> 
> 
> 
> 
> 
> Yeah, there are reasons for that change though. The key is there being better competition this time around. If Polaris is priced right and performs as expected, Nvidia will have no choice but to price aggressively. That's why I said earlier in the thread AMD can't get stupid with pricing if they expect to gain back market share.


Well, if they do, I think that may cause price drops within 6 months, but it will not happen at launch. I feel at that point we will see the price drop back to 500 for the 1080, which we will see anyway when the Ti is released.

Also, the 780 Ti/780 had the same pricing, and they had competition in the form of the 290X that was beating or matching them for 400 dollars. They still priced the 980 high.

Quote:


> Originally Posted by *superstition222*
> 
> Nothing tops the original Titan for worst value ever, right?


As to the OG Titan, no, it was not priced badly. Every Titan since has been (did the Black have DP?). DP was key; that is why the OG Titans still sell like hot cakes, not for gaming but for compute people who can't afford Teslas/Quadros.

We also see a large used market for the OG Titan from those same people, which then leads to people thinking that their Titan X will hold its value. Well, it won't; it doesn't have DP, so it loses the market that made the original an odd case.

EDIT: The Titan Black had DP, so therefore it will hold its used value; the Titan X, the overpriced halo gaming card, will not. Sad for the absurd number of people I have seen on OCN saying theirs will; just look at the last 2 lol.


----------



## Olivon

Quote:


> Originally Posted by *superstition222*
> 
> Nothing tops the original Titan for worst value ever, right?
> 
> 
> 
> (Cut out the middle GPUs to show the worst and best)
> 
> And, of course, this is with just DirectX 11, before AMD improved its driver, and before the 390 series made the 290 series even cheaper.


Titan has nothing to do with price/performance.
It's a present that AMD made to nVidia to improve their brand recognition.
As AMD had nothing to counter with at the time, nVidia could play the prestige card and trade on exclusivity.
Myself, I sincerely hope that AMD will not offer such a present anymore, but I'm not sure what they will do.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well, if they do I think that may cause price drops with in 6 months but it will not happen at launch. I feel at that point, we will see the price drop to back to 500 for 1080, which we will see that anyway when the ti released.
> 
> Also 780ti/780 had the same pricing and they had competition in the form of the 290x that was beating or matching them for 400 dollars. They still priced the 980 high.


The 290x was not $400 when it launched 6 months AFTER the 780. It was $550 and offered Titan performance. The problem with the 290x was that it was hot and loud and then the mining craze hit which caused prices on those cards to skyrocket. The 290X didn't hit $400 until the 970 launched if I remember correctly.

Difference is that this time with Polaris, AMD should be able to launch before/around the same time as Pascal. Pricing will make a huge difference IMHO.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> The 290x was not $400 when it launched 6 months AFTER the 780. It was $550 and offered Titan performance. The problem with the 290x was that it was hot and loud and then the mining craze hit which caused prices on those cards to skyrocket. The 290X didn't hit $400 until the 970 launched if I remember correctly.
> 
> Difference is that this time with Polaris, AMD should be able to launch before/around the same time as Pascal. Pricing will make a huge difference IMHO.


Yep, you're right, my bad. I honestly didn't pay much attention to the 290X until after the mining craze was over.

Anyway, I find it doubtful that prices will go down. The Fury X was released 3 months after the 980 Ti at the same price, where it offers nowhere near the same performance.

The fact is the 1080 will be the card that trades blows with the 980 Ti. You think that card will be 400; I think that is wishful thinking. It would be nice, but we haven't seen an x80 card under 500 since... ever.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Yep, you're right, my bad. I honestly didn't pay much attention to the 290X until after the mining craze was over.
> 
> Anyway, I find it doubtful that prices will go down. The Fury X was released 3 months after the 980 Ti at the same price, where it offers nowhere near the same performance.


I believe if it had not been for the 980 Ti, AMD would have tried to price the Fury X close to the Titan X. Remember, the Fury line was supposed to be AMD's answer to the Titan line. I think $650 was the cheapest they could price the Fury X and still make any profit, considering all the tech the card came with (HBM and watercooling).

Anyway, all I can do is hope. Otherwise, like I said I will be sticking with this 980 until I can get 980Ti performance for $400 or less.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> I believe if it had not been for the 980 Ti, AMD would have tried to price the Fury X close to the Titan X. Remember, the Fury line was supposed to be AMD's answer to the Titan line. I think $650 was the cheapest they could price the Fury X and still make any profit, considering all the tech the card came with (HBM and watercooling).


HBM, watercooling, and bad performance









Don't get me wrong, I would love to see the 1080 for 400; I just sincerely doubt we will see it at less than 500. And given the prices of the 780 and 980 at launch, I think 550-600 is a better guess.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> HBM, watercooling, and bad performance
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Don't get me wrong, I would love to see the 1080 for 400; I just sincerely doubt we will see it at less than 500.


LOL... Yeah AMD bombed on the Fury line.


----------



## ZealotKi11er

Quote:


> Originally Posted by *criminal*
> 
> LOL... Yeah AMD bombed on the Fury line.


I do not think the Fury line is bad; it's that the GCN architecture is maxed out in DX11. If the 290X hits CPU overhead @ 1080p, so will the Fury X. Only true DX12 games will show the difference, but by the time that comes both cards will be slow.


----------



## criminal

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I do not think the Fury line is bad; it's that the GCN architecture is maxed out in DX11. If the 290X hits CPU overhead @ 1080p, so will the Fury X. Only true DX12 games will show the difference, but by the time that comes both cards will be slow.


I guess for me it bombed because I was so disappointed. Now the performance isn't so bad considering the regular Fury catches up with the 980Ti @ 4k. I just don't see the point of the cards now with Polaris so close.


----------



## sugarhell

Quote:


> Originally Posted by *criminal*
> 
> I guess for me it bombed because I was so disappointed. Now the performance isn't so bad considering the regular Fury catches up with the 980Ti @ 4k. I just don't see the point of the cards now with Polaris so close.


Is there any point to any gpu right now that we are so close to the new cards?


----------



## Charcharo

The Nano is OK as it is a niche card. Within its niche it is quite good.

Fury Non X is awesome actually. IMHO makes the 980 irrelevant.

Fury X is... Meh...


----------



## mtcn77

Quote:


> Originally Posted by *criminal*
> 
> I guess for me it bombed because I was so disappointed. Now the performance isn't so bad considering the regular Fury catches up with the 980Ti @ 4k. I just don't see the point of the cards now with Polaris so close.


The performance is rather okay. Make no mistake about it: the ROP units are needed because the bandwidth is HUGE. Should Nvidia have come up with an HBM card, you'd have the same trade-off. It is much better to have extra ALUs that can turn the bandwidth into performance than to have bandwidth and ROPs but not enough ALUs to utilise them. As long as die space is limited, this is key in bringing more performance to the card.


----------



## JunkoXan

Quote:


> Originally Posted by *Charcharo*
> 
> The Nano is OK as it is a niche card. Within its niche it is quite good.
> 
> Fury Non X is awesome actually. IMHO makes the 980 irrelevant.
> 
> Fury X is... Meh...


The Nano is beautiful...







makes me proud to own one


----------



## criminal

Quote:


> Originally Posted by *sugarhell*
> 
> Is there any point to any gpu right now that we are so close to the new cards?


Nope. Already said that earlier in the thread.


----------



## Themisseble

Quote:


> Originally Posted by *JunkoXan*
> 
> The Nano is beautiful...
> 
> 
> 
> 
> 
> 
> 
> makes me proud to own one


Could you do some measurements on your system? Power consumption while playing (you must be GPU bottlenecked), and also, if you could, tell me the average clock it holds?... I know, off topic....


----------



## Assirra

Quote:


> Originally Posted by *criminal*
> 
> Anyway, all I can do is hope. Otherwise, like I said I will be sticking with this 980 until I can get 980Ti performance for $400 or less.


As much as I wish it would go this way, I doubt it. Remember that during the mining craze AMD card prices skyrocketed, so if Polaris is indeed that good, the prices will go up again.
Maybe not from AMD's side, but the retailers won't hesitate to raise the price.


----------



## JunkoXan

Quote:


> Originally Posted by *Themisseble*
> 
> Quote:
> 
> 
> 
> Originally Posted by *JunkoXan*
> 
> The Nano is beautiful...
> 
> 
> 
> 
> 
> 
> 
> makes me proud to own one
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Could you do some measurement of your system? Power consumption while playing and you must be GPU bottlenecked also, if you could tell me AVG clock perf?... I know off topic....

I don't have a Kill-A-Watt meter, unfortunately, and if you look at my other sig build, that's where the Nano is. The Sandy system is in storage until I need it.







I manage around 950MHz clock speeds in all my games; on some occasions it will hit 1000MHz. I just wish I could keep it at one number instead of this fluxing around crap.







Also, I'm playing at 1080p, but I'm getting a 1440p monitor when I can.


----------



## Cyber Locc

Quote:


> Originally Posted by *sugarhell*
> 
> Is there any point to any gpu right now that we are so close to the new cards?


Quote:


> Originally Posted by *criminal*
> 
> Nope. Already said that earlier in the thread.


Thing is I think you guys and I have different definitions of close







, They have slated mobile Pascal for a June launch, but the latest reports show desktop Pascal is likely more around September. That is for the x80/x70 lines; Tis and Titans with HBM are not likely to hit until 2017, and that isn't "close". Also, that all assumes they are not delayed. I also saw a shipping manifest going around earlier showing the assumed X80/X70 with shipping insurance prices of 700 and 1200, so a $300 X70 and $400 X80 are far from likely.

I think we will see the mid-range cards (x80, x70) released around Broadwell-E, and the flagships around Skylake-E. (I would use the mainstream desktop variants, but I don't even know their launch dates anymore; I lost all interest in mainstream platforms







)

So now let's look at that: Pascal around September, with an X80 leading at the time. It will trade blows with a 980 Ti, a tad faster in some places, slower in others. It will feature GDDR5, and maybe async. The only benefits I see there are async and maybe some power efficiency gains. I seriously think you guys are setting yourselves up for some serious disappointment.


----------



## GorillaSceptre

I'm waiting for the Q1 2017 monsters, the Vive/Rift should have some good titles out by then too. I've never bought a "burn your money" enthusiast card before, looking forward to it.







I don't really need anything more than a 390x for 1080p anyway.

All i can say is there better be a 4k ultra-wide out by then..


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Thing is I think you guys and I have different definitions of close
> 
> 
> 
> 
> 
> 
> 
> , They have slated mobile Pascal for a June launch, but the latest reports show desktop Pascal is likely more around September. That is for the x80/x70 lines; Tis and Titans with HBM are not likely to hit until 2017, and that isn't "close". Also, that all assumes they are not delayed. I also saw a shipping manifest going around earlier showing the assumed X80/X70 with shipping insurance prices of 700 and 1200, so a $300 X70 and $400 X80 are far from likely.
> 
> I think we will see the mid-range cards (x80, x70) released around Broadwell-E, and the flagships around Skylake-E. (I would use the mainstream desktop variants, but I don't even know their launch dates anymore; I lost all interest in mainstream platforms
> 
> 
> 
> 
> 
> 
> 
> )
> 
> So now let's look at that: Pascal around September, with an X80 leading at the time. It will trade blows with a 980 Ti, a tad faster in some places, slower in others. It will feature GDDR5, and maybe async. The only benefits I see there are async and maybe some power efficiency gains. I seriously think you guys are setting yourselves up for some serious disappointment.


You still seem to only be referring to Nvidia cards. I feel Polaris is much closer to release. At any rate, if I needed a card right now I would get something $200 or less to hold me over. That's just me.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> You still seem to only be referring to Nvidia cards. I feel Polaris is much closer to release. At any rate, if I needed a card right now I would get something $200 or less to hold me over. That's just me.


I think we will see Polaris before we see Pascal as well.


----------



## Forceman

Quote:


> Originally Posted by *criminal*
> 
> You still seem to only be referring to Nvidia cards. I feel Polaris is much closer to release. At any rate, if I needed a card right now I would get something $200 or less to hold me over. That's just me.


AMD confirmed in that Reddit AMA that Polaris cards weren't coming until mid-2016. So earliest would be May, more likely June/July like the Fury. So sooner maybe, but still not soon.


----------



## criminal

Quote:


> Originally Posted by *Forceman*
> 
> AMD confirmed in that Reddit AMA that Polaris cards weren't coming until mid-2016. So earliest would be May, more likely June/July like the Fury. So sooner maybe, but still not soon.


So 3-4 months? Maybe 5 at the most? I still stand by saying it is too CLOSE to spend a bunch on a GPU right now.


----------



## Devnant

Quote:


> Originally Posted by *criminal*
> 
> So 3-4 months? Maybe 5 at the most? I still stand by saying it is too CLOSE to spend a bunch on a GPU right now.


Why spend money on GPUs at all? I mean, all we get nowadays are "platform equality" games. Just look at the potential lost from the early E3 trailers shown before the new consoles were announced. Look at the heavily downgraded The Division right now. I mean, come on! What's the point? Getting epeen for benchmarks?


----------



## Cyber Locc

Quote:


> Originally Posted by *Devnant*
> 
> Why spend money on GPUs at all? I mean, all we get nowadays are "platform equality" games. Just look at the potential lost by early E3 trailers before the new consoles were announced. Look at the heavily downgraded The Division right now. I mean, come on! What's the point? Get epeen for benchmarks?


I smell a 1080 player...


----------



## magnek

Quote:


> Originally Posted by *criminal*
> 
> So 3-4 months? Maybe 5 at the most? I still stand by saying it is too CLOSE to spend a bunch on a GPU right now.


If you're on 1080p then yeah, I agree. Personally the only GPU I'd even consider at this point is a used 290X for $200 to hold me over until the new GPUs drop.

If you're on 1440p then you may just have to suck it up and pony up the dough for a used 980 Ti.


----------



## Assirra

Quote:


> Originally Posted by *Devnant*
> 
> Why spend money on GPUs at all? I mean, all we get nowadays are "platform equality" games. Just look at the potential the early E3 trailers showed before the new consoles were announced, and how much of it was lost. Look at the heavily downgraded The Division right now. I mean, come on! What's the point? Get epeen for benchmarks?


While a lot of games indeed get downgraded, the PC is usually still the best version.
Resolution and fps will always be higher unless your PC is a toaster, and those alone make the PC version of a game superior.


----------



## Devnant

Quote:


> Originally Posted by *Assirra*
> 
> While a lot of games indeed downgrade, the PC is usually still the best version.
> resolution and fps will always be higher unless your PC is a toaster and those alone will make the PC version of a game superior.


That's it! That's 4K@60 FPS now. Then there will be 8K@120 FPS. Then what? There are more things in gaming than just FPS and resolution. Just saying.


----------



## JunkoXan

Quote:


> Originally Posted by *Devnant*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Assirra*
> 
> While a lot of games indeed downgrade, the PC is usually still the best version.
> resolution and fps will always be higher unless your PC is a toaster and those alone will make the PC version of a game superior.
> 
> 
> 
> That's it! That's 4K@60 FPS now. Then there will be 8K@120 FPS. Then what? There are more things in gaming than just FPS and resolution. Just saying.

to some, that's all there is... FPS and Resolution...


----------



## Cyber Locc

Quote:


> Originally Posted by *Devnant*
> 
> That's it! That's 4K@60 FPS now. Then there will be 8K@120 FPS. Then what? There are more things in gaming than just FPS and resolution. Just saying.


and you just made it clear you have never played at 4k 60fps lol.


----------



## magnek

Quote:


> Originally Posted by *Devnant*
> 
> That's it! That's 4K@60 FPS now. Then there will be 8K@120 FPS. Then what? There are more things in gaming than just FPS and resolution. Just saying.


The potential for game modding alone is enough of a reason to stay on PC.


----------



## Assirra

Quote:


> Originally Posted by *Devnant*
> 
> That's it! That's 4K@60 FPS now. Then there will be 8K@120 FPS. Then what? There are more things in gaming than just FPS and resolution. Just saying.


Quote:


> Originally Posted by *JunkoXan*
> 
> to some, that's all there is... FPS and Resolution...


Seriously, where did I say that's all there is? Both of you are turning me into something I ain't. I played Dark Souls on PC back when it first released, horrible fps included, and Platinum Games sadly still don't know what 1440p is, yet I loved Metal Gear Rising and Transformers.
My point is, even if downgrades happen for platform equality, the PC version will still perform better.
Why not go for that one?


----------



## Charcharo

Quote:


> Originally Posted by *Devnant*
> 
> That's it! That's 4K@60 FPS now. Then there will be 8K@120 FPS. Then what? There are more things in gaming than just FPS and resolution. Just saying.


Yes, resolution and FPS are just a little icing on top of PC gaming.

It has:
Modding.
Real backwards compatibility.
Emulation.
A lot more games either way.
Choice and customization in how you play your games (either through settings or control options).

Graphics and resolution are small fry compared to that. The only reason they get repeated so often is that they are easy to demonstrate! They are still the smallest part of what PC gaming has, though.


----------



## Devnant

Quote:


> Originally Posted by *Cyber Locc*
> 
> and you just made it clear you have never played at 4k 60fps lol.


Oh! I actually did! I have SLI 980 TIs! But I've made the switch to ultrawide, and I'm not going back.
Quote:


> Originally Posted by *magnek*
> 
> The potential for game modding alone is enough of a reason to stay on PC.


True. That's my only hope!


----------



## Olivon

Quote:


> Originally Posted by *Assirra*
> 
> Seriously, where did I say that's all there is? Both of you are turning me into something I ain't. I played Dark Souls on PC back when it first released, horrible fps included, and Platinum Games sadly still don't know what 1440p is, yet I loved Metal Gear Rising and Transformers.
> My point is, even if downgrades happen for platform equality, the PC version will still perform better.
> Why not go for that one?


Once *modded*, Dark Souls on PC offers the best experience possible, bringing something that consoles will never offer.


----------



## Assirra

Quote:


> Originally Posted by *Olivon*
> 
> Once *modded*, Dark Souls on PC offers the best experience possible, bringing something that consoles will never offer.


I know, I did a second run with some mods and it was amazing. Love the aggression mod; those mobs are relentless.


----------



## criminal

Quote:


> Originally Posted by *Devnant*
> 
> Why spend money on GPUs at all? I mean, all we get nowadays are "platform equality" games. Just look at the potential the early E3 trailers showed before the new consoles were announced, and how much of it was lost. Look at the heavily downgraded The Division right now. I mean, come on! What's the point? Get epeen for benchmarks?


I hate consoles and love upgrading my PC.


----------



## ZealotKi11er

A lot of people here upgrade because they can; more of a want than a need. I need to upgrade to the next-generation GPU because of 4K. A single 290X just does not cut it for 4K with modern games unless you go Medium-Low, and that defeats the purpose of 4K to begin with. A lot of people here have not tried 4K. You have to try it to really appreciate how amazing it is. I can't play Witcher 3 @ 1440p after trying 4K.


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> A lot of people here upgrade because they can. More of a want than need. I need to upgrade to the next generation GPU because of 4K. 1 x 290X just does not cut it for 4K with modern games unless you go Medium-Low and then breaks the purpose of 4K to begin with. A lot of people here have not tried 4K. You have to try it to really appreciate how amazing it is. I cant play Witcher 3 @ 1440p after trying 4K.


Well, I will let you know how an overclocked Ti hangs with Witcher 3 at 4K, tomorrow or Friday hopefully.

Got to get all my OC locked in. I've been staring at RealBench for the last 2 days, haha; next is Valley. Valley is going to suck though, as I let RealBench run while I am supposed to be working, and that's fine; I just have an old small second monitor on the side and look over once in a while. With Valley I actually have to watch it and hook it to a good big monitor, ughhh lol.

Should be able to do it without AA and a few settings turned down slightly. And of course a good OC.


----------



## superstition222

Quote:


> Originally Posted by *ZealotKi11er*
> 
> A lot of people here upgrade because they can. More of a want than need. I need to upgrade to the next generation GPU because of 4K. 1 x 290X just does not cut it for 4K with modern games unless you go Medium-Low and then breaks the purpose of 4K to begin with. A lot of people here have not tried 4K. You have to try it to really appreciate how amazing it is. I cant play Witcher 3 @ 1440p after trying 4K.


If you place your panel far enough away, the difference will be indistinguishable. The human eye is limited in its acuity, and distance makes a big difference. It's too bad there aren't many low-input-lag, low-blur panels in large sizes.
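That eyesight limit is easy to put rough numbers on. A sketch, assuming the common ~1 arcminute figure for 20/20 acuity and a flat 16:9 panel (the function names are mine, purely for illustration):

```python
import math

def pixel_pitch_mm(diagonal_in, horiz_px, vert_px):
    # Physical pixel pitch of the panel, in millimetres.
    diag_px = math.hypot(horiz_px, vert_px)
    return diagonal_in * 25.4 / diag_px

def resolvable_distance_m(diagonal_in, horiz_px, vert_px, acuity_arcmin=1.0):
    # Distance beyond which a 20/20 eye (~1 arcminute of angular
    # resolution) can no longer separate adjacent pixels.
    pitch_mm = pixel_pitch_mm(diagonal_in, horiz_px, vert_px)
    theta = math.radians(acuity_arcmin / 60.0)
    return pitch_mm / math.tan(theta) / 1000.0

print(round(resolvable_distance_m(40, 3840, 2160), 2))  # 0.79 -- 40" 4K
print(round(resolvable_distance_m(40, 2560, 1440), 2))  # 1.19 -- 40" 1440p
```

By this estimate a 40-inch 4K panel stops resolving individual pixels past roughly 0.8 m, and past about 1.2 m even 1440p at that size looks just as sharp, which is the point about distance above.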


----------



## PostalTwinkie

Quote:


> Originally Posted by *ZealotKi11er*
> 
> A lot of people here upgrade because they can. More of a want than need. I need to upgrade to the next generation GPU because of 4K. 1 x 290X just does not cut it for 4K with modern games unless you go Medium-Low and then breaks the purpose of 4K to begin with. A lot of people here have not tried 4K. You have to try it to really appreciate how amazing it is. I cant play Witcher 3 @ 1440p after trying 4K.


A lot of agreement here, although I just can't do 4K yet. 144 Hz 1440p is just too good right now, especially for those of us who suffer motion sickness easily; lower frame rates and animation rates tend to make us a little nauseated. So while 4K does look great, the trade-off isn't enough for a few of us unlucky bastards with weak stomachs.

The moment 144 Hz 4K hits, I am all in! The recently released (but years from real-world use) DP 1.4 supports 120 Hz 8K, so 144 Hz 4K should be here in the next couple of years. I am thinking around the time Volta and AMD's offering drop.

Upgrade to Pascal (if it is good), to scratch that upgrade itch. Go full boat with the next architecture and 144/4K.


----------



## superstition222

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Upgrade to Pascal (if it is good), to scratch that upgrade itch. Go full boat with the next architecture and 144/4K.


Best pricing should be when Polaris and Pascal are both on the market.


----------



## ZealotKi11er

Quote:


> Originally Posted by *superstition222*
> 
> If you place your panel far enough away it will be indistinguishable. The human eye is limited in how much acuity it has and distance makes a big difference. It's too bad there aren't many low input lag low blur panels in large sizes.


I use a 40" 4K and sit the same distance as I did with a 27" 1440p. I can use custom resolutions to get 21:9 1440p, 3K, or 1:1 1440p, but when I run 4K it's the TV feeling, just up close. Things are bigger and look sharp.


----------



## criminal

Quote:


> Originally Posted by *PostalTwinkie*
> 
> A lot of agreement here, although I just can't do 4K, yet. 144 Hz 1440P is just too good right now, especially for those of us that suffer motion sickness easily. Lower frame rates and animation rates tend to make us a little nauseated . So while 4K does look great, the trade-off isn't enough for a few of us unlucky bastards with weak stomachs.
> 
> The moment 144 Hz 4K hits, I am all in! The recently released, but years from usage, DP 1.4 supports 120Hz 8K, so 144/4K should be in the next couple of years. I am thinking around the time Volta and AMD's offering drops.
> 
> Upgrade to Pascal (if it is good), to scratch that upgrade itch. Go full boat with the next architecture and 144/4K.


I hope you pass on the gsync monitor this time, if/when you go 144/4k!


----------



## Defoler

Quote:


> Originally Posted by *criminal*
> 
> I hope you pass on the gsync monitor this time, if/when you go 144/4k!


By that time, we don't know whether g-sync will still be around or whether a new standard will have appeared that takes in both g-sync and freesync.


----------



## Devnant

Honestly, I've found 16:9 4K not nearly as impressive as 21:9 3440x1440. I've had to change my S34E790C once, and the switch back to 16:9 was simply terrible, even @ 4K. Just enable 4xMSAA/8xMSAA or DSR and you're pretty much set anyway.

Only way I'm changing my monitor now is when/if they release a 5040x2160 monitor with improved OLED (no burn-in), 144 Hz and freesync/g-sync (or something better).


----------



## Cyber Locc

Quote:


> Originally Posted by *Devnant*
> 
> Honestly, I've found 16:9 4K not nearly as impressive as 21:9 3440x1440. I've had to change my S34E790C once, and the switch back to 16:9 was simply terrible, even @ 4K. Just enable 4xMSAA/8xMSAA or DSR and you're pretty much set anyway.
> 
> Only way I'm changing my monitor now is when/if they release a 5040x2160 monitor with improved OLED (no burn-in), 144 Hz and freesync/g-sync (or something better).


May I ask, have you ever used Surround? And if so, is 21:9 similar? I tried and tried to like Surround/Eyefinity back in the day; I have always had 3-4 monitors because I like having them for real work. However, when I tried Surround/Eyefinity it made me sick to my stomach after like 2 secs.

I tried it more than once, and every time it makes me want to puke very quickly, so I am scared to try 21:9 for that exact reason.

The first time I tried it, it was with 3 old-style square 17-inch displays (they were the only 3 matching displays I owned); that's not too awfully much wider than a 34-inch 21:9, I don't think.


----------



## raghu78

Ashes of the Singularity release confirmed for March 31st.

http://www.pcper.com/news/General-Tech/Ashes-Singularity-Goes-Live-March-31st

Nvidia has 3 weeks to fix the async compute problem, if that's possible in the first place.


----------



## Themisseble

Quote:


> Originally Posted by *raghu78*
> 
> Ashes of the Singularity release confirmed for March 31st.
> 
> http://www.pcper.com/news/General-Tech/Ashes-Singularity-Goes-Live-March-31st
> 
> Nvidia has 3 weeks to fix the async compute problem, if that's possible in the first place.


You really believe that async will run on Maxwell?
Look, they have had like 6-8 months... They will keep saying that a driver will arrive, but in the end they will push Pascal and force people to upgrade.


----------



## BradleyW

Quote:


> Originally Posted by *raghu78*
> 
> Ashes of the Singularity release confirmed for March 31st.
> http://www.pcper.com/news/General-Tech/Ashes-Singularity-Goes-Live-March-31st
> Nvidia has 3 weeks to fix the async compute problem, if that's possible in the first place.


Nvidia hardware has little to no support for actual async. Their architecture is tuned for large sequential operations on a single thread. That's why they pull ahead in DirectX 11. Don't get me wrong, even DX12 requires a main thread for most work, but at least it can also tap into idle processing units for less overhead and quicker operation.
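The scheduling idea here can be shown with a toy timing model (a sketch with made-up numbers, not real GPU or driver code): a frame's graphics passes leave the shader cores idle for part of the time, and an independent compute job can either wait its turn (serial, one queue) or soak up those idle bubbles (async, two queues).

```python
# Toy model of async compute. Each graphics phase is (busy_ms, idle_ms),
# where idle_ms is time the shader cores sit unused (e.g. shadow or
# depth-only passes). compute_ms is an independent compute workload.

def frame_time_serial(graphics_phases, compute_ms):
    # One queue: compute runs only after all graphics work finishes.
    return sum(busy + idle for busy, idle in graphics_phases) + compute_ms

def frame_time_async(graphics_phases, compute_ms):
    # Two queues: compute fills the idle bubbles first; only the
    # overflow extends the frame.
    graphics_total = sum(busy + idle for busy, idle in graphics_phases)
    idle_total = sum(idle for _, idle in graphics_phases)
    return graphics_total + max(0.0, compute_ms - idle_total)

phases = [(4.0, 1.5), (6.0, 2.0), (3.0, 0.5)]  # 17 ms of graphics, 4 ms of it idle
print(frame_time_serial(phases, 3.0))  # 20.0 -- compute tacked onto the end
print(frame_time_async(phases, 3.0))   # 17.0 -- all 3 ms hidden in the bubbles
```

Hardware with cheap concurrent queues (GCN and its ACEs) approaches the second number; hardware that has to drain and reconfigure between graphics and compute stays near the first, or worse, which matches the Maxwell results in this thread.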


----------



## Themisseble

Quote:


> Originally Posted by *BradleyW*
> 
> Nvidia hardware has little to no support for actual Async. Their architecture is tuned for large sequential operations on a single thread. That's why they pull ahead on Direct-X 11. Don't get me wrong, even DX12 requires a main thread for most work, but at least it can also tap into idle processing units for less overhead and quicker operation.


HITMAN already supports async, and it launched today.


----------



## Devnant

Quote:


> Originally Posted by *Cyber Locc*
> 
> May I ask, have you ever used surround? and if so is 21:9 similar. I tried and tried to like Surround/Eyefinity back in the day, I have always had 3-4 monitors due to I like having them for real work. However when I tried Surround/Eyefinity it made me sick to my stomach after like 2 secs.
> 
> I tried it more than once and every time it makes me want to puke very quickly, I am scared to try 21:9 for that exact reason.
> 
> The first time I tried it it was with 3 17inch square old style displays (they were the only 3 matching displays I owned), thats not too awfully much wider than a 34 inch 21:9 I dont think.


Just two screens at work, because I'm not a big fan of bezels. I actually switch one of the screens off.


----------



## sugarhell

Some hitman dx12 results

http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


----------



## infranoia

Quote:


> Originally Posted by *sugarhell*
> 
> Some hitman dx12 results
> 
> http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


Need Fiji in there... So a 390 (non-X) is placing between a 980 and 980Ti at all resolutions.


----------



## PontiacGTX

Quote:


> Originally Posted by *sugarhell*
> 
> Some hitman dx12 results
> 
> http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


Does it have async compute enabled?


----------



## rickcooperjr

Quote:


> Originally Posted by *sugarhell*
> 
> Some hitman dx12 results
> 
> http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


It is impressive: the R9 390 (non-X) is beating a GTX 980 by around 10 fps. Can't imagine what an R9 390X would do; it would be around 980 Ti level or above, and the Fury / Fury X will be even higher. This shows a telling trend that was pointed out a while back: Nvidia's Maxwell and earlier were designed serial, not parallel, while AMD has designed for parallel for the past several generations. Ashes and Hitman both show this to be true, and even the devs working directly with the hardware said the same: DX12 is more productive on AMD hardware because it is much more capable of parallel workloads.

I said all this a long while back, as far back as shortly after the HD 7000 series release: GCN has a very powerful parallel workload design that DX9 / DX10 / DX11 just cannot exploit. DX12 / Vulkan and so on are all designed to highlight parallel workflows, which is what it takes to alleviate CPU workload, because with these newer APIs the GPU can do a lot more work, more efficiently, than CPUs can.

[EDIT] They just added R9 390X results to the benchmark, and yes, it is very close to the 980 Ti. The 390X is showing amazing performance per dollar across the board all the way up to 4K, and at 4K it is literally amazing; the 390 is equally impressive considering the cards are around $350 for the 390 and $400 or so for the 390X. The GTX 980 is $450 and the GTX 980 Ti is $650. I would say the performance per dollar on the AMD cards is very impressive.
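To make the performance-per-dollar point concrete, here is a tiny sketch using the rough street prices quoted above; the fps figures are hypothetical placeholders (not the PCGH results), so only the method is meant seriously:

```python
# Frames-per-dollar sketch. Prices are from the post above; the fps
# numbers are hypothetical, for illustration only.
cards = {
    "R9 390":     {"price": 350, "fps": 60.0},
    "R9 390X":    {"price": 400, "fps": 66.0},
    "GTX 980":    {"price": 450, "fps": 50.0},
    "GTX 980 Ti": {"price": 650, "fps": 68.0},
}

def fps_per_100_dollars(card):
    # Normalize to "fps per $100" so cheap and expensive cards compare fairly.
    return card["fps"] / card["price"] * 100.0

for name in sorted(cards, key=lambda n: -fps_per_100_dollars(cards[n])):
    print(f"{name}: {fps_per_100_dollars(cards[name]):.1f} fps per $100")
```

With numbers in that ballpark, both 300-series cards land well ahead of the Nvidia pair on value even when the 980 Ti wins on raw fps, which is the point being made above.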


----------



## BradleyW

Quote:


> Originally Posted by *Themisseble*
> 
> HITMAN already supports async, and it launched today.


My comment is in no way related to whether or not Hitman launches with DX12. My comment is about Nvidia and their current architecture's abilities.


----------



## MerkageTurk

Because AMD's DX12 support is hardware based, while Nvidia believes it can get away with implementing it via software and then releasing new hardware for consumer bots to purchase.


----------



## Cyber Locc

Quote:


> Originally Posted by *MerkageTurk*
> 
> Because AMD's DX12 support is hardware based, while Nvidia believes it can get away with implementing it via software and then releasing new hardware for consumer bots to purchase.


Umm, better look at that Hitman bench again. I am seeing the 980 Ti beat the 390X, with the Titan X most likely beating the Fury X, with no async on for the NV cards, and Hitman was supposed to be the best implementation of async. Their "software async" doesn't need to do much to thrash AMD cards.

Then realize, as I pointed out earlier in the thread, that we have a very small number of DX12 games coming this year; there won't be many DX12 games until after Volta. So in the next 2 years people will need to replace their Maxwells anyway, and? Having a GPU for 3 years is quite a while.

I do love the AMD can-do spirit: buy it today and don't use it for 3 years.....

I do agree that budget-oriented people should be going with AMD; their cards do age better, and that aside, they are priced lower out of the gate as well. I hope AMD pulls a rabbit out of the hat like they did with the 290X, to resell me on Polaris.


----------



## Devnant

Quote:


> Originally Posted by *Cyber Locc*
> 
> Umm better look at that Hitman bench again, As I am seeing the 980ti beat the 390x, with the Titan X most likely beating the Fury X with no Async on for the NV cards and Hitman was suppose to be the best implementation of Async. There "software Async" doesnt need to do much to thrash AMD cards.
> 
> Then realize as I pointed out earlier in the thread we have a very small amount of DX12 games coming this year, there wont be many DX12 games until after Volta, so in the next 2 years people will need to replace there maxwells and? having a GPU for 3 years is quite awhile.
> 
> I do love the AMD can do spirit buy it today and dont use it for 3 years.....


Actually, the $439 390X is supposed to compete with the $499 980, not the $649.99 980 TI.

Fury X is supposed to compete with the 980 TI. TITAN X has been pretty much irrelevant ever since the 980 TI launched.

It's pretty bad that the 390X is so close to the 980 TI in Hitman.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Devnant*
> 
> Actually the $439 390X is supposed to compete with the $499 980. Not the $649.99 980 TI.
> 
> Fury X is supposed to compete with the 980 TI. TITAN X is pretty much irrelevant ever since the 980 TI launched.
> 
> It's pretty bad that the 390X is so close to the 980 TI at Hitman.


Another question that hasn't been asked...

If we see these gains now, in the early titles, how good does it get once devs gain experience? Surely they aren't masters of DX12 yet, and there is even more to be gained from it!


----------



## Cyber Locc

Quote:


> Originally Posted by *Devnant*
> 
> Actually the $439 390X is supposed to compete with the $499 980. Not the $649.99 980 TI.
> 
> Fury X is supposed to compete with the 980 TI. TITAN X is pretty much irrelevant ever since the 980 TI launched.
> 
> It's pretty bad that the 390X is so close to the 980 TI at Hitman.


Actually, the Fury X was made and launched to compete with the Titan X, not the 980 Ti; the Fury is supposed to compete with the 980 Ti. I agree on the other counts though. However, again, Nvidia has zero async in this bench and the game is made for AMD cards; when we see a DX12 GameWorks title, I think it will tell a different tale.

Also, it isn't really that bad when you realize that AMD designed Mantle, which was then turned into DX12; AMD cards were designed for DX12 before DX12 was even thought of. It is essentially a GimpWorks API. Everyone complains when Nvidia does this with GameWorks; now AMD does it with an entire API and everyone is like, oh, AMD is the best. I like AMD, I just find that logic semi-flawed.

Also, another thing: how does this vary so widely from the Ashes bench? Ashes puts the 390X over the 980 Ti, yet Hitman, which is said to have the best and most use of async that we will see, shows a different story?


----------



## Devnant

Quote:


> Originally Posted by *Cyber Locc*
> 
> Actually the Fury X was made and launched to compete with the Titan X not the 980ti, the Fury is suppose to compete with the 980ti, I agree on the other counts though. However, again Nvidia has zero Async in this bench and the game is made for AMD cards, when we see a DX12 Gameworks title I think it will tell a different tale.
> 
> Also it isn't really that bad when you realize that AMD designed Mantle which was then turned into DX12, AMD cards were designed for DX12 before DX12 was even thought of. It is essentially Gimp works API, everyone complains when Nvidia does this with Game works, now AMD does it with an Entire API and everyone is like oh AMD is the best. I like AMD just kinda find that logic semi flawed.


The difference in performance between a TITAN X and a 980 TI is only about 2-3%. The TITAN X market pretty much got killed after the 980 TI launched, because you're paying $350 more for 3% more performance and six extra gigabytes of VRAM that are only relevant at 4K surround.

Most benchmarks completely disregard the TITAN X for this reason.

Also, when the Fury X launched, most tech sites were comparing it to the 980 TI, as they have the same suggested price.


----------



## Cyber Locc

Quote:


> Originally Posted by *Devnant*
> 
> The difference in performance between a TITAN X and a 980 TI is about only 2-3%. The TITAN X market pretty much got killed after the 980 TI launched, because you're paying $350 more for 3% more performance and six extra VRAM that is only relevant at 4K surround.
> 
> Most benchmarks completely disregard the TITAN X for this reason.
> 
> Also, when the Fury X launched most tech sites were comparing it to the 980 TI, because they have the same price.


Right, well, what tech sites do and what is fact are two different things. AMD said themselves that the Fury X was designed to rival the Titan X. The Fury, only 3% slower than the Fury X, was designed to rival the 980 Ti. AMD always prices lower than its Nvidia rival, so because it loses this time, that changes the rival? The 290X was $550, as you stated earlier, and the 780 that it rivaled was $650; in a lot of cases it beat its rival, and those two were very close. The Fury X and Titan X do not have that same similarity at all, nor do the Fury and the Ti, nor the Fury X for that matter.

Going down the top-end line we have this:

Fury X vs Titan X
Fury vs 980 Ti
390X vs 980
390 vs 970
380X vs 960

380 and 370: N/A, as NV no longer makes cards in these brackets.


----------



## Devnant

Quote:


> Originally Posted by *Cyber Locc*
> 
> Right well what Tech sites do and what is fact are 2 different things, AMD said themselves that Fury X was designed to rival the Titan X. The Fury is only 3% slower than the Fury X it was designed to rival the 980ti. AMD always prices lower than its Nvidia rival, so becomes it loses this time that changes the rival? The 290x was 550 as you stated earlier, the 780 that it rivaled was 650. and in alot of cases it beat its rival, the two were very close the Fury X and Titan X does not have the same similarity at all, nor does the Fury and the TI nor the FX for that matter.


Well, as far as I remember, before the Fury X launched AMD was comparing it to the 980 TI, not the TITAN X. Just a recap:
http://videocardz.com/56711/amd-radeon-r9-fury-x-official-benchmarks-leaked

That's because, as I've said, its true competitor for market purposes was the 980 TI, not the TITAN X. Why? The 980 TI and TITAN X are so close in performance, especially once overclocked, that it simply doesn't matter.

You've mentioned the 780 TI when comparing to the 290X. That's when there was also the TITAN BLACK. See what you just did? You completely disregarded the TITAN BLACK. It's the same thing with the TITAN X. The TITAN X only matters for 4K surround, and you need about 4 of those for that.


----------



## Cyber Locc

Quote:


> Originally Posted by *Devnant*
> 
> Well, as far as I remember before Fury X launched AMD was comparing it to the 980 TI, not the TITAN X. Just a recap:
> http://videocardz.com/56711/amd-radeon-r9-fury-x-official-benchmarks-leaked
> 
> That's because, as I've said, their true competitor for market purposes was the 980 TI, not the TITAN X. Why? The 980 TI and TITAN X are so close to performance, specially after overclocked, that it simply doesn't matter.
> 
> You've mentioned the 780 TI when comparing to the 290X. That's when there was also TITAN BLACK. See what you just did? You completely disregarded the TITAN BLACK. It's the same thing with the TITAN X. TITAN X only matters for 4K surround, and you need about 4 of those for that.


No, the 290X's rival was the 780, not the 780 Ti; the Titan Black didn't have a rival.

As far as them releasing it to rival the 980 Ti: well, I would love to know how they did that when they didn't even know there would be a 980 Ti before releasing the Fury X...... No one thought there would be a 980 Ti. The Fury became the 980 Ti's rival ipso facto; the Fury X was designed to rival the Titan.

AMD compared it to the Ti, eh? Only after it lost? Try again: http://www.digitaltrends.com/computing/early-benchmarks-for-amds-fury-x-gpu-show-nvidia-titan-x-rivaling-performance/

Also, the 295X2 was the rival to the Titan Black, if I recall correctly. Although kind of a bad one, as it was dual GPU.


----------



## MerkageTurk

Dx12 gimped? Are you okay son?

It's open to both vendors unlike game works.


----------



## Cyber Locc

Quote:


> Originally Posted by *MerkageTurk*
> 
> Dx12 gimped? Are you okay son?
> 
> It's open to both vendors unlike game works.


It is open to both vendors; however, NV can't go backwards and add hardware support, on cards already shipped, for the features AMD got into DX12. If you can't see why that statement is true, well, I do not know what to tell you.

AMD designed DX12 with async compute in mind, whereas NV didn't build for that feature; that is the very definition of gimping the competition. Software can fix GimpWorks; it can't fix hardware faults.

It was a smart play by AMD, that is for sure.


----------



## Catscratch

I already had texture issues with DX12 mode on the 280X, and now the new update for Ashes added something weird going on at the bottom right corner. Or is it AMD's 16.3 driver? I dunno. The latest build still has the texture issues.

And my high settings benchmark. Standard:

Weird/good thing is, the latest build's High preset is the same speed as the previous build's Standard, and the CPU framerate is faster. Previous build Standard: 43 / 73. New build High: 43.1 / 78.


----------



## Devnant

Quote:


> Originally Posted by *Cyber Locc*
> 
> No the 290x rival was the 780 not the 780ti, the Titan Black didn't have a rival.
> 
> As far as them releasing it to rival the 980ti, well I would love to know how they did that when they didn't even know there was a 980ti before releasing the Fury X...... No one thought there would be a 980ti. The Fury became the 980tis rival ipso facto, the Fury X was designed to rival the titan.
> 
> Amd compared it to the TI ehhh? when after it lost? try again. http://www.digitaltrends.com/computing/early-benchmarks-for-amds-fury-x-gpu-show-nvidia-titan-x-rivaling-performance/


You didn't understand at all what I'm trying to say, did you? The 780 TI and TITAN BLACK are the same card, but with a vast price difference because the TITAN BLACK has twice the VRAM. It's almost the same thing between the TITAN X and 980 TI, to the point that if the 980 TI loses badly to the Fury X in a particular benchmark, the TITAN X will too. It's not as if, when the 980 TI loses to the Fury X, the TITAN X may win.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> It is open to both vendors however NV cant go backwards and change hardware support for cards that AMD had added to DX12. If you cant see why that statement is true well then I do not know what to tell you.
> 
> AMD desinged DX12 with Async Compute in Mind, where as NV didnt use that feature, that is the very definition of Gimping the competition. Software can fix Gimpworks, it cant fix Hardware faults.
> 
> It was a smart play by AMD that is for sure.


Microsoft is responsible for DX12, not AMD. It's Nvidia's fault for sucking at DX12 and async compute. On the other hand, Nvidia is responsible for the train wreck GameWorks has become, and for features in GameWorks that don't/won't work on AMD hardware.


----------



## Cyber Locc

Quote:


> Originally Posted by *Devnant*
> 
> You didn't understand at all what I'm trying to say, did you? The 780 TI and TITAN BLACK are the same card, but with a vast price difference because the TITAN BLACK has twice the VRAM. It's almost the same thing between the TITAN X and 980 TI, to the point that if the 980 TI loses badly to the Fury X in a particular benchmark, the TITAN X will too. It's not as if, when the 980 TI loses to the Fury X, the TITAN X may win.


Right, I get that: a 980 Ti is a shaved-down Titan X. However, what I am trying to say is that that is what the Fury is to the Fury X. The Fury is the 980 Ti's competitor.
Quote:


> Originally Posted by *criminal*
> 
> Microsoft is responsible for DX12, not AMD. It's Nvidia's fault for sucking at DX12 and async compute. On the other hand, Nvidia is responsible for the train wreck GameWorks has become.


Umm, ya, Microsoft is 100% responsible for taking Mantle and turning it into DX12 after pushes from AMD. DX12 is Mantle; that is AMD's code that DX12 is based on, so try again. Also, of course MS would use Mantle as DX12, seeing how the Xbox One uses DX12 and AMD hardware.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Right I get that, a 980ti is a shaved down Titan X, however what I am trying to say is that is what the Fury is to the Fury X. The Fury is the 980tis competitor.


Don't agree, given the current state of the GPU market. I think AMD's intention was for the Fury X to be a Titan X competitor, but when it couldn't beat it, and with the release of the 980 Ti, AMD had no choice but to price it comparably to the 980 Ti, thus making it a 980 Ti competitor. AMD has nothing that I would compare to the Titan X.
Quote:


> Originally Posted by *Cyber Locc*
> 
> Umm, yeah, Microsoft is 100% responsible for taking Mantle and turning it into DX12 after pushes from AMD. DX12 is Mantle; that is AMD's code that DX12 is based on, so try again. And of course MS would use Mantle for DX12, seeing how the Xbox One uses DX12 and AMD hardware.


AMD may have helped push DX12 along faster yes, but it is still Microsoft's baby and is open to both Nvidia and AMD. DX12 and async compute (not the same thing btw) are nothing like Gameworks.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Don't agree in the current state of the gpu market. I think AMD's intention was for the Fury X to be a Titan X competitor, but when it couldn't beat it and with the release of the 980Ti, AMD had no choice but to price it comparable to the 980Ti thus making it a 980Ti competitor. AMD has nothing that I would compare to the Titan X.


I agree they changed it when they lost; that's what I have been saying, 100%.

As to the DX12 talk a second ago: http://www.extremetech.com/gaming/177407-microsoft-hints-that-directx-12-will-imitate-and-destroy-amds-mantle

By the time MS announced DX12 would be Mantle-based, in 2014, Maxwell was already done, so they could not add hardware support for async to cards that were already released or entering production. With Pascal they may have a chance; AMD, however, had this all planned already, which is why all of GCN offers async.

Also a good shred of proof from that:

"In fact, as some of you may recall, an AMD executive publicly stated a year ago that there was no "DirectX 12" on the Microsoft roadmap. Microsoft responded to those comments by affirming that it remained committed to evolving the DirectX standard - and then said nothing more on the topic. Then AMD launched Mantle, with significant support from multiple developers and a bevy of games launching this year - and apparently someone at Microsoft decided to pay attention."

It's likely that NV was also kept in the dark on this, and by the time MS decided to make Mantle into DX12, it was already too late for Maxwell.


----------



## Devnant

Quote:


> Originally Posted by *criminal*
> 
> Don't agree in the current state of the gpu market. I think AMD's intention was for the Fury X to be a Titan X competitor, but when it couldn't beat it and with the release of the 980Ti, AMD had no choice but to price it comparable to the 980Ti thus making it a 980Ti competitor. AMD has nothing that I would compare to the Titan X.


Bingo! I remember people speculating AMD would price the Fury X at about $800-900 before the 980 TI launched.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> I agree they changed it when they lost; that's what I have been saying, 100%.
> 
> As to the DX12 talk a second ago: http://www.extremetech.com/gaming/177407-microsoft-hints-that-directx-12-will-imitate-and-destroy-amds-mantle
> 
> By the time MS announced DX12 would be Mantle-based, in 2014, Maxwell was already done, so they could not add hardware support for async to cards that were already released or entering production. With Pascal they may have a chance; AMD, however, had this all planned already, which is why all of GCN offers async.


Still Nvidia's fault for cutting features out of their hardware. Not being good at DX12 is all Nvidia's fault. It has NOTHING to do with AMD.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Still Nvidia's fault for cutting features out of their hardware. Not being good at DX12 is all Nvidia's fault. It has NOTHING to do with AMD.


How did they cut features? They never offered parallel compute + graphics to begin with, did they? AMD started that with GCN, with plans and a layout in mind for Mantle.

NV cannot predict the future, LOL. NV was never going to adopt Mantle, so they had no use for it; then MS decided to jump on board, as the Xbox contracts would benefit.


----------



## magnek

Quote:


> Originally Posted by *criminal*
> 
> Still Nvidia's fault for cutting features out of their hardware. Not being good at DX12 is all Nvidia's fault. It has NOTHING to do with AMD.


Planned obsolescence that's all I gotta say.

Not for a single second will I believe this was somehow a "design oversight" on nVidia's part.


----------



## Cyber Locc

Quote:


> Originally Posted by *magnek*
> 
> Planned obsolescence that's all I gotta say.
> 
> Not for a single second will I believe this was somehow a "design oversight" on nVidia's part.


What is the oversight? They never had parallel graphics and compute, so how can they take away something they never had - something that was never used before Mantle?

The logic in this thread has gone out the window, lol.

NV did have, and still has, parallel compute; it's called CUDA. What they did not have is parallel compute + graphics - that is called async, and that's AMD's brainchild.


----------



## sugarhell

Quote:


> Originally Posted by *Cyber Locc*
> 
> What is the oversight? They never had parallel graphics and compute, so how can they take away something they never had - something that was never used before Mantle?
> 
> The logic in this thread has gone out the window, lol.


So wrong. Check fermi architecture.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> How did they cut features? They never offered parallel compute + graphics to begin with, did they? AMD started that with GCN, with plans and a layout in mind for Mantle.
> 
> NV cannot predict the future, LOL. NV was never going to adopt Mantle, so they had no use for it; then MS decided to jump on board, as the Xbox contracts would benefit.


AMD can? Again, DX12 is not Mantle. Mantle helped push Microsoft to release DX12 quicker. AMD had the foresight not to cut certain compute functions out of their hardware. Nvidia did with Maxwell, to reduce power consumption and increase performance in DX11, making it a pure gaming chip.
Quote:


> Originally Posted by *magnek*
> 
> Planned obsolescence that's all I gotta say.
> 
> Not for a single second will I believe this was somehow a "design oversight" on nVidia's part.


Agree.
Quote:


> Originally Posted by *sugarhell*
> 
> So wrong. Check fermi architecture.


Yep.


----------



## EightDee8D

Quote:


> Originally Posted by *Cyber Locc*
> 
> What is the oversight? They never had parallel graphics and compute, so how can they take away something they never had - something that was never used before Mantle?
> 
> The logic in this thread has gone out the window, lol.


iirc fermi had something like that.


----------



## Charcharo

Hardware must be compared at price points. Anything different is... stupid.

So Fury X vs 980 TI.

The Titan X itself is a terrible buy these days (and always has been)... but still, it is a $1,000 GPU.

What happens in DX12 is basically a 300 dollar card obliterating a 500 dollar one and a 400 dollar one almost equaling a 650 dollar one. Since those 2 cards were already great value in DX11 (and IMHO better than their competition) this only adds to the value proposition.


----------



## zGunBLADEz

So we're talking about +20% in AMD's favor in these two benchmarks, Hitman and Ashes...

And we all know how well AMD cards overclock this time around, especially the Fury X /s - great overclocker /s... Missing those 7970/290 days...

The only ones that benefit are the mid-range AMD users.


----------



## Cyber Locc

Quote:


> Originally Posted by *sugarhell*
> 
> So wrong. Check fermi architecture.


Quote:


> Originally Posted by *criminal*
> 
> AMD can? Again, DX12 is not Mantle. Mantle helped push Microsoft to release DX12 quicker. AMD had the foresight not to cut certain compute functions out of their hardware. Nvidia did with Maxwell, to reduce power consumption and increase performance in DX11, making it a pure gaming chip.
> Agree.
> Yep.


Quote:


> Originally Posted by *EightDee8D*
> 
> iirc fermi had something like that.


One more time I will say this: NV had parallel compute only. It was called CUDA. Here you go:

"Instead of programming dedicated graphics units with graphics APIs, the programmer could now write C programs with CUDA extensions and target a general purpose, massively parallel processor. We called this new way of GPU programming "GPU Computing" - it signified broader application support, wider programming language support, and a clear separation from the early "GPGPU" model of programming."
http://www.nvidia.com/content/pdf/fermi_white_papers/nvidia_fermi_compute_architecture_whitepaper.pdf

That is not async or anything similar to async; it does not work the same way and has zero bearing on this convo.

"AMD can? Again, DX12 is not Mantle. Mantle helped push Microsoft to release DX12 quicker. AMD had the foresight not to cut certain compute functions out of their hardware."
AMD started parallel compute + graphics for Mantle, and Mantle then gave birth to DX12. AMD gave MS the Mantle code to create DX12; they used Mantle's code as a base. Before Mantle they weren't even going to make a DX12. I suggest you read that article.


----------



## sugarhell

Quote:


> Originally Posted by *Cyber Locc*
> 
> One more time I will say this: NV had parallel compute only. It was called CUDA. Here you go:
> 
> "Instead of programming dedicated graphics units with graphics APIs, the programmer could now write C programs with CUDA extensions and target a general purpose, massively parallel processor. We called this new way of GPU programming "GPU Computing" - it signified broader application support, wider programming language support, and a clear separation from the early "GPGPU" model of programming."
> http://www.nvidia.com/content/pdf/fermi_white_papers/nvidia_fermi_compute_architecture_whitepaper.pdf
> 
> That is not async or anything similar to async; it does not work the same way and has zero bearing on this convo.
> 
> "AMD can? Again, DX12 is not Mantle. Mantle helped push Microsoft to release DX12 quicker. AMD had the foresight not to cut certain compute functions out of their hardware."
> AMD started parallel compute + graphics for Mantle, and Mantle then gave birth to DX12. AMD gave MS the Mantle code to create DX12; they used Mantle's code as a base. Before Mantle they weren't even going to make a DX12. I suggest you read that article.


Please. Having a better command processor for general compute work was Fermi's specialty. It even had cache between shaders so they could communicate, like GCN.

CUDA is programmable shaders. Nothing else.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> ! time more I will say this. NV had Parelal Compute only... It was Called Cuda, here you go
> 
> " Instead of programming dedicated
> graphics units with graphics APIs, the programmer could now write C programs with CUDA
> extensions and target a general purpose, massively parallel processor. We called this new way
> of GPU programming "GPU Computing"-it signified broader application support, wider
> programming language support, and a clear separation from the early "GPGPU" model of
> programming. "
> http://www.nvidia.com/content/pdf/fermi_white_papers/nvidia_fermi_compute_architecture_whitepaper.pdf
> 
> That is not Async or anything similar to Async, it does not work the same way and has zero bearing to this convo.
> 
> "AMD can? Again, DX12 is not Mantle. Mantle helped push Microsoft to release DX12 quicker. AMD had the foresight to not cut out certain compute functions out of their hardware."
> AMD started the Parallel Compute+Graphics for Mantle, Mantle then gave birth to DX12. AMD gave MS the code for Mantle to create DX12 they used Mantles code as a base, before Mantle they were not even going to make a DX12 I suggest you read that article.


Whatever you say.

If you truly believe Microsoft created an API to cater to only one GPU vendor (one that has only 20% of the market, btw), you are delusional. DX12 is not gimped on Nvidia hardware by anyone but Nvidia themselves.


----------



## EightDee8D

Quote:


> Originally Posted by *criminal*
> 
> Whatever you say.
> 
> If you truly believe Microsoft created an API to cater to only one GPU vendor (one that has only 20% of the market, btw), you are delusional. DX12 is not gimped on Nvidia hardware by anyone but Nvidia themselves.


Or they lied and don't really support everything under DX12 - which, again, is their fault.


----------



## raghu78

Quote:


> Originally Posted by *Cyber Locc*
> 
> I agree they changed it when they lost; that's what I have been saying, 100%.
> 
> As to the DX12 talk a second ago: http://www.extremetech.com/gaming/177407-microsoft-hints-that-directx-12-will-imitate-and-destroy-amds-mantle
> 
> By the time MS announced DX12 would be Mantle-based, in 2014, Maxwell was already done, so they could not add hardware support for async to cards that were already released or entering production. With Pascal they may have a chance; AMD, however, had this all planned already, which is why all of GCN offers async.
> 
> Also a good shred of proof from that:
> 
> "In fact, as some of you may recall, an AMD executive publicly stated a year ago that there was no "DirectX 12" on the Microsoft roadmap. Microsoft responded to those comments by affirming that it remained committed to evolving the DirectX standard - and then said nothing more on the topic. Then AMD launched Mantle, with significant support from multiple developers and a bevy of games launching this year - and apparently someone at Microsoft decided to pay attention."
> 
> It's likely that NV was also kept in the dark on this, and by the time MS decided to make Mantle into DX12, it was already too late for Maxwell.


AMD designed GCN from the ground up to be a highly parallel graphics + compute architecture. We even saw the eight ACE units being introduced with the PS4 and the R9 290 / R9 290X; the original HD 7970 had only 2 ACE units. AMD designed GCN for the consoles, so it had to design an architecture with longevity, with features like async compute that maximize resource utilization.

By the way, the proposal for Mantle came from Johan Andersson of DICE. Mantle was Johan's brainchild and he was its chief architect. What Mantle did was start the industry discussion on low-level graphics APIs with low CPU overhead. Mantle was more of a proof of concept which eventually triggered others like Apple, Microsoft and Khronos to come out with their own low-level APIs.

DX12 is conceptually similar to Mantle but has even more advanced features. DX12 is proprietary and the intellectual property of Microsoft; Nvidia, AMD and Intel worked with Microsoft to finalize the DX12 spec. Vulkan was built from the Mantle code base, though changes were made to accommodate the needs of an entire industry and not just one company.
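For what it's worth, the multi-ACE setup described above can be sketched with a toy CPU-side analogy (entirely hypothetical code, nothing like real GPU internals): several independent command queues feed a shared pool of execution units, so work from any queue runs as soon as a unit is free, instead of one queue serializing everything.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Toy analogy for ACE-style scheduling: n_aces independent command
// queues ("ACEs") feed n_workers threads ("compute units"). A stalled
// or empty queue never idles the workers as long as any queue has work.
int drain_all(int n_aces, int n_workers, int n_items) {
    std::vector<std::queue<int>> aces(n_aces);
    std::mutex m;
    std::condition_variable cv;
    std::atomic<int> executed{0};
    bool closed = false;

    auto worker = [&] {
        for (;;) {
            int cmd = -1;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] {
                    if (closed) return true;
                    for (auto& q : aces) if (!q.empty()) return true;
                    return false;
                });
                // Grab a command from whichever queue has one.
                for (auto& q : aces)
                    if (!q.empty()) { cmd = q.front(); q.pop(); break; }
                if (cmd == -1) return;  // closed and fully drained
            }
            executed.fetch_add(cmd);    // "execute" the command
        }
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < n_workers; ++i) pool.emplace_back(worker);

    // Submit commands round-robin across the independent queues.
    for (int w = 1; w <= n_items; ++w) {
        { std::lock_guard<std::mutex> lk(m); aces[w % n_aces].push(w); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); closed = true; }
    cv.notify_all();
    for (auto& t : pool) t.join();
    return executed.load();
}
```

This only illustrates why multiple independent queues help keep execution units busy; real ACEs dispatch wavefronts to compute units in hardware, with no mutex or OS threads involved.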


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Whatever you say.
> 
> If you truly believe Microsoft created an API to cater to only one GPU vendor (one that has only 20% of the market, btw), you are delusional. DX12 is not gimped on Nvidia hardware by anyone but Nvidia themselves.


Are you seriously that delusional? The Xbox One runs on DX12; that gives MS's console a boost. They couldn't care less about Windows GPU support; they want to sell more consoles. Far more people own and game on consoles than on PC graphics cards, lol.

Quote:


> Originally Posted by *sugarhell*
> 
> Please. Having a better command processor for general compute work was fermi speciality. They even had cache between shaders so they can communicate like gcn.
> 
> Cuda is programmable shaders. Nothing else..


I agree with that, but the parallel compute that Fermi had is not the same thing as async, not even close.


----------



## Charcharo

Quote:


> Originally Posted by *criminal*
> 
> Whatever you say.
> 
> If you truly believe Microsoft created an API to cater to only one GPU vendor (one that has only 20% of the market, btw), you are delusional. DX12 is not gimped on Nvidia hardware by anyone but Nvidia themselves.


To be fair, the console market counts for more than Nvidia's market share.

Though apart from theoretically better and higher quality ports (both ways)... IDK.

I guess AMD just did the higher tech solution this time.

Now if instead of graphics someone came with incredible AI and melted CPUs... that would be so much better...
But gamers these days...


----------



## criminal

Quote:


> Originally Posted by *EightDee8D*
> 
> Or they lied and don't really support everything under DX12 - which, again, is their fault.


Yep
Quote:


> Originally Posted by *raghu78*
> 
> AMD designed GCN from the ground up to be a highly parallel graphics + compute architecture. We even saw the eight ACE units being introduced with the PS4 and the R9 290 / R9 290X; the original HD 7970 had only 2 ACE units. AMD designed GCN for the consoles, so it had to design an architecture with longevity, with features like async compute that maximize resource utilization. By the way, the proposal for Mantle came from Johan Andersson of DICE. Mantle was Johan's brainchild and he was its chief architect. What Mantle did was start the industry discussion on low-level graphics APIs with low CPU overhead. Mantle was more of a proof of concept which eventually triggered others like Apple, Microsoft and Khronos to come out with their own low-level APIs. DX12 is conceptually similar to Mantle but has even more advanced features. DX12 is proprietary and the intellectual property of Microsoft; Nvidia, AMD and Intel worked with Microsoft to finalize the DX12 spec. Vulkan was built from the Mantle code base, though changes were made to accommodate the needs of an entire industry and not just one company.


Bingo!


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Yep
> Bingo!


Right, so you agree with that, and in the same breath don't realize that DX12 was a scheme to get the Xbox to beat the PS4. They couldn't give a damn about your NV or AMD GPUs; they want to sell Xboxes. That is what DX12 was made for.


----------



## sugarhell

Quote:


> Originally Posted by *Cyber Locc*
> 
> Are you seriously that delusional? The Xbox One runs on DX12; that gives MS's console a boost. They couldn't care less about Windows GPU support; they want to sell more consoles. Far more people own and game on consoles than on PC graphics cards, lol.
> I agree with that, but the parallel compute that Fermi had is not the same thing as async, not even close.


It's not exactly the same, but they had a parallel architecture back then. They scrapped it and moved to Kepler.

http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/3

At the end of the page, Fermi's threading model will remind you of something very similar.


----------



## Mahigan

Fermi had full-blown hardware scheduling and hardware redundancy (dedicated caches for the rendering and compute pipelines). These were removed by NVIDIA, going backwards technologically, in order to gain the perception of a more refined architecture (performance per watt). NVIDIA also removed the dedicated fp64 CUDA cores from Maxwell (and had cut them down significantly in Kepler).

Basically, NVIDIA created a one-trick pony for DX11. This is planned obsolescence. With Pascal, NVIDIA is re-adding the fp64 units and tailoring the architecture for compute performance. So yes, Maxwell and Kepler will get the appearance of being "gimped" once Pascal releases. NVIDIA users will have to upgrade or face lacklustre support in upcoming titles.

GCN, on the other hand, will adapt to these changing market factors. GCN is a highly flexible architecture (thanks to hardware redundancy), so that extra power usage was and is there for a reason. It has nothing to do with "superior NVIDIA engineering" and everything to do with NVIDIA cutting down their architecture in order to deliver one-trick ponies one after the other.

But don't worry, NVIDIA will leverage GameWorks in order to save face.

As per usual, I will be accused of being wrong and biased. Several months will go by and what I am saying will start to make sense. I'm used to having my name run through the mud by now.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Right, so you agree with that, and in the same breath don't realize that DX12 was a scheme to get the Xbox to beat the PS4. They couldn't give a damn about your NV or AMD GPUs; they want to sell Xboxes. That is what DX12 was made for.


XBONE and PS4 almost have the same basic hardware. And they are both certainly based on GCN.

So you believe Microsoft is in bed with AMD to hurt Nvidia? I don't buy that.


----------



## Cyber Locc

Quote:


> Originally Posted by *sugarhell*
> 
> Its not exactly the same but they had a parallel architecture back then. But they scrapped it and moved to kepler.
> 
> http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/3
> 
> At the end of the page the threading of fermi will remind you something very similar


They did have a parallel architecture, for compute only! They still do for Quadro and Tesla cards, AFAIK. That served no purpose in gaming until Mantle, however, so they were right to cut it. Even if Maxwell had it, it would still serve no purpose in DX12 or Mantle; async is different.


----------



## Catscratch

So MS carefully kept DX12 secret from Nvidia so Nvidia couldn't react? "Maxwell was out by the time MS announced DX12" - if the usual things happened, AMD and Nvidia knew like 6 or 12 months in advance what DX12 was going to be. There's no way MS could develop DX12 without the vendors knowing the details.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> XBONE and PS4 almost have the same basic hardware. And they are both certainly based on GCN.
> 
> So you believe Microsoft is in bed with AMD to hurt Nvidia? I don't buy that.


You mean Xbox and PS4? The PS3 doesn't have GCN, lol.

The PS4 uses neither DX12 nor async, so yes, they did it to beat the PS4; it gives the Xbox an advantage.

They couldn't care less about NV; that does zero for them. Beating PlayStation, that does.


----------



## sugarhell

Quote:


> Originally Posted by *Catscratch*
> 
> So MS carefully kept DX12 secret from Nvidia so Nvidia couldn't react? "Maxwell was out by the time MS announced DX12" - if the usual things happened, AMD and Nvidia knew like 6 or 12 months in advance what DX12 was going to be. There's no way MS could develop DX12 without the vendors knowing the details.


But the development of DirectX 12 started a year before Mantle, someone from Nvidia said.


----------



## EightDee8D

Quote:


> Originally Posted by *Mahigan*
> 
> Fermi had full-blown hardware scheduling and hardware redundancy (dedicated caches for the rendering and compute pipelines). These were removed by NVIDIA, going backwards technologically, in order to gain the perception of a more refined architecture (performance per watt). NVIDIA also removed the dedicated fp64 CUDA cores from Maxwell (and had cut them down significantly in Kepler).
> 
> Basically, NVIDIA created a one-trick pony for DX11. This is planned obsolescence. With Pascal, NVIDIA is re-adding the fp64 units and tailoring the architecture for compute performance. So yes, Maxwell and Kepler will get the appearance of being "gimped" once Pascal releases. NVIDIA users will have to upgrade or face lacklustre support in upcoming titles.
> 
> GCN, on the other hand, will adapt to these changing market factors. GCN is a highly flexible architecture (thanks to hardware redundancy), so that extra power usage was and is there for a reason. It has nothing to do with "superior NVIDIA engineering" and everything to do with NVIDIA cutting down their architecture in order to deliver one-trick ponies one after the other.
> 
> But don't worry, NVIDIA will leverage GameWorks in order to save face.


That parallelism is why Fermi has DX12 support in some way, I guess (4xx/5xx).


----------



## Catscratch

Quote:


> Originally Posted by *sugarhell*
> 
> But the development of directx12 started 1 year before mantle,someone from nvidia said.


Very weird. Either Nvidia underestimated async compute or, as some say, completely ignored DX12. Do they have something up their sleeve? Because this is certainly going to cost them dearly.


----------



## Cyber Locc

Quote:


> Originally Posted by *Catscratch*
> 
> So MS carefully kept DX12 secret from Nvidia so Nvidia couldn't react? "Maxwell was out by the time MS announced DX12" - if the usual things happened, AMD and Nvidia knew like 6 or 12 months in advance what DX12 was going to be. There's no way MS could develop DX12 without the vendors knowing the details.


Nvidia didn't even think there was going to be a DX12 until after Maxwell was designed, taped out, and in production.
Quote:


> Originally Posted by *sugarhell*
> 
> But the development of directx12 started 1 year before mantle,someone from nvidia said.


Well, they're wrong then. MS told AMD, per that article I linked, that there would be no DX12; until late 2013 there were no plans for a DX12 at all.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> You mean Xbox and PS4? The PS3 doesn't have GCN, lol.
> 
> The PS4 uses neither DX12 nor async, so yes, they did it to beat the PS4; it gives the Xbox an advantage.
> 
> They couldn't care less about NV; that does zero for them. Beating PlayStation, that does.


Yeah, corrected my post...lol

I will admit that I don't follow consoles very closely. I just don't think Microsoft would be so in bed with AMD that it would cause a ripple effect on Nvidia hardware.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Cyber Locc*
> 
> You mean Xbox and PS4? ps3 doesnt have GCN lol.
> 
> PS4 does not use DX12 nor Async so yes they did it to beat PS4, it gives Xbox an advantage......
> 
> They could care less about NV, that does zero for them, beating Playstation that does.


That's incorrect - the PS4 does use async, and an impressive amount of it to boot. It has more ACE units than the X1 too, so it's actually the better candidate for it.


----------



## sugarhell

Quote:


> Originally Posted by *Catscratch*
> 
> Very weird. Either Nvidia underestimated async compute or, as some say, completely ignored DX12. Do they have something up their sleeve? Because this is certainly going to cost them dearly.


Quote:


> Nvidia's Senior VP of Content and Technology. Tamasi painted a rather different picture than Corpus. He told me D3D12 had been in the works for "more than three years" (longer than Mantle) and that "everyone" had been involved in its development.


So they knew about all of this for almost 4 years and still released Maxwell like this.

Or they actually lied.

I don't know which is worse.

http://techreport.com/review/26239/a-closer-look-at-directx-12


----------



## Cyber Locc

Quote:


> Originally Posted by *GorillaSceptre*
> 
> That's incorrect, the PS4 does use Async, an impressive amount of it to boot. It has more ACE units compared to the X1 too, so it's actually a better candidate for it.


It uses it software-wise? I don't follow consoles much, but I wasn't aware they were using async in their firmware; they don't use DX, they have their own API.

If that's true, then it makes my point even more: MS needed async and DX12 to catch up to Sony, never mind beat them.


----------



## Catscratch

Quote:


> Originally Posted by *Cyber Locc*
> 
> Nvidia didn't even think there was going to be a DX12 until after Maxwell was designed, taped out, and in production.


Lol, I found there were even people saying Maxwell was capable of async compute 6 months ago - that it's been a feature since Fermi.

But in what context? The definition of async compute varies from person to person.

https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/


----------



## criminal

Quote:


> Originally Posted by *sugarhell*
> 
> So they knew about all this for almost 4 years now and they released maxwell like this.
> 
> Or they actually lied.
> 
> I dont know what is worse.
> 
> http://techreport.com/review/26239/a-closer-look-at-directx-12


Didn't Nvidia say that their cards were 100% DX12 compliant?


----------



## Mahigan

Quote:


> Originally Posted by *EightDee8D*
> 
> that parallelism is why fermi has dx12 support in some way i guess.(4xx/5xx)


Had NVIDIA kept its hardware side scheduling, dedicated caches and coupled them with Kepler and Maxwell's architectural advancements, we would have a big and hot running GPU but NVIDIA would have had a far more flexible architecture going forward.

Heck, they might even have been able to add ACEs in parallel to their command processor. That architecture would have dominated GCN (in the immediate and going forward). Something like an 8800 GTX effect.

I think that this is what Volta will be. If I were a betting man, I'd bet on Volta dominating AMDs architecture. So AMD has a year or two to catch up market share and sales wise, invest in some large architectural enhancements before Volta drops.


----------



## Cyber Locc

Quote:


> Originally Posted by *sugarhell*
> 
> So they knew about all this for almost 4 years now and they released maxwell like this.
> 
> Or they actually lied.
> 
> I dont know what is worse.
> 
> http://techreport.com/review/26239/a-closer-look-at-directx-12


Well, someone is lying then...

In fact, as some of you may recall, an AMD executive publicly stated a year ago that there was no "DirectX 12" on the Microsoft roadmap. Microsoft responded to those comments by affirming that it remained committed to evolving the DirectX standard - and then said nothing more on the topic. Then AMD launched Mantle, with significant support from multiple developers and a bevy of games launching this year - and apparently someone at Microsoft decided to pay attention.

http://www.extremetech.com/gaming/177407-microsoft-hints-that-directx-12-will-imitate-and-destroy-amds-mantle


----------



## Cyber Locc

Quote:


> Originally Posted by *Catscratch*
> 
> Lol, I found there were even people saying Maxwell was capable of async compute 6 months ago - that it's been a feature since Fermi.
> 
> But in what context? The definition of async compute varies from person to person.
> 
> https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/


Well, that's the thing, isn't it. Async is this: http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading.

What NV had was closer to this: "AMD has offered multiple Asynchronous Compute Engines (ACEs) since the very first GCN part in 2011, the Tahiti-powered Radeon HD 7970. However prior to now the technical focus on the ACEs *was for pure compute workloads*, which true to their name allow GCN GPUs to execute compute tasks from multiple queues. *It wasn't until very recently that the ACEs became important for graphical (or rather mixed graphics + compute) workloads.*"

Which, by AMD's own words, will not work for async in DX12.
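The distinction that quote is drawing - pure compute concurrency versus mixed graphics + compute - can be sketched with a toy CPU-side analogy (hypothetical function names, plain CPU code, not real graphics work): the serial model finishes the "graphics" pass before the "compute" pass starts, while the async model has both in flight at once.

```cpp
#include <future>
#include <numeric>
#include <vector>

// Toy analogy only: "graphics" and "compute" here are plain CPU
// functions standing in for a render pass and a compute shader.
int graphics_pass(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0);  // pretend raster work
}
int compute_pass(const std::vector<int>& v) {
    int acc = 0;
    for (int x : v) acc += x * x;                   // pretend compute work
    return acc;
}

// Serial submission model: compute waits for graphics to finish.
int frame_serial(const std::vector<int>& v) {
    int g = graphics_pass(v);
    int c = compute_pass(v);
    return g + c;
}

// Async submission model: both passes in flight at once; the result
// is identical, only the scheduling differs.
int frame_async(const std::vector<int>& v) {
    auto g = std::async(std::launch::async, graphics_pass, std::cref(v));
    auto c = std::async(std::launch::async, compute_pass, std::cref(v));
    return g.get() + c.get();
}
```

The point is that supporting compute-only queues (the CUDA/pure-compute case) is a different capability from overlapping a compute queue with an in-progress graphics workload, which is what the DX12 async compute discussion is about.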


----------



## raghu78

Quote:


> Originally Posted by *Cyber Locc*
> 
> Right, so you agree with that, and in the same breath don't realize that DX12 was a scheme to get the Xbox to beat the PS4. They couldn't give a damn about your NV or AMD GPUs; they want to sell Xboxes. That is what DX12 was made for.


Sony's PS4 low-level graphics API, GNM, is at a lower level than even Mantle or DX12/Vulkan. Sony also has a higher-level API called GNMX, which wraps GNM and makes it easier to program.

http://gamingbolt.com/interview-with-brad-wardell-ps4xbox-one-differences-directx-12-ashes-of-the-singularity-and-more

"What I was referencing at the time was Vulkan. We're part of the Khronos Group and now it depends who you talk to at Sony and this gets in to a debate. Sony has a very low-level API already for the PlayStation 4. The problem I have with it is that if you want to make use for it you're writing some very specific code just for the PlayStation 4. And in the real world people don't do that right. I write code generally to be as cross-platform as I can.

Now maybe in Unity or Unreal, one of the other guys will write their engines in such a way so that they make the most use of it, but that's going to take time. Whereas if they use something like Vulkan, it's not as low-level as their API, but Vulkan has the advantage that it's really easy to write for it. So you're more likely to get developers to code to that and get more games onto Sony than you would otherwise."

http://gamingbolt.com/ps4-should-support-vulkan-ps4s-api-not-completely-native-for-current-gen-yet-brad-wardell


----------



## Cyber Locc

Quote:


> Originally Posted by *raghu78*
> 
> Sony's PS4 low level graphics API GNM is at a lower level than even Mantle or DX12/Vulkan. Sony even has a higher level API called GNMX which wraps GNM and makes it easier to program.
> 
> http://gamingbolt.com/interview-with-brad-wardell-ps4xbox-one-differences-directx-12-ashes-of-the-singularity-and-more
> 
> "What I was referencing at the time was Vulkan. We're part of the Khronos Group and now it depends who you talk to at Sony and this gets in to a debate. Sony has a very low-level API already for the PlayStation 4. The problem I have with it is that if you want to make use for it you're writing some very specific code just for the PlayStation 4. And in the real world people don't do that right. I write code generally to be as cross-platform as I can.
> 
> Now maybe in Unity or Unreal, one of the other guys will write their engines in such a way so that they make the most use of it, but that's going to take time. Whereas if they use something like Vulkan, it's not as low-level as their API, but Vulkan has the advantage that it's really easy to write for it. So you're more likely to get developers to code to that and get more games on to Sony then you would otherwise.
> 
> http://gamingbolt.com/ps4-should-support-vulkan-ps4s-api-not-completely-native-for-current-gen-yet-brad-wardell


That is good to know, thanks for that +Rep


----------



## raghu78

Quote:


> Originally Posted by *Mahigan*
> 
> Had NVIDIA kept its hardware side scheduling, dedicated caches and coupled them with Kepler and Maxwell's architectural advancements, we would have a big and hot running GPU but NVIDIA would have had a far more flexible architecture going forward.
> 
> Heck, they might even have been able to add ACEs in parallel to their command processor. That architecture would have dominated GCN (in the immediate and going forward). Something like an 8800 GTX effect.
> 
> I think that this is what Volta will be. If I were a betting man, I'd bet on Volta dominating AMDs architecture. So AMD has a year or two to catch up market share and sales wise, invest in some large architectural enhancements before Volta drops.


We don't even know how Polaris and Pascal are going to be, and you have already jumped to Volta. I think it's better we see how the next-gen architectures turn out before looking too far into the future. Btw, you never know if AMD intends to keep the GCN architecture for the next-gen consoles, which might launch around 2019-2020. So lots of ifs and buts, and not worth speculating that far into the future.


----------



## Mahigan

Quote:


> Originally Posted by *Catscratch*
> 
> Lol, I found there were even people saying Maxwell was capable of Async Compute 6 months ago, and that it's been a feature since Fermi
> 
> 
> 
> 
> 
> 
> 
> But in what context ? Async Compute's definition varies according to different people
> 
> 
> 
> 
> 
> 
> 
> https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/


Asynchronous Compute is a computer science term which means the execution of compute tasks without a defined order (out of order execution sort of). Fermi->Maxwell are capable of Async compute.

AMD took it a step further, thanks to Sony, and built something not unlike Sony's cell processors into GCN (ACEs). Not only can ACEs execute compute tasks without a defined order but they can also execute compute tasks in parallel to graphics tasks being executed by the Command Processor (like Hyperthreading sort of). So what AMD is doing is best defined as Asynchronous Compute + Graphics (or Asynchronous Shading in AMD marketing terminology).
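The distinction can be sketched with a toy scheduling model (illustrative Python only: the function names and cycle counts are made up, and no real driver or API is being modeled). A graphics workload alternates busy cycles with stall cycles; a serial scheduler runs compute afterwards, while an async scheduler fills the stalls with compute work:

```python
# Toy model of "async compute + graphics" (illustrative only; cycle counts
# are made up). A graphics frame is a list of (busy, stall) segments, where
# stalls are cycles spent waiting (e.g. on memory). A serial scheduler runs
# compute only after graphics finishes; an async scheduler hides compute
# work inside the stall cycles instead.

def serial_cycles(graphics, compute_cycles):
    # graphics: list of (busy, stall) segments; compute is appended at the end
    return sum(busy + stall for busy, stall in graphics) + compute_cycles

def async_cycles(graphics, compute_cycles):
    total = 0
    remaining = compute_cycles
    for busy, stall in graphics:
        total += busy + stall
        remaining -= min(remaining, stall)  # compute hides inside the stalls
    return total + remaining  # any leftover compute runs at the end

frame = [(100, 40), (80, 60), (120, 20)]  # (busy, stall) cycles per segment
print(serial_cycles(frame, 100))  # 520: stalls are wasted, compute appended
print(async_cycles(frame, 100))   # 420: 100 compute cycles hidden in 120 stall cycles
```

Same total work in both cases; the async version simply finishes sooner because otherwise-idle cycles do the compute.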


----------



## Cyber Locc

Quote:


> Originally Posted by *Mahigan*
> 
> Asynchronous Compute is a computer science term which means the execution of compute tasks without a defined order (out of order execution sort of). Fermi->Maxwell are capable of Async compute.
> 
> AMD took it a step further, thanks to Sony, and built something not unlike Sony's cell processors into GCN (ACEs). Not only can ACEs execute compute tasks without a defined order but they can also execute compute tasks in parallel to graphics tasks being executed by the Command Processor (like Hyperthreading sort of). So what AMD is doing is best defined as Asynchronous Compute + Graphics (or Asynchronous Shading in AMD marketing terminology).


Right, that is what I was saying. What Fermi had, and what they are saying Maxwell doesn't, (first I've heard of Maxwell cutting CUDA; it better not have, as I just bought a 980 Ti for CUDA) isn't the same thing.

Also, to everyone: I am not making any defense of Pascal. I do not think NV knew about this until it was too late for Maxwell; however, if Pascal doesn't have it, that is harder to justify. Then I will string NV up with you.









----------



## Kpjoslee

Quote:


> Originally Posted by *Mahigan*
> 
> Fermi had full blown hardware scheduling and hardware redundancy (dedicated caches for Rendering and Compute pipelines). These were removed by NVIDIA, going backwards technologically, in order to gain the perception of a more refined architecture (performance per watt). NVIDIA also removed the dedicated fp64 CUDA cores from Maxwell (and had cut them down significantly in Kepler).
> 
> Basically, NVIDIA created a one trick pony for DX11. This is planned obsolescence. With Pascal, NVIDIA are re-adding the fp64 units and tailoring the architecture for compute performance. So yes, Maxwell and Kepler will get the appearance of being "gimped" once Pascal releases. NVIDIA users will have to upgrade or face lacklustre support in upcoming titles.
> 
> GCN, on the other hand, will adapt to these changing market factors. GCN is a highly flexible architecture (thanks to hardware redundancy). So that extra power usage was and is there for a reason. It has nothing to do with "superior NVIDIA engineering" and everything to do with NVIDIA cutting down on their architecture in order to deliver one trick ponies one after the other.
> 
> But don't worry, NVIDIA will leverage gameworks in order to save face.
> 
> As per usual, I will be accused of being wrong and biased. Several months will go by and what I am saying will start to make sense. I'm used to having my name run into the mud by now.


I respect your opinion and your analysis but I think if you remove some extra gravy on your post inviting Green vs Red argument, you probably won't be accused of being biased.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Right, that is what I was saying. What Fermi had, and what they are saying Maxwell doesn't, (first I've heard of Maxwell cutting CUDA; it better not have, as I just bought a 980 Ti for CUDA) isn't the same thing.
> 
> Also, to everyone: I am not making any defense of Pascal. I do not think NV knew about this until it was too late for Maxwell; however, if Pascal doesn't have it, that is harder to justify. Then I will string NV up with you.


I was not implying they cut CUDA. Maxwell is just more of a pure gaming chip than anything Nvidia has produced before. I don't even think there was a Quadro line based on Maxwell.

Original point still stands that DX12/async is nothing like Gameworks.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> I was not implying they cut CUDA. Maxwell is just more of a pure gaming chip than anything Nvidia has produced before. I don't even think there was a Quadro line based on Maxwell.
> 
> Original point still stands that DX12/async is nothing like Gameworks.


It is to me; it was a two-fold shot fired. MS got to compete with the PS4 better, and AMD with NV; both of them won.


----------



## stoker

Quote:


> Originally Posted by *raghu78*
> 
> We don't even know how Polaris and Pascal are going to be, and you have already jumped to Volta. I think it's better we see how the next-gen architectures turn out before looking too far into the future. Btw, you never know if AMD intends to keep the GCN architecture for the next-gen consoles, which might launch around 2019-2020. So lots of ifs and buts, and not worth speculating that far into the future.


GCN has a bright future


----------



## Catscratch

Quote:


> Originally Posted by *Mahigan*
> 
> Asynchronous Compute is a computer science term which means the execution of compute tasks without a defined order (out of order execution sort of). Fermi->Maxwell are capable of Async compute.
> 
> AMD took it a step further, thanks to Sony, and built something not unlike Sony's cell processors into GCN (ACEs). Not only can ACEs execute compute tasks without a defined order but they can also execute compute tasks in parallel to graphics tasks being executed by the Command Processor (like Hyperthreading sort of). So what AMD is doing is best defined as Asynchronous Compute + Graphics (or Asynchronous Shading in AMD marketing terminology).


So in layman's terms, AMD GPUs can negotiate what to do with the CPU (building the command queue) while the CPU is simultaneously issuing actual work commands to the GPU. Planning and working at the same time; that's why it's like lanes on a road, as I read.

Lane 1 <<<->>> Prepare this > Prepare That > Allocate this much vram
Lane 2 <<<->>> Draw this > Cache that > Draw That > Move that here

Nvidia is like

Single Lane <<-->>> Prepare this > Wait draw that > Holy crap we need to move that > Belay that order, do previous one > What's happening allocate more ram ! > Cache is empty ! > Stop !!


----------



## sugarhell

Quote:


> Originally Posted by *Catscratch*
> 
> So in layman's terms, AMD GPUs can negotiate what to do with the CPU (building the command queue) while the CPU is simultaneously issuing actual work commands to the GPU. Planning and working at the same time; that's why it's like lanes on a road, as I read.
> 
> Lane 1 <<<->>> Prepare this > Prepare That > Allocate this much vram
> Lane 2 <<<->>> Draw this > Cache that > Draw That > Move that here
> 
> Nvidia is like
> 
> Single Lane <<-->>> Prepare this > Wait draw that > Holy crap we need to move that > Belay that order, do previous one > What's happening allocate more ram ! > Cache is empty ! > Stop !!


Actually there are 3 lanes.
First lane can do compute or graphics. Second lane can do compute. Third lane can upload/transfer data.
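Those three lanes line up with the three DX12 command queue types: a direct queue can take any work, a compute queue takes compute and copy, and a copy queue takes only copy. A minimal sketch of that hierarchy (illustrative Python, not real API code; the names are made up):

```python
# The three "lanes" mapped onto DX12's queue types. Each queue type accepts
# a subset of work: direct (graphics) queues take everything, compute queues
# take compute and copy, copy queues take only copy.

QUEUE_CAPS = {
    "direct":  {"graphics", "compute", "copy"},  # lane 1
    "compute": {"compute", "copy"},              # lane 2
    "copy":    {"copy"},                         # lane 3
}

def can_submit(queue_type, task):
    # True if this queue type is allowed to execute this kind of task
    return task in QUEUE_CAPS[queue_type]

print(can_submit("direct", "graphics"))   # True
print(can_submit("compute", "graphics"))  # False: a compute queue can't draw
print(can_submit("copy", "copy"))         # True
```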


----------



## Catscratch

Quote:


> Originally Posted by *sugarhell*
> 
> Actually there are 3 lanes.
> First lane can do compute or graphics. Second lane can do compute. Third lane can upload/transfer data.


Tonga got me interested for a time. All that new tech, newer GCN, better compression. But eventually the 2GB VRAM made me go "meh". The 280X was always my favorite next GPU: the Sapphire Tri-X Vapor-X 280X. My 6850's demise was untimely considering this DX12 debate; there were no 300 series yet. I'd get a 380/X otherwise.


----------



## Mahigan

Quote:


> Originally Posted by *Catscratch*
> 
> So in layman's terms, AMD gpus can negoatiate what to do with CPU (building commands queue) also while CPU issuing actual work commands simultaneously to the gpu. Planning and Working at the same time, that's why it's alike lanes on a road as I read.
> 
> Lane 1 <<<->>> Prepare this > Prepare That > Allocate this much vram
> Lane 2 <<<->>> Draw this > Cache that > Draw That > Move that here
> 
> Nvidia is like
> 
> Single Lane <<-->>> Prepare this > Wait draw that > Holy crap we need to move that > Belay that order, do previous one > What's happening allocate more ram ! > Cache is empty ! > Stop !!


As I understand it, based on my exchanges with Kollock, Razor1, EXT3H, Jawed and Sebbi...

GCN
Lane 1: Draw this, draw that, and this, and that...
Lane 2: Compute this, compute that, and this, and that...
Lane 3: Copy this, copy that, transfer this, transfer that...

Maxwell
Lane 1: Draw this, wait...... Compute that, compute this, wait...... Draw this, draw that, wait......
Lane 2: Copy this, copy that, transfer this, transfer that...

A wait is a fence used to enforce a synchronization. GCN also uses fences under DX12, but the hardware will sometimes ignore them. If you don't use a fence on GCN and you switch between compute and graphics tasks, the GPU will take a cycle to perform the switch (finer-grained context switching). But DX12 doesn't support this feature yet (GCN is more advanced, compute-wise, than the DX12 API). If you switch between a compute and a graphics task on Maxwell, the GPU needs to perform a full flush, emptying the SMM of all data due to the shared L1 cache. This delay can take quite a while (up to 1,000ns). This is called coarse-grained preemption. This is why, with Maxwell, you need to wait until the end of a draw call before switching contexts (slow context switching).

What is a context? Graphics, Compute, Copy are all contexts.
What is a switch? Moving from one context to the other before the other is done executing.
What is a flush? Emptying the Multi Processor and caches of previous incomplete work.

Each fence, between two contexts, incurs a 1-5% performance cost. GCN, under DX12, incurs these costs, but async compute + graphics affords a larger performance boost than the cost associated with a fence. GCN doesn't need these fences (see LiquidVR), but to ensure compatibility with NVIDIA's architecture (the most probable reason), Microsoft designed DX12 this way.

Maxwell, under DX12, incurs these fence costs but has no asynchronous compute + graphics capabilities to offset these costs. So if you switch async on, under AotS, NVIDIAs Maxwell performs worse (generally).

Under Hitman DX12, it appears that the GTX 980 Ti was being held back by a CPU bottleneck under DX11. So the alleviation of this CPU bottleneck (API overhead) allows Maxwell to offset the costs associated with a fence under Hitman DX12. Hence the slight boost for the GTX 980 Ti under Hitman DX12.

Vulkan works much the same way, except I don't think it requires fences. In other words, as Vulkan matures, it will perform even better on GCN than DX12 does.
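The fence trade-off described above can be put into rough numbers (all values are illustrative assumptions: a 3% cost per fence, within the quoted 1-5% range, and an assumed 10% async gain; the function is a sketch, not a benchmark):

```python
# Rough arithmetic for the fence trade-off: every fence adds a fractional
# cost to the frame, and async compute (where supported) claws back a
# fraction of the frame time. All numbers are illustrative.

def dx12_frame_time(base_ms, num_fences, fence_cost, async_gain):
    t = base_ms * (1 + num_fences * fence_cost)  # pay the sync overhead
    return t * (1 - async_gain)                  # recover via async overlap

base = 20.0  # ms per frame before DX12 sync overhead (assumed)
gcn = dx12_frame_time(base, num_fences=2, fence_cost=0.03, async_gain=0.10)
maxwell = dx12_frame_time(base, num_fences=2, fence_cost=0.03, async_gain=0.0)
print(round(gcn, 2))      # 19.08 ms: the async gain outweighs the fence cost
print(round(maxwell, 2))  # 21.2 ms: fence cost with nothing to offset it
```

With a gain to offset them, the fences net out positive; without one, the same frame simply gets slower, which matches the AotS behavior described.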


----------



## sugarhell

Quote:


> Originally Posted by *Mahigan*
> 
> GCN
> Lane 1: Draw this, draw that, and this, and that...
> Lane 2: Compute this, compute that, and this, and that...
> Lane 3: Copy this, copy that, transfer this, transfer that...
> 
> Maxwell
> Lane 1: Draw this, wait...... Compute that, compute this, wait...... Draw this, draw that, wait......
> Lane 2: Copy this, copy that, transfer this, transfer that...


Maxwell doesn't have a second queue for data.


----------



## MerkageTurk

It seems Nvidia played the GTX 970 "4GB" card again.


----------



## Mahigan

Quote:


> Originally Posted by *sugarhell*
> 
> Maxwell doesnt have a second queue for data.


It does, Maxwell has two DMA engines. Technically it does support this feature but it doesn't appear to be enabled in their driver for unknown reasons.


----------



## sugarhell

Quote:


> Originally Posted by *Mahigan*
> 
> It does, Maxwell has two DMA engines. Technically it does support this feature but it doesn't appear to be enabled in their driver for unknown reasons.


Not in the Vulkan features database either. They have exposed a single queue.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Cyber Locc*
> 
> It is open to both vendors; however, NV can't go backwards and add hardware support, for the features AMD got added to DX12, to cards already sold. If you can't see why that statement is true, well, then I do not know what to tell you.
> 
> AMD designed DX12 with async compute in mind, whereas NV didn't build for that feature; that is the very definition of gimping the competition. Software can fix Gimpworks; it can't fix hardware faults.
> 
> It was a smart play by AMD, that is for sure.


AMD didn't design DX 12, that is a Microsoft product with collaboration from other vendors. Further, ASC isn't a requirement of DX 12, it is just a cherry on top that developers can use. With apparently great success so far!


----------



## airfathaaaaa

Quote:


> Originally Posted by *Mahigan*
> 
> It does, Maxwell has two DMA engines. Technically it does support this feature but it doesn't appear to be enabled in their driver for unknown reasons.


A wild guess (knowing Nvidia):
the two-DMA-engine feature was one of the selling points of the Titan X, so it makes sense to disable it on the lower cards. But if they do that now, they will give owners one more reason to file a class-action lawsuit over yet another lie.


----------



## PostalTwinkie

Quote:


> Originally Posted by *airfathaaaaa*
> 
> A wild guess (knowing Nvidia):
> the two-DMA-engine feature was one of the selling points of the Titan X, so it makes sense to disable it on the lower cards. But if they do that now, they will give owners one more reason to file a class-action lawsuit over yet another lie.


----------



## airfathaaaaa

Quote:


> Originally Posted by *PostalTwinkie*
> 
> AMD didn't design DX 12, that is a Microsoft product with collaboration from other vendors. Further, ASC isn't a requirement of DX 12, it is just a cherry on top that developers can use. With apparently great success so far!


Actually, this is a rare case where the API was built to support the card, and not the other way around.


----------



## airfathaaaaa

Quote:


> Originally Posted by *PostalTwinkie*



Please elaborate, since all of Nvidia's dev talks concerning the Titan X were about how it has 2 DMA engines and how they can be used for maximum compute efficiency.


----------



## magnek

Quote:


> Originally Posted by *Kpjoslee*
> 
> I respect your opinion and your analysis but I think if you remove some extra gravy on your post inviting Green vs Red argument, you probably won't be accused of being biased.


+1 to this.

@Mahigan: As much as I enjoy reading your technical analysis, the piled on "partisan crap" that you rail against really does get a bit tiring sometimes.


----------



## Mahigan

Quote:


> Originally Posted by *magnek*
> 
> +1 to this.
> 
> @Mahigan: As much as I enjoy reading your technical analysis, the piled on "partisan crap" that you rail against really does get a bit tiring sometimes.


You're right, I am partisan... towards open source, open standards, and fair, open market competition. I rail against unfair competitive practices all of the time. I dislike closed source, even way back in the 3Dfx days when I supported NVIDIA's use of OpenGL and Direct3D against 3Dfx Glide.

When AMD released Mantle, the only thing which stopped me from attacking it was AMD's insistence that any GPU could run the code and that Mantle wouldn't require licensing fees. AMD made good on their promise when they donated Mantle to Khronos.

So yeah, I'm partisan... towards the truth. I also don't sugar-coat anything. I'm direct, and can appear pompous and/or condescending in the process. If you've ever heard Edward Snowden speak, Glenn Greenwald write or Albert Einstein lecture, you'll notice parallels in our personalities. It rubs certain people the wrong way. I have that problem even with my own wife. The cause? Asperger's.


----------



## Cyber Locc

Quote:


> Originally Posted by *PostalTwinkie*
> 
> AMD didn't design DX 12, that is a Microsoft product with collaboration from other vendors. Further, ASC isn't a requirement of DX 12, it is just a cherry on top that developers can use. With apparently great success so far!


They didn't design DX12, no. They designed Mantle, and Mantle was used as a base for DX12; 80% of DX12 is Mantle.

They even took the darn programming guide.


People have been up and down both APIs' code and said it's basically the exact same. AMD gave MS Mantle, and they altered it slightly and called it DX12, so therefore, yeah, they kind of did make DX12, seeing how they wrote most of the code.

They are definitely trying to cover that up, though. That is why they make statements like "it's been in development for 5 years", yet when Mantle was announced MS said they would not be making a new DX. Hmmm, strange.


----------



## infranoia

Quote:


> Originally Posted by *Cyber Locc*
> 
> They didn't design DX12, no. They designed Mantle, and Mantle was used as a base for DX12; 80% of DX12 is Mantle.
> 
> They even took the darn programming guide.
> 
> People have been up and down both APIs' code and said it's basically the exact same. AMD gave MS Mantle, and they altered it slightly and called it DX12, so therefore, yeah, they kind of did make DX12, seeing how they wrote most of the code.


Well, kind of. Remember that none of this was about discrete GPUs or Windows. It was all about how Microsoft was going to leverage GCN for the XBox One to make it easy for developers to code to the metal. AMD's partnership early on with Microsoft to hammer out the XBox and GCN was the seed for both Mantle and DX12. I personally believe (opinion inbound warning) that AMD forced Microsoft's hand with Mantle-- MS didn't really intend to push the XBox API out to Windows. They pivoted fast and ported it over.

This is why MS isn't really lying that DX12 was in development long before Mantle. It was. It just wasn't intended for Windows.


----------



## Cyber Locc

Quote:


> Originally Posted by *infranoia*
> 
> Well, kind of. Remember that none of this was about discrete GPUs or Windows. It was all about how Microsoft was going to leverage GCN for the XBox One to make it easy for developers to code to the metal. AMD's partnership early on with Microsoft to hammer out the XBox and GCN was the seed for both Mantle and DX12. I personally believe (opinion inbound warning) that AMD forced Microsoft's hand with Mantle-- MS didn't really intend to push the XBox API out to Windows. They pivoted fast and ported it over.
> 
> This is why MS isn't really lying that DX12 was in development long before Mantle. It was. It just wasn't intended for Windows.


Agreed, and it also was in development, I am sure, just not by MS, lol. However, yeah, I agree 100%. That is what I said before: this has nothing to do with MS hurting NV; it has to do with raising Xbox One sales. AMD said, hey, we can both get a win here, use this for DX12. It was a great idea, that's for sure.

Both companies are working together, and they both have something to gain from this entire thing. Yet people are arguing, "no, it wasn't like that"; well, yeah, it actually was. Both companies are partnered and saw a way to gain market share with their products, and you think they didn't take that opportunity?

I have been saying since the console contracts got signed that this would leak back to us and was a big win for AMD. Everyone said no, it's a small contract. Okay, well, looks like I was right.


----------



## Kpjoslee

Quote:


> Originally Posted by *Mahigan*
> 
> You're right, I am partisan... Towards open source, open standards, fair open market competition. I rail against unfair competitive practices all of the time. I dislike closed source, even way back in the 3Dfx days when I supported NVIDIAs use of OpenGL and Direct3D when opposed by 3Dfx Glide.
> 
> When AMD released Mantle, the only thing which stopped me from attacking it was AMD's insistence that any GPU could run the code and that Mantle wouldn't require licensing fees. AMD made good on their promise when they donated Mantle to Khronos.
> 
> So yeah, I'm partisan...towards the truth. I also don't sugar coat anything. I'm direct, appearing pompous and/or condescending in the process. If you've ever heard Edward Snowden speak, Glenn Greenwald write or Albert Einstein lecture, you'll notice parallels in our personalities. It rubs certain people the wrong way. I have that problem even with my own wife. The cause? Aspergers.


Honestly, I think it is a waste devoting your energy to "truth searching" on mere GPU makers. Use it on Intel, MS, Apple, or Google.


----------



## Mahigan

Quote:


> Originally Posted by *Cyber Locc*
> 
> They didn't design DX12, no. They designed Mantle, and Mantle was used as a base for DX12; 80% of DX12 is Mantle.
> 
> They even took the darn programming guide.
> 
> People have been up and down both APIs' code and said it's basically the exact same. AMD gave MS Mantle, and they altered it slightly and called it DX12, so therefore, yeah, they kind of did make DX12, seeing how they wrote most of the code.
> 
> They are definitely trying to cover that up, though. That is why they make statements like "it's been in development for 5 years", yet when Mantle was announced MS said they would not be making a new DX. Hmmm, strange.


Dan Baker of Oxide stated the same thing in a recent PCPer interview. Microsoft had access to Mantle's source when working on the Xbox One with AMD. This made its way into DX12.

DX12, like Vulkan, is built atop AMD's Mantle. I have to agree with you on this.


----------



## Mahigan

Quote:


> Originally Posted by *Kpjoslee*
> 
> Honestly, I think it is a waste devoting your energy to "truth searching" on mere GPU makers. Use it on Intel, MS, Apple, or Google.


Most of my energy is used on politics, foreign policy and the growing police state.

GPUs are more of a hobby


----------



## Kollock

Quote:


> Originally Posted by *Mahigan*
> 
> As I understand it, based on my exchanges with Kollock, Razor1, EXT3H, Jawed and Sebbi..
> 
> GCN
> Lane 1: Draw this, draw that, and this, and that...
> Lane 2: Compute this, compute that, and this, and that...
> Lane 3: Copy this, copy that, transfer this, transfer that...
> 
> Maxwell
> Lane 1: Draw this, wait...... Compute that, compute this, wait...... Draw this, draw that, wait......
> Lane 2: Copy this, copy that, transfer this, transfer that...
> 
> A wait is a fence used to enforce a synchronization. GCN also uses fences under DX12, but the hardware will sometimes ignore them. If you don't use a fence on GCN and you switch between compute and graphics tasks, the GPU will take a cycle to perform the switch (finer-grained context switching). But DX12 doesn't support this feature yet (GCN is more advanced, compute-wise, than the DX12 API). If you switch between a compute and a graphics task on Maxwell, the GPU needs to perform a full flush, emptying the SMM of all data due to the shared L1 cache. This delay can take quite a while (up to 1,000ns). This is called coarse-grained preemption. This is why, with Maxwell, you need to wait until the end of a draw call before switching contexts (slow context switching).
> 
> What is a context? Graphics, Compute, Copy are all contexts.
> What is a switch? Moving from one context to the other before the other is done executing.
> What is a flush? Emptying the Multi Processor and caches of previous incomplete work.
> 
> Each fence, between two contexts, incurs a 1-5% performance cost. GCN, under DX12, incurs these costs, but async compute + graphics affords a larger performance boost than the cost associated with a fence. GCN doesn't need these fences (see LiquidVR), but to ensure compatibility with NVIDIA's architecture (the most probable reason), Microsoft designed DX12 this way.
> 
> Maxwell, under DX12, incurs these fence costs but has no asynchronous compute + graphics capabilities to offset these costs. So if you switch async on, under AotS, NVIDIAs Maxwell performs worse (generally).
> 
> Under Hitman DX12, it appears that the GTX 980 Ti was being held back by a CPU bottleneck under DX11. So the alleviation of this CPU bottleneck (API overhead) allows Maxwell to offset the costs associated with a fence under Hitman DX12. Hence the slight boost for the GTX 980 Ti under Hitman DX12.
> 
> Vulkan works much the same way, except I don't think it requires fences. In other words, as Vulkan matures, it will perform even better on GCN than DX12 does.


GPUs are a little more complex than that. The key thing to understand is that a GPU will have 10s, if not 100s of thousands of pixels in flight at any point in time. The GPU is constantly switching between them, typically any time a pixel hits a texture fetch. The texture fetch can be 100s of cycles, so in the meantime the GPU goes and works on a bunch of other pixels (or vertices, or compute kernels). If you do too many texture lookups, or your shaders have high register pressure (which reduces the # of pixels the GPU can process at the same time), the GPU can start to stall. GCN is interesting because it can swap in compute tasks with pixel tasks, so a particular shader core could be swapping rapidly between a compute shader and a pixel shader. It can even swap out between different compute tasks. On a console, if you really optimize, you can pair a task that is heavy on math with one that is heavy on memory, and they will work together quite nicely.

TLDR: GPUs are not CPUs; they process hundreds of thousands of 'threads' at once, swapping between them very quickly to prevent stalls. GCN can cycle in compute tasks to fill in what would otherwise be unused execution cycles.
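Kollock's point about latency hiding can be sketched as a back-of-the-envelope model (toy numbers: 10 math cycles per thread and a 200-cycle fetch; a real GPU is vastly more complex, and this function is an illustration, not a measurement):

```python
# Toy latency-hiding model. Each "thread" does `compute` cycles of math,
# waits `latency` cycles on a texture fetch, then does `compute` more cycles.
# The core issues one math op per cycle and switches to another ready thread
# whenever one stalls on a fetch.

def frame_cycles(num_threads, compute=10, latency=200):
    # With enough threads in flight, the other threads' math fully covers
    # the fetch latency, and total time is pure math throughput.
    if (num_threads - 1) * compute >= latency:
        return 2 * num_threads * compute
    # Otherwise the core sits idle waiting for the first fetch to return.
    return (num_threads + 1) * compute + latency

print(frame_cycles(1))   # 220: the whole 200-cycle fetch is exposed
print(frame_cycles(4))   # 250: partially hidden
print(frame_cycles(21))  # 420: fully hidden, the core never idles
```

The async compute idea is the same mechanism one level up: when even thousands of pixel threads can't fill the gaps, compute tasks from another queue can.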


----------



## Mahigan

Quote:


> Originally Posted by *Kollock*
> 
> GPUs are a little more complex than that. The key thing to understand is that a GPU will have 10s, if not 100s of thousands of pixels in flight at any point in time. The GPU is constantly switching between them, typically any time a pixel hits a texture fetch. The texture fetch can be 100s of cycles, so in the meantime the GPU goes and works on a bunch of other pixels (or vertices, or compute kernels). If you do too many texture lookups, or your shaders have high register pressure (which reduces the # of pixels the GPU can process at the same time), the GPU can start to stall. GCN is interesting because it can swap in compute tasks with pixel tasks, so a particular shader core could be swapping rapidly between a compute shader and a pixel shader. It can even swap out between different compute tasks. On a console, if you really optimize, you can pair a task that is heavy on math with one that is heavy on memory, and they will work together quite nicely.
> 
> TLDR: GPUs are not CPUs; they process hundreds of thousands of 'threads' at once, swapping between them very quickly to prevent stalls. GCN can cycle in compute tasks to fill in what would otherwise be unused execution cycles.


So what used to cause a wait or a stall (say, the GPU transferring texture information from system memory to the framebuffer) can now be done behind the scenes while the GPU is working on some non-bandwidth-intensive compute tasks. And while the GPU is processing shadow maps, which are bandwidth intensive, it can also be doing something else in parallel, thus limiting stalls and boosting efficiency?


----------



## magnek

Quote:


> Originally Posted by *Mahigan*
> 
> You're right, I am partisan... Towards open source, open standards, fair open market competition. I rail against unfair competitive practices all of the time. I dislike closed source, even way back in the 3Dfx days when I supported NVIDIAs use of OpenGL and Direct3D when opposed by 3Dfx Glide.
> 
> When AMD released Mantle, the only thing which stopped me from attacking it was AMD's insistence that any GPU could run the code and that Mantle wouldn't require licensing fees. AMD made good on their promise when they donated Mantle to Khronos.
> 
> So yeah, I'm partisan...towards the truth. I also don't sugar coat anything. I'm direct, appearing pompous and/or condescending in the process. If you've ever heard Edward Snowden speak, Glenn Greenwald write or Albert Einstein lecture, you'll notice parallels in our personalities. It rubs certain people the wrong way. I have that problem even with my own wife. The cause? Aspergers.


I don't mind bluntness (or even coarse language if there was any); it's just that lately quite a few of your technical posts follow up with some kind of provocative and potentially inflammatory statement that leads a thread down the path of no return. Maybe this is part of the bluntness you allude to, and perhaps you're not even aware you're doing it, so I'll try to point it out the next time I see something like that.
Quote:


> Originally Posted by *Cyber Locc*
> 
> *They didn't design DX12, no. They designed Mantle, and Mantle was used as a base for DX12; 80% of DX12 is Mantle.*
> 
> They even took the darn programming guide.
> 
> 
> People have been up and down both APIs' code and said it's basically the exact same. AMD gave MS Mantle and they altered it slightly and called it DX12, so therefore, yeah, they kind of did make DX12, seeing how they wrote most of the code......
> 
> They are definitely trying to cover that up, though. That is why they make statements like "It's been in development for 5 years", yet when Mantle was announced MS said they would not be making a new DX. Hmmm, strange.


You... you just opened Pandora's box and don't even realize it.


----------



## Cyber Locc

Quote:


> Originally Posted by *magnek*
> 
> You... you just opened Pandora's box and don't even realize it.


LOL, how so? Was I supposed to just go along with the cover-ups, hehe.

I mean, someone had to say it, as it seems a lot of people in this thread don't know what's really going on.

Besides, who doesn't love unboxing, lol.

And did you change your avatar?


----------



## superstition222

Quote:


> Originally Posted by *Mahigan*
> 
> So yeah, I'm partisan...towards the truth. I also don't sugar coat anything. I'm direct, appearing pompous and/or condescending in the process. If you've ever heard Edward Snowden speak, Glenn Greenwald write or Albert Einstein lecture, you'll notice parallels in our personalities. It rubs certain people the wrong way. I have that problem even with my own wife. The cause? Aspergers.


I highly doubt that you have Asperger's. It's the gluten-free of the tech world.

Greenwald is sometimes flat out wrong about things he states with total assurance, although he is usually accurate about things. Off the top of my head I remember him being unable to really defend "free speech" in the face of some exceptions. The point I made was that if there are exceptions then it's not really free speech - it's managed speech. In that context (reality), all speech is either condoned or suppressed, something he refused to acknowledge - trying to dismiss examples like "fire!" in a crowded theater. In reality, speech, like anything else, can't be free. Everything carries consequences so everything is going to be policed. "Fire!" - the example he showed particular disdain toward, is a great example precisely because it's so difficult to refute. It's a hard case to make that it's a good thing to allow people to shout "fire!" in a crowded theater.

Another thing I remember is the way he said the Ford pardon really exposed how there is a two-tier system of justice, one for the elite and one for everyone else. While there is that, Chomsky pointed out to him wryly that the Ford pardon was hardly the first thing that made that clear. For Greenwald, the Ford pardon was a major milestone, a crystallization of the existence of a false justice. In reality, law has always been about preserving elite privilege. That's what I would have said if I had been there. The last big one that I remember is the way he applauded a federal judge's mocking of a would-be terrorist in court. Greenwald actually, as I recall, endorsed a judge belittling someone for their violence not being part of a bigger framework - as if violence of any kind is pro-social and thus worthy of some sort of admiration. It's been ages since I dealt with this so my recollection is a bit hazy but I think I recounted the gist. I think the idea was that he wasn't a soldier because if he had been then he would have been more heroic.

You cite Einstein. I'll cite Socrates. "The one thing I know is that I know nothing." Also... Diogenes. "I am searching for an honest man" (he said while walking around with a lantern in broad daylight). One of the biggest changes between adolescence and adulthood is the realization that truth is a lot harder to come by than it once seemed.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> They didn't design DX12 no, They designed Mantle, Mantle was used as a base for DX12, 80% of DX12 is mantle.
> 
> They even took the darn programming guide,
> 
> 
> People have been up and down both APIs code and said its basically the exact same, AMD gave MS Mantle and they altered it slightly and called it DX12, so therefor ya they kind of did make DX12 seeing how they wrote most the code......
> 
> They are defiantly trying to cover that up though. That is why they make statements like "Its been in development for 5 years" yet when mantle was announced MS said they would not be making a new DX hmmm strange.


At least Mantle was open, unlike BlackBoxWorks and such, eh?


----------



## Mahigan

Quote:


> Originally Posted by *Cyber Locc*
> 
> LOL, how so? Was I supposed to just go along with the cover-ups, hehe.
> 
> I mean, someone had to say it, as it seems a lot of people in this thread don't know what's really going on.
> 
> Besides, who doesn't love unboxing, lol.


In the old Ashes of the Singularity thread, I also shared that image, as did PontiacGTX. It seems people either weren't present or are new to the forums and haven't seen it.

Dan Baker states something similar in a recent PCPer interview (really cool guy, I'd love to meet him). All of this info is the basis for the argument I've put forward that NVIDIA's next architectures will likely become more similar to GCN than not, and the basis for my Volta predictions.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> You cite Einstein. I'll cite Socrates. "The one thing I know is that I know nothing." Also... Diogenes. "I am searching for an honest man" (he said while walking around with a lantern in broad daylight). One of the biggest changes between adolescence and adulthood is the realization that *truth is a lot harder to come by than it once seemed.*


No, it's really not; it's accepting the truth that you find that is the problem.

Saw this in someone's sig earlier; I think it applies here. Great quote:

"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." Arthur Conan Doyle
Quote:


> Originally Posted by *superstition222*
> 
> At least Mantle was open, unlike BlackBoxWorks and such, eh?


This is true and I agree; however, they caught NV by surprise is all. That's why we most likely won't see Maxwell supporting it, that was all I was saying, and that gives AMD a year or two to grab some market share.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> No, it's really not


That's what adolescents believe very strongly, generally.

Finite creatures whose entire existence is designed around mythos... it's humorous for people to claim truth is their primary concern.


----------



## Mahigan

Quote:


> Originally Posted by *superstition222*
> 
> I highly doubt that you have Asperger's. It's the gluten-free of the tech world.
> 
> Greenwald is sometimes flat out wrong about things he states with total assurance, although he is usually accurate about things. Off the top of my head I remember him being unable to really defend "free speech" in the face of some exceptions. The point I made was that if there are exceptions then it's not really free speech - it's managed speech. In that context (reality), all speech is either condoned or suppressed, something he refused to acknowledge - trying to dismiss examples like "fire!" in a crowded theater. In reality, speech, like anything else, can't be free. Everything carries consequences so everything is going to be policed. "Fire!" - the example he showed particular disdain toward, is a great example precisely because it's so difficult to refute. It's a hard case to make that it's a good thing to allow people to shout "fire!" in a crowded theater.
> 
> Another thing I remember is the way he said the Ford pardon really exposed how there is a two-tier system of justice, one for the elite and one for everyone else. While there is that, Chomsky pointed out to him wryly that the Ford pardon was hardly the first thing that made that clear. For Greenwald, the Ford pardon was a major milestone, a crystallization of the existence of a false justice. In reality, law has always been about preserving elite privilege. That's what I would have said if I had been there. The last big one that I remember is the way he applauded a federal judge's mocking of a would-be terrorist in court. Greenwald actually, as I recall, endorsed a judge belittling someone for their violence not being part of a bigger framework - as if violence of any kind is pro-social and thus worthy of some sort of admiration. It's been ages since I dealt with this so my recollection is a bit hazy but I think I recounted the gist. I think the idea was that he wasn't a soldier because if he had been then he would have been more heroic.
> 
> You cite Einstein. I'll cite Socrates. "The one thing I know is that I know nothing." Also... Diogenes. "I am searching for an honest man" (he said while walking around with a lantern in broad daylight). One of the biggest changes between adolescence and adulthood is the realization that truth is a lot harder to come by than it once seemed.


Autism runs in my family. I'm low on the spectrum, but my cousins are full-blown autistic. It's a medical diagnosis that stems from social issues I've had my entire life (including tantrums when I became, and still become, overwhelmed in social/public situations, from childhood until today). I really do have Asperger's. Dealing with my lack of social skills is a learning process.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> That's what adolescents believe very strongly, generally.
> 
> As finite creatures whose entire existence is designed around mythos... it's humorous for people to claim truth is their primary concern.


Well, you took it out of context, or misinterpreted what I said, I guess. I don't know what your definition of adolescent is, but I do not think it applies to me.

Or maybe I am just not getting what you are saying.

Seeing how I am in my 30s and have children of my own, I don't think "(of a young person) in the process of developing from a child into an adult" is an apt description of myself.


----------



## superstition222

Quote:


> Originally Posted by *Mahigan*
> 
> I really do have aspergers.


Have you been diagnosed by a professional who has adequate training to make that diagnosis?


----------



## Mahigan

Quote:


> Originally Posted by *Cyber Locc*
> 
> No, it's really not; it's accepting the truth that you find that is the problem.
> 
> Saw this in someone's sig earlier; I think it applies here. Great quote:
> 
> "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." Arthur Conan Doyle
> This is true and I agree; however, they caught NV by surprise is all. That's why we most likely won't see Maxwell supporting it, that was all I was saying, and that gives AMD a year or two to grab some market share.


That's in my sig


----------



## Mahigan

Quote:


> Originally Posted by *superstition222*
> 
> Have you been diagnosed by a professional who has adequate training to make that diagnosis?


Yessir. Or I wouldn't be making the claim. I'm your typical aspie.


----------



## Cyber Locc

Quote:


> Originally Posted by *Mahigan*
> 
> That's in my sig


How did I miss that, lol. I just kept seeing it and it stuck; good quote, so on point. I think a lot of people are confronted with truths they choose to ignore; I myself at times am also faced with this.


----------



## superstition222

Quote:


> Originally Posted by *Mahigan*
> 
> Yessir. Or I wouldn't be making the claim. I'm your typical aspie.


I would get a second opinion.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> I would get a second opinion.


Why? From what he said, he does suffer from it. There are varying levels, and you may need to look at what that actually means, as having issues adapting socially is the main symptom.

"PDDs are a group of conditions that involve delays in the development of many basic skills, most notably the ability to socialize with others, to communicate, and to use imagination. There is a spectrum within the PDD disorders."
http://www.webmd.com/brain/autism/mental-health-aspergers-syndrome

I have known lots of very smart people who had it as well; they were brilliant, just had problems communicating and in social environments.

However, I guess if we get technical, to an extent I am a sociopath, as I lack tact and have severe issues with empathizing with others, so maybe I was the problem.


----------



## looniam

Quote:


> Originally Posted by *superstition222*
> 
> At least Mantle was open, unlike BlackBoxWorks and such, eh?


actually NO, mantle *was* not open.


----------



## Cyber Locc

Quote:


> Originally Posted by *looniam*
> 
> actually NO, mantle *was* not open.


It was; they simply stated they didn't want to release the code until after completion. You are right that during its life it was mostly closed, but with the knowledge it would be opened later. So you are both right.

Or maybe it was really that they didn't want people to put work into it, as they knew that DX12 was Mantle and was coming, and the whole thing was a well-thought-out scheme.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> Or maybe it was really that they didn't want people to put work into it as they knew that DX12 was Mantle and was coming, the whole thing was a well thought scheme.


Why release Vulkan then? That's supposed to be built upon the work of Mantle from what I remember reading.


----------



## infranoia

Quote:


> Originally Posted by *Mahigan*
> 
> In the old Ashes of the Singularity thread, I also shared that image as did PontiacGTX. Seems either people weren't present or are new to the forums and haven't seen it.
> 
> Dan Baker states something similar in a recent PCPer interview (really cool guy I'd love to meet him). All of this info is the basis for the argument I've put forward that NVIDIAs next architectures will likely become more similar to GCN than not and the basis for my Volta predictions.


LOL. Mahigan, that image well predates your arrival at OCN. We were tossing it around back in the ancient Mantle-wars threads. By the way, those are a laugh riot to read now with full hindsight, if you're ever desperately bored. There was an awful lot of salt in those threads, and an awful lot of ammo for I-told-you-so's.


----------



## looniam

Quote:


> Originally Posted by *Cyber Locc*
> 
> It was; they simply stated they didn't want to release the code until after completion. You are right that during its life it was mostly closed, but with the knowledge it would be opened later. So you are both right.
> 
> Or maybe it was really that they didn't want people to put work into it, as they knew that DX12 was Mantle and was coming, and the whole thing was a well-thought-out scheme.


i just like pointing out the details - which tend to be forgotten. you know, like how no one could get mantle during its lifetime, yet any dev with a licensing agreement can get the source code for gameworks.

but one is a blackbox whereas the other isn't.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> Why release Vulkan then? That's supposed to be built upon the work of Mantle from what I remember reading.


Vulkan is meant to be an OpenGL replacement. Why release OpenGL when we have DX? DX12 doesn't work on Linux, for starters; I am sure there are other reasons.
Quote:


> Originally Posted by *looniam*
> 
> i just like pointing out the details - which tend to be forgotten. you know, like how no one could get mantle during its lifetime, yet any dev with a licensing agreement can get the source code for gameworks.
> 
> but one is a blackbox whereas the other isn't.


That is very true, well played, well played. I am also sure that AMD has gotten their hands on GW code; it would be stupid to think they haven't.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> Vulkan is meant to be an Open GL replacement


It was also supposed to be the successor of Mantle, building upon it and improving it, based on the articles I remember reading.
Quote:


> Originally Posted by *Cyber Locc*
> 
> I am also sure that AMD has got there hands on GW code, it would be stupid to think they haven't.


When it becomes open source let me know. I'm not going to hold my breath.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> It was also supposed to be the successor of Mantle, building upon it and improving it, based on the articles I remember reading.


Well, it isn't AMD's, so I don't know that I would call it a "successor" to Mantle, but I guess so. Mantle was meant to be, as you yourself pointed out, an open source API; DX12 is not. Whether they are both based on Mantle is kind of irrelevant: one is open source, one is not, and that is why we need two.

Microsoft would never use an open source API, especially for their Xbox; that would be heresy to their entire infrastructure.

Think about what DX12 does: it sells Xboxes and it sells Windows 10. With Mantle or Vulkan those aspects would be lost, anyone could alter the Xbox's code base much more easily, and no one would jump onto Windows 10.

Even if the idea and base of the API are the same, MS still has the ability to add their own locks and changes to prevent tampering; that is why it's 80% Mantle and 20% MS locks.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> Mantle was meant to be as you yourself pointed out an Open Source API, DX12 is not


I thought we were discussing Vulkan. That was what I read was supposed to be Mantle's successor.

I don't think it's reasonable to believe AMD thought it could displace DirectX with anything. Mantle was likely intended to prod Microsoft into going low-level as it did.


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> I thought we were discussing Vulkan. That was what I read was supposed to be Mantle's successor.
> 
> I don't think it's reasonable to believe AMD thought it could displace DirectX with anything. Mantle was likely intended to prod Microsoft into going low-level as it did.


We are; you asked me why release Vulkan if DX12 was based on Mantle, and I am telling you why. Also, Vulkan is supposed to be OpenGL's successor, not Mantle's; Mantle was never even finished, so I don't know how it can have a successor, lol.

Why would they need to prod MS to do anything? MS relies on AMD working very well for the Xbox, an Xbox that uses DX. Giving them Mantle and telling them to make DX12 with it handed them the ability to give the Xbox a boost.

As has been said by a few people in the last few posts, DX12 is Mantle; this has been proven hundreds of times, and anyone who says differently is, well, in denial.

There are plenty of ways to make a low-level API other than AMD's way; they went with AMD's way because they have investments in AMD's hardware. They don't sell PCs with GPUs; they sell Xboxes that exclusively use AMD.

As was stated earlier as well, the PS4 also uses an API like Mantle, although theirs is further off. They use the API to get the best performance from the AMD GCN hardware to sell consoles; MS is doing the same thing.

MS gains nothing from coding DX12 to work well on NV cards; they gain a whole lot by coding it for GCN. They have no skin in the PC GPU market; they do in the console market.


----------



## magnek

Quote:


> Originally Posted by *Cyber Locc*
> 
> LOL, how so? Was I supposed to just go along with the cover-ups, hehe.
> 
> I mean, someone had to say it, as it seems a lot of people in this thread don't know what's really going on.
> 
> Besides, who doesn't love unboxing, lol.
> 
> And did you change your avatar?


Yes, I changed my avatar. And by Pandora's box, see below:
Quote:


> Originally Posted by *infranoia*
> 
> LOL. Mahigan, that image well predates your arrival to OCN. We were tossing it around back in the ancient Mantle wars threads. By the way those are a laugh-riot to read now, with full hindsight, if you're ever desperately bored. There was an awful lot of salt in those threads and an awful lot of ammo for I-told-you-so's.


You really should've seen those threads (the chicken-or-egg, Mantle-or-DX12-came-first ones); they were... something. But I won't lie, I definitely derived hours of free entertainment from them.


----------



## Mahigan

Quote:


> Originally Posted by *magnek*
> 
> Yes, I changed my avatar. And by Pandora's box, see below:
> You really should've seen those threads (the chicken-or-egg, Mantle-or-DX12-came-first ones); they were... something. But I won't lie, I definitely derived hours of free entertainment from them.


Yeah,

I think that although DX12 is based on Mantle, a lot of stuff was added to make it work well with the "dragons buried deep within the barrows of Windows" that Kollock mentioned.

Vulkan, on the other hand, appears to lack these additions.


----------



## Xuper

Quote:


> Originally Posted by *Cyber Locc*
> 
> It was; they simply stated they didn't want to release the code until after completion. You are right that during its life it was mostly closed, but with the knowledge it would be opened later. So you are both right.
> 
> Or maybe it was really that they didn't want people to put work into it, as they knew that DX12 was Mantle and was coming, and the whole thing was a well-thought-out scheme.


No, Mantle was not open. It was stated by Robert Hallock on Reddit, but I can't find it.


----------



## Cyber Locc

Quote:


> Originally Posted by *Xuper*
> 
> No , Mantle was not open.It was stated by Robert Hallock in reddit but I can't find it.


"Mantle was conceived and developed by AMD in partnership with leading game developers.
This enabled the fast and agile development required to validate the concepts and bring the technology to life in a relatively short period of time. However, Mantle was designed in a way that makes it applicable to a range of modern GPU architectures. In the months ahead, we will be inviting more partners to participate in the development program, leading up to a public release of the specifications later in 2014. *Our intention is for Mantle, or something that looks very much like it,* *to eventually become an industry standard applicable to multiple graphics architectures and platforms.* "

http://support.amd.com/en-us/search/faq/184

Pay close attention to that, as it also goes hand in hand with the DX12 stuff.

Quote:


> Originally Posted by *magnek*
> 
> Yes changed my avatar. And by Pandora's box, see below:
> You really should've seen those threads (the chicken or egg Mantle or DX12 came first ones), they were... something. But I won't lie, I definitely derived hours of free entertainment from them.


I hate it when people do that, lol; I kept not realizing who you were due to your avatar. Kept second-guessing myself, hehe. Also, I liked the old one better.

And yeah, I wasn't around much during that time; I was really busy, barely able to keep up reading the stuff, much less discussing it. Such is life. I will have to go back and read through those to see the comedy, though.


----------



## Xuper

Either Robert is wrong or you are, but I think your link says they give it to all developers, not to everyone.

https://twitter.com/Thracks/status/383872285351739393

Quote:


> @GnrlKhalid No. It is an API for the industry-standard GCN Architecture and its specific ISA, done at the request of game developers


----------



## Cyber Locc

Quote:


> Originally Posted by *Xuper*
> 
> either Robert is wrong or You.but i think your link says we give to all developers not everyone.
> 
> https://twitter.com/Thracks/status/383872285351739393


Well, I am thinking it's Robert; if you click my link, that is a quote from AMD's website. Official statements don't come from Twitter; they come from press releases and companies' websites, lol. In this case I think we need to look at the source.

Also, his Twitter statement is from 2013, whereas my source doesn't have a date. Maybe his statement was true for 2013 but later changed, I do not know.

Then again we have this: http://wccftech.com/amd-mantle-api-require-gcn-work-nvidia-graphic-cards/

And the fact that AMD offered NV the rights to use Mantle.

Also, someone makes a very good point in that Twitter thread: "Of course it would, but low level API's are tied to a very specific architecture, so by definition it can't be open." He is right, and that is what we are seeing from DX12 right now. DX12 was coded with AMD in mind for the Xbox One, so of course they will have the upper hand.

AMD couldn't compete with NV when they were on an equal playing field, so they had to pull the dirty trick of pushing an API coded to their architecture. That will buy them a few years, and once NV gets the new architecture down, then we will truly see who makes the better card. It was honestly kind of a low blow, but a very good move; that is why NV stated way back when that Mantle would show no gains. It wasn't for their cards; it was made for AMD cards only, as is DX12. They would need a major architecture change to support Mantle efficiently, which now they are having to do for DX12.

That leads right back to the other unliked statement that "DX12 is a Gimpworks API", and it is, 100%; the entire API is coded towards AMD's GCN architecture. Sure, NV can adapt theirs to support it and excel at it, but that will take time.

Everyone likes to glorify AMD as this innocent, sweet company that does no wrong while NV is just a big bully. That is, however, very far from the truth; they both play dirty, and this proves it right here. In actuality, GameWorks is only in a handful of games, whereas this is an API, so AMD is the one playing dirtier.


----------



## mtcn77

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well, I am thinking it's Robert; if you click my link, that is a quote from AMD's website. Official statements don't come from Twitter; they come from press releases and companies' websites, lol. In this case I think we need to look at the source.
> 
> Also, his Twitter statement is from 2013, whereas my source doesn't have a date. Maybe his statement was true for 2013 but later changed, I do not know.
> 
> Then again we have this: http://wccftech.com/amd-mantle-api-require-gcn-work-nvidia-graphic-cards/
> 
> And the fact that AMD offered NV the rights to use Mantle.
> 
> Also, someone makes a very good point in that Twitter thread: "Of course it would, but low level API's are tied to a very specific architecture, so by definition it can't be open." He is right, and that is what we are seeing from DX12 right now. DX12 was coded with AMD in mind for the Xbox One, so of course they will have the upper hand.
> 
> *AMD couldn't compete with NV when they were on an equal playing field, so they had to pull the dirty trick of pushing an API coded to their architecture.* That will buy them a few years, and once NV gets the new architecture down, then we will truly see who makes the better card. It was honestly kind of a low blow, but a very good move; that is why NV stated way back when that Mantle would show no gains. It wasn't for their cards; it was made for AMD cards only, as is DX12. They would need a major architecture change to support Mantle efficiently, which now they are having to do for DX12.
> 
> That leads right back to the other unliked statement that "DX12 is a Gimpworks API", and it is, 100%; the entire API is coded towards AMD's GCN architecture. Sure, NV can adapt theirs to support it and excel at it, but that will take time.
> 
> Everyone likes to glorify AMD as this innocent, sweet company that does no wrong while NV is just a big bully. That is, however, very far from the truth; they both play dirty, and this proves it right here. In actuality, GameWorks is only in a handful of games, whereas this is an API, so AMD is the one playing dirtier.


This was a project from before AMD acquired ATi, I presume. It's still a fact that AMD can put more units into the same die space, and the newer nodes will be a challenge for making bigger dies. This is just a trick because _it will play into AMD's hands_, not because it is dubious. It will factually raise the number of ALUs simultaneously working in Pitcairn GPUs by 20%, Tonga's by 26%, Hawaii's by 24%, Tahiti's by 68%, and Fiji's by *75%*. Until Mantle, it was an unsolved problem how GPUs with more bandwidth could be scaled without increasing die size. That is what AMD did, and that is why they will win in the long term: AMD is both solving its own problem and taking care of foundry contingencies at the same time.
PS: Fiji has twice the bandwidth but a third fewer ROPs than big Maxwell, so it is under three times the pixel-writing pressure. DirectX 12 is a BIG must for these GPUs; they literally cannot work better than an 'R9 290' without the new API (just dramatizing).
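The "three times the pixel-writing pressure" figure follows directly from the two ratios the post states. A quick back-of-envelope check, using the post's own factors rather than verified spec-sheet numbers:

```python
# Back-of-envelope check of the "three times the pixel writing pressure"
# claim, using the post's own factors (assumptions, not verified specs).
bandwidth_ratio = 2.0    # Fiji vs. big Maxwell memory bandwidth, per the post
rop_ratio = 2.0 / 3.0    # "a third fewer ROPs", per the post

# Write pressure per ROP scales as available bandwidth divided by ROP count.
pressure_per_rop = bandwidth_ratio / rop_ratio
print(pressure_per_rop)  # 3.0: each ROP must sustain three times the writes
```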


----------



## Robenger

Quote:


> AMD couldn't compete with NV when they were on an equal playing field, so they had to pull the dirty trick of pushing an API coded to their architecture.


So much salt.

How you managed to come up with this idea is astounding.


----------



## Serios

Quote:


> Originally Posted by *Xuper*
> 
> No , Mantle was not open.It was stated by Robert Hallock in reddit but I can't find it.


Details.
AMD's plans were to make Mantle open, and in the end that is what they did; they just did it in a different way, not the way everybody was expecting.


----------



## mcg75

Quote:


> Originally Posted by *Cyber Locc*
> 
> AMD couldn't compete with NV when they were on an equal playing field, so they had to pull the dirty trick of pushing an API coded to their architecture. That will buy them a few years, and once NV gets the new architecture down, then we will truly see who makes the better card. It was honestly kind of a low blow, but a very good move; that is why NV stated way back when that Mantle would show no gains. It wasn't for their cards; it was made for AMD cards only, as is DX12. They would need a major architecture change to support Mantle efficiently, which now they are having to do for DX12.
> 
> That leads right back to the other unliked statement that "DX12 is a Gimpworks API", and it is, 100%; the entire API is coded towards AMD's GCN architecture. Sure, NV can adapt theirs to support it and excel at it, but that will take time.
> 
> Everyone likes to glorify AMD as this innocent, sweet company that does no wrong while NV is just a big bully. That is, however, very far from the truth; they both play dirty, and this proves it right here. In actuality, GameWorks is only in a handful of games, whereas this is an API, so AMD is the one playing dirtier.


Mantle wasn't a dirty trick, it was a smart business move on the part of AMD.

It was also smart of AMD to offer it to everyone else because they knew nobody would truly take them up on the offer.

And yes, it was built to take advantage of GCN rather than anything else.

Gameworks, whether you like it or not, was also a smart business decision.

And there is no such thing as innocent when it comes to companies with shareholders. The company playing nice now, offering everything for free, is doing it to survive. If fortunes changed and they ended up on top with the power, they'd act very much like their competition did in the same position. Their goal is to make money, not to give things away for free.


----------



## BradleyW

Quote:


> AMD couldn't compete with NV when they were on an equal playing field so they had to pull dirty trick of pushing an API coded to there architecture.


They were never on an equal playing field. DirectX 11, for instance, is coded in a way that highly benefits Nvidia GPUs.


----------



## PontiacGTX

Quote:


> Originally Posted by *Mahigan*
> 
> You're right, I am partisan... Towards open source, open standards, fair open market competition. I rail against unfair competitive practices all of the time. I dislike closed source, even way back in the 3Dfx days when I supported NVIDIAs use of OpenGL and Direct3D when opposed by 3Dfx Glide.
> 
> When AMD released Mantle, the only thing which stopped me from attacking it was AMDs insistence that any GPU could run the code and that Mantle wouldn't require licensing fees. AMD made good on their promise when they donated Mantle to Kronos.


Well, either Khronos didn't like AMD's Mantle or the game wasn't coded properly for it, because Vulkan wasn't at DX11 performance levels.

Quote:


> Originally Posted by *Mahigan*
> 
> In the old Ashes of the Singularity thread, I also shared that image as did PontiacGTX. Seems either people weren't present or are new to the forums and haven't seen it.
> 
> Dan Baker states something similar in a recent PCPer interview (really cool guy I'd love to meet him). All of this info is the basis for the argument I've put forward that NVIDIAs next architectures will likely become more similar to GCN than not and the basis for my Volta predictions.


Do you think they could go back to the Fermi design, redesign it for DX12, and get performance similar to GCN?
Quote:


> Originally Posted by *Cyber Locc*
> 
> AMD couldn't compete with NV when they were on an equal playing field so they had to pull dirty trick of pushing an API coded to there architecture. That will buy them a few years and NV will get the new architecture down and then will be when we truly see who makes the better card. It was honestly kind of a low blow to be honest but very good move, that is why Nv stated way back when that Mantle would show no gains, it wasn't for there cards it was made for AMD cards only as is DX12. They would need to do a major architecture change to support mantle efficiently which now they are having to do for DX12.
> 
> That leads right back other unliked statement of "DX12 is Gimpworks API" and it is 100% the entire API is coded towards AMDs GCN architecture. Sure NV can adapt theres to support it and excel at it but that will take time.
> 
> Everyone likes to Glorify AMD as this innocent sweet company and that does no wrong and NV is just a big bully, that is however very far from the truth they both are playing dirty and this proves that right here. In actuality Game Works is only on a handful of Games where this is an API so AMD is the one that is playing dirtier.


So consoles were cheating because they had a more efficient environment that got the most out of the available hardware?

Well, Nvidia has a long list of dirty tricks: over-tessellation in Crysis 2, PhysX running on the CPU with AMD GPUs, blocking PhysX in hybrid card combinations, forcing physics onto the CPU in pCARS, GameWorks crippling cards on purpose, integrating Nvidia software into UE4, claiming their hardware is 100% DirectX 12, asking a developer to disable async shaders to hide their current issues.

Then AMD, for pushing new standards in PC game dev that allow better performance and more detailed graphics on capable hardware, is the one pulling a dirty trick?


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> It is to me; it was a two-fold shot fired. MS got to compete with PS4 better, and AMD with NV; both of them won.


I still don't get where you are getting that. Maybe the current batch of Nvidia cards don't fully support DX12, but DX12 is open for Nvidia to adapt to, and they will in the future. On the other hand, stuff like GameWorks was introduced by Nvidia and will never be fully open for AMD to take advantage of.

Anyway, if you felt DX12 was such a big conspiracy by AMD and Microsoft, one that also turned out to hamper Nvidia's cards, why did you buy a 980 Ti so late in the game? LOL


----------



## Cyber Locc

Quote:


> Originally Posted by *Robenger*
> 
> So much salt
> 
> 
> 
> 
> 
> 
> 
> 
> How you managed to come up with this idea is astounding.


How do I have salt? I use both AMD and NV cards. How did I come up with that? Hmm, maybe because I owned AMD cards until a week ago lol...
Quote:


> Originally Posted by *mcg75*
> 
> Mantle wasn't a dirty trick, it was a smart business move on the part of AMD.
> 
> It was also smart of AMD to offer it to everyone else because they knew nobody would truly take them up on the offer.
> 
> And yes, it was built to take advantage of GCN rather than anything else.
> 
> Gameworks, whether you like it or not, was also a smart business decision.
> 
> And there is no such thing as innocent when it comes to companies with shareholders. The company playing nice now offering everything for free is doing it to survive. If fortune changes and they ended up on top with the power, they'd act very similar to their competition did in the same position. Their goal is to make money not to give away things for free.


I agree 100% that playing dirty is a smart business move, and that GameWorks was a smart business move; sorry if I gave a different impression. My point was exactly what you just said: businesses with shareholders have to play dirty.

Just look two posts below yours and you will see what I am saying...
Quote:


> Originally Posted by *BradleyW*
> 
> They were never on an equal playing field. Direct-X 11 for instance. It's coded in a way which highly benefits Nvidia GPU's.


AFAIK, that's because Nvidia designed their architecture around DX11, not DX11 around their cards. If that isn't the case, I would love some reading on the subject.
Quote:


> Originally Posted by *PontiacGTX*
> 
> So Consoles were cheating because they had a more efficient environment due to get most of the resources of the available hardware?


I don't know where this is coming from. Consoles have fixed hardware; they need APIs written for that non-changing hardware, which is a completely different situation.
Quote:


> Originally Posted by *PontiacGTX*
> 
> Well nvidia has long list of dirty tricks
> Over tesellation in Crysis 2,physx running on cpu with amd gpus,blocking physx on hybrid card combinations,forcing physics on cpus on pcars,gameworks crippling cards on purpose,integrating nvidia software onto UE4,telling that their hardware is 100% directx 12, asking developer to disable Async shaders to hide their current issues
> 
> *Then amd for pushing new standards in PC game dev allowing better performance on capable hardware and more detailed graphics or better optimized ,is a dirty trick*


It most definitely is, when it benefits only them. We will see if it benefits NV and, more importantly, Intel; Intel holds the biggest share of the PC graphics market, so that will be the true test, and they need the boost the most. Nothing wrong with AMD using tricks to get things done; that's good business. However, I am sick of everyone bashing NV and putting AMD on a pedestal like they are the pope; they are just as grimy as NV. *All* publicly traded companies play dirty and are grimy, so stop with the pedestal, because it is incorrect.

Quote:


> Originally Posted by *criminal*
> 
> I still don't get where you are getting that. Maybe the current batch of Nvidia cards don;t support DX12, but it DX12 is open for Nvidia to adapt to and will so in the future. On the other hand, stuff like Gameworks was introduced by Nvidia and will never be fully open for AMD to take advantage off.
> 
> Anyway, if you felt DX12 was such a big conspiracy by AMD and Microsoft that also turned out to hamper Nvidia's cards, why did you buy a 980Ti so late in the game? LOL


I do not think Pascal will support async either, for one thing, which will give AMD a leg up. Even if it does, NV's first iteration will likely be rough; only time will tell.

Why did I buy a 980 Ti? Well, we have been over this, haven't we?

One more time: CUDA, heat, office PC, and finally, I don't care about DX12, as none of the games interest me.

For a final clarification, one more time: you've seen my sig change from one rig to two. That is because, while I have quite a few systems around here, I only cared about one, my main rig. Going forward, my main gaming rig and my main office rig will differ. Some background: Shadow (my gaming rig) had a 4820K and three 290Xs. My RIVBE died, so it needed to be replaced, and the prices for a RIVBE at that point were horrid; a used board cost more than I paid new back then.

I had planned to update my entire rig (Shadow) when the RVBE was released with BW-E, and that is still the plan. At the same time, I was sick of having to remote into my gaming rig from my laptop to do real work (they are in different rooms of my house, but still). So I decided to build an office rig (Night Hawk) and use it as my gaming rig until BW-E.

When BW-E comes out, DX12 will be more of a factor, and my card choices will differ for that rig, since its sole purpose is gaming and benching, plus maybe some long-running work I can set off and then go back to my office rig. I will not be gaming much on the office rig, and if I do it will be for short periods, like when I am taking a break. No serious gaming will take place on that rig (aside from temporarily right now).

Night Hawk is an mATX build in a small case (Hexgear R40); my rad space is limited, the board wouldn't work well with three cards, and I definitely couldn't cool three, most likely not even two. So I switched to a 980 Ti for this rig. I didn't plan to keep using three 290Xs in Shadow anyway, as they would show their age too much; it was time to upgrade them along with the board and CPU, and the board dying just made that happen slightly sooner. Does that make sense now?


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> I do not think Pascal will support async either, for one thing, which will give AMD a leg up. Even if it does, NV's first iteration will likely be rough; only time will tell.
> 
> Why did I buy a 980 Ti? Well, we have been over this, haven't we?
> 
> One more time: CUDA, heat, office PC, and finally, I don't care about DX12, as none of the games interest me.
> 
> For a final clarification, one more time: you've seen my sig change from one rig to two. That is because, while I have quite a few systems around here, I only cared about one, my main rig. Going forward, my main gaming rig and my main office rig will differ. Some background: Shadow (my gaming rig) had a 4820K and three 290Xs. My RIVBE died, so it needed to be replaced, and the prices for a RIVBE at that point were horrid; a used board cost more than I paid new back then.
> 
> I had planned to update my entire rig (Shadow) when the RVBE was released with BW-E, and that is still the plan. At the same time, I was sick of having to remote into my gaming rig from my laptop to do real work (they are in different rooms of my house, but still). So I decided to build an office rig (Night Hawk) and use it as my gaming rig until BW-E.
> 
> When BW-E comes out, DX12 will be more of a factor, and my card choices will differ for that rig, since its sole purpose is gaming and benching, plus maybe some long-running work I can set off and then go back to my office rig. I will not be gaming much on the office rig, and if I do it will be for short periods, like when I am taking a break. No serious gaming will take place on that rig (aside from temporarily right now).
> 
> Night Hawk is an mATX build in a small case (Hexgear R40); my rad space is limited, the board wouldn't work well with three cards, and I definitely couldn't cool three, most likely not even two. So I switched to a 980 Ti for this rig. I didn't plan to keep using three 290Xs in Shadow anyway, as they would show their age too much; it was time to upgrade them along with the board and CPU, and the board dying just made that happen slightly sooner. Does that make sense now?


Yeah, that makes sense. I guess I am just confused as to why all this got started in the first place if you don't care about DX12, and the other features you bought the 980 Ti for won't be affected by the supposed Microsoft/AMD conspiracy. It seemed to me you were upset about the impact it had on Nvidia and on your most recent GPU purchase decision.


----------



## Shogon

Need more people for matchmaking


----------



## sugarhell

http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb
Quote:


> Another big feature, which we are also using on Xbox One, is asynchronous compute.


Funny thing: Nvidia reposted it on Twitter
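For anyone wondering why async compute is worth blogging about: an independent compute pass (lighting, AO, and the like) can run in the idle gaps the graphics queue leaves, so the frame cost trends toward the larger of the two workloads instead of their sum. A toy model of that idea, with completely made-up numbers; the `overlap` fraction is an assumption standing in for how well the hardware actually interleaves the two queues:

```python
# Toy frame-time model: serial vs. asynchronous execution of a
# graphics pass and an independent compute pass. Times in ms,
# purely illustrative.

def serial_frame_ms(graphics_ms, compute_ms):
    # One queue in practice: compute waits for graphics to finish.
    return graphics_ms + compute_ms

def async_frame_ms(graphics_ms, compute_ms, overlap=0.75):
    # Separate queues: a fraction of the compute work hides inside
    # idle bubbles in the graphics pipeline (overlap < 1.0 because
    # the two passes still contend for ALUs and bandwidth).
    hidden = min(compute_ms * overlap, graphics_ms)
    return graphics_ms + compute_ms - hidden

g, c = 12.0, 4.0
print(serial_frame_ms(g, c))  # 16.0 ms
print(async_frame_ms(g, c))   # 13.0 ms, i.e. ~19% shorter frame
```

A GPU whose scheduler cannot overlap the queues effectively has `overlap` near zero, which is exactly the "async gives no gain (or a loss)" result people keep seeing in the benchmarks.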


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Yeah, that makes sense. I guess I am just confused as to why all this got started in the first place if you don't care about DX12 and the other features you bought the 980Ti for won't be affected by Microsoft/AMD supposed conspiracy. Seemed to me you were upset about the impact it had on Nvidia and your most current gpu purchase decision.


Oh no, I applaud AMD for the DX12 "conspiracy"; I think it was a great idea and it will buy them a ton of market share. If Pascal doesn't have async (I don't think it will), then Polaris will dominate. I have no issue using Polaris or AMD at all; I like them both.

I didn't see anyone sticking up for NV, so I figured I would, that's all lol. I am not trying to bash AMD over the MS/AMD conspiracy; I think it was a great idea and very well executed.

However, I don't like to see NV being strung up when they do stuff like this while AMD walks away clear. On top of that, people are blaming NV for not having async support, but I don't think it was their fault; I think the cards just got dealt a bad hand, just as has happened to AMD many times before.

Both companies push the industry, both of them make good cards, and it comes down to what your needs are when shopping and what each offers at the time.

Also, I bought my 980 Ti well after participating in this thread, and my prior comments were the same as now: I don't think even Pascal will have full DX12 support, so I bought knowing that. I still hope they do get some support, obviously, but if they don't, I knew fully what I was getting.

Quote:


> Originally Posted by *sugarhell*
> 
> http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb
> Funny thing : Nvidia post it


What did NV post? I don't get it.


----------



## mtcn77

Quote:


> Originally Posted by *Cyber Locc*
> 
> *What did NV post*, I dont get it?


The irony.


----------



## Cyber Locc

Quote:


> Originally Posted by *mtcn77*
> 
> The irony.


I get that, but the article he linked isn't written by NV, is it? That is why I am confused: where did NV say what he said?

However, there is some irony here:

"Our developer blogs lift the curtain on the creation of Lara's first tomb raiding expedition, and the technology we use to constantly improve it. Following the release of Rise of the Tomb Raider for PC, the title will be one of the first in the industry to integrate DirectX 12 support, allowing fans with older PCs *WITH AMD CARDS* or newer rigs *WITH AMD CARDS* to run at higher framerates and higher graphical settings. Nixxes Studio Head Jurjen Katsman dives deep into the new technology below."

Also, fixed that for them







http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb


----------



## GorillaSceptre

Quote:


> Originally Posted by *sugarhell*
> 
> http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb
> Funny thing : Nvidia repost it on twitter


If ROTTR doesn't use async with its DX12 build I will be seriously annoyed. It's Nvidia-backed, so I doubt they will..

The Xbone is using it but PC is still stuck in the past.. I hope studios push things forward.


----------



## Cyber Locc

Quote:


> Originally Posted by *GorillaSceptre*
> 
> If ROTTR doesn't use Async with it's DX12 build i will be seriously annoyed. It's Nvidia backed so i doubt they will..
> 
> The Xbone is using it but PC is still stuck in the past.. I hope studios push things forward.


Is the patch live? The comments in that thread make me believe DX12 is live in TR. It says it uses async in there, didn't it?

Also, Xbox only has one set of hardware to work with; PCs have more NV GPUs than AMD, and more Intel than AMD and NV combined times 100, and neither Intel nor NV benefits from async.

Yet when I say that MS released DX12 to benefit AMD I get called crazy, even though the two leading graphics vendors do not benefit from its biggest feature. Let's not forget that in the GPU market Intel wins by a landslide; how many PCs do you actually think have a dGPU, 1 in 100, or less?

Edit: Well, yeah, it's out; let the benchmarks flow.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> Is the patch live? The comments in that thread make me believe DX12 is live in TR. It says it uses async in there, didn't it?
> 
> Also, Xbox only has one set of hardware to work with; PCs have more NV GPUs than AMD, and more Intel than AMD and NV combined times 100, and neither Intel nor NV benefits from async.
> 
> Yet when I say that MS released DX12 to benefit AMD I get called crazy, even though the two leading graphics vendors do not benefit from its biggest feature. Let's not forget that in the GPU market Intel wins by a landslide; how many PCs do you actually think have a dGPU, 1 in 100, or less?


It is live. Not great though: worse performance for me (and others on the Steam forums), with a lot of stuttering and some graphical artifacts.


----------



## Devnant

Quote:


> Originally Posted by *Cyber Locc*
> 
> Is the patch live? The comments in that thread make me believe DX12 is live in TR. It says it uses async in there, didn't it?
> 
> Also, Xbox only has one set of hardware to work with; PCs have more NV GPUs than AMD, and more Intel than AMD and NV combined times 100, and neither Intel nor NV benefits from async.
> 
> Yet when I say that MS released DX12 to benefit AMD I get called crazy, even though the two leading graphics vendors do not benefit from its biggest feature. Let's not forget that in the GPU market Intel wins by a landslide; how many PCs do you actually think have a dGPU, 1 in 100, or less?
> 
> Edit: Well, yeah, it's out; let the benchmarks flow.


I've also tested the patch. No difference in graphics, and I lost 1 FPS in the benchmark compared to DX11. SLI doesn't work.


----------



## superstition222

Quote:


> Originally Posted by *PontiacGTX*
> 
> Well, Nvidia has a long list of dirty tricks: over-tessellation in Crysis 2, PhysX running on the CPU with AMD GPUs, blocking PhysX in hybrid card combinations, forcing physics onto the CPU in pCARS, GameWorks crippling cards on purpose, integrating Nvidia software into UE4, claiming their hardware is 100% DirectX 12, asking a developer to disable async shaders to hide their current issues.
> 
> Then AMD, for pushing new standards in PC game dev that allow better performance and more detailed graphics on capable hardware, is the one pulling a dirty trick?


And it's not like Nvidia won't keep bribing developers to downplay async and use lots of tessellation. Hopefully AMD will increase tessellation performance enough in Polaris to take away that trick angle.


----------



## superstition222

Quote:


> Originally Posted by *mcg75*
> 
> Mantle wasn't a dirty trick, it was a smart business move on the part of AMD.
> 
> It was also smart of AMD to offer it to everyone else because they knew nobody would truly take them up on the offer.
> 
> And yes, it was built to take advantage of GCN rather than anything else.
> 
> Gameworks, whether you like it or not, was also a smart business decision.
> 
> And there is no such thing as innocent when it comes to companies with shareholders. The company playing nice now offering everything for free is doing it to survive. If fortune changes and they ended up on top with the power, they'd act very similar to their competition did in the same position. Their goal is to make money not to give away things for free.


Mantle, though, didn't hurt performance for Nvidia buyers like Gameworks, right? AMD also, at least as far as I've been seeing, has been pushing more open standards than Nvidia.
Quote:


> Originally Posted by *Cyber Locc*
> Everyone likes to Glorify AMD as this innocent sweet company and that does no wrong and NV is just a big bully, that is however very far from the truth they both are playing dirty and this proves that right here.


Some recognize the fact that companies that are in a weaker competitive position have to try to give consumers better deals.

Different businesses can also have different angles in terms of how they seek profit. Some don't care much about how they're perceived by consumers and others do a lot to cultivate a positive image. Apple, for instance, is defying the US government about the phone backdoor because it is a very image-conscious company. Microsoft, by contrast, has someone like Gates who would happily let the government do whatever it likes.


----------



## jologskyblues

Quote:


> Originally Posted by *superstition222*
> 
> Mantle, though, didn't hurt performance for Nvidia buyers like Gameworks, right?


Apples and oranges. Mantle was a separate API path from DX11 and only worked on Radeon GPUs, while GameWorks is a middleware suite where some features are vendor-agnostic.


----------



## looniam

Quote:


> Originally Posted by *superstition222*
> 
> And it's not like Nvidia won't keep bribing developers to downplay async and use lots of tessellation. Hopefully AMD will increase tessellation performance enough in Polaris to take away that trick angle.


you are aware that AMD users can turn down tessellation in the CCC, right?


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> Different businesses can also have different angles in terms of how they seek profit. Some don't care much about how they're perceived by consumers and others do a lot to cultivate a positive image. Apple, for instance, *is defying the US government about the phone backdoor because it is a very image-conscious company*. Microsoft, by contrast, has someone like Gates who would happily let the government do whatever it likes.


Allegedly. Did you forget what happened to Snowden for saying stuff like that? Yeah, that is not going to end well for Apple, which is a US company. Whether it's true or not, don't get me wrong, I don't put it past the government to do it, nor do I put it past Apple to make it up.

Nor would it be the first time they have flat-out lied to consumers, and it won't be the last either. Their entire history has been based on lying to customers and stealing designs; it's their MO.
Quote:


> Originally Posted by *looniam*
> 
> you are aware that AMD users can turn down tessellation in the CCC, right?


Whaaaaa, no, that is blasphemous talk, you know.


----------



## Robenger

Quote:


> Originally Posted by *looniam*
> 
> you are aware that AMD users can turn down tessellation in the CCC, right?


Not the issue, but nice try.


----------



## Cyber Locc

Quote:


> Originally Posted by *Robenger*
> 
> Not the issue, but nice try.


You're right, it's much easier: just turn HairWorks off, problem solved.


----------



## Robenger

Quote:


> Originally Posted by *Cyber Locc*
> 
> Your right its much easier, just turn the hairworks off problem solved.


Still not the issue Osmium.


----------



## Cyber Locc

Quote:


> Originally Posted by *Robenger*
> 
> Still not the issue Osmium.


I know it isn't; turning tess down fixes it, just like what was said. If you mod the tess, the game will play just as it should on an AMD card.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Cyber Locc*
> 
> I know it isn't, turning tess down fixes it just like what was said. If you mod the Tess the game will play just as it should on an AMD card.


That only fixes HairWorks, but VisualFX is not just HairWorks.


----------



## Cyro999

Quote:


> Originally Posted by *looniam*
> 
> you are aware that AMD users can turn down tessellation in the CCC, right?


As much as it's awful practice to turn tessellation up to 11 and ruin performance for Nvidia users, and ruin it for AMD users 5x more, I must also fault AMD for having so much worse tessellation performance.

They improved tessellation performance by 4x... in GCN 1.2, which they didn't bring to some of their most popular GPUs in the PC gaming community. So here we have a situation like 390 vs 970, with the 970 being a huge amount faster for tess workloads - like 5-10x faster - which is partially AMD's fault.


----------



## superstition222

Quote:


> Originally Posted by *Cyber Locc*
> 
> There entire history has been based on lying to customers and stealing designs its there MO.


I think you took Pirates of Silicon Valley too seriously.
Quote:


> Originally Posted by *jologskyblues*
> 
> Apples and oranges. Mantle was a different API path to DX11 and only worked on Radeon GPU's while GameWorks is a middleware software suite where some features are vendor agnostic..


So, as I said, Mantle didn't hurt the performance of Nvidia cards. Gameworks, by contrast...


----------



## Themisseble

Gameworks is for marketing...
I still think Frostbite is the best engine: amazing graphics without GameWorks, and the best optimization...


----------



## SpeedyVT

No fighting, guys; the Mantle paradigm is the future and a motivator for the betterment of all APIs to come.


----------



## jologskyblues

Quote:


> Originally Posted by *superstition222*
> 
> So, as I said, Mantle didn't hurt the performance of Nvidia cards. Gameworks, by contrast...


Huh? You should be comparing GameWorks with GPUOpen.


----------



## looniam

Quote:


> Originally Posted by *Robenger*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> you are aware that AMD users can turn down tessellation in the CCC, right?
> 
> 
> 
> Not the issue, but nice try.
Click to expand...

you're right. AMD needs to improve their tessellation performance. but until then, there is an option.


----------



## ku4eto

Quote:


> Originally Posted by *looniam*
> 
> you're right. AMD needs to improve their tessellation performance. but until then, there is an option.


Their tessellation performance is actually fine; it is titles like the Batman Arkham game (whichever it was), with the Batman coat (or whatever it was called), and Crysis 2, with its city blockades, that use stupid amounts of tessellation.


----------



## stoker

Is Ashes Beta available outside of Steam? I'd like to support the game but here in the AU our exchange rates to US aren't so good atm


----------



## PlugSeven

Quote:


> Originally Posted by *looniam*
> 
> you're right. AMD needs to improve their tessellation performance. but until then, there is an option.


Why? So AMD users can enjoy a tess factor of 64 on flat surfaces? With AMD lowering tess to x8 in drivers for some games and Nvidia rocking it at x64, where are all the IQ comparisons showing it's worth it, all the flatter, rounder surfaces? Or is it just more tessellation for the sake of tessellation?


----------



## looniam

Quote:


> Originally Posted by *ku4eto*
> 
> Their tesselation performance is actually good, it is titles like Batman Arhkam (whatever it was), with the Batman coat (or whatever wasthe name), and Crysis 2 with the city blockades stupid amounts of tesselation.


only lately, with fiji, has AMD made any improvements,

whereas most AMD users are still sitting on three-year-old hardware over which kepler had a 20%-100% tessellation performance advantage. it's nothing new that AMD lags behind.

so you have a game released by warner bros. that was plagued with problems to the point it was pulled from steam, and a mod to a game that was rushed out the door by crytek. heard of culling?
Quote:


> Originally Posted by *PlugSeven*
> 
> Why? So AMD users can enjoy 64 factors of tess on flat surfaces? With AMD lowering the tess to X8 in drivers on some games, and nvidia rocking it at X64, where're all the IQ comparisons to show it's worth it, all the flatter, rounder surfaces? Or is it just more tessellation for the sake of tessellation?


your point is?
oh, devs aren't supposed to use tessellation because it hurts.

please, i have already posted why some devs are just lazy and crank tess to the max. the x64 factor, which i also think is outrageous btw, also puts a damper on NV's performance, so the logic that it's an underhanded tactic by NV is a FAIL.


----------



## Themisseble

Quote:


> Originally Posted by *looniam*
> 
> only lately with fiji had AMD done any improvements:
> 
> whereas most AMd users are still sitting on their three year old hardware that kepler had a 20%-100% performance increase. it's nothing new that AMD lags behind.
> 
> so you have a game released from a warner bros. that was plagued with problems to the point it was pulled from steam and a mod to a game that was rushed out the door by crytek. heard of culling?
> your point is?
> oh, devs aren't suppose to use tessellation because it hurts.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> please, i have already posted why some devs are just lazy and increase tess to the max. the x64 factor, which i also think is outragous btw, does also put a damper on NV's performance so the logic that it's an underhanded tactic by NV is a FAIL.


Yes, because it hurts...


----------



## Charcharo

Quote:


> Originally Posted by *looniam*
> 
> only lately with fiji had AMD done any improvements:


Devs should use tessellation where it actually works, not on flat surfaces... it is amazing that CoP is still an actual example of how to use tessellation six years later.

Also, I am 100% certain you won't be able to tell the difference between x16 and x64 tessellation.


----------



## ku4eto

Quote:


> Originally Posted by *looniam*
> 
> only lately with fiji had AMD done any improvements:
> 
> whereas most AMd users are still sitting on their three year old hardware that kepler had a 20%-100% performance increase. it's nothing new that AMD lags behind.
> 
> so you have a game released from a warner bros. that was plagued with problems to the point it was pulled from steam and a mod to a game that was rushed out the door by crytek. heard of culling?
> your point is?
> oh, devs aren't suppose to use tessellation because it hurts.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> please, i have already posted why some devs are just lazy and increase tess to the max. the x64 factor, which i also think is outragous btw, does also put a damper on NV's performance so the logic that it's an underhanded tactic by NV is a FAIL.


Ugh... So an x64 tess synthetic bench is useful? This is useless; a game with different amounts of tess should be used as the comparison.


----------



## looniam

Quote:


> Originally Posted by *ku4eto*
> 
> Ugh... So an x64 tess synthetic bench is useful? This is useless; a game with different amounts of tess should be used as a comparison.


i thought the same myself when looking at it, but is there a reason it wouldn't scale linearly? i.e. x32 tess, half the points. i doubt that one (or more) cards would dramatically improve its placement.
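fwiw, a rough back-of-envelope sketch (my own simplification, assuming integer partitioning of a triangular patch and ignoring fractional tess modes) suggests it shouldn't scale linearly at all: the triangle count grows with the square of the edge factor.

```python
# Simplified model (assumption: integer partitioning of a triangular patch,
# ignoring fractional/odd modes): an edge tess factor of N splits each edge
# into N segments, and the patch tessellates into roughly N^2 triangles.
def triangles_per_patch(tess_factor: int) -> int:
    return tess_factor ** 2

for f in (8, 16, 32, 64):
    print(f"x{f}: ~{triangles_per_patch(f)} triangles per patch")
```

so dropping from x64 to x32 cuts the triangle count to roughly a quarter, not half, which is why cards wouldn't just shift placement linearly as the preset drops.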


----------



## ku4eto

Quote:


> Originally Posted by *looniam*
> 
> i thought the same myself when looking at it, but is there a reason it wouldn't scale linearly? i.e. x32 tess, half the points. i doubt that one (or more) cards would dramatically improve its placement.


There are of course diminishing returns; the tessellation cost depends on several elements, and so on. If it's brought down to more believable levels, such as x16, the difference between AMD and nVidia will become smaller. There is a certain output rate that the AMD cards can manage; it is not only about the intensity.


----------



## looniam

Quote:


> Originally Posted by *ku4eto*
> 
> There are of course diminishing returns; the tessellation cost depends on several elements, and so on. If it's brought down to more believable levels, such as x16, the difference between AMD and nVidia will become smaller. There is a certain output rate that the AMD cards can manage; it is not only about the intensity.


care to compare some benches?
http://www.ozone3d.net/benchmarks/tessmark/


----------



## ku4eto

Quote:


> Originally Posted by *looniam*
> 
> care to compare some benches?
> http://www.ozone3d.net/benchmarks/tessmark/


The link you provided does not indicate what Moderate, Extreme and so on mean in terms of tessellation factor. But as you can see, under Moderate nVidia has a 50% advantage, and when you increase the tessellation it goes to 140%, 200%, 400%. This is what I meant with my previous comment: under "believable" levels of tessellation you actually get more than good results. Increase it too much and it becomes first useless, and second too heavy on both (but mainly AMD).
Of course, this is on already old cards, and I have no idea how the VLIW architecture fares, although I had a Cayman Pro.


----------



## looniam

Quote:


> Originally Posted by *ku4eto*
> 
> The link you provided does not indicate what Moderate, Extreme and so on mean in terms of tessellation factor. But as you can see, under Moderate nVidia has a 50% advantage, and when you increase the tessellation it goes to 140%, 200%, 400%. This is what I meant with my previous comment: under "believable" levels of tessellation you actually get more than good results. Increase it too much and it becomes first useless, and second too heavy on both (but mainly AMD).
> Of course, this is on already old cards, and I have no idea how the VLIW architecture fares, although I had a Cayman Pro.


i meant between you and me (compare), and the link was to _DL the bench._
you can adjust the settings:


Spoiler: Warning: Spoiler!


----------



## ku4eto

Quote:


> Originally Posted by *looniam*
> 
> i meant between you and me (compare), and the link was to _DL the bench._
> you can adjust the settings:


Sorry, my bad. I am at work now; I will try to do the tests in a few hours.


----------



## looniam

Quote:


> Originally Posted by *ku4eto*
> 
> Sorry, my bad. I am at work now; I will try to do the tests in a few hours.


no worries, you got me curious. if you have the time later, we ought to take it off-thread and go to the benchmarking section.









then everyone will want to play. but i don't feel like spending my time w/google spreadsheets.


----------



## Charcharo

This reminds me... a bit off topic guys...
How can I check how much tessellation an older title is using?


----------



## ku4eto

Quote:


> Originally Posted by *Charcharo*
> 
> This reminds me... a bit off topic guys...
> How can I check how much tessellation an older title is using?


The easiest way is to use the Crimson panel to set up a game profile, then change the Tessellation setting from AMD Optimized to Application Controlled. Do a test. Exit the game, set the Tessellation setting to x8/x16/x32/x64, redo the test and compare results. It's not like you get a raw number; this is just for orientation.


----------



## Charcharo

Quote:


> Originally Posted by *ku4eto*
> 
> The easiest way is to use the Crimson panel to set up a game profile, then change the Tessellation setting from AMD Optimized to Application Controlled. Do a test. Exit the game, set the Tessellation setting to x8/x16/x32/x64, redo the test and compare results. It's not like you get a raw number; this is just for orientation.


Thanks. Just wanted to know what tess some older games like CoP are using. Will try it out later... hopefully I will be able to get results.


----------



## Cyro999

Quote:


> Originally Posted by *ku4eto*
> 
> Their tessellation performance is actually good; it is titles like Batman Arkham (whatever it was), with the Batman coat (or whatever was the name), and Crysis 2 with the city blockades, that use stupid amounts of tessellation.


The tessellation performance is not good.

They improved it by *4x* between gcn 1.1 and 1.2, but the 290/290x/390/390x doesn't have that change. Even with the 4x improvement, it's still substantially slower than Maxwell IIRC.

It's super wrong to abuse a weakness like this, but it's also wrong to have such a weakness. I think there is no innocent victim in the tessellation debate.


----------



## Themisseble

Quote:


> Originally Posted by *Cyro999*
> 
> The tessellation performance is not good.
> 
> They improved it by *4x* between gcn 1.1 and 1.2, but the 290/290x/390/390x doesn't have that change. Even with the 4x improvement, it's still substantially slower than Maxwell IIRC.
> 
> It's super wrong to abuse a weakness like this, but it's also wrong to have such a weakness. I think there is no innocent victim in the tessellation debate.


The R9 380 is matching the GTX 960 in tessellation, as does the GTX 770... all of them are in the same range.

First of all:
Gameworks is presented as being all about tessellation, but it is mainly COMPUTE. This is why NVIDIA made Gameworks... true story









Gameworks is there to make every product out there except the latest NVIDIA cards look bad and old.


----------



## ku4eto

Quote:


> Originally Posted by *Cyro999*
> 
> The tesselation performance is not good.
> 
> They improved it by *4x* between gcn 1.1 and 1.2, but the 290/290x/390/390x doesn't have that change. Even with the 4x improvement, it's still substantially slower than Maxwell IIRC.
> 
> It's super wrong to abuse a weakness like this, but it's also wrong to have such a weakness. I think there is no innocent victim in the tesselation debate.


There is no point in increasing performance in that area too much, as it will take away from other areas, like compute. They are increasing it now because nVidia are using their The Way It's Meant to Be Played program, or Gameworks (tessellation in tessellation?), to force excessive tessellation into games. The overall need for such amounts of tessellation is small; the base polygon count still plays the bigger role.


----------



## Cyro999

Quote:


> The R9 380 is matching the GTX 960 in tessellation


Yes, the 380 is gcn 1.2, which is - by AMD's slides - 4x faster at moderate-high tessellation loads than gcn 1.1. Hawaii (290/290x/390/390x) uses gcn 1.1, not 1.2, so it does not have this improvement.


----------



## sugarhell

Quote:


> Originally Posted by *Cyro999*
> 
> Yes, the 380 is gcn 1.2, which is - by AMD's slides - 4x faster at moderate-high tessellation loads than gcn 1.1. Hawaii (290/290x/390/390x) uses gcn 1.1, not 1.2, so it does not have this improvement.


The slides say that the tessellation rate is 2-4x faster than GCN 1.0, i.e. the 7000 series.

Hawaii already improved tessellation vs the first generation.

Also, who uses over 12x tessellation except Gameworks? No one.

Most games nowadays use 8x in general and 16x for extremely detailed objects.


----------



## Cyro999

Quote:


> The slides say that the tessellation rate is 2-4x faster than GCN1.0


Ah my bad. Forgot the 7950/280 was 1.0


----------



## Themisseble

The GTX 770 is still better than the GTX 960 in tessellation, yet it is slower in Gameworks titles.


----------



## looniam

Quote:


> Originally Posted by *Themisseble*
> 
> Gameworks is there to make every product out there except the latest NVIDIA cards look bad and old.


or maybe because it wouldn't make sense to invest resources in EOL hardware. ya know, as nvidia has already said, two years ago.


----------



## Themisseble

Quote:


> Originally Posted by *looniam*
> 
> or maybe because it wouldn't make sense to invest resources in EOL hardware. ya know, as nvidia has already said, two years ago.


Then it doesn't make sense at all. Two-year-old hardware that cost $700 should not be EOL.


----------



## looniam

Quote:


> Originally Posted by *Themisseble*
> 
> Then it doesn't make sense at all. Two-year-old hardware that cost $700 should not be EOL.


doesn't matter if you think it should be or not, kepler was EOL the second maxwell released, again, as stated by nvidia.


----------



## huzzug

Quote:


> Originally Posted by *Themisseble*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> or maybe because it wouldn't make sense to invest resources in EOL hardware. ya know, as nvidia has already said, two years ago.
> 
> 
> 
> Then it doesn't make sense at all. Two-year-old hardware that cost $700 should not be EOL.
Click to expand...

If you think a $700 card should not be EOL in 2 years, maybe you shouldn't be spending $700 on a card in the first place. Not patronizing you; it all depends on the person. Nvidia has already indicated when a card becomes EOL, based on its past practice.


----------



## Themisseble

Quote:


> Originally Posted by *huzzug*
> 
> If you think $700 should not be EOL in 2 years, maybe you shouldn't be spending $700 on a card in the first place. Not patronizing you, but it all depends on a person. Nvidia already informed when card becomes EOL based of past practices.


I never spent over $700 on a card... my last GTX card was a GTX 660, and a few months later I sold it. Best "GPU" decision ever...
I will never buy NVIDIA again...


----------



## Cyber Locc

Quote:


> Originally Posted by *superstition222*
> 
> I think you took Pirates of Silicon Valley too seriously.
> So, as I said, Mantle didn't hurt the performance of Nvidia cards. Gameworks, by contrast...


Don't know what that is, but I suggest you go watch some of Steve Jobs' PR releases, and what he said a product could do when it actually didn't. You're funny, though, if you think they/he aren't lying con artists... well, I don't know. As for the stealing, they have openly admitted it many times.


----------



## infranoia

Quote:


> Originally Posted by *Charcharo*
> 
> Thanks. Just wanted to know what Tess some older games like CoP are using. Will try it out later... hopefully I would be able to get results.


Help a brother out with this. CoP to me says "Call of Pripyat" (STALKER). I start to get lost with game acronyms as the years go by.


----------



## PontiacGTX

Quote:


> Originally Posted by *infranoia*
> 
> Help a brother out with this. CoP to me says "Call of Pripyat" (STALKER). I start to get lost with game acronyms as the years go by.


it should be a DX11 game, but Stalker is the only one that fits.
an old tessellation bench
with an r9 280x


----------



## Assirra

Quote:


> Originally Posted by *PontiacGTX*
> 
> it should be a DX11 game but Stalker is the only that fits
> old Tessellation bench


That is a DX11 game, although I do remember having some trouble when using the DX11 stuff.


----------



## ZealotKi11er

I do not think the amount of tessellation in Nvidia-sponsored titles is necessary. Basically, they are developing code to take advantage of the only thing they have more of than AMD. This is not to make things faster for them; this is to make AMD slower. By contrast, if AMD had the option to push GPUOpen with full async, the difference there is that async makes cards go faster; it does not slow one vendor down more than the other. This tessellation talk has been going on since Fermi. It's the worst excuse to buy a GeForce. I felt so betrayed back in the GTX 470 vs HD 5850 days, being told how much faster Fermi was at tessellation that did not exist in games.


----------



## Charcharo

Quote:


> Originally Posted by *infranoia*
> 
> Help a brother out with this. CoP to me says "Call of Pripyat" (STALKER). I start to get lost with game acronyms as the years go by.


Yes mate, Call of Pripyat. One of the first DX11 games with Contact Hardening Shadows and Tessellation.

As for the actual tessellation... the amount Gameworks uses is silly. I can see 32x being... somewhat useful, I guess, but 64??? Not really... truth be told, you cannot even tell the difference between 64 and 16... I tried









Also, from what I know, Tessmark is OpenGL? If so, it is kind of moot, since for some reason OpenGL does not work quite as well on AMD cards. So I was told, at least.


----------



## infranoia

Really, though, tessellation levels are hard to hide. As long as there are no more subterranean oceans or 64x-tessellated concrete walls, I do not see a huge performance mismatch between the architectures post GCN 1.0 -- at least, none that really punishes AMD.

But then there is a setting in Crimson / Catalyst for "Tessellation: AMD Optimized" so there is that.


----------



## looniam

Quote:


> Originally Posted by *infranoia*
> 
> But then there is a setting in Crimson / Catalyst for "Tessellation: AMD Optimized" so there is that.


*THANK YOU!*

seriously, fwiw, i can't remember the last time i didn't sit down and tweak out settings (between the ingame menu and NVInspector) for half an hour before actually playing a game, and then go back over them again after a few hours of gameplay.

but i guess people don't like putting in effort . . .


----------



## Charcharo

I love options menus ...
I don't understand what people don't like about tweaking. It is fun!

I did it even in newer games on my old ATI 5770; you'd be surprised what can be done in a well-made options menu. For graphics, sound... UI... even gameplay!


----------



## AmericanLoco

Quote:


> Originally Posted by *Charcharo*
> 
> Yes mate, Call of Pripyat. One of the first DX11 games with Contact Hardening Shadows and Tessellation.
> 
> As for the actual tessellation... the amount Gameworks uses is silly. I can see 32X being... somewhat useful I guess, but 64??? Not really... truth be told, you can not tell the difference between 64 and 16 even... I tried
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also from what I know, Tessmark is OpenGL? If so it is kind of moot since for some reason it does not work quite as well on AMD cards. So I was told at least.


Depending on whether or not you can trust AMD marketing, their recent AMA on Reddit indicated that the higher tessellation rates often result in triangles that are smaller than the individual pixels on the screen.


----------



## Mahigan

Quote:


> Originally Posted by *AmericanLoco*
> 
> Depending on whether or not you can trust AMD marketing, their recent AMA on Reddit indicated that the higher tessellation rates often result in triangles that are smaller than the individual pixels on the screen.


Yes, the small-triangle issue. NVIDIA's architecture simply culls these triangles. So when NVIDIA sets a tessellation level of x64, what's displayed on the screen isn't x64, because of the culling/primitive discard.

AMD doesn't have a primitive discard accelerator yet (coming with Polaris). So setting tessellation levels to x64 is a "cheat" of sorts on NVIDIA's part.

Discard also allows the geometry pipeline to reject unseen primitives (the Crysis 2 hidden ocean).
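The sub-pixel-triangle arithmetic behind this is easy to sketch (a rough model of my own, not AotS or driver code; "roughly factor² triangles per patch" is an assumed simplification):

```python
# Rough sketch: how many screen pixels each tessellated triangle covers,
# assuming a triangular patch covering `patch_pixels` pixels produces
# roughly factor^2 triangles (a simplification, not vendor code).
def pixels_per_triangle(patch_pixels: float, factor: int) -> float:
    return patch_pixels / (factor ** 2)

# A patch covering 1000 on-screen pixels:
print(pixels_per_triangle(1000, 16))  # ~3.9 pixels per triangle
print(pixels_per_triangle(1000, 64))  # ~0.24 pixels: sub-pixel triangles
```

At x64 most of those triangles never cover a pixel center, which is exactly the work a primitive discard stage throws away.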


----------



## PontiacGTX

Quote:


> Originally Posted by *Mahigan*
> 
> Yes, the small triangle issue. NVIDIAs architecture simply culls these triangles. So when NVIDIA sets a tessellation level to x64, what's displayed on the screen isn't x64 because of the culling/primitive discard.
> 
> AMD doesn't have a primitive discard accelerator yet (coming with Polaris). So setting tessellation levels to x64 is a "cheat" of sorts on NVIDIAs part.
> 
> Discard also allows the geometry pipeline to reject unseen primitives (Crysis 2 hidden ocean).


not so hidden












well, that's why the water looks good in Crysis 2: they have real water rendering

Quote:


> This was primarily the same water rendering approach as in Crysis 1 (the different look is
> mostly due to cityscape environments instead of tropical environments). We use FFT based
> normal map using cheap parallax approximation for consoles, and vertex displacement
> mapping for PC specs. Currently, FFT simulation is done on CPU on its own dedicated thread.
> 
> We developed dynamic interaction with wave propagation on all platforms, using vertex
> displacement version for PC specs. It is worth mentioning that water wave propagation
> simulation is done on GPU with a cost of just 0.2 ms on consoles and negligible cost on PC
> specs


Quote's source
Water shader


----------



## steadly2004

Quote:


> Originally Posted by *PontiacGTX*
> 
> not so hidden
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> well thats why water looks good on Crysis 2 they have water rendering
> Quote´s Source
> Water shader


I think he's alluding to the fact that the water may be rendered even when it isn't on screen. I'm not sure if that is true, but I have no knowledge on the subject.


----------



## BlitzWulf

Quote:


> Originally Posted by *Mahigan*
> 
> Yes, the small triangle issue. NVIDIAs architecture simply culls these triangles. So when NVIDIA sets a tessellation level to x64, what's displayed on the screen isn't x64 because of the culling/primitive discard.
> 
> AMD doesn't have a primitive discard accelerator yet (coming with Polaris). So setting tessellation levels to x64 is a "cheat" of sorts on NVIDIAs part.
> 
> Discard also allows the geometry pipeline to reject unseen primitives (Crysis 2 hidden ocean).


That's what always astounds me about people claiming the Crysis 2 fiasco is debunked because the unseen triangles can be culled. These people are completely oblivious to the fact that AMD cards cannot perform this type of culling in hardware, so the fact that the game is designed in a way that requires geometry culling proves the accusations; it doesn't debunk them.


----------



## looniam

Quote:


> Originally Posted by *BlitzWulf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Mahigan*
> 
> Yes, the small triangle issue. NVIDIAs architecture simply culls these triangles. So when NVIDIA sets a tessellation level to x64, what's displayed on the screen isn't x64 because of the culling/primitive discard.
> 
> AMD doesn't have a primitive discard accelerator yet (coming with Polaris). So setting tessellation levels to x64 is a "cheat" of sorts on NVIDIAs part.
> 
> Discard also allows the geometry pipeline to reject unseen primitives (Crysis 2 hidden ocean).
> 
> 
> 
> That's what always astounds me about people claiming the Crysis 2 fiasco is debunked because the unseen triangles can be culled. These people are completely oblivious to the fact that AMD cards cannot perform this type of culling in hardware, so the fact that the game is designed in a way that requires geometry culling proves the accusations; it doesn't debunk them.
Click to expand...

all while _completely ignoring what happens in the game engine_:
http://docs.cryengine.com/display/SDKDOC4/Culling+Explained
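For anyone unsure what "culling" means in this argument, here is a minimal sketch (my own illustration, not CryEngine's actual implementation) of two of the per-triangle discard tests in question: back-face and sub-pixel rejection in screen space.

```python
# Minimal illustration (not CryEngine code): discard a screen-space triangle
# if it is back-facing (negative signed area with CCW winding) or too small
# to cover a meaningful fraction of a pixel.
def signed_area(a, b, c):
    # Half the cross product of two edges; positive for counter-clockwise winding.
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def should_discard(tri, min_area=0.5):
    area = signed_area(*tri)
    return area <= 0 or area < min_area

print(should_discard([(0, 0), (10, 0), (0, 10)]))    # False: large, front-facing
print(should_discard([(0, 0), (0.4, 0), (0, 0.4)]))  # True: sub-pixel triangle
```

The point of contention is where this happens: doing it in fixed-function hardware early in the geometry pipeline is the "primitive discard accelerator" advantage, while doing it later still pays the vertex and tessellation cost.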


----------



## magnek

Quote:


> Originally Posted by *looniam*
> 
> *THANK YOU!*
> 
> seriously fwiw, i can't remember that last time i didn't sit down and tweak out settings (between ingame menu and NVinspector) a half an hour before actually playing the game. and then go back over them again after a few hours of game play.
> 
> but i guess people don't like putting in effort . . .


*raises hand sheepishly*









These days I just don't have the time, energy, or willpower to futz around with games anymore. I mean sure for an indie game that I really REALLY like and is easy to mod (Rebel Galaxy for example), I'll sit down and actually make the effort to make it look/play better. But everything else forget it, the most I'd do is tweak settings to have the game run smoothly most of the time and that's it.


----------



## Olivon

So, to sum up:

GoW DX12 = Fail
Hitman DX12 = Fail
Tomb Raider DX12 = Fail
Fable DX12 = Abandoned

The DX12 launch is totally disastrous; not the best way to promote a new standard API.


----------



## magnek

Quote:


> Originally Posted by *Olivon*
> 
> So, to sum up :
> 
> GoW DX12 = Fail
> Hitman DX12 = Fail
> Tomb Raider DX12 = Fail
> Fable DX12 = Abandoned
> 
> The DX12 launch is totally disastrous; not the best way to promote a new standard API.


AotS DX12 = Great success, hehehe

But seriously, I wonder if Win10 adoption will start to slow down significantly


----------



## Themisseble

Quote:


> Originally Posted by *magnek*
> 
> AotS DX12 =Great success hehehe
> 
> But seriously I wonder if Win10 adoption will start to slow down significantly


BF5 might be DX12 only.


----------



## Defoler

Quote:


> Originally Posted by *Themisseble*
> 
> BF5 might be DX12 only.


I doubt that.
Even though I'm sure EA likes to be at the top by a big lead, I'm sure they don't want to be the #1 hated developer/publisher for alienating everyone who doesn't want to upgrade for "free" to Windows 10.


----------



## caswow

How in the world is hitman dx12 a fail?


----------



## ku4eto

Quote:


> Originally Posted by *caswow*
> 
> How in the world is hitman dx12 a fail?


Total performance/image quality i guess.


----------



## infranoia

Quote:


> Originally Posted by *caswow*
> 
> How in the world is hitman dx12 a fail?


CTDs on fullscreen for me. Only runs in windowed mode.


----------



## PlugSeven

Quote:


> Originally Posted by *caswow*
> 
> How in the world is hitman dx12 a fail?


Because nvidia aren't faster in it duh!


----------



## OneB1t

hitman DX12 is pretty good for me








my AMD cpu is at the same performance level as an i7-6700K
and the gpu also got a nice boost


----------



## PontiacGTX

Quote:


> Originally Posted by *Olivon*
> 
> So, to sum up :
> 
> GoW DX12 = Fail
> Hitman DX12 = Fail
> Tomb Raider DX12 = Fail
> Fable DX12 = Abandoned
> 
> The DX12 launch is totally disastrous; not the best way to promote a new standard API.


where is the fail in Hitman? in the 980 Ti performing like a 390X? nah, that's Nvidia's fault


----------



## Forceman

Quote:


> Originally Posted by *PontiacGTX*
> 
> where is the fail in Hitman? in the 980 Ti performing like a 390X? nah, that's Nvidia's fault


Fail is too strong, but it seems like only the 390/390X show gains across the board. Something doesn't seem right when the 280X/380 show no gains at 1080p, and Fury shows minimal gain at 1440 and 4K.


----------



## OneB1t

also the 290/290X/295X have good gains;
the reason for this is that these gpus were built for mantle and dx12

the 7970/R9 280X is too old to get a boost from DX12
FURY/FURY X is a bad design with too many bottlenecks (still gets some fps though)

also, nearly all reviews use a powerful intel CPU, which has nearly the same performance under DX11 as under DX12.

but try it on an fx-8xxx or an i3 and you will see massive gains


----------



## Forceman

Quote:


> Originally Posted by *OneB1t*
> 
> also the 290/290X/295X have good gains;
> the reason for this is that these gpus were built for mantle and dx12
> 
> the 7970/R9 280X is too old to get a boost from DX12
> FURY/FURY X is a bad design with too many bottlenecks (still gets some fps though)


According to the game thread, the 290/290X don't have the same gains as the 300 cards.


----------



## PontiacGTX

Quote:


> Originally Posted by *Forceman*
> 
> Fail is too strong, but it seems like only the 390/390X show gains across the board. Something doesn't seem right.


Quote:


> Originally Posted by *Forceman*
> 
> Fail is too strong, but it seems like only the 390/390X show gains across the board. Something doesn't seem right when the 280X/380 show no gains at 1080p, and Fury shows minimal gain at 1440 and 4K.


Fury has the same quantity of ACEs, and a 390X is overclocked and has a tweaked bios. The game doesn't use the extra bandwidth at 1440/1080 yet, and Fury has 4GB. You don't have memory usage numbers, nor CPU-bound resolution scenarios.
Quote:


> Originally Posted by *Forceman*
> 
> According to the game thread, the 290/290X don't have the same gains as the 300 cards.


The 390/X have better memory timings and probably a different power limit, and it isn't mentioned which cooling the 290/X has. The R9 290/X might also get better performance with a newer driver, if that were the issue. There are many factors which could be the cause, unless you know someone who has tested a 290/X at 390/X clocks using a 390/X bios with good methodology.


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> Fail is too strong, but it seems like only the 390/390X show gains across the board. Something doesn't seem right when the 280X/380 show no gains at 1080p, and Fury shows minimal gain at 1440 and 4K.


I thought we were discussing this:


Though, this is the hot topic:


----------



## OneB1t

Quote:


> Originally Posted by *Forceman*
> 
> According to the game thread, the 290/290X don't have the same gains as the 300 cards.


they do, it's the same card


----------



## Forceman

Quote:


> Originally Posted by *OneB1t*
> 
> they do, it's the same card


Go look at the pcgameshardware review and compare the gains the 390 gets from DX12 to the gains 290 gets from DX12 (especially at 1440p). The 390 gains about 10%, while the 290 gains less than 5% (with no gain at 1440p). The 390 should be faster because it has higher speeds, but why does it gain more (on a percentage basis)?

1440p
390: DX11 46.8 DX12 51.7
290: DX11 45.4 DX12 46.2

1080p
390: DX11 59.0 DX12: 65.6
290: DX11 57.4 DX12: 59.8
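Those percentages check out against the figures above (`gain` is just a throwaway helper of mine):

```python
# Percent gain from DX11 to DX12, using the pcgameshardware numbers above.
def gain(dx11: float, dx12: float) -> float:
    return (dx12 - dx11) / dx11 * 100

print(f"390 @1440p: {gain(46.8, 51.7):+.1f}%")  # about +10.5%
print(f"290 @1440p: {gain(45.4, 46.2):+.1f}%")  # about +1.8%
print(f"390 @1080p: {gain(59.0, 65.6):+.1f}%")  # about +11.2%
print(f"290 @1080p: {gain(57.4, 59.8):+.1f}%")  # about +4.2%
```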


----------



## OneB1t

because their 290 has a much lower power limit in its bios, that's all,
and DX12 draws more power (as the chip is better loaded than under dx11),
so the 390 is not limited by its power limit but the 290 is.

simple as that









also there are more changes, like better memory timings and better memory controller timings

if they retest with the power limit at +50% on the 290, the results will be much closer


----------



## looniam

so far: AotS is looking like what DX12 _could be_ but, when compared to recent releases, not what it _will be_.


----------



## Cyro999

AotS is also mostly/hugely GPU limited, while the main benefit of dx12 is the dramatic reduction in CPU load.

I don't know if anyone else here actually plays the game, but my 980 is at 100% load basically 100% of the time in gameplay so far, even on minimum graphics settings. I'm a bit CPU bound in parts of the benchmark... on dx11.


----------



## OneB1t

the benchmark itself simulates a massive battle the whole time;
in the real game the cpu has much less work to do to keep the GPU fed


----------



## infranoia

Quote:


> Originally Posted by *OneB1t*
> 
> because their 290 have much less power limit in bios thats all
> and DX12 requests more power (as chip is better loaded than under dx11)
> so 390 is not limited by power limit but 290 is limited
> 
> simple as that
> 
> 
> 
> 
> 
> 
> 
> 
> 
> also there are more changes like better memory timings and better memory controller timings
> 
> if they retest with power limit +50% on 290 results will be much closer


Already done. 290x with 390x BIOS, 1100/1350 +40% power limit: http://www.overclock.net/t/1594186/various-hitman-2016-pc-directx-11-vs-directx-12-performance/600_100#post_24983992

DX11:
81.55fps Average
9.55fps Min
232.41fps Max

DX12:
86.04fps Average
9.50fps Min
232.20fps Max

Min/max regression, average up 5.5%.


----------



## BradleyW

Quote:


> Originally Posted by *OneB1t*
> 
> because their *290 have much less power limit in bios* thats all
> and *DX12 requests more power* (as chip is better loaded than under dx11)
> so *390 is not limited by power limit* but 290 is limited
> 
> simple as that
> 
> 
> 
> 
> 
> 
> 
> 
> 
> also there are more changes like better memory timings and better memory controller timings
> 
> if they retest with power limit +50% on 290 results will be much closer


This is a very interesting point you have here.


----------



## ZealotKi11er

If you look at Hitman, it's a low-budget game, as they had to split it up.
TR was also low budget, as it needed help from Microsoft.
GoW is also another low-budget game.

None of these represent DX12.

Even Mantle was far better than DX12 so far. So much for those people who were being negative towards Mantle and claiming DX12 would stomp it...


----------



## spyshagg

hey better than nothing!









the 200 series does throttle at stock clocks with the default power limit. I have to increase it 10% for my 290x not to throttle under occt (at 30°C, watercooled!).

occt is a very demanding app, true, and not representative.

However, if the benched 200-series cards are running their default coolers at 95°C, the extra load from dx12 will probably hit either the thermal limit or the power limit due to worse efficiency at those temperatures.

cheers


----------



## Forceman

Quote:


> Originally Posted by *OneB1t*
> 
> because their 290 have much less power limit in bios thats all
> and DX12 requests more power (as chip is better loaded than under dx11)
> so 390 is not limited by power limit but 290 is limited
> 
> simple as that
> 
> 
> 
> 
> 
> 
> 
> 
> 
> also there are more changes like better memory timings and better memory controller timings
> 
> if they retest with power limit +50% on 290 results will be much closer


Quote:


> Originally Posted by *BradleyW*
> 
> This is a very interesting point you have here.


Interesting theory. So in the name of science (of course) I got the intro pack and tested it.

Code:


PL        DX11         DX12 CPU         DX12 GPU
100%      76.60          77.16             77.60
150%      76.78          76.70             77.22

and also

80%        67.45         68.67             69.06

So power limit has no effect on DX11/DX12 gains, and DX12 shows basically no gain at all. That's on a 290X at 1100/1375.


----------



## infranoia

OK, starting to get confused. Hitman benches in the AotS thread?

I've also run the 290x-reflashed-as-390x benches in the Hitman thread with +40% and +50% power limits. It's a nominal bump to DX12, within the margin of error.

Since Hitman DX12 is crashing on my HDMI set, I'm really not ready to blame AMD for anything yet. It's a pretty sloppy DX12 implementation, if the crashes and bench run variances are anything to go by.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> OK, starting to get confused. Hitman benches in the AotS thread?


It kind of morphed into a generic DX11 vs DX12 discussion.


----------



## Mahigan

For one, we have to remember this...

Fiji hits a ROP wall, whereas the R9 290/290x/390/390x hit a bandwidth wall.


HBM, on Fiji, alleviates the bandwidth wall from Hawaii/Grenada but introduces the ROP wall for Fiji. If you ignore the black textures (results affected by color compression) and focus on the random textures, then you start to see a pattern which explains quite a bit.

1. It explains why Maxwell hits a wall at 4K despite having more ROPs than Fiji.
2. It explains why Fiji has mild gains relative to Hawaii/Grenada. Both have the same ROPs (ignoring color compression), but while Hawaii/Grenada are memory bandwidth bound, Fiji is ROP bound.
3. Async compute + graphics is about running render operations in parallel with compute operations. Render wise, both Hawaii/Grenada and Fiji are similar. So unless you specifically code in more parallel compute operations for Fiji by way of a Fiji-specific path, Hawaii/Grenada and Fiji running the same path will only highlight Fiji's ROP-bound performance and Hawaii/Grenada's memory-bandwidth-bound performance.

Just a thought.


----------



## sugarhell

Quote:


> Originally Posted by *Mahigan*
> 
> For one, we have to remember this...
> 
> Fiji hits a ROP wall whereas R9 290/290x/390/390x hit a bandwidth wall.
> 
> 
> HBM, on Fiji, alleviates the bandwidth wall from Hawaii Grenada but introduces the ROp wall for Fiji. If you ignore the black textures (color compression affected results) and focus on the random textures then you start to see a pattern which explains quite a bit.
> 
> 1. It explains why Maxwell hits a wall at 4K despite having more ROps than Fiji.
> 2. It explains why Fiji has mild gains relative to Hawaii/Grenada. Both have the same ROps (ignoring color compression) but while Hawaii/Grenada are memory bandwidth bound, Fiji is ROps bound.
> 3. Async Compute + Graphics is about running render operations in parallel to compute operations. Render wise, both Hawaii/Grenada and Fiji are similar. So unless you specifically code in more parallel compute operations for Fiji by way of a Fiji specific path, then Hawaii/Grenada and Fiji running the same path will only highlight Fiji's ROps bound performance and Hawai/Grenada's memory bandwidth bound performance.
> 
> Just a thought.


A bit different thought.

We know that Fiji's ROPs use bandwidth much more effectively than Hawaii's.

Probably both Hawaii and Fiji are ROP limited, but because Fiji uses HBM, Fiji's ROPs can draw on more bandwidth, so they are faster. Still, at some point they hit a wall because the rest of the GPU needs bandwidth too.


----------



## Mahigan

Quote:


> Originally Posted by *sugarhell*
> 
> A bit different thought.
> 
> We know that with fiji, ROPs use bandwidth way better than hawaii.
> 
> Probably both hawaii and fiji are ROPs limited but because fiji use HBM , fiji's ROPs can use more bandwidth so they are faster. Still at some point they hit a wall because the rest of the gpu needs bandwidth too.


Fiji only uses memory bandwidth better on colored textures.
This test covers both black and random textures, and Fiji's color compression doesn't work on the random textures. Also, the test is pretty specific, so the rest of the GPU isn't being utilized.

Random textures:
Hawaii: 263GB/s
Fiji: 333GB/s
*Fiji OC: ~365GB/s
*_estimation_

So moving to HBM exposes the peak ROP rate. What would be interesting would be running this test on a Fury X at stock GPU and memory clocks, followed by a Fury X with a GPU overclock but the same memory clock. This would increase the ROP rate. Something tells me the results would go up despite the same memory bandwidth.

Fiji @ 1,050MHz = 67.2 GPixels/s
Fiji @ 1,150MHz = 73.6 GPixels/s

This test also exposes Maxwell's memory-bandwidth-starved ROPs. So keeping the same core clock on a GTX 980 Ti but overclocking the memory should yield interesting results as well.

So if this is true, then a Fury X @ 1,150MHz should get 85.6FPS instead of 78.2FPS here:
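The arithmetic behind that prediction, sketched out. This assumes the test is purely ROP-bound and that FPS scales linearly with ROP throughput; that's the hypothesis being floated here, not an established fact:

```python
# Rough ROP fill-rate math for Fiji (64 ROPs), per the figures above.
ROPS = 64

def gpixels_per_s(core_mhz, rops=ROPS):
    """Peak pixel fill rate in GPixels/s: ROPs x core clock."""
    return rops * core_mhz / 1000.0

stock = gpixels_per_s(1050)  # Fury X at stock
oc = gpixels_per_s(1150)     # Fury X with a 1,150MHz core OC

# If FPS scales with ROP throughput, the 78.2 FPS stock result becomes:
predicted = 78.2 * (oc / stock)
print(f"{stock:.1f} -> {oc:.1f} GPix/s, predicted FPS: {predicted:.1f}")
```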


----------



## EightDee8D

Quote:


> Originally Posted by *Forceman*
> 
> Interesting theory. So in the name of science (of course) I got the intro pack and tested it.
> 
> Code:
> 
> PL        DX11         DX12 CPU         DX12 GPU
> 100%      76.60          77.16             77.60
> 150%      76.78          76.70             77.22
> 
> and also
> 
> 80%        67.45         68.67             69.06
> 
> So power limit has no effect on DX11/DX12 gains, and DX12 shows basically no gain at all. That's on a 290X at 1100/1375.


So it does hamper performance at 80% PL? Now if you can overclock / tighten the memory timings to 390X levels, maybe it will show the same gains. IMO.


----------



## BradleyW

Wait just a moment?
Quote:


> Originally Posted by *Forceman*
> 
> Interesting theory. So in the name of science (of course) I got the intro pack and tested it.
> Quote:
> 
> 
> 
> PL        DX11         DX12 CPU         DX12 GPU
> 100%      76.60          77.16             77.60
> 150%      76.78          76.70             77.22
> *80%       67.45         68.67             69.06*
> 
> 
> 
> So power limit has no effect on DX11/DX12 gains, and DX12 shows basically no gain at all. That's on a 290X at 1100/1375.

How would one go about upping the power limit to 100% through software? MSI AB has it at +50%


----------



## OneB1t

Quote:


> Originally Posted by *EightDee8D*
> 
> so it does hamper performance with 80% pl ? now if you can overclock/reduce memory timings to 390x lvl, maybe it will show same gains. IMO.


The problem is that he has a faster CPU than they had in that test, so he runs into the CPU bottleneck later than they did.

By overclocking his GPU he can increase the CPU load, and then a bigger gap between DX11 and DX12 appears.


----------



## Forceman

Quote:


> Originally Posted by *OneB1t*
> 
> problem is that he have faster cpu then they had in that test
> so he run into bottleneck later then guys from test
> 
> by overclocking his GPU he can increase CPU load and then bigger gap between DX11 and DX12 appear


Except the computerbase test also showed a ~0% gain for the 290 while using the same CPU that showed a 10% gain for the 390.
Quote:


> Originally Posted by *BradleyW*
> 
> Wait just a moment?
> How would one go about upping the power limit to 100% through software? MSI AB has it at +50%


Not +100%, just 100%.


----------



## BradleyW

Quote:


> Originally Posted by *Forceman*
> 
> Not +100%, just 100%.


Oh I'm sorry about that.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> Wait just a moment?
> How would one go about upping the power limit to 100% through software? MSI AB has it at +50%


TriXX allows a much higher voltage mod than AB, at least with the default UI. You can go up to 300mV+. There may be an AB trick.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> TriXX allows a much higher voltage mod than AB, at least with the default UI. You can go up to 300mV+. There may be an AB trick.


Just as a side note, I'm sad that I can't really push anything higher than 1375 on my VRAM. Also, adding +40 Vcore or higher gives me a crash, so overclocking high is out of the question for me. I can do 1100/1350 on stock voltage. Anything higher needs more than +40 voltage, but as I say, that's not an option for me.

+40 or higher = crash at any speed or temperature.


----------



## OneB1t

Quote:


> Originally Posted by *Forceman*
> 
> Except the computerbase test also showed ~0% gain for 290 while using the same CPU that showed 10% gain for 390.
> Not +100%, just 100%.


So what, did AMD pay computerbase to mess with the results so the 390 looks better than the 290...?


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> Just as a side note, I'm sad that I can't really push anything higher than 1375 on my VRAM. Also, adding +40 Vcore or higher gives me a crash, so overclocking high is out of the question for me. I can do 1100/1350 on stock voltage. Anything higher needs more than +40 voltage, but as I say, that's not an option for me.
> 
> +40 or higher = crash at any speed or temperature.


Are you on a 390x (Modded timings, not Stock) BIOS? You definitely need that above 1350.


----------



## Mahigan

Quote:


> Originally Posted by *mtcn77*
> 
> I don't think so. Those longer bars are not as significant as implied. In fact, memory compression is only 40% as important as actual bandwidth as both vendors use the same patents and recycle them to give a new flair. AMD says it straight forward while Nvidia adds a few inverse "special effects" into the jargon.
> Take for instance:
> 
> You can literally compare the theoretical ideals and find the same benefit. A +40% gain is the same as a -29% bandwidth reduction, only reciprocally (1.4^-1 ≈ 0.71 ≈ 100% - *29%*). Which also means the shorter bars are 2.5 times more vital than the compression values.
> If we take this info into account,
> 
> 290X(The Asus DCIIOC variant with 1050/1350) is 263GB/s,
> 980 is 218GB/s(60/40 actual/compressed split),
> 980Ti is 286GB/s.
> and Fury X is 355GB/s.


Color compression only affects the colored textures in the test above. I'm not even looking at those figures. I'm looking at straight-up random texture figures.

In that scenario Fiji, Hawaii, and Grenada all have 64 ROPs. Hawaii and Grenada are held back from fully utilizing their ROPs by memory bandwidth, whereas Fiji is held back from fully utilizing its memory bandwidth by having only 64 ROPs.

Hawaii was memory bandwidth bottlenecked, just as Tech Report stated, rather than ROP bound. Fiji, on the other hand, is ROP bound rather than memory bandwidth bottlenecked.
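As an aside, the reciprocal arithmetic in the quoted post does check out; a quick sketch:

```python
# Checking the reciprocal claim quoted above: a +40% effective-bandwidth
# gain from compression moves the same frames with ~29% less raw data,
# since 1.4^-1 ≈ 0.71.
gain = 0.40
reduction = 1 - 1 / (1 + gain)
print(f"+{gain:.0%} gain == -{reduction:.0%} data moved")
```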


----------



## infranoia

Well it looks like Stardock is marketing Multi-GPU as an API plug-in.

http://venturebeat.com/2016/03/13/stardocks-newest-innovation-will-let-you-mix-amd-and-nvidia-video-cards-in-one-pc/

What's funny is what Brad Wardell says completely out of context in this article:
Quote:


> The one negative Wardell sees is overclockers, who "will not have a good time with DX12." But that's just a small niche of the overall PC market.


----------



## Mahigan

Quote:


> Originally Posted by *infranoia*
> 
> Well it looks like Stardock is marketing Multi-GPU as an API plug-in.
> 
> http://venturebeat.com/2016/03/13/stardocks-newest-innovation-will-let-you-mix-amd-and-nvidia-video-cards-in-one-pc/
> 
> What's funny is what Brad Wardell says completely out of context in this article:


Hmm interesting


----------



## BlitzWulf

Makes sense. If you are using parts of the chip that were idle before, the heat will probably be too much; those extra watts have to go somewhere. Aren't there tests showing elevated power draw in AotS going from DX11 to DX12?

Maybe I'll have to put my 390X under water if I want to maintain my overclock.


----------



## Devnant

Quote:


> Originally Posted by *infranoia*
> 
> Well it looks like Stardock is marketing Multi-GPU as an API plug-in.
> 
> http://venturebeat.com/2016/03/13/stardocks-newest-innovation-will-let-you-mix-amd-and-nvidia-video-cards-in-one-pc/
> 
> What's funny is what Brad Wardell says completely out of context in this article:


So, this would work in any DX12 game and be dev-agnostic, right? I'm very interested to see this in practice. Multi-GPU support these days has been... lacking, to say the least.


----------



## airfathaaaaa

Do we have any news about that new version of the ACE engines AMD has? (Also, what were they called again?)


----------



## superstition222

I'm not sure who said it but someone said that the developer of Ashes said they had to make performance compromises (in terms of what a pure DX 12 engine could have provided) for backward compatibility with DX 11. Does anyone know how much performance is affected by those compromises?

It would be rather unfortunate for a benchmark that's supposed to represent what DX 12 can do to be hamstrung by the need to work with DX 11.


----------



## Mahigan

Well, here's another game making use of Asynchronous Compute..
Doom
http://wccftech.com/doom-dev-big-gains-async-compute-love/

Which I find odd because it uses OpenGL (I thought?).


----------



## Cyro999

Quote:


> It would be rather unfortunate for a benchmark that's supposed to represent what DX 12 can do to be hamstrung by the need to work with DX 11.


Ashes of the Singularity tends to be heavily GPU bound (unless you have thousands of units and a pair of flagship GPUs at 1080p; even then you might have to lower some settings).

The #1 feature of DX12/Vulkan by a mile, IMO, is the dramatic reduction in CPU load, which is not shown off in these benchmarks. Maybe a bit, but nowhere near its full extent, because the CPU load isn't high enough.


----------



## infranoia

Quote:


> Originally Posted by *Mahigan*
> 
> Well, here's another game making use of Asynchronous Compute..
> Doom
> http://wccftech.com/doom-dev-big-gains-async-compute-love/
> 
> Which I find odd because it uses OpenGL (I thought?).


Why do you find it odd? Asynchronous Compute isn't exclusive to Direct3D. The PS4 uses it extensively.

Is there no way of ordering OpenGL commands to facilitate ASC?


----------



## Mahigan

Quote:


> Originally Posted by *infranoia*
> 
> Why do you find it odd? Asynchronous Compute isn't exclusive to Direct3D. The PS4 uses it extensively.
> 
> Is there no way of ordering OpenGL commands to facilitate ASC?


Well OpenGL uses a single synchronous queue. Unless they're tapping into OpenCL.


----------



## looniam

Quote:


> Originally Posted by *Mahigan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *infranoia*
> 
> Why do you find it odd? Asynchronous Compute isn't exclusive to Direct3D. The PS4 uses it extensively.
> 
> Is there no way of ordering OpenGL commands to facilitate ASC?
> 
> 
> 
> Well OpenGL uses a single synchronous queue. Unless they're tapping into OpenCL.

this thread just might be up your alley:
Msi Kombustor Plasma OpenGL 4.3 Benchmark Async Buffer


----------



## Mahigan

Quote:


> Originally Posted by *looniam*
> 
> this thread just might be up your alley:
> Msi Kombustor Plasma OpenGL 4.3 Benchmark Async Buffer


I've read the article. That article isn't about Asynchronous Compute but rather about asynchronous Copy commands from system memory to the framebuffer and vice versa.

OpenGL:

_2 queues: (Graphics/Compute) and Copy._

DX12/Vulkan:

_3 queues: Graphics, Compute and Copy._

Asynchronous Compute + Graphics is when both the Graphics and Compute queues are executing jobs concurrently and in parallel. OpenGL can't achieve this.
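To make the queue distinction concrete, here's a loose Python analogy, with threads standing in for GPU queues. This illustrates the scheduling idea only; it is not a graphics API, and the timings are stand-ins:

```python
# DX12/Vulkan expose separate graphics, compute, and copy queues that can
# execute concurrently; OpenGL serializes work through one queue.
import time
from concurrent.futures import ThreadPoolExecutor

def job(name, seconds):
    time.sleep(seconds)  # stand-in for GPU work on one queue
    return name

work = [("graphics", 0.2), ("compute", 0.2), ("copy", 0.2)]

# "OpenGL": one queue, jobs run back to back (~0.6s total here).
t0 = time.perf_counter()
for name, s in work:
    job(name, s)
serial = time.perf_counter() - t0

# "DX12/Vulkan": three queues, jobs overlap (~0.2s total here).
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    list(pool.map(lambda w: job(*w), work))
parallel = time.perf_counter() - t0

print(f"serial {serial:.2f}s vs concurrent {parallel:.2f}s")
```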

Further down, in the document, they discuss multi-threaded rendering.

That's pretty much it.

P.S. But I figured out Doom... it's being ported to Vulkan:
https://en.m.wikipedia.org/wiki/Id_Tech_6
(A member from Anandtech showed me this.)


----------



## diggiddi

Quote:


> Originally Posted by *Mahigan*
> 
> Well OpenGL uses a single synchronous queue. Unless they're tapping into OpenCL.


So OpenCL can utilize async compute? Mahigan, are you familiar with SkelCL?


----------



## airfathaaaaa

found a gem while browsing reddit today
https://www.reddit.com/r/Amd/comments/4ag0w0/amd_unveils_vega_navi_2017_18_graphics/d10gsc0
so much cringe


----------



## Mahigan

Quote:


> Originally Posted by *airfathaaaaa*
> 
> found a gem while browsing reddit today
> https://www.reddit.com/r/Amd/comments/4ag0w0/amd_unveils_vega_navi_2017_18_graphics/d10gsc0
> so much cringe


Wow,

That guy sounds like Donald Trump. Keeps yapping but says nothing.


----------



## Cyber Locc

Quote:


> Originally Posted by *Mahigan*
> 
> Wow,
> 
> That guy sounds like Donald Trump. Keeps yapping but says nothing.


And you posted to correct him. I honestly would save your breath; people on Reddit are, well, they just are. Stay on OCN and you will find your life is much happier.


----------



## magnek

You read reddit for the lawlz, and any time spent posting on reddit is wasted time.


----------



## airfathaaaaa

Well, AMAs are the main reason anyone uses Reddit; searching for redheads is the other one.


----------



## superstition222

Quote:


> Originally Posted by *Mahigan*
> 
> Well OpenGL uses a single synchronous queue. Unless they're tapping into OpenCL.


Vulkan is supposed to replace OpenGL.


----------



## KyadCK

Quote:


> Originally Posted by *looniam*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ku4eto*
> 
> The link you provided does not indicate what Moderate, Extreme and so on mean in terms of tessellation. But as you can see, under Moderate nVidia has a 50% advantage, and when you increase the tessellation it goes to 140%, 200%, 400%. This is what I meant with my previous comment: under "believable" levels of tessellation you actually get more than good results. Increase it too much and it becomes, first, useless, and second, too heavy on both (but mainly AMD).
> Of course, this is on already old cards, and I have no idea how the VLIW architecture fares, although I had a Cayman Pro.
> 
> 
> 
> i meant between you and i (compare) and the link was to _DL the bench._
> can adjust the settings:
> 
> 
> Spoiler: Warning: Spoiler!



The story is the same at x8: my Fury X (stock) has about half the score of your 980 Ti, whatever clock you run at.

Still though, 35C; about half the temperature too.








Quote:


> Originally Posted by *Themisseble*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> or maybe because it wouldn't make sense to invest resources in EOL hardware. ya know, as nvidia has already said, two years ago.
> 
> 
> 
> Then it doesn't make sense at all. Two-year-old hardware that cost $700 should not be EOL.

Well, I mean, they don't exactly produce 780 Tis anymore, so they are kinda EOL. AMD's re-branding and unified arch give them an edge in supporting old and new alike.

Still, when the GPU comes with a 3-5 year warranty, support mostly ending after 1.5 years sucks.
Quote:


> Originally Posted by *huzzug*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Themisseble*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> or maybe because it wouldn't make sense to invest resources in EOL hardware. ya know, as nvidia has already said, two years ago.
> 
> 
> 
> Then it doesn't make sense at all. Two-year-old hardware that cost $700 should not be EOL.
> 
> Click to expand...
> 
> If you think $700 should not be EOL in 2 years, maybe you shouldn't be spending $700 on a card in the first place. Not patronizing you, but it all depends on the person. Nvidia already indicated when cards become EOL based on past practice.

Mmm... I remember a time, back when I had my 6970s, when:

"nVidia is the brand to buy if you want it to last"
"nVidia retains its value better"
"Go nVidia if you want it to still be good later on"
"Don't go AMD, they die out quicker, you won't get your money's worth"

That was Fermi. Then GCN and Kepler launch and the whole world turns upside-down.

Even Fermi is still "supported" in the drivers. It isn't "legacy", not yet. It should get something besides the bare minimum, and Kepler certainly should. In contrast, these days, a 7970 from 2012 will never die. My how the world changes in four short years, no?

It's been funny watching AMD pull minor victory after minor victory with VR, consoles, Mantle (into DX12 and Vulkan), building the arch for the future from the beginning, and their recent driver steamroll, all while nVidia shoots themselves in the foot removing features they thought unnecessary (they were correct... for a while). Sigh... It's been quite a ride, and it's been hilarious the whole way as everyone said each little thing AMD did didn't matter... Then it all hits at once. These next few years will be very, very interesting.


----------



## rickcooperjr

Quote:


> Originally Posted by *KyadCK*
> 
> *snip*

Man you hit the nail on the head in so many places it isn't even funny you summed up everything I have been saying for past few years on here especially past few months.


----------



## Liranan

Quote:


> Originally Posted by *KyadCK*
> 
> *snip*

Not being able to rep you makes me cry.


----------



## superstition222

Quote:


> That was Fermi. GCN and Kepler launch and the whole world turns upside-down.
> 
> Even Fermi is "supported" in the drivers still. It isn't "legacy", not yet.


It depends on the part. The 460 768MB was dropped from Windows 10 driver support some time ago, as far as I know.
Quote:


> It should get something besides the bare minimum, and Kepler certainly should. In contrast, these days, a 7970 from 2012 will never die. My how the world changes in four short years, no?


I don't know what you're saying here. What is the contrast between praising Nvidia for extended driver support and saying an old AMD card is still supported?
Quote:


> It's been funny, watching AMD pulling minor victory after minor victory with VR, Consoles, Mantle (to DX, Vulkan), building the arch for the future from the beginning, their recent Driver steam roll, and all the while nVidia shoots themselves in the foot removing features they thought unnecessary (they were correct... for a while). Sigh... It's been quite a ride, and it's been hilarious the whole way as everyone said each one little thing AMD did didn't matter... Then it all hits as one. These next few years will be very, very interesting.


Well, async is being downplayed in the press at the moment. According to earlier articles it could improve performance by up to 30%, and now we're hearing 5%, and that nearly all of AMD's DX12 improvements come from lower driver overhead. I've also read a recent post claiming Fiji has "bad triangle performance, which explains why it doesn't do as well as Maxwell with HairWorks" - a questionable way of spinning what could be purposeful overuse of tessellation. How much spin ends up steering the market may be key.


----------



## pengs

Quote:


> Originally Posted by *superstition222*
> 
> Well, async is being downplayed in the press at the moment.


You must be talking about the NVIDIA conference at GDC?
The red flags are right in front of some of your faces, dangling against your noses, but you continue to drive on...

The psychology around here goes like this:

Unbiased - find hype, unreasonably high expectations, find instance/demonstration, get disappointed, jump to conclusions, vindicate, form resentment, find vulnerability (see biased)

Biased - find vulnerability, exploit, exploit/fabricate, dismiss reason, exploit, dismiss reason and fabricate, too much reason... , vindicate, dismiss, attack, abandon thread

It's funny, reminds me of politics.


----------



## superstition222

Quote:


> Originally Posted by *pengs*
> 
> You must be talking about the NVIDIA conference at GDC?


No, the recent Hitman article where the argument is that async is only good for 5 - 10%, is "difficult to tune", and can even hurt performance.


----------

