# [WCCF] HITMAN To Feature Best Implementation Of DX12 Async Compute Yet, Says AMD



## MoorishBrutha

*http://wccftech.com/hitman-feature-implementation-dx12-async-compute-amd/*
Quote:


> AMD just sent out a press release about HITMAN using DirectX 12, in what has been described as the best implementation yet of Async Compute Engines.
> 
> Three weeks ago we had spotted a GDC 2016 session about HITMAN's usage of DirectX 12, but at that time nothing had been officially confirmed. Here's what AMD had to say, specifically:


----------



## Sleazybigfoot

Will be interesting to see how much of a performance gain AMD GPUs will see, if any.


----------



## PlugSeven

Quote:


> Originally Posted by *Sleazybigfoot*
> 
> Will be interesting to see how much of a performance gain AMD GPUs will see, if any.


Depends on what you mean by performance: are you talking raw FPS or just a more immersive experience? There's a world of difference between the two.


----------



## looniam

good for AMD.


----------



## MoorishBrutha

Quote:


> Originally Posted by *Sleazybigfoot*
> 
> Will be interesting to see how much of a performance gain AMD GPUs will see, if any.


You do understand that Nvidia lied to their 980 Ti customers about fully supporting DX12 while knowing they lacked one of the main features of DX12, *Async Compute*?

Ashes of the Singularity wasn't a fluke; that's the reason AMD smoked Nvidia in those benchmarks.


----------



## huzzug

I'm sorry, but didn't Nvidia drop drivers that cleared the smoke AMD left them in, and aren't they now competing head to head?


----------



## MadRabbit

Quote:


> Originally Posted by *huzzug*
> 
> I'm sorry, but didn't Nvidia drop drivers that cleared the smoke AMD left them in, and aren't they now competing head to head?


I'm not sure, nor do I want to get myself into some "battle", but once you emulate something, something else needs to take the load. And AFAIK nVidia's cards don't have the hardware power for it?


----------



## looniam

Quote:


> Originally Posted by *MoorishBrutha*
> 
> You do understand that Nvidia lied to their 980 Ti customers about fully supporting DX12 while knowing they lacked one of the main features of DX12, *Async Compute*?
> 
> Ashes of the Singularity wasn't a fluke; that's the reason AMD smoked Nvidia in those benchmarks.


AMD: "There's no such thing as full support for DX12 today", *Fury X missing DX12 features as well*

the more you know™


----------



## MoorishBrutha

Quote:


> Originally Posted by *huzzug*
> 
> I'm sorry, but didn't Nvidia drop drivers that cleared the smoke AMD left them in, and aren't they now competing head to head?


I think some Nvidia shill said so, but a lot of benchmarks, not just one, showed bad performance from Nvidia due to the *lack* of Async Compute Engines in their GPUs. It's confirmed via white papers that none of Nvidia's architectures (Maxwell, Kepler, Fermi) have Async Compute Engines built in.

And Nvidia is already pressuring developers not to use Async Compute at all. The developers of Ashes of the Singularity said they only used a modest amount of Async Compute and that the consoles use this feature more intensively than they do.

For example, Infamous: Second Son on the PS4 already used Async Compute.

The million-dollar question about Pascal will be: does it have Async Compute Engines inside it?

If it doesn't, then Nvidia will be doomed.


----------



## Cybertox

Sounds promising, might end up getting this game. My 290X should do very well if the game is going to be as optimized as described in the OP.


----------



## MoorishBrutha

Can someone say:


----------



## Smanci

Waiting for any critical comments about AMD bribing the devs and implementing features that cripple other HW than theirs.


----------



## GorillaSceptre

Good. More and more titles are going to be doing this; consoles have had it for ages now. About time PC gets it.

There is going to be a DX11 version for Nvidia users.


----------



## mcg75

Quote:


> Originally Posted by *huzzug*
> 
> I'm sorry, but didn't Nvidia drop drivers that cleared the smoke AMD left them in, and aren't they now competing head to head?


Yes they did. But Ashes doesn't really use a lot of compute.

http://www.techspot.com/review/1081-dx11-vs-dx12-ashes/

http://www.tweaktown.com/news/48021/nvidia-beats-amd-ashes-singularity-dx12-test-new-driver/index.html


----------



## sixor

dx11, dx12, 64-bit os, quad/hexa/octa cores

and fallout 4 still uses 2gb of ram and looks like a ps2 game

i believe the only good dx11 games were max payne 3 and gta5, the rest were just marketing gimmicks like dx10 and 10.1 lol


----------



## MoorishBrutha

Quote:


> Originally Posted by *Smanci*
> 
> Waiting for any critical comments about AMD bribing the devs and implementing features that cripple other HW than theirs.


How could AMD be bribing developers to use a feature the consoles have already been using (*Infamous: Second Son*)? Nvidia's big mistake was letting AMD take control of the console market.

By controlling the console market, AMD manipulated the game industry in its favor. That's Nvidia's fault, not AMD's.


----------



## 98uk

Quote:


> Originally Posted by *MoorishBrutha*
> 
> Can someone say:


No, because it's completely irrelevant and a childish comment on what is quite an interesting article.

I am interested to see what comes of this technology. I wonder if it will see the light of day in DICE's new BF game too?


----------



## GoLDii3

Quote:


> Originally Posted by *Smanci*
> 
> Waiting for any critical comments about AMD bribing the devs and implementing features that cripple other HW than theirs.


You mean nVidia?

Better than 47 having x64 tessellation on his bald head.


----------



## looniam

i am not a fan of the hitman series - just never appealed to me. but i would buy this just to support a dev using the newest features even if my 980ti struggles.

though i am wondering if a 270X would work as a dedicated a-sync compute card. . .


----------



## Ha-Nocri

So, the game will run smoothly on any GPU vendor.


----------



## PlugSeven

Quote:


> Originally Posted by *mcg75*
> 
> Yes they did. But Ashes doesn't really use a lot of compute.
> 
> http://www.techspot.com/review/1081-dx11-vs-dx12-ashes/
> 
> http://www.tweaktown.com/news/48021/nvidia-beats-amd-ashes-singularity-dx12-test-new-driver/index.html


After watching *this*, I find those Tweaktown and Techspot numbers to be utterly meaningless.


----------



## GoLDii3

Quote:


> Originally Posted by *looniam*
> 
> i am not a fan of the hitman series - just never appealed to me. but i would buy this just to support a dev using the newest features even if my 980ti struggles.
> 
> though i am wondering if a 270X would work as a dedicated a-sync compute card. . .


If you want to give this franchise a try, i suggest you try Hitman: Blood Money.


----------



## Tivan

Quote:


> Originally Posted by *Ha-Nocri*
> 
> So, the game will run smoothly using any GPU vendor


As long as Nvidia implements the DX12 spec for async compute in hardware, sometime. It'll run smoothly either way, depending on what settings you turn off.

I mean, you don't see people filing lawsuits because their DX10 cards run DX11 poorly and with missing features, where vendors were generous enough to emulate the DX11 features in software. There are no native DX12 cards at the moment, so it's the same thing.


----------



## mcg75

Quote:


> Originally Posted by *PlugSeven*
> 
> After watching *this*, I find those Tweaktown and Techspot numbers to be utterly meaningless.


If you can show me another video where they use the updated driver and it shows the same thing I will consider them useless as well.

The video uses 355.98, and the Ashes fix driver is 358.50.


----------



## Tivan

https://www.youtube.com/watch?v=6MAWl3YzsTE

Neither the slow-traveling rockets nor the flying units have the glowing effect present on the 390X in this video, which uses 'Nvidia driver version 358.50'.

If that's what you mean.

edit:
Just pointing out that this is what people seem concerned about; not trying to prove Nvidia doesn't have it or anything, but it doesn't seem present here, and maybe on more driver/game versions. I don't know if this has been fixed with the current patch/driver or is going to be addressed at all.


----------



## sugarhell

Quote:


> Originally Posted by *mcg75*
> 
> If you can show me another video where they use the updated driver and it shows the same thing I will consider them useless as well.
> 
> The video uses 355.98 and the Ashes fix driver is 358.50


https://www.youtube.com/watch?v=dpSN0wUWft0

Here the particles on nvidia seem to be the same as in the above video.


----------



## GorillaSceptre

Forget about Ashes of the Singularity, guys. The amount of async it uses is negligible.







Using it as "proof" that Maxwell is just as capable is nonsensical. It's a well-known fact that GCN is far more complex and far more geared towards parallelism. Nvidia optimised their architectures for current APIs; AMD (stupidly, from a business standpoint) geared their architectures toward something like DX12. I think at the time AMD expected developers to run with it, but they never did.


----------



## p4inkill3r

I'm not a big Hitman player at all, but







for the implementation of new tech.


----------



## keikei

Quote:


> Originally Posted by *98uk*
> 
> No, because it's completely irrelevant and a childish comment on what is quite an interesting article.
> 
> I am interested to see what comes of this technology. _I wonder if it will see the light of day in DICE's new BF game too_?


i hope so. Mantle was a good start, but never got out of beta. DX12 will have more mainstream support.


----------



## GorillaSceptre

Quote:


> Originally Posted by *looniam*
> 
> i am not a fan of the hitman series - just never appealed to me. but i would buy this just to support a dev using the newest features even if my 980ti struggles.
> 
> though i am wondering if a 270X would work as a dedicated a-sync compute card. . .


That would be awesome.







DX12 supports GPUs from multiple vendors; it just depends on whether Nvidia allows it. Nvidia disables PhysX if an AMD card is detected, so don't count on it.


----------



## ht_addict

Ashes of the Singularity is still in beta and hasn't implemented CrossFire or SLI yet. We just have to wait for developers to start pumping out DX12 games, or for someone like 3DMark to release a DX12 benchmark.


----------



## GorillaSceptre

_"Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs-called asynchronous compute engines-to handle heavier workloads and better image quality without compromising performance. PC gamers may have heard of asynchronous compute already, and Hitman demonstrates the best implementation of this exciting technology yet. By unlocking performance in GPUs and processors that couldn't be touched in DirectX 11, gamers can get new performance out of the hardware they already own."_

It's frustrating that we could have had this for years now. Imagine how AMD must have felt, watching their hardware get mocked on power consumption, because all that complex hardware was guzzling watts while no developers were actually using it. They were so desperate that they eventually went as far as to create their own API.







That's what happens when you pretty much have a monopoly in the form of DirectX.


----------



## looniam

Quote:


> Originally Posted by *GoLDii3*
> 
> If you want to give this franchise a try, i suggest you try Hitman: Blood Money.


never found stealth games attractive; got watch dogs and MGSV both w/gpu purchase. BUT you got me looking and steam has the whole collection on sale until tomorrow:


Spoiler: Warning: Spoiler!







for $10 how can i go wrong? thanks!









Quote:


> Originally Posted by *GorillaSceptre*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> i am not a fan of the hitman series - just never appealed to me. but i would buy this just to support a dev using the newest features even if my 980ti struggles.
> 
> though i am wondering if a 270X would work as a dedicated a-sync compute card. . .
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That would be awesome.
> 
> 
> 
> 
> 
> 
> 
> DX12 supports GPU's from multiple vendors, it just depends if Nvidia allows it.. Nvidia disables PhysX if an AMD card is detected, so don't count on it.

even though NV put the kibosh on PhysX, that doesn't mean AMD will stop a user from having a dedicated a-sync card. esp. if they want to promote themselves as being more "open" than NV.









yeah, my green underwear wearing butt is gonna say:

i got faith in AMD.

E;
typos like always


----------



## GorillaSceptre

Quote:


> Originally Posted by *looniam*
> 
> even though NV put the kibosh on PhysX, that doesn't mean AMD will stop a user from having a dedicated a-sync card. esp. if they want to promote themselves as being more "open" than NV.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> yeah, my green underwear wearing butt is gonna say:
> 
> i got faith in AMD.
> 
> E;
> typos like always


? If nvidia chooses to block it then there's nothing AMD can do.


----------



## mcg75

Quote:


> Originally Posted by *Tivan*
> 
> https://www.youtube.com/watch?v=6MAWl3YzsTE
> 
> Neither the slow-traveling rockets nor the flying units have the glowing effect present on the 390X in this video, which uses 'Nvidia driver version 358.50'.
> 
> If that's what you mean.


Quote:


> Originally Posted by *sugarhell*
> 
> https://www.youtube.com/watch?v=dpSN0wUWft0
> 
> Here it seems that particles for nvidia are the same as the above video


Thanks guys. It does indeed seem there is an issue here.

I just wonder why AMD hasn't jumped on this. If true, it would be undeniable evidence that Nvidia is cutting corners on image quality.


----------



## looniam

Quote:


> Originally Posted by *GorillaSceptre*
> 
> ? If nvidia chooses to block it then there's nothing AMD can do.


and just how can they block it????

c'mon man. it's one thing to block their own proprietary software but a completely other to block a competitor's hardware.

not. gonna. happen.


----------



## p4inkill3r

Quote:


> Originally Posted by *looniam*
> 
> and just how can they block it????
> 
> c'mon man. it's one thing to block their own proprietary software but a completely other to block a competitor's hardware.
> 
> not. gonna. happen.


Care to make it interesting?


----------



## looniam

^ sure.


----------



## GorillaSceptre

Quote:


> Originally Posted by *looniam*
> 
> and just how can they block it????
> 
> c'mon man. it's one thing to block their own proprietary software but a completely other to block a competitor's hardware.
> 
> not. gonna. happen.


There are many ways they could block it. You can be an optimist, but when it comes to Nvidia, i'm more of a "believe it when i see it" person. They have proven time and time again that they don't like playing with others; what has changed?


----------



## sugarhell

I don't understand what you're talking about. A dedicated a-sync card?

The ACEs feed your shaders with either graphics work or compute work. Nvidia's setup can only feed one type at a time, and the pipeline has to switch every time. AMD can, for example, feed half the shaders with graphics work and half with compute work.

This happens inside the hardware; it's a hardware scheduler. A dedicated async card without the right shader configuration (unlike Fermi, the shaders on GCN can communicate) is useless.
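The serialized-vs-parallel point above can be sketched with a toy timing model (purely illustrative; the millisecond figures, the switch cost, and the overlap fraction are invented, and real GPU scheduling is far more complicated than this):

```python
# Toy model of one frame's graphics + compute work.
# A serialized pipeline runs the two workloads back to back,
# paying a context-switch cost; async compute overlaps most of
# the compute work with idle shader time during the graphics pass.

def serialized_time(graphics_ms, compute_ms, switch_ms=0.5):
    """Graphics finishes, the pipeline switches, then compute runs."""
    return graphics_ms + switch_ms + compute_ms

def async_time(graphics_ms, compute_ms, overlap=0.75):
    """A fraction of the compute work hides inside the graphics pass."""
    hidden = compute_ms * overlap
    return graphics_ms + (compute_ms - hidden)

graphics = 12.0  # ms in the graphics pipeline (made-up number)
compute = 4.0    # ms of compute work (lighting, particles, ...)

print(serialized_time(graphics, compute))  # 16.5
print(async_time(graphics, compute))       # 13.0
```

The win comes entirely from the overlap fraction: the more compute the hardware can slot into shader idle time, the closer the frame cost gets to the graphics pass alone.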


----------



## p4inkill3r

Quote:


> Originally Posted by *looniam*
> 
> ^ sure.


I bet you 45984587 internet points that nvidia will not use AMD hardware as a corollary/SLI/async compute solution.


----------



## DNMock

Quote:


> Originally Posted by *GorillaSceptre*
> 
> ? If nvidia chooses to block it then there's nothing AMD can do.


Actually, AMD has all the leverage at the moment in that aspect, since it's AMD hardware in the consoles. Game devs can get lazy, in case you didn't notice, and may use a ton of async in their games because of consoles, then just port them to PC. If most devs use async because of consoles, there's nothing Nvidia can do but give in or fall way behind.


----------



## PostalTwinkie

I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaigns, but AMD does their version, and everyone gets excited. They are both closed systems; if one is fine, so is the other.

Onto some comments.....

Quote:


> Originally Posted by *MoorishBrutha*
> 
> You do understand that Nvidia lied to their 980 Ti customers about fully supporting DX12 while knowing they lacked one of the main features of DX12, *Async Compute*?
> 
> Ashes of the Singularity wasn't a fluke; that's the reason AMD smoked Nvidia in those benchmarks.


Smoked? AMD caught up in AoTS, especially after a driver release from Nvidia.

Also, you might want to look at the different DX12 feature levels, what is being advertised, and what is considered required for each level before you start accusing someone of knowingly advertising something they didn't have, which would be extremely illegal.

Libel, like what you are spreading, doesn't help the conversation at all.

Quote:


> Originally Posted by *huzzug*
> 
> I'm sorry, but didn't Nvidia drop drivers that cleared the smoke AMD left them in, and aren't they now competing head to head?


Yes.

Although the amount of ASC that AoTS uses is pretty damn small, and not much of an indicator.

Quote:


> Originally Posted by *MoorishBrutha*
> 
> I think some Nvidia shill said so, but a lot of benchmarks, not just one, showed bad performance from Nvidia due to the *lack* of Async Compute Engines in their GPUs. It's confirmed via white papers that none of Nvidia's architectures (Maxwell, Kepler, Fermi) have Async Compute Engines built in.
> 
> *And Nvidia is already pressuring developers not to use Async Compute at all.* The developers of Ashes of the Singularity said they only used a modest amount of Async Compute and that the consoles use this feature more intensively than they do.
> 
> For example, Infamous: Second Son on the PS4 already used Async Compute.
> 
> The million-dollar question about Pascal will be: does it have Async Compute Engines inside it?
> 
> If it doesn't, then Nvidia will be doomed.


Source on the bold part? Again, another claim of yours without anything to back it up.

The fact is, we don't know how big ASC is going to be, or who is even going to bother using it. A couple of console games, and a fractional use in an early Beta, are not something you can really make grand claims on.

Quote:


> Originally Posted by *DNMock*
> 
> Actually, AMD has all the leverage at the moment in that aspect, since it's AMD hardware in the consoles. Game devs can get lazy, in case you didn't notice, and may use a ton of async in their games because of consoles, then just port them to PC. If most devs use async because of consoles, there's nothing Nvidia can do but give in or fall way behind.


Many developers wouldn't survive without the PC market to support their console business.

You need to consider that Nvidia still completely controls the PC market with their products. If software developers suddenly abandoned that, they would quickly starve and go out of business. The PC gaming market is a huge market for developers; they can't dump it.

Is it unfortunate Nvidia has gotten that big? Yes, it is, as it makes it very difficult for AMD to gain ground in it. However, it is possible they will do it, but it is going to take time.


----------



## GorillaSceptre

Quote:


> Originally Posted by *sugarhell*
> 
> I don't understand what you're talking about. A dedicated a-sync card?
> 
> The ACEs feed your shaders with either graphics work or compute work. Nvidia's setup can only feed one type at a time, and the pipeline has to switch every time. AMD can, for example, feed half the shaders with graphics work and half with compute work.
> 
> This happens inside the hardware; it's a hardware scheduler. A dedicated async card without the right shader configuration (unlike Fermi, the shaders on GCN can communicate) is useless.


Yeah, of course it couldn't be used that way. But if the game is built so that one card handles the effects and the other handles the rest, i don't see why it couldn't work. Similar to having a dedicated PhysX card.


----------



## zealord

Quote:


> Originally Posted by *DNMock*
> 
> Actually, AMD has all the leverage at the moment in that aspect, since it's AMD hardware in the consoles. Game devs can get lazy, in case you didn't notice, and may use a ton of async in their games because of consoles, then just port them to PC. If most devs use async because of consoles, there's nothing Nvidia can do but give in or fall way behind.


Before the consoles released, people expected all multiplat games to run better on AMD hardware because both _next gen_ consoles use AMD hardware. Yeah... didn't happen.


----------



## GorillaSceptre

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaign, but AMD does their version, and everyone gets excited. They are both closed systems....if one is fine, so is the other.
> 
> Onto some comments.....
> Smoked? AMD caught up in AoTS, especially after a driver release from Nvidia.


This isn't AMD's version of anything. This is async compute; it just happens that Nvidia's current architecture can't handle it. How you can equate that to GameWorks is beyond me.

Again, AotS hardly uses async; it's a useless example.


----------



## sugarhell

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Yeah, of course it couldn't be used that way. But if the game is built so that one card handles the effects and the other handles the rest, i don't see why it couldn't work. Similar to having a dedicated PhysX card.


Split the workload based on type (compute, graphics rendering, etc.)? That's totally different.

They want to use async because it lets you use all the shaders. For example, during the graphics pipeline you don't use all your shaders, and during compute you don't use all your ROPs. With async you can feed both, so some shaders do graphics and use the ROPs while the rest do pure compute (effects). That way you stress your card's components to 100%.
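The utilization argument can be put in toy numbers (every percentage below is invented for illustration; real occupancy depends entirely on the workload and the architecture):

```python
# Toy utilization model: a graphics pass keeps the ROPs busy but
# leaves some shader ALUs idle, while a pure compute pass uses
# shaders only. Async compute lets the compute work fill the
# shader slack left over by the graphics pass.

def shaders_with_async(gfx_shader_pct, compute_shader_pct):
    """Shader occupancy when compute fills the gaps, capped at 100%."""
    return min(100, gfx_shader_pct + compute_shader_pct)

gfx_pass = {"shaders": 60, "rops": 100}    # graphics alone (made up)
compute_pass = {"shaders": 35, "rops": 0}  # compute alone (made up)

combined = {
    "shaders": shaders_with_async(gfx_pass["shaders"],
                                  compute_pass["shaders"]),
    "rops": gfx_pass["rops"],  # graphics still drives the ROPs
}
print(combined)  # {'shaders': 95, 'rops': 100}
```

The point is simply that the two workloads stress mostly different units, so running them concurrently pushes both toward full utilization instead of leaving one idle at a time.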


----------



## looniam

Quote:


> Originally Posted by *GorillaSceptre*
> 
> There are many ways they could block it. You can be an optimist, but when it comes to Nvidia, i'm more of a "believe it when i see it" person. They have proven time and time again that they don't like playing with others; what has changed?


many? like?

Quote:


> Originally Posted by *sugarhell*
> 
> I don't understand what you're talking about. A dedicated a-sync card?
> 
> The ACEs feed your shaders with either graphics work or compute work. Nvidia's setup can only feed one type at a time, and the pipeline has to switch every time. AMD can, for example, feed half the shaders with graphics work and half with compute work.
> 
> This happens inside the hardware; it's a hardware scheduler. A dedicated async card without the right shader configuration (unlike Fermi, the shaders on GCN can communicate) is useless.


one of the "features" of DX12 is multi-gpu support, right? that pretty much throws out a lot of multi-gpu and memory-pooling issues. since DX12 is the layer between the game and the hardware, why can't it handle the graphics/compute scheduling?
Quote:


> Originally Posted by *p4inkill3r*
> 
> I bet you 45984587 internet points that nvidia will not use AMD hardware as a corollary/SLI/async compute solution.


i like round figures, how about a straight 46000000


----------



## NuclearPeace

Look at that. When AMD works with developers instead of calling NVIDIA cheaters or whatever, their technology gets used in games and AMD users benefit.


----------



## PostalTwinkie

Quote:


> Originally Posted by *GorillaSceptre*
> 
> This isn't AMD's version of anything. This is async compute; it just happens that Nvidia's current architecture can't handle it. How you can equate that to GameWorks is beyond me.
> 
> Again, AotS hardly uses async; it's a useless example.


You might want to read the opening of the announcement again....
Quote:


> Originally Posted by *OP*
> As the newest member of the AMD Gaming Evolved program.....


AMD's Gaming Evolved program is the same thing as Nvidia's program; it is a massive double standard. Taking it a step further, and highlighting that massive (and stupid) double standard, using the logic you just presented:

GameWorks: _"This isn't *Nvidia's* version of anything, it just so happens *AMD's* current architecture can't handle it."_

It makes OCN look hilariously bad when, in one thread, people are e-raging at Nvidia and their developer program, and then cheer on AMD and their developer program. Oh, and for clarity, I am talking about AMD's program as a whole, not just ASC.

EDIT:

More bits for the eyeballs from the article...
Quote:


> That was no accident. With on-staff game developers, source code and effects, the AMD Gaming Evolved program helps developers to bring the best out of a GPU.


Literally the same thing Nvidia does.


----------



## GorillaSceptre

Quote:


> Originally Posted by *looniam*
> 
> many? like?


Nvidia has some of the best engineers on the planet; how many people on this forum could list the things they could do? How about something as simple as scanning for AMD hardware in the driver, like they already do with PhysX?

Quote:


> Originally Posted by *PostalTwinkie*
> 
> You might want to read the opening of the announcement again....
> AMD's Gaming Evolved program is the same thing as Nvidia's program. It is a massive double standard. Taking it another step further, and highlighting that massive (and stupid) double standard, and using the logic just presented by yourself.
> 
> GameWorks: _"This isn't *Nvidia's* version of anything, it just so happens *AMD's* current architecture can't handle it."_
> 
> It makes OCN look hilariously bad when, in one thread, people are e-raging at Nvidia and their developer program, and then cheer on AMD and their developer program. Oh, and for clarity, I am talking about AMD's program as a whole, not just ASC.


Find me the posts of people in this thread going "oh wow Gaming Evolved!" I'll wait. You're trying to make something out of nothing. I won't derail this thread by getting into the differences between GE and GW either.

All i see is people excited for async, and async is not in-house AMD tech; it has nothing to do with GE.


----------



## looniam

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Nvidia has some of the best engineers on the planet; how many people on this forum could list the things they could do? How about something as simple as scanning for AMD hardware in the driver, like they already do with PhysX?


as i said, BIG difference between proprietary software and hardware, so i call FUD.
*STAHP!*

in the meantime i give you all proof of AMD/NVIDIA working hand in hand in a DX12 game:



http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview/4


----------



## sugarhell

Quote:


> Originally Posted by *looniam*
> 
> many? like?
> one of the "features" of DX12 is multi-gpu support, right? that pretty much throws out a lot of multi-gpu and memory-pooling issues. since DX12 is the layer between the game and the hardware, why can't it handle the graphics/compute scheduling?
> i like round figures, how about a straight 46000000


No, it wouldn't. Async helps the total performance of EACH card; it helps the engine utilize that specific card better. If the engine sends compute work and graphics work, a serialized card has to wait for the graphics pipeline to finish before it can take the compute work. With async, the card can send both workloads to its shaders at the same time. A dedicated card would mean data arriving at the hardware scheduler and then traveling back over PCIe to a different card. You can avoid that by splitting the workloads before you send the data.
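The PCIe objection can also be put in rough numbers (the bandwidth, latency, data size, and savings figures below are ballpark assumptions for illustration, not measurements):

```python
# Toy model of offloading async compute to a second card: the data
# has to cross PCIe in both directions, and for a large per-frame
# working set the transfer alone can dwarf the time saved by
# overlapping compute on a single card.

PCIE_GB_PER_S = 12.0  # rough effective PCIe 3.0 x16 throughput
LATENCY_MS = 0.01     # per-transfer latency, order of magnitude

def offload_cost_ms(megabytes):
    """Round trip: send inputs to the second card, copy results back."""
    transfer_s = 2 * (megabytes / 1024.0) / PCIE_GB_PER_S
    return transfer_s * 1000.0 + 2 * LATENCY_MS

saved_by_overlap_ms = 1.5    # compute time async hides on ONE card
cost = offload_cost_ms(256)  # shuffling 256 MB of frame data

print(round(cost, 2))              # tens of milliseconds per frame
print(cost > saved_by_overlap_ms)  # True: the offload is a net loss
```

With these assumptions the round trip costs tens of milliseconds per frame, far more than the couple of milliseconds async overlap saves, which is why splitting the work before sending it (rather than bouncing it between cards) is the sensible design.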


----------



## GorillaSceptre

Quote:


> Originally Posted by *looniam*
> 
> as i said BIG difference between proprietary software and hardware so i call FUD.
> *STAHP!*
> 
> in the meantime i give you all proof of AMD/NVIDIA working hand in hand in a DX12 game:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview/4


That's completely different from having a "dedicated async card". As i said, we'll see. I hope so.


----------



## PostalTwinkie

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Nvidia has some of the best engineers on the planet; how many people on this forum could list the things they could do? How about something as simple as scanning for AMD hardware in the driver, like they already do with PhysX?
> *Find me the posts of people in this thread going "oh wow Gaming Evolved!" I'll wait.* You're trying to make something out of nothing. I won't derail this thread by getting into the differences between GE and GW either.
> 
> All i see is people excited for async, and async is not in-house AMD tech; it has nothing to do with GE.












That is the double standard I am talking about.

Had this been a GameWorks thread/feature discussion, it would have been flooded with _"GameWorks! Get that crap out of here, ruining and tubing games!"_. Yet, here we are, discussing an ENTIRE article about AMD's Gaming Evolved, and people (you) are trying to say it is just about ASC.

No, this is an announcement by AMD and IO of their partnership via the Gaming Evolved program, literally. Let me show you....
Quote:


> *AMD is once again partnering with IO Interactive* to bring an incredible Hitman gaming experience to the PC. *As the newest member to the AMD Gaming Evolved program*, Hitman will feature *top-flight effects* and performance optimizations for PC gamers.
> Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs-called asynchronous compute engines-to handle heavier workloads and better image quality without compromising performance. PC gamers may have heard of asynchronous compute already, and Hitman demonstrates the best implementation of this exciting technology yet. By unlocking performance in GPUs and processors that couldn't be touched in DirectX 11, gamers can get new performance out of the hardware they already own.
> AMD is also looking to provide an exceptional experience to PC gamers with high-end PCs, collaborating with IO Interactive to implement AMD Eyefinity and ultrawide support, plus super-sample anti-aliasing for the best possible AA quality.
> *This partnership* is a journey three years in the making, which started with Hitman: Absolution in 2012, a top seller in Europe and widely critically acclaimed. PC technical reviewers lauded all the knobs and dials that pushed GPUs of the time to their limit. That was no accident. *With on-staff game developers, source code and effects, the AMD Gaming Evolved program* helps developers to bring the best out of a GPU. And now in 2016, Hitman gets the same PC-focused treatment *with AMD and IO Interactive* to ensure that the series' newest title represents another great showcase for PC gaming!


This is a Gaming Evolved announcement, and people are going spastic over it, yet with Nvidia they would be burning torches. Further, and I want people to actually think about this...

ASC vs. PhysX: both are features that one vendor can use but the other can't, so why is one OK?


----------



## sugarhell

The game will support async if the hardware supports async. Even nvidia supports async in some form if you look at the vr sdks. If the card doesn't support async, ofc the engine will use the proper pipeline each time. Physx is a feature that could run on AMD, but nvidia chose not to allow it. Async can't run on nvidia properly because their architecture is different.
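The engine-side fallback described here can be sketched roughly like this. This is pure illustration: the `Gpu` type, `build_submission_plan`, and the queue names are made up for the sketch, and a real engine would query the D3D12 runtime for capabilities instead.

```python
# Hypothetical engine-side fallback: use separate graphics/compute queues when
# the hardware supports async compute, otherwise serialize on one queue.
from dataclasses import dataclass


@dataclass
class Gpu:
    supports_async_compute: bool


def build_submission_plan(gpu: Gpu) -> list[str]:
    if gpu.supports_async_compute:
        # Graphics and compute recorded on independent queues; the GPU
        # overlaps them on its own shader array.
        return ["graphics->queue0", "compute->queue1"]
    # Fallback: one serialized queue, compute waits behind graphics.
    return ["graphics->queue0", "compute->queue0"]


print(build_submission_plan(Gpu(supports_async_compute=True)))
print(build_submission_plan(Gpu(supports_async_compute=False)))
```

Either way the same frame gets rendered; only the submission pattern changes, which is why a game can "support async" without breaking cards that lack it.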


----------



## MadRabbit

People tend to forget, ASC is not something AMD came up with. It's a DX spec.









Comparing ASC to PhysX, really?


----------



## PostalTwinkie

Quote:


> Originally Posted by *MadRabbit*
> 
> People tend to forget, ASC is not something AMD came up with. It's an DX spec.


I don't think anyone forgot this.

Although you would forgive someone who has, based off the language AMD has been using to push ASC. They have claimed, and I quote;

_"Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs"_

AMD is completely riding the idea that ASC is "unique" and "only" on AMD hardware. They are going to push it as their thing for as long as they can. If they want to wave the flag, then they too will carry the burden.

As of right now you have a situation of AMD having support for a "feature" that Nvidia doesn't, just like Nvidia currently has ones that AMD doesn't support. Yet here on OCN, one is OK, while the other isn't.


----------



## MoorishBrutha

Nvidia told everyone that the 980 Ti is the first FULLY supported DX12 card. Well, one of the features of DX12, Async Compute, is not natively supported by the 980 Ti, or any of Nvidia's cards for that matter, so Nvidia falsely advertised... *once again*.


----------



## looniam

Quote:


> Originally Posted by *sugarhell*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> many? like?
> one of the "features" of DX12 is multi gpu systems right? that pretty much throws out a lot of multi gpu and memory pooling issues. since DX12 is the layer between the game and hardware why is that not able to schedule the graphics/compute scheduling?
> i like round figures, how about a straight 46000000
> 
> 
> 
> No it wouldn't. Async helps the total performance of EACH card. It helps the engine better utilize the specific card. If the engine sends a compute workload and a graphics workload, a serialized card has to wait for the graphics pipeline to end before it can start the compute work. With async the card can send both workloads to its shaders at the same time. You're asking for the data to arrive at the hardware scheduler and then travel back through the pcie to a different card. You can avoid this because you can split the work before you send the data.
Click to expand...

you're killing me.








Quote:


> Originally Posted by *GorillaSceptre*
> 
> That's completely different to having a "dedicated Async card".. As i said, we'll see. I hope so.


thanks.
i know it's different but what draws my attention is that the mixed cards have better performance than a SLI/Xfire configuration from either vendor.

i am asking that you don't be a debbie downer - for now


----------



## MadRabbit

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I don't think anyone forgot this.
> 
> Although you would forgive someone who has, based off the language AMD has been using to push ASC. They have claimed, and I quote;
> 
> _"Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs"_
> 
> AMD is completely riding the idea that ASC is "unique" and "only" on AMD hardware. They are going to push it as their thing for as long as they can. If they want to wave the flag, then they too will carry the burden.
> 
> As of right now you have a situation of AMD having support for a "feature" that Nvidia doesn't, just like Nvidia currently has ones that AMD doesn't support. Yet here on OCN, one is OK, while the other isn't.


That's AMD marketing...you still read into that crap?


----------



## PostalTwinkie

Quote:


> Originally Posted by *MoorishBrutha*
> 
> Nvidia told everyone that the 980ti is the first FULL supported DX12 card. Well, one of the features of DX12, Async Compute, is not natively supported by the 980Ti or any of Nvidia's cards for that matter so Nvidia falsely advertised........*once again*.


"Once again"

Go brush up on the requirements of the various DX 12 tiers and which features are optional and which are required.

Quote:


> Originally Posted by *MadRabbit*
> 
> That's AMD marketing...you still read into that crap?


Well, we have an entire thread going right now about AMD marketing crap, that is very active at the moment. When isn't it "marketing crap" from anyone?

That fact doesn't change the response to it, and what information is put out there. Hell, one of the first posts in this thread was an outright lie in regard to Nvidia and DX 12 support.

Consider this; we aren't the only ones that read this. OCN is a destination for a lot of people looking for information they don't know. If we can't be open, honest, and balanced in our approach to issues, what is the point?


----------



## sugarhell

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I don't think anyone forgot this.
> 
> Although you would forgive someone who has, based off the language AMD has been using to push ASC. They have claimed, and I quote;
> 
> _"Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs"_
> 
> AMD is completely riding the idea that ASC is "unique" and "only" on AMD hardware. They are going to push it as their thing for as long as they can. If they want to wave the flag, then they too will carry the burden.
> 
> As of right now you have a situation of AMD having support for a "feature" that Nvidia doesn't, just like Nvidia currently has ones that AMD doesn't support. Yet here on OCN, one is OK, while the other isn't.


Quote:


> Originally Posted by *looniam*
> 
> you're killing me.
> 
> 
> 
> 
> 
> 
> 
> 
> thanks.
> i know it different but what draws my attention is the mixed cards have a better performance than a SLI/Xfire configuration of each vendor.
> 
> i am asking that you don't be a debbie downer - for now


----------



## NightAntilli

Quote:


> Originally Posted by *PostalTwinkie*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That is the double standard I am talking about.
> 
> Had this been a GameWorks thread/feature discussion, it would have been flooded with _"GameWorks! Get that crap out of here, ruining and tubing games!"_. Yet, here we are, discussing an ENTIRE article about AMD's Gaming Evolved, and people (you) are trying to say it is just about ASC.
> 
> No, this is an announcement by AMD and IO of their partnership via the Gaming Evolved program, literally. Let me show you....
> This is a Gaming Evolved announcement, and people are going spastic over it, yet with Nvidia they would be burning torches. Further, and I want people to actually think about this...
> 
> ASC v.s. Phsyx: Both are features that one can use, but not the other, so why is one OK?


It's clear that you don't understand what's going on. ASC is a feature that an architecture either supports or doesn't. It's based on hardware, and it isn't restricted to any single type of architecture. If a hardware manufacturer designs their chips to support it, it's supported.

PhysX on the other hand is middleware. Meaning, it's proprietary software that has been locked out from working on all other architectures. ASC has not been locked out and cannot be locked out for anybody since it can be made to work in different ways.

GameWorks as a whole is like PhysX: proprietary middleware that locks everyone else out of its features. The Gaming Evolved program is NOT the same, because it encourages the use of new techniques that are later made open source for EVERYONE to use, just like TressFX turned into PureHair in the new Tomb Raider, which works on both nVidia and AMD cards. GameWorks does NOT allow anything like this, since all the effects remain the proprietary property of nVidia. HairWorks can still only be tuned by nVidia, and AMD is never allowed to optimize for it.

That is why it's different. And that is why nVidia is bashed and AMD is praised.


----------



## looniam

Quote:


> Originally Posted by *sugarhell*
> 
> 
> 
> Spoiler: Warning: Spoiler!


you keep explaining A-sync but you're still ignoring the game/DX12 layer.


----------



## sugarhell

Quote:


> Originally Posted by *looniam*
> 
> you keep explaining A-sync but you're still ignoring the game/DX12 layer.


And i keep explaining because you don't properly understand async.


----------



## MadRabbit

Quote:


> Originally Posted by *PostalTwinkie*
> 
> "Once again"
> 
> Go brush up on the requirements of the various DX 12 tiers and what are optional features and required.
> Well, we have an entire thread going right now about AMD marketing crap, that is very active at the moment. When isn't it "marketing crap" from anyone?
> 
> That fact doesn't change the response to it, and what information is put out there. Hell, one of the first posts in this thread was an outright lie in regard to Nvidia and DX 12 support.
> 
> Consider this; we aren't the only ones that read this. OCN is a destination for a lot of people looking for information they don't know. If we can't be open, honest, and balanced in our approach to issues, what is the point?


In other words, take some popcorn and have a good time for the both of us.


----------



## FallenFaux

Quote:


> Originally Posted by *PostalTwinkie*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That is the double standard I am talking about.
> 
> Had this been a GameWorks thread/feature discussion, it would have been flooded with _"GameWorks! Get that crap out of here, ruining and tubing games!"_. Yet, here we are, discussing an ENTIRE article about AMD's Gaming Evolved, and people (you) are trying to say it is just about ASC.
> 
> No, this is an announcement by AMD and IO of their partnership via the Gaming Evolved program, literally. Let me show you....
> This is a Gaming Evolved announcement, and people are going spastic over it, yet with Nvidia they would be burning torches. Further, and I want people to actually think about this...


Have you considered that people's aversion to GameWorks has nothing to do with Nvidia? I'm guessing it has more to do with many GameWorks titles shipping with terrible performance regardless of GPU vendor. I'm not saying that any of that actually has anything to do with GameWorks, but when Nvidia markets GameWorks games and then they turn out terrible, it makes them look bad.
Quote:


> ASC v.s. Phsyx: Both are features that one can use, but not the other, so why is one OK?


One is a standard feature of a cross-vendor API (DX12) and the other is a proprietary SDK developed by a single GPU vendor. Everyone can use ASC (Even Nvidia uses a software implementation) but only one vendor can use Physx.


----------



## looniam

Quote:


> Originally Posted by *sugarhell*
> 
> And i keep explaining because you dont understand properly async.


In regards to the purpose of Async compute, there are really 2 main reasons for it:
http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1400_50#post_24360916
Quote:


> 1) *It allows jobs to be cycled into the GPU during dormant phases.* It can vaguely be thought of as the GPU equivalent of hyper-threading. Like hyper-threading, it really depends on the workload and GPU architecture as to how important this is. In this case, it is used for performance. I can't divulge too many details, but GCN can cycle in work from an ACE incredibly efficiently. Maxwell's scheduler has no analog, just as a non-hyper-threaded CPU has no analog feature to a hyper-threaded one.
> 
> 2) *It allows jobs to be cycled in completely out of band with the rendering loop.* This is potentially the more interesting case since it can allow gameplay to offload work onto the GPU as the latency of work is greatly reduced. I'm not sure of the background of Async Compute, but it's quite possible that it is intended for use on a console as sort of a replacement for the Cell Processors on a ps3. On a console environment, you really can use them in a very similar way. This could mean that jobs could even span frames, which is useful for longer, optional computational tasks.


I UNDERSTAND!

again you are completely ignoring SOFTWARE! game/DX12 (and i forgot) DRIVERS!
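The "GPU hyper-threading" analogy in the quoted post can be illustrated with a toy timing model. All the millisecond figures below are hypothetical numbers chosen only to show the shape of the argument; real GPU scheduling is far more complex than this.

```python
# Toy model of async compute: compute work cycled into idle bubbles
# inside a frame's graphics work, instead of running after it.

graphics_busy_ms = 10.0  # time the graphics queue keeps shader cores busy
graphics_idle_ms = 4.0   # idle bubbles inside the same frame (stalls, fixed-function waits)
compute_ms = 3.0         # compute job submitted alongside the frame

# Serialized (no async): compute waits for the whole graphics pass to finish.
serial_frame = graphics_busy_ms + graphics_idle_ms + compute_ms

# Async: compute is cycled into the idle bubbles, hiding up to idle-time's worth of work.
hidden = min(compute_ms, graphics_idle_ms)
async_frame = graphics_busy_ms + graphics_idle_ms + (compute_ms - hidden)

print(f"serialized frame: {serial_frame:.1f} ms")  # 17.0 ms
print(f"async frame:      {async_frame:.1f} ms")   # 14.0 ms
```

As with CPU hyper-threading, the benefit in this model depends entirely on how big the idle bubbles are relative to the compute work, which is why gains vary so much per workload and architecture.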


----------



## Yvese

My 980ti doesn't know whether to be happy or cower in fear


----------



## sugarhell

Quote:


> Originally Posted by *looniam*
> 
> In regards to the purpose of Async compute, there are really 2 main reasons for it:
> http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/1400_50#post_24360916
> I UNDERSTAND!
> 
> again you are completely ignoring SOFTWARE! game/DX12 (and i forgot) DRIVERS!


So as i told you, a dedicated async card is totally useless, because async only helps the card itself. Not other cards that lack async capabilities.

You either split the work before you send the data to the gpus, so you send graphics to one card and compute to the others (this is not async, by the way), or you send all the work to one card and then its command processor splits the work. Then, based on the idle resources, the card can use async to fill the idle units. There isn't a magic solution in dx12 to work around this. I will leave the conversation here because i think there is no purpose to it anymore.
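The two options sugarhell describes can be put into a toy latency model. Every number here is a made-up millisecond cost for illustration, not a benchmark of any hardware.

```python
# Toy model of why a second "async card" doesn't help an async-capable GPU.

graphics_ms = 10.0         # graphics queue occupancy for one frame
idle_in_graphics_ms = 4.0  # shader-idle bubbles inside that graphics work
compute_ms = 3.0           # per-frame compute job
pcie_roundtrip_ms = 4.0    # cost of shipping inputs/results to another card

# Async on the same card: compute hides inside the idle bubbles.
local_async_ms = graphics_ms + max(0.0, compute_ms - idle_in_graphics_ms)

# "Dedicated async card": compute runs elsewhere but pays the PCIe round trip,
# and the frame can't finish before the slower of the two paths.
offload_ms = max(graphics_ms, compute_ms + pcie_roundtrip_ms)

print(f"same-card async:     {local_async_ms:.1f} ms")  # 10.0 ms, compute fully hidden
print(f"second-card offload: {offload_ms:.1f} ms")      # 10.0 ms, plus bus traffic
```

With these numbers the second card buys nothing the first card couldn't hide by itself, while adding PCIe traffic, which is the core of the argument above.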


----------



## PostalTwinkie

Quote:


> Originally Posted by *NightAntilli*
> 
> It's clear that you don't understand what's going on. ASC is a feature that an architecture supports or doesn't. It's based on hardware, and there is no restriction to a single type of architecture that can do ASC. If the hardware manufacturer can design to support it, it's supported.
> 
> PhysX on the other hand is middleware. Meaning, it's proprietary software that has been locked out from working on all other architectures. ASC has not been locked out and cannot be locked out for anybody since it can be made to work in different ways.
> 
> GameWorks as a whole is like PhysX. A proprietary middleware that locks out anyone else from using its features. The Gaming Evolved program is NOT the same, because it encourages the use of new techniques, that are later made open source for EVERYONE to use, just like TressFX turned into PureHair in the new Tomb Raider, which works for both nVidia and AMD cards. GameWorks does NOT allow anything like this, since all the effects stay as proprietary ownership of nVidia. HairWorks is still only adaptable by nVidia, and AMD is never allowed to optimize for it.
> 
> That is why it's different. And that is why nVidia is bashed and AMD is praised.


I am fully aware of what ASC is, and if you want to get really specific, Nvidia offered to get AMD on board with the PhysX train, and AMD turned it down. But people don't remember what happened a few years ago.

The fact is, for all intents and purposes, AMD is pushing ASC as their exclusive feature, just like Nvidia pushes PhysX. Right now, regardless of why, ASC and PhysX are both features that are "exclusive" to each side. Except on OCN one is OK, while the other isn't.

EDIT:

Oh, and Nvidia is most certainly barred from touching Gaming Evolved source code. Gaming Evolved is not some open, mystical, platform that everyone runs around hugging in. It is literally just as closed as Nvidia.
Quote:


> Originally Posted by *FallenFaux*
> 
> *Have you considered that people's aversion to Gameworks has nothing to do with Nvidia? I'm guessing it has more to do with many Gamworks titles shipping with terrible performance regardless of GPU vendor.* I'm not saying that any of that actually has anything to do with Gameworks but when Nvidia markets Gameworks games and then they turn out terrible, it makes them look bad.
> One is a standard feature of a cross-vendor API (DX12) and the other is a proprietary SDK developed by a single GPU vendor. Everyone can use ASC (Even Nvidia uses a software implementation) but only one vendor can use Physx.


Nvidia has nothing to do with shoddy developers and their quality of work, and neither does AMD, outside of the vendor-specific programs being discussed.

GameWorks features aren't breaking games, terrible business decisions at the developer are.


----------



## Ha-Nocri

GimpWorks is a black box for AMD. They can't optimize it for their GPU's. That is a BIG difference compared to a DX12 feature, which ASC is. Nothing is black boxed there and closed to one vendor or another. It's just that AMD was first to implement it.


----------



## sugarhell

Quote:


> Originally Posted by *Ha-Nocri*
> 
> GimpWorks is a black box for AMD. They can't optimize it for their GPU's. That is a BIG difference compared to a DX12 feature, which ASC is. Nothing is black boxed there and closed to one vendor or another. It's just that AMD was first to implement it.


Actually this is wrong. Fermi uses a hardware scheduler quite similar to GCN's.

You can check it here.

http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/3


----------



## criminal

Quote:


> Originally Posted by *PostalTwinkie*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That is the double standard I am talking about.
> 
> Had this been a GameWorks thread/feature discussion, it would have been flooded with _"GameWorks! Get that crap out of here, ruining and tubing games!"_. Yet, here we are, discussing an ENTIRE article about AMD's Gaming Evolved, and people (you) are trying to say it is just about ASC.
> 
> No, this is an announcement by AMD and IO of their partnership via the Gaming Evolved program, literally. Let me show you....
> This is a Gaming Evolved announcement, and people are going spastic over it, yet with Nvidia they would be burning torches. Further, and I want people to actually think about this...
> 
> *ASC v.s. Phsyx: Both are features that one can use, but not the other, so why is one OK?*


You have got to be kidding right?
Quote:


> Originally Posted by *MadRabbit*
> 
> People tend to forget, ASC is not something AMD came up with. It's an DX spec.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Comparing ASC to PhysX, really?


I know right.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> I don't think anyone forgot this.
> 
> Although you would forgive someone who has, based off the language AMD has been using to push ASC. They have claimed, and I quote;
> 
> _"Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs"_
> 
> AMD is completely riding the idea that ASC is "unique" and "only" on AMD hardware. They are going to push it as their thing for as long as they can. If they want to wave the flag, then they too will carry the burden.
> 
> As of right now you have a situation of AMD having support for a "feature" that Nvidia doesn't, just like Nvidia currently has ones that AMD doesn't support. Yet here on OCN, one is OK, while the other isn't.


You know it is not the same thing at all. Take your green glasses off. Physx is closed and only allowed to run on Nvidia hardware. ASC can be run on Nvidia, but Nvidia chose to limit that on their cards for some reason. The fact that AMD is going to take advantage of Nvidia's decision is a good business move and there is nothing wrong with it IMHO.


----------



## zealord

Really excited about benchmarks for this game, because I have no idea what exactly it all (DX12, async compute, etc.) means for the end user.


----------



## looniam

Quote:


> Originally Posted by *sugarhell*
> 
> So as i told you, a dedicated async card is totally useless, because async only helps the card itself. Not other cards that lack async capabilities.
> 
> You either split the work before you send the data to the gpus, so you send graphics to one card and compute to the others (this is not async, by the way), or you send all the work to one card and then its command processor splits the work. Then, based on the idle resources, the card can use async to fill the idle units. There isn't a magic solution in dx12 to work around this. I will leave the conversation here because i think there is no purpose to it anymore.


FWIW thanks for trying to be helpful. if it turns out i'm just being stubborn - well it wouldn't be the first time.


----------



## sugarhell

Quote:


> Originally Posted by *looniam*
> 
> FWIW thanks for trying to be helpful. if it turns out i'm just being stubborn - well it wouldn't be the first time.


Don't worry. There is a lot of misinformation, or at least not entirely correct info, about the tech, and i am trying to help







But either way if i was a bit aggressive i am sorry.


----------



## looniam

totally fine, like always sugarhell.


----------



## NightAntilli

Quote:


> Originally Posted by *zealord*
> 
> Really excited about Benchmarks for this game, because I have no idea what exactly it means (DX12, Async compute etc.) for the end user


Basically it means that software will be limiting hardware a lot less than before, and AMD's more powerful hardware will finally be used like it's supposed to.


----------



## PostalTwinkie

Quote:


> Originally Posted by *criminal*
> 
> You have got to be kidding right?
> I know right.
> You know it is not the same thing at all. Take your green glasses off. Physx is closed and only allowed to run on Nvidia hardware. ASC can be run on Nvidia, but Nvidia chose to limit that on their cards for some reason. The fact that AMD is going to take advantage of Nvidia's decision is a good business move and there is nothing wrong with it IMHO.


I don't wear glasses, but nice shot at trying to make it personal.

Look, as stated, I am well aware of where ASC stands in terms of availability to vendors. However, I am not the one pushing it as a unique and exclusive feature; AMD is. Further, it isn't unique or exclusive, given that:

- Nvidia has done similar in the past.
- It is available for Nvidia adoption via new hardware (possibly Pascal).

AMD is pushing it as theirs right now, period. So with that comes the same standard as is applied to Nvidia and their "unique" features. Period.

I wasn't the one that started pushing ASC as something specific to AMD, they did that on their own in the very announcement of the partnership with IO Interactive, the one posted in Post #1 of this thread.


----------



## FallenFaux

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I am fully aware of what ASC is, and if you want to get really specific, Nvidia offered to get AMD on board with the PhysX train, and AMD turned it down. But people don't remember what happened a few years ago.


They offered PhysX for a hefty licensing fee. DirectX features are free to anyone that wants to make a GPU.
Quote:


> The fact is, for all intent, AMD is pushing ASC as their exclusive feature, just like Nvidia pushes Phsyx. Right now, regardless of why, ASC and Phsyx are both features that are "exclusive" to each side. Except on OCN one is OK, the other isn't.


This comment is disingenuous at best. Nvidia could have chosen to implement ASC on their hardware for free but chose or were unable to. AMD couldn't implement Physx without paying Nvidia Money.

Quote:


> Oh, and Nvidia is most certainly barred from touching Gaming Evolved source code. Gaming Evolved is not some open, mystical, platform that everyone runs around hugging in. It is literally just as closed as Nvidia.


Actually everything in GPUOpen was released as open source under the MIT license.

Visual effects libraries: https://github.com/GPUOpen-Effects/
Tools: https://github.com/GPUOpen-LibrariesAndSDKs/

SDKs
LiquidVR: https://github.com/GPUOpen-LibrariesAndSDKs/LiquidVR
FireRays: https://github.com/GPUOpen-LibrariesAndSDKs/FireRays_SDK
FireRender: https://github.com/amdadvtech/FireRenderSDK

HSA Runtime API: https://github.com/HSAFoundation/HSA-Runtime-AMD
Quote:


> Nvidia has nothing to do with the shoddy developers and their quality of work. Nor does AMD have anything to do with a shoddy developer, and their quality of work. Outside the specific programs to each vendor, as being discussed.
> 
> GameWorks features aren't breaking games, terrible business decisions at the developer are.


When you put your name and branding on something you're endorsing it. When it looks bad, you look bad.


----------



## zealord

Quote:


> Originally Posted by *NightAntilli*
> 
> Basically it means that software will be limiting hardware a lot less than before, and AMD's more powerful hardware will finally be used like it's supposed to.


So my 2500K + 290X might actually be really good for this game you say?


----------



## criminal

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I don't wear glasses, but nice shot at trying to make it personal.
> 
> Look, as stated, I am well aware of where ASC stands in terms of availability to vendors. However, I am not the one pushing it as a unique and exclusive feature, AMD is. Further, it isn't unique or exclusive, if...
> 
> 
> Nvidia has done similiar in the past.
> Is available for Nvidia adoption via new hardware (Possibly Pascal).
> AMD is pushing it as theirs right now, period. So with that comes the same standard as is applied to Nvidia and their "unique" features. Period.
> 
> I wasn't the one that started pushing ASC as something specific to AMD, they did that on their own in the very announcement of the partnership with IO Interactive, the one posted in Post #1 of this thread.


Not trying to get personal, just calling it like I have been seeing it based on your posts. Again, it was Nvidia's decision to not support ASC. ASC is nothing like Physx.

At least looniam admits his bias. Why can't you?


----------



## NightAntilli

Quote:


> Originally Posted by *zealord*
> 
> So my 2500K + 290X might actually be really good for this game you say?


Basically, yes. Any GCN 1.1 or newer card will get a massive performance boost, if the feature is indeed implemented correctly. GCN 1.0 will also get it, but not as big, since they have only 4 queues compared to the 64 queues of GCN 1.1 and GCN 1.2.


----------



## PostalTwinkie

Quote:


> Originally Posted by *FallenFaux*
> 
> They offered PhysX for a hefty licensing fee. DirectX features are free to anyone that wants to make a GPU.


No, they offered it to AMD for literally pennies per GPU sold, to help support the development of it and push it forward. Why would Nvidia be the only one to financially commit to pushing it forward, if AMD is using it as well?

Quote:


> Originally Posted by *FallenFaux*
> 
> *This comment is disingenuous at best.* Nvidia could have chosen to implement ASC on their hardware for free but chose or were unable to. AMD couldn't implement Physx without paying Nvidia Money.


My comment about AMD claiming it as theirs is "disingenuous"? Are you kidding me?

It is AMD's own words! Here, I will quote the article, again!
Quote:


> Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs


They aren't my words, but AMD's own words!

Quote:


> Originally Posted by *FallenFaux*
> 
> Actually everything in GPUOpen was released as open source under the MIT license.


Wow, AMD actually put out something as truly open, and not the "open" AMD has been known for. Then again, their position sort of dictates that.

Quote:


> Originally Posted by *FallenFaux*
> 
> When you put your name and branding on something you're endorsing it. When it looks bad, you look bad.


I suppose this is true, and thus why AMD is in the toilet as a whole. Then again, using this very logic, why is Nvidia the first to get pointed at on a bad game that has their logo? Why doesn't the developer get the flak first? Time and time again, a game runs like ass and people point a finger at Nvidia's GameWorks - even in situations where it isn't being used/running.

Quote:


> Originally Posted by *criminal*
> 
> Not trying to get personal, just calling it like I have been seeing it based on your posts. Again, it was Nvidia's decision to not support ASC. ASC is nothing like Physx.
> 
> At least looniam admits his bias. Why can't you?


Why can't I? I am not biased.....

Especially given the fact that before my one 780 Ti, I ran 3 7970s, just purchased a 290X this summer, and currently have a 380 sitting in a shopping cart ready for order. But that is all to the side of the conversation.

My points I am making, and what I have posted, are direct quotes from AMD themselves. I am simply pointing at them and wanting to have a conversation about it. So if that means I am biased, so be it, what does it make AMD? As they are the ones making the comments.


----------



## caswow

Quote:


> Originally Posted by *PostalTwinkie*
> 
> It is AMD's own words! Here, I will quote the article, again!
> They aren't my words, but AMD's word!


well, never heard of PR, did you? it's nothing new when a company says things like that. it doesn't change the fact that async is a dx12 spec. it's not even close to being something nvidia made.


----------



## keikei

If async compute is an open spec, then why is amd claiming (from what i'm reading) that it is exclusive to them? Amd may not be lying here, but using the word 'unique' steers the reader into thinking it means exclusive. Then again, why would amd mention the competition in this instance? That is what i'm getting from the article, at least.


----------



## PontiacGTX

Quote:


> Originally Posted by *NightAntilli*
> 
> Basically, yes. Any GCN 1.1 or newer card will get a massive performance boost, if the feature is indeed implemented correctly. GCN 1.0 will also get it, but not as big, since they have only 4 queues compared to the 64 queues of GCN 1.1 and GCN 1.2.


GCN 2nd/3rd iteration doesn't achieve 8 queues per ACE +1 for compute?


----------



## degenn

Holy crap this thread...

I'm totally with you, *PostalTwinkie*, you are spot-on in your statements. There is a double-standard and you are not showing any bias. +rep

God... sometimes I'm embarrassed to be a member of these forums.


----------



## GorillaSceptre

Quote:


> Originally Posted by *PostalTwinkie*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That is the double standard I am talking about.
> 
> Had this been a GameWorks thread/feature discussion, it would have been flooded with _"GameWorks! Get that crap out of here, ruining and tubing games!"_. Yet, here we are, discussing an ENTIRE article about AMD's Gaming Evolved, and people (you) are trying to say it is just about ASC.
> 
> No, this is an announcement by AMD and IO of their partnership via the Gaming Evolved program, literally. Let me show you....
> This is a Gaming Evolved announcement, and people are going spastic over it, yet with Nvidia they would be burning torches. Further, and I want people to actually think about this...
> 
> ASC v.s. Phsyx: Both are features that one can use, but not the other, so why is one OK?










What the? Async is NOT an AMD exclusive feature.. Sure their marketing is acting like it is, and in some respects it's true, but only until Pascal drops (maybe).

Secondly, GameWorks has a proven track record of deliberately or unintentionally crippling the competition; Gaming Evolved does not. Regardless, as i said earlier, you've come in here playing the Nvidia victim card for no reason. Before your original post most of the discussion revolved around async; no one even mentioned GE..

Quote:


> Originally Posted by *looniam*
> 
> you're killing me.
> 
> 
> 
> 
> 
> 
> 
> 
> thanks.
> i know it different but what draws my attention is the mixed cards have a better performance than a SLI/Xfire configuration of each vendor.
> 
> i am asking that you don't be a debbie downer - for now


I'll be Mr positive from now on.









Quote:


> Originally Posted by *criminal*
> 
> You have got to be kidding right?


I wish he was..


----------



## degenn

He hasn't played any victim card at all. He is merely bringing up valid points and being gang-attacked for doing it.

Get off his nuts.


----------



## infranoia

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaign, but AMD does their version, and everyone gets excited. They are both closed systems....if one is fine, so is the other.


Darned right we're excited. "Playing nice" has gotten AMD exactly what in the last several years? An 80% market share in discrete GPUs for Nvidia has proven exactly one thing: consumers love dirty tricks. If they have decided to compete on Nvidia's level, well then we can only hope it's not too late.

The best thing we can hope for is that AMD turns the screws _hard_ on its console leverage to gain a strong foothold in discrete. Pushing Async Compute and competitively showing just how 'legacy' Maxwell is as an architecture should be the first step to doing just that.

Unless you really want that 80% to look more like 95%.


----------



## degenn

Holy crap he's not saying there's anything wrong with AMD doing this. Are you people really that dense?


----------



## NightAntilli

Quote:


> Originally Posted by *PontiacGTX*
> 
> GCN 2nd/3rd iteration doesn't achieve 8 queues per ACE +1 for compute?


Uhm... It achieves 8 compute queues per ACE, and it has 8 ACEs, making a total of 64 achievable compute queues. It also has a separate graphics command processor, which works in parallel with the 8 ACEs. So it's actually 1 graphics queue + 64 compute queues.


----------



## BradleyW

I hope the game is enjoyable for both Nvidia and AMD users.


----------



## GorillaSceptre

Quote:


> Originally Posted by *degenn*
> 
> He hasn't played any victim card at all. He is merely bringing up valid points and being gang-attacked for doing it.
> 
> Get off his nuts.


Oh? Please quote his "valid points". Equating GameWorks and PhysX to Asynchronous Compute is ridiculous, but call it valid if you wish.

His first statement of this thread: "I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaign, but AMD does their version, and everyone gets excited. They are both closed systems....if one is fine, so is the other."







That's playing a victim, and deflecting the topic away from Async towards Nvidia fanboys being hard done by.. Being red or green doesn't matter, they're all damn annoying.

Besides all this petty drivel, AMD stating that only their hardware is currently capable of Async means they would have made sure of it before saying so. This pretty much confirms what @Mahigan and others who got hate thrown at them were saying all along: Maxwell is indeed incapable of Async. Too little, too late for AMD, though; Nvidia already "won" this gen. But it's good for those of us with GCN, we'll get a nice boost.

I'll ignore your last sentence.. Maybe the mods will too.


----------



## FallenFaux

Quote:


> Originally Posted by *PostalTwinkie*
> 
> No, they offered it to AMD for literally pennies per GPU sold, to help support the development of it and push it forward. *Why would Nvidia be the only one to financially commit to pushing it forward, if AMD is using it as well?*


Because AMD does. (See GPUOpen)
Quote:


> My comment about AMD claiming it as theirs is "disingenuous"? Are you kidding me?
> 
> It is AMD's own words! Here, I will quote the article, again!
> They aren't my words, but AMD's word!


Straw Man

This is what AMD actually said, "Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs."
This is a factual statement, Radeon cards are currently the only ones with hardware ASC support.

You then inferred that AMD is claiming to own ASC.
You then take your inference and construct an argument that is not true. "...AMD is pushing ASC as their exclusive feature..."
You then claim that this inference is actually the word of AMD and attack it instead of the actual statement.
It's much easier to defeat this inferred claim because it, unlike the actual statement, is not true.

Quote:


> Wow, AMD actually put out something as truly open, and not the "open" AMD has been known for. _Then again, their position sort of dictates that_.


Yes, because clearly only a company that was suffering would do something positive for their market.
Quote:


> I suppose this is true, and thus why AMD is in the toilet as a whole. Then again, using this very logic, why is Nvidia first to get pointed at on a bad game that has their logo? Why doesn't the developer get the flack first? Time and time again, a game runs like ass and people point a finger at Nvidia's GameWorks - even in situations where it isn't being used/running.
> Why can't I? I am not biased.....


These are the reasons people think you're biased.

1. You were the first person in this thread to mention GameWorks or Gaming Evolved and started out your presence in the thread defending GameWorks by saying it was just as closed as Gaming Evolved.
2. You made an incorrectly inferred claim about AMD's statement and attempted to compare a free API to a closed source SDK.
3. You made a false statement about the open source nature of AMDs developer program and when presented with evidence that proved you wrong you used it to attack AMD.
Quote:


> Then again, their position sort of dictates that


4. You made this statement in an argument that had nothing to do with AMD.
Quote:


> I suppose this is true, and thus why AMD is in the toilet as a whole.


----------



## mtcn77

I think the storm has passed and the defense squad has finally conceded the point: they don't want GameWorks any more than Gaming Evolved. I couldn't agree more. However, I should say, had GameWorks provided what it should have (a better experience) instead of no gain whatsoever, we wouldn't be juxtaposing it with Asynchronous Shaders and the +40% performance they bring when it matters.


----------



## GoLDii3

Quote:


> Originally Posted by *PostalTwinkie*
> 
> I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaign, but AMD does their version, and everyone gets excited. They are both closed systems....if one is fine, so is the other.


GameWorks offers nothing but additional, mostly useless settings that in the worst case are like the godrays in Fallout 4.

Nowhere does it mention they will add any AMD-specific effects, unlike GameWorks; it just says they will use Async Compute, and that is a feature of DX12.

Their last Hitman game did not even include any AMD effects.

This is just a partnership like the one with Star Wars Battlefront. They will probably bundle this game with Polaris cards.
Quote:


> Originally Posted by *degenn*
> 
> He hasn't played any victim card at all. He is merely bringing up valid points and being gang-attacked for doing it.
> 
> Get off his nuts.


U BULLIES LEAVE BRITNEY ALONE!!!









Get real dude,this is the internet.


----------



## BradleyW

I hope Async becomes a standard for both AMD and Nvidia. If it brings great improvements on both sides, it may lift the single-core-hogging CPU bottlenecks seen in various DX11 games.


----------



## mtcn77

Quote:


> Originally Posted by *BradleyW*
> 
> I hope Async becomes a standard for both AMD and Nvidia. If it brings great improvements on both sides, it may lift the single-core-hogging CPU bottlenecks seen in various DX11 games.


It cannot be, if the rumours are true and Nvidia has to launch Volta for that to happen.
I hate to be blunt, but those GPUs everyone thought were new are no more up to date than GCN.


----------



## degenn

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Equating Gameworks and Physx to Asynchronous Compute is ridiculous.. But call it valid if you wish.


That's not really what he's doing.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> His first statement of this thread: "I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaign, but AMD does their version, and everyone gets excited. They are both closed systems....if one is fine, so is the other."
> 
> 
> 
> 
> 
> 
> 
> That's playing a victim, and deflecting the topic away from Async towards Nvidia fanboys being hard done by.. Being red or green doesn't matter, they're all damn annoying.


Is that really how you interpreted his statement? That all he was getting at is Nvidia fanboys are hard done by?

You know what....? Nevermind, I concede... you guys win.


----------



## NightAntilli

Quote:


> Originally Posted by *mtcn77*
> 
> It cannot be if the rumours are true and Nvidia has to launch Volta in order for that to happen.
> I hate to be blunt, but those gpus everyone thought were new are not more up to date than GCN.


Indeed. nVidia is so good at manipulating their fanbase... You're actually better off for DX12 with GCN 1.1 from 2013 than Maxwell 2 from 2015. Arguably you're even better off with GCN 1.0 from 2011 rather than Maxwell 2... Depending on what you prioritize.

nVidia has admitted that their preemption is still a long way off from improving, so don't count on Pascal being much better. It's likely a Maxwell 2 node shrink with some minor tweaks. The true async capabilities from nVidia's side won't likely arrive until Volta. By that time, AMD's GCN has likely evolved quite a bit more. People don't know it, but it's nVidia that's currently playing catch-up in terms of architecture. People only go for nVidia because they either like upgrading every 1 or 2 years, or because of ignorance.


----------



## Themisseble

So Nvidia fanboys want to convince normal users that async shaders are black-boxed just like GameWorks and PhysX.

I always thought that Nvidia cannot run async because the hardware doesn't support it, while AMD GPUs can't run PhysX because the software doesn't allow it.
Also, PhysX and async shaders are very different things; am I right?


----------



## PostalTwinkie

Quote:


> Originally Posted by *degenn*
> 
> That's not really what he's doing.
> Is that really how you interpreted his statement? That all he was getting at is Nvidia fanboys are hard done by?
> 
> You know what....? Nevermind, I concede... you guys win.


No worries, it has been this way here for years. Extreme and blind bias towards a vendor is nothing new, at least not in the 20+ years I have been doing this crap. I find it entertaining that I can quote AMD themselves, literally put it right in the forum. Yet they get turned into my words, and it is OK for AMD to say them..."because".

We, as a community, literally have AMD making false claims, or rather outright lying again, about ASC. They are pushing it as their "unique" and "exclusive" feature, when that isn't even true. Hell, even everyone in here is in agreement with ASC not being exclusive to them.

Yet people want to get mad at me for pointing out what they agree on? Interesting.

If AMD wants to play the "unique" feature card, then they get to play that card and must deal with the standards that come with it. If they can have a unique feature, so can Nvidia - regardless of how it is being done. If having a unique feature is OK for one, then it must be for the other.

After all, I am not the one claiming uniqueness here, it is AMD.

Quote:


> Originally Posted by *Themisseble*
> 
> So Nvidia fanboys want to convince normal users that async shaders are black-boxed just like GameWorks and PhysX.
> 
> I always thought that Nvidia cannot run async because the hardware doesn't support it, while AMD GPUs can't run PhysX because the software doesn't allow it.
> Also, PhysX and async shaders are very different things; am I right?


If you actually read a damn thing, you would have known what you think is happening, isn't. No one said anything about Nvidia being black boxed.

Try and keep up, it is only a few paragraphs you need to read and comprehend.

*EDIT:*

Simplified and brief (consumer) history on compute.....

Years ago Nvidia was compute-heavy, and AMD wasn't. Then the world/usage/market shifted, and Nvidia moved away from compute. Around this time AMD went into compute with their hardware; think back to the AMD mining days. In the time that Nvidia went away, they also (through various events) gained a massive market share, now north of 80%.

Enter DX 12, and compute. We still don't know what role it is going to play in the consumer space; it is all speculation. However, as it sits right now...


Nvidia doesn't have the hardware for ASC support, and thus it is being emulated via drivers.
AMD has boatloads of support for it.
Who knows about Pascal.

We could easily see a period, if DX 12 and ASC really took off, where AMD is the dominant performer in scenarios that use it, until Volta from Nvidia comes along. Why? Well, it could be that Pascal was too far into development to turn around and refocus on supporting ASC.

Now, let's say Pascal just sucks at it. The new question becomes _"Will developers spend the resources to develop for a feature ~20% of users can use?"_ Even if it is "better", will they sink that resource in the face of the 80%?

Only the developers can answer that. If it is difficult and costly to have in their product, and only 20% of the user base supports it, they won't do it. If it is easily understood and implemented, it could be part of the push to get AMD/RTG off the ground again.


----------



## FallenFaux

Quote:


> Originally Posted by *NightAntilli*
> 
> Indeed. nVidia is so good at manipulating their fanbase... You're actually better off for DX12 with GCN 1.1 from 2013 than Maxwell 2 from 2015. Arguably you're even better off with GCN 1.0 from 2011 rather than Maxwell 2... Depending on what you prioritize.
> 
> nVidia has admitted that their preemption is still a long way off from improving, so don't count on Pascal being much better. It's likely a Maxwell 2 node shrink with some minor tweaks. The true async capabilities from nVidia's side won't likely arrive until Volta. By that time, AMD's GCN has likely evolved quite a bit more. People don't know it, but it's nVidia that's currently playing catch-up in terms of architecture. People only go for nVidia because they either like upgrading every 1 or 2 years, or because of ignorance.


Well, to be fair, for most of us it won't matter. GCN may be better than Maxwell at some DX12 stuff, but by the time we're actually using it we're going to have 14/16nm cards. I'm guessing that AMD and Nvidia will have identical feature sets for DX12/Vulkan, and then we can just get back to buying whatever card is the fastest.


----------



## Themisseble

Nice forum.


----------



## Gunderman456

Who cares about this game anyway.

I will try Hitman: Blood Money. I heard it was the best game out of all of them with many choices available to complete your objectives.

As for this AAA game that will be "episodic", no thanks.


----------



## NightAntilli

Quote:


> Originally Posted by *Themisseble*
> 
> So Nvidia fanboys want to convince normal users that async shaders are black-boxed just like GameWorks and PhysX.
> 
> I always thought that Nvidia cannot run async because the hardware doesn't support it, while AMD GPUs can't run PhysX because the software doesn't allow it.
> Also, PhysX and async shaders are very different things; am I right?


Basically, this is correct.
Quote:


> Originally Posted by *FallenFaux*
> 
> Well, to be fair, for most of us it won't matter. GCN may be better than Maxwell at some DX12 stuff, but by the time we're actually using it we're going to have 14/16nm cards. I'm guessing that AMD and Nvidia will have identical feature sets for DX12/Vulkan, and then we can just get back to buying whatever card is the fastest.


Indeed. But for people upgrading since last year, having gone with AMD would've been the better choice as a long term investment. But obviously, the most bought card was the GTX 970, rather than the R9 390. nVidia convinced the public that its new cards are fully DX12, and even DX12.1 compliant. Anyone who did their homework knew this wasn't the case however.

Right now is one of the worst times to upgrade since the 16/14nm FinFET process is indeed just around the corner. But GCN is still a better choice. We might have 14/16nm cards by that time, but you might not have to upgrade to such a card at all for a couple of years if you go with GCN. And yet, everyone wants nVidia, mainly due to the power consumption and nVidia's reputation. The money saved by not having to upgrade again is a lot more than the money saved because of a more power efficient card.


----------



## magnek

If we really wanted to make an accurate analogy, it would be akin to (over)tessellating games because the geometry engines in AMD hardware have pitiful tessellation performance. I really don't see how ASC is comparable to Gameworks at all.

As for Gaming Evolved source code not being open, well nVidia worked with CD to change the final game code in Tomb Raider to fix TressFX issues on nVidia GPUs, so I'm not entirely sure about that.


----------



## mtcn77

I think the heart of the problem lies in how users interact with performance ratings in games. Reviewers like Digital Foundry completely miss the point of "better utilization" and rate newer titles' higher performance in the same category as old ones destined to be retired. Newer titles get withdrawn from exposure in their video reviews just as they launch, while single-threaded performance hogs continue their overly sensationalized journey with superlatives such as "takes every resource you throw at it" and "no limit to its performance demand", which rather encourages the worse conduct.


----------



## ZealotKi11er

Until recently I was not really hyped about Async. But Rise of the Tomb Raider on Xbox shows that Async makes a huge difference; achieving the same effects the traditional way demands stronger PC hardware. Rise of the Tomb Raider would have been another TR 2013, a game which ran very well, had it been DX12. Thank Nvidia for this.


----------



## iLeakStuff

So it's not OK to work with game developer partners to get games to run better on your hardware (Nvidia, GameWorks), but when it *benefits you* (AMD, Async Compute), it's OK to have the partners code the game to work better on your hardware.

The double standard here...


----------



## criminal

Quote:


> Originally Posted by *degenn*
> 
> He hasn't played any victim card at all. He is merely bringing up valid points and being gang-attacked for doing it.
> 
> Get off his nuts.


Sorry... not valid points. Just because AMD is taking advantage of an open DX12 feature and Nvidia can't doesn't mean what AMD is doing is anything like Nvidia. AMD is basically marketing a feature that Nvidia has full access to and should be able to take full advantage of when Pascal drops. Nvidia, on the other hand, has a lock on PhysX, and AMD can't ever use the feature unless Nvidia grants them the rights. Not even the same thing. Not even in the same ballpark!


----------



## MoorishBrutha

Quote:


> Originally Posted by *DNMock*
> 
> Actually, AMD has all the leverage at the moment in that aspect since it's AMD stuff in consoles. Game devs can get lazy, in case you didn't notice, and may use a ton of A-sync for their game because, consoles, then just port it to PC. If most devs use A-sync because of consoles then there is nothing Nvidia can do but give in or fall way behind.


You just echoed everything I was telling people last year. Nvidia made a colossal mistake by letting AMD take control of the console market. First AMD gave the consoles x86, so we get all of the 3rd-party games now; then they handed Mantle over to the Khronos Group so they could come out with *Vulkan* and make game development "*standardized*".

Once *standardized*, developers would do all of the optimizations for the PC that they do for the consoles, e.g. Async Compute.

Traditionally speaking, AMD always had more *floating-point throughput* and more *cores* than Nvidia, so now, with Async Compute being heavily implemented, Nvidia will get destroyed.

They only have themselves to blame.

Watch Nvidia's 80% market share vanish into thin air.....poof.


----------



## NightAntilli

Quote:


> Originally Posted by *iLeakStuff*
> 
> _So its not OK to work with game developer partners to get games to run better on your hardware (Nvidia, GameWorks)_ but when it *benefit you* (AMD, Async Compute), its OK to have the partners code the game to work better on your hardware.
> 
> The double standard here...


Regarding the italicized part:
No, that's not how it is. It's fine to work with game developers to get games to run better on your hardware. It's not fine to work with game developers to lock the competition out of optimizing effects used in the game. GameWorks uses tessellation, a feature AMD cards also have (and that ATi actually pioneered) but are not allowed to optimize for. No one is stopping nVidia from coding for async compute; their hardware simply lacks the capability, which is a different matter. You would have a point if AMD cards couldn't do tessellation, but since they can, your point is moot.


----------



## DNMock

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Many, a lot, of developers wouldn't survive without the PC market to support their Console market.
> 
> You need to consider that Nvidia still completely controls the PC market with their products. If software developers suddenly abandoned that, they would quickly starve and go out of business. The PC gaming market is a huge market for developers, they can't dump it.
> 
> Is it unfortunate Nvidia has gotten that big? Yes, it is, as it makes it very difficult for AMD to gain ground in it. However, it is possible they will do it, but it is going to take time.


That's very true, but it's not as if Nvidia GPUs can't emulate A-sync; you can still play the AoS beta with an Nvidia GPU, it just doesn't run as well (or so it seems, I haven't played it). All that would mean is that the coveted benchmarks for some games would lean more in favor of AMD.

Simply put, it wouldn't be cost-effective for Nvidia to try to block it. The cost of paying off devs to make sure they don't use A-sync, combined with the sales lost to AMD-favoring benchmark numbers from devs who use A-sync despite Nvidia's strong-arming, would be far greater than simply incorporating proper A-sync into their next architecture, be that Pascal or Volta.

That's the way I see it anyway. Maybe I'm missing some details, since I'm not in the industry, but the cost-versus-benefit analysis from my position of armchair QB tells me that giving in and building it into future GPUs is the best option.


----------



## criminal

Quote:


> Originally Posted by *iLeakStuff*
> 
> So its not OK to work with game developer partners to get games to run better on your hardware (Nvidia, GameWorks) but when it *benefit you* (AMD, Async Compute), its OK to have the partners code the game to work better on your hardware.
> 
> The double standard here...


Not sure what Nvidia is doing with Gameworks and what AMD is doing with ASC is the same thing. I really don't see it as a double standard at all and I have been on the green team for awhile now. Beats me on how some of you think it is even comparable.









Last time I checked Microsoft (not AMD) was responsible for DX12. Nvidia on the other hand is responsible for Gameworks.


----------



## mtcn77

One thing is for sure: AMD has full support for DirectX 12 features and Nvidia does not. Point being,

(Limitless) Resource Binding Tier 3,
Context Switching,
Asynchronous Shaders are the vaunted features missing from Nvidia hardware.
I think it will be better than Evolved, this time with Hitman.








Those who don't know what I mean, read: "280X>970".


----------



## Dargonplay

Quote:


> Originally Posted by *PostalTwinkie*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That is the double standard I am talking about.
> 
> Had this been a GameWorks thread/feature discussion, it would have been flooded with _"GameWorks! Get that crap out of here, ruining and tubing games!"_. Yet, here we are, discussing an ENTIRE article about AMD's Gaming Evolved, and people (you) are trying to say it is just about ASC.
> 
> No, this is an announcement by AMD and IO of their partnership via the Gaming Evolved program, literally. Let me show you....
> This is a Gaming Evolved announcement, and people are going spastic over it, yet with Nvidia they would be burning torches. Further, and I want people to actually think about this...
> 
> ASC v.s. Phsyx: Both are features that one can use, but not the other, so why is one OK?


AMD Gaming Evolved doesn't have GameWorks' record of making games run like crap on both GPU vendors; the hate you see for GameWorks is well deserved. Your fanboyism doesn't allow you to see that.

Also, AMD Gaming Evolved isn't a "development suite"; it's just a tag games use to let users know the developer didn't dump a big turd on AMD's hardware.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> As of right now you have a situation of AMD having support for a "feature" that Nvidia doesn't, just like Nvidia currently has ones that AMD doesn't support. Yet here on OCN, one is OK, while the other isn't.


That's downright stupid. As of right now you have a situation of AMD having support for a "feature" that Nvidia doesn't, an open feature that anyone can use, unlike Nvidia, who currently has ones that AMD doesn't support nor will ever be able to support.

If you can't see the difference then you're probably shrekt, and shrekt is disgusting and smelly.


----------



## DNMock

Quote:


> Originally Posted by *Dargonplay*
> 
> AMD Gaming Evolved doesn't have GameWorks' record of making games run like crap on both GPU vendors; the hate you see for GameWorks is well deserved. Your fanboyism doesn't allow you to see that.


This is true. GameWorks runs like trash, period. It doesn't matter whether you are using an Intel iGPU, an AMD card, or an Nvidia card; turn on a GameWorks feature and you are going to get hammered with a 25% reduction in FPS. I run dual T-X GPUs and still turn off most "trashworks" features because they destroy my FPS.


----------



## mtcn77

INB4, reviews coming up with Asynchronous Shaders disabled for BOTH brands because it is not fair for Nvidia.


----------



## magnek

Quote:


> Originally Posted by *iLeakStuff*
> 
> So its not OK to work with game developer partners to get games to run better on your hardware (Nvidia, GameWorks) but when it *benefit you* (AMD, Async Compute), its OK to have the partners code the game to work better on your hardware.
> 
> The double standard here...


You might've had a point if Gameworks actually ran decently on nVidia hardware or TWIMTBP titles weren't broken on release.


----------



## ZealotKi11er

The difference here, even if A-Sync were AMD-only, is that A-Sync increases performance, enabling better effects, while GameWorks enables better effects at the cost of performance. The reason we don't see much DX12 is, sadly, Nvidia. They are just not ready for it, so they will delay it as much as possible. If Nvidia had GCN as their architecture right now, with the same market share, you would see DX12 in every game. They have certainly shown us how dedicated they are with GameWorks.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *looniam*
> 
> and just how can they block it????
> 
> c'mon man. it's one thing to block their own proprietary software but a completely other to block a competitor's hardware.
> 
> not. gonna. happen.


I've already explained this.

"DriverEntry is the first routine called after a driver is loaded, and is responsible for initializing the driver."
Source: https://msdn.microsoft.com/en-us/library/windows/hardware/ff544113%28v=vs.85%29.aspx

Microsoft offers some WDK examples here: https://msdn.microsoft.com/en-us/windows/hardware/dn433227.aspx If you take a look at the Windows 10 ones, specifically the kernel mode video driver, you will see the DriverEntry function inside BDD_DDI.cxx here: https://github.com/Microsoft/Windows-driver-samples/blob/master/video/KMDOD/bdd_ddi.cxx

Code:


// BEGIN: Init Code

//
// Driver Entry point
//

extern "C"
NTSTATUS
DriverEntry(
    _In_  DRIVER_OBJECT*  pDriverObject,
    _In_  UNICODE_STRING* pRegistryPath)
{
    PAGED_CODE();

    // Initialize DDI function pointers and dxgkrnl
    KMDDOD_INITIALIZATION_DATA InitialData = {0};

    InitialData.Version = DXGKDDI_INTERFACE_VERSION;

    InitialData.DxgkDdiAddDevice                    = BddDdiAddDevice;
    InitialData.DxgkDdiStartDevice                  = BddDdiStartDevice;
    InitialData.DxgkDdiStopDevice                   = BddDdiStopDevice;
    InitialData.DxgkDdiResetDevice                  = BddDdiResetDevice;
    InitialData.DxgkDdiRemoveDevice                 = BddDdiRemoveDevice;
    InitialData.DxgkDdiDispatchIoRequest            = BddDdiDispatchIoRequest;
    InitialData.DxgkDdiInterruptRoutine             = BddDdiInterruptRoutine;
    InitialData.DxgkDdiDpcRoutine                   = BddDdiDpcRoutine;
    InitialData.DxgkDdiQueryChildRelations          = BddDdiQueryChildRelations;
    InitialData.DxgkDdiQueryChildStatus             = BddDdiQueryChildStatus;
    InitialData.DxgkDdiQueryDeviceDescriptor        = BddDdiQueryDeviceDescriptor;
    InitialData.DxgkDdiSetPowerState                = BddDdiSetPowerState;
    InitialData.DxgkDdiUnload                       = BddDdiUnload;
    InitialData.DxgkDdiQueryAdapterInfo             = BddDdiQueryAdapterInfo;
    InitialData.DxgkDdiSetPointerPosition           = BddDdiSetPointerPosition;
    InitialData.DxgkDdiSetPointerShape              = BddDdiSetPointerShape;
    InitialData.DxgkDdiIsSupportedVidPn             = BddDdiIsSupportedVidPn;
    InitialData.DxgkDdiRecommendFunctionalVidPn     = BddDdiRecommendFunctionalVidPn;
    InitialData.DxgkDdiEnumVidPnCofuncModality      = BddDdiEnumVidPnCofuncModality;
    InitialData.DxgkDdiSetVidPnSourceVisibility     = BddDdiSetVidPnSourceVisibility;
    InitialData.DxgkDdiCommitVidPn                  = BddDdiCommitVidPn;
    InitialData.DxgkDdiUpdateActiveVidPnPresentPath = BddDdiUpdateActiveVidPnPresentPath;
    InitialData.DxgkDdiRecommendMonitorModes        = BddDdiRecommendMonitorModes;
    InitialData.DxgkDdiQueryVidPnHWCapability       = BddDdiQueryVidPnHWCapability;
    InitialData.DxgkDdiPresentDisplayOnly           = BddDdiPresentDisplayOnly;
    InitialData.DxgkDdiStopDeviceAndReleasePostDisplayOwnership = BddDdiStopDeviceAndReleasePostDisplayOwnership;
    InitialData.DxgkDdiSystemDisplayEnable          = BddDdiSystemDisplayEnable;
    InitialData.DxgkDdiSystemDisplayWrite           = BddDdiSystemDisplayWrite;

    NTSTATUS Status = DxgkInitializeDisplayOnlyDriver(pDriverObject, pRegistryPath, &InitialData);
    if (!NT_SUCCESS(Status))
    {
        BDD_LOG_ERROR1("DxgkInitializeDisplayOnlyDriver failed with Status: 0x%I64x", Status);
        return Status;
    }

    return Status;
}
// END: Init Code

Nvidia could use this function to verify that no AMD hardware IDs are present before initializing their driver.


----------



## caswow

Funny thing is you can only run "crapworks" effects with top-of-the-line Nvidia cards. If you dare turn on crapworks effects on anything lower than a 980, you'd better be ready to get Uranus destroyed:thumb:


----------



## Noufel

And yet it's a GTX 770 vs. a 290 in the recommended specs.


----------



## ZealotKi11er

Quote:


> Originally Posted by *caswow*
> 
> Funny thing is you can only run crapworks effects with the top of the line Nvidia cards. If you dare to turn on crapworks effects on anything lower than a 980, you better be ready to get Uranus destroyed:thumb:


I run PureHair and HBAO+ fine at 4K with my setup. It's all about how they are implemented.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Dargonplay*
> 
> AMD Gaming Evolved doesn't have Gameworks record of making games run like crap in both GPUs Vendor, the hate you see about Gameworks is well deserved, your fanboyism doesn't allow you to see that.


Feel free to list specific examples, and sources, that show GameWorks was the issue with a game, and it wasn't just the terrible development to save a buck. Which games did GameWorks destroy, and where the user couldn't turn off the optional features it brought?

To say that Gaming Evolved games don't have the same record is just ridiculous. A short list of Gaming Evolved titles that had performance issues....


Watch Dogs
Battlefield 4
Thief
EDIT: To correct list.

All of these games launched broken in some fashion. Do we blame AMD for that? No. We blame the developers for it, just like the games that sport Nvidia's features.

The game developer is developing the game, they are responsible for the quality of the game. Not the hardware vendor that the consumer uses.

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I've already explained this.
> 
> "DriverEntry is the first routine called after a driver is loaded, and is responsible for initializing the driver."
> Source: https://msdn.microsoft.com/en-us/library/windows/hardware/ff544113%28v=vs.85%29.aspx
> 
> Microsoft offers some WDK examples here: https://msdn.microsoft.com/en-us/windows/hardware/dn433227.aspx If you take a look at the Windows 10 ones, specifically the kernel mode video driver, you will see the DriverEntry function inside BDD_DDI.cxx here:**snip**
> 
> Nvidia could use this function to verify that no AMD hardware IDs are present before initializing their driver.


They will probably do it too.


----------



## DNMock

Quote:


> Originally Posted by *ZealotKi11er*
> 
> The difference here even if A-Sync was AMD only is that A-Sync increases performance hence allowing better effect. While GameWorks allows better effects on sacrifice of limited performance. The reason we dont see much DX12 is Nvidia sadly. They are just not ready for it so they will delay it as much as possible. If Nvidia had GCN as their architecture right now with same marketshare you would see DX12 in every games. They sure have shown us how deticated they are with GameWorks.


Actually, the reason is that every game that's come out to date was deep into production before DX12 became available. It's gonna be another year or two before we start to see DX12 really implemented. At which point I have zero doubt that Nvidia's flagship cards will have full support for every DX12 feature. Don't know if it will begin with Pascal or Volta, but mark my words, once games built around DX12 start coming out NV will be supporting it fully. All we are seeing now, with Tomb Raider and such, is just little DX12 features that aren't tough to patch in, nothing built around the full array of DX12 features.


----------



## Dargonplay

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Feel free to list specific examples, and sources, that show GameWorks was the issue with a game, and it wasn't just the terrible development to save a buck. Which games did GameWorks destroy, and where the user couldn't turn off the optional features it brought.
> 
> To say that Gaming Evolved games don't have the same record is just ridiculous. A short list of Gaming Evolved titles that had performance issues....
> 
> 
> 
> 
> Battlefield 4


What are you smoking, sir? That is certainly something strong. Battlefield 4 is probably one of the best optimized games in recent history; it had NETWORK problems at release, which don't relate to performance issues in this universe. Thanks for proving my point.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> To say that Gaming Evolved games don't have the same record is just ridiculous. A short list of Gaming Evolved titles that had performance issues....
> 
> 
> Watch Dogs
> Farcry 4
> Battlefield 4
> Thief


Why do you include GameWorks titles? Are you trying to shoot yourself in the foot? Watch Dogs was a major GameWorks title and one of the worst performing games in history, Far Cry 4 is another GameWorks title, and Thief? You're the first person I've heard complaining about Thief's performance; even a 750 Ti could run that game maxed out at 60 FPS plus.


----------



## ZealotKi11er

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Feel free to list specific examples, and sources, that show GameWorks was the issue with a game, and it wasn't just the terrible development to save a buck. Which games did GameWorks destroy, and where the user couldn't turn off the optional features it brought.
> 
> To say that Gaming Evolved games don't have the same record is just ridiculous. A short list of Gaming Evolved titles that had performance issues....
> 
> 
> Watch Dogs
> Farcry 4
> Battlefield 4
> Thief
> All of these games launched broken in some fashion. Do we blame AMD for that? No. We blame the developers for it, just like the games that sport Nvidia's features.
> 
> The game developer is developing the game, they are responsible for the quality of the game. Not the hardware vendor that the consumer uses.
> They will probably do it too.


So much fail.

1) Watch Dogs and Far Cry 4 are both GameWorks. 2) BF4 had zero performance problems; it was one of the best performing games. It had game problems, which fall on the developer. Nobody is blaming GameWorks for how buggy a game is, but for how poorly it runs. No idea about Thief.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Dargonplay*
> 
> What are you smoking, sir? That is certainly something strong. Battlefield 4 is probably one of the best optimized games in recent history; it had NETWORK problems at release, which don't relate to performance issues in this universe. Thanks for proving my point.


Yes, it was such a marvel they had to push an early patch just to address stability issues the January after release.


----------



## Dargonplay

Quote:


> Originally Posted by *ZealotKi11er*
> 
> So much fail.
> 
> 1) Watch Dogs and Far Cry 4 both GameWorks. Secondly BF4 had 0 performance problems. One of the best performing games. It had game problems which are targeted to the developer. Nobody is blaming GameWorks for how buggy the game is but how poor the game runs. No idea about Thief.


Thank you for saving me the shame of going through the entirety of such a failed attempt at fanboyism.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> Yes, it was such a marvel they had to push an early patch just to address stability issues the January after release.


It didn't fail as hard as your post, though. Also, that patch came in because of network issues; the crashes were coming from the servers, not the game. Get your facts straight. Single player was a blast to play even at those times.


----------



## Lex Luger

Please correct me if I'm wrong, but don't ALL gameworks titles allow you to turn off the gameworks effects?

What are you crying about?

Just turn them OFF if you don't like the performance hit. They don't add much anyway. STOP CRYING!

I bleed green, but I still welcome async compute and hope it helps out AMD's GPU performance on 7000 series and up.

I always turn off PhysX, GameWorks, TressFX, and whatever else they come up with because they add HUGE latency and destroy your minimum framerate, even if I'm still able to average 60 FPS.


----------



## caswow

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I run PureHair and HBAO+ fine at 4K with my setup. It's all about how they are implemented.


It's because PureHair is AMD tech and Tomb Raider was a GE title to begin with. Nvidia just threw money at Square and got the PR deal to sell TR with their cards and have their name plastered all over the game.


----------



## NuclearPeace

Quote:


> Originally Posted by *DNMock*
> 
> This is true, Gameworks runs like trash period. Doesn't matter if you are using an Intel Igpu, AMD, or Nvidia card, you turn on a gameworks feature and you are gonna get hammered for 25% reduction in FPS. I run dual T-X gpu's and still turn off most trashworks features because it destroys my FPS.


Well duh! Gameworks is computationally expensive because the effects are inherently computationally expensive. This is like whining about how MSAA kills framerate.


----------



## PlugSeven

Quote:


> Originally Posted by *Lex Luger*
> 
> Please correct me if I'm wrong, but don't ALL gameworks titles allow you to turn off the gameworks effects?
> 
> What are you crying about?
> 
> Just turn them OFF if you don't like the performance hit. They don't add much anyway. STOP CRYING!
> 
> I bleed green, but I still welcome async compute and hope it helps out AMD's GPU performance on 7000 series and up.


Gameworks isn't just some black-box DLL files that you drop into the game folder and call it a day. The developer has to change the render path to incorporate it, so turning it off is not the fix-all solution you think it is.


----------



## mtcn77

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Feel free to list specific examples, and sources, that show GameWorks was the issue with a game, and it wasn't just the terrible development to save a buck. Which games did GameWorks destroy, and where the user couldn't turn off the optional features it brought?
> 
> To say that Gaming Evolved games don't have the same record is just ridiculous. A short list of Gaming Evolved titles that had performance issues....
> 
> 
> Watch Dogs
> Farcry 4
> Battlefield 4
> Thief
> All of these games launched broken in some fashion. Do we blame AMD for that? No. We blame the developers for it, just like the games that sport Nvidia's features.
> 
> The game developer is developing the game, they are responsible for the quality of the game. Not the hardware vendor that the consumer uses.
> They will probably do it too.


Let's look at Tomb Raider and what changed between the sponsorship of two graphics partners:


**snip**
Can it be stated that they perform the same? Absolutely not! The new version is a failure of optimization under the patronage of Nvidia.
Nvidia has no incentive to optimize games. They should just keep clear of sponsoring more titles, lest they suffer the same fate of mediocre performance all around.


----------



## looniam

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> "DriverEntry is the first routine called after a driver is loaded, and is responsible for initializing the driver."
> Source: https://msdn.microsoft.com/en-us/library/windows/hardware/ff544113%28v=vs.85%29.aspx
> 
> Microsoft offers some WDK examples here: https://msdn.microsoft.com/en-us/windows/hardware/dn433227.aspx If you take a look at the Windows 10 ones, specifically the kernel mode video driver, you will see the DriverEntry function inside BDD_DDI.cxx here: https://github.com/Microsoft/Windows-driver-samples/blob/master/video/KMDOD/bdd_ddi.cxx
> 
> 
> **snip**
> 
> 
> 
> 
> Nvidia could use this function to verify that no AMD hardware IDs are present before initializing their driver.


First of all, please excuse me for not following EVERY POST you've made.

And secondly, it's really ridiculous for Nvidia to block their own hardware.

let's think here . .

----------



## MoorishBrutha

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Although the amount of ASC that AoTS uses is pretty damn small, and not much of an indicator.
> Source on the bold part? Again, another claim of yours without anything to back it up.


You dudes act like you all don't have a search engine or something. It's well-known that Nvidia is using that 80% market share to bully developers not to use Async Compute:
*http://www.dsogaming.com/news/oxide-developer-nvidia-was-putting-pressure-on-us-to-disable-certain-settings-in-the-benchmark/*


----------



## GoLDii3

Quote:


> Originally Posted by *iLeakStuff*
> 
> So its not OK to work with game developer partners to get games to run better on your hardware (Nvidia, GameWorks) but when it *benefit you* (AMD, Async Compute), its OK to have the partners code the game to work better on your hardware.
> 
> The double standard here...


You don't even know what GameWorks' function is. How surprising.

Async compute could also benefit Nvidia if they supported it, so it is no one's fault but Nvidia's.
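As a rough back-of-the-envelope illustration of why async compute helps whoever supports it (a toy timing model, not real GPU code): if the graphics and compute work for a frame can run concurrently on independent queues, the frame costs roughly the longer of the two instead of their sum.

```c
/* Toy model of per-frame cost in milliseconds. Real-world gains depend
 * on how well the two workloads actually interleave on the hardware. */
static int frame_time_serial(int graphics_ms, int compute_ms)
{
    /* without async compute: compute waits until graphics finishes */
    return graphics_ms + compute_ms;
}

static int frame_time_async(int graphics_ms, int compute_ms)
{
    /* idealized async compute: the two queues fully overlap */
    return graphics_ms > compute_ms ? graphics_ms : compute_ms;
}
```

With, say, 12 ms of graphics and 4 ms of post-process compute per frame, the serial model spends 16 ms where the overlapped one spends 12 ms, which is the whole pitch of the ACEs in GCN.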


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Lex Luger*
> 
> Please correct me if I'm wrong, but don't ALL gameworks titles allow you to turn off the gameworks effects?
> 
> What are you crying about?
> 
> Just turn them OFF if you don't like the performance hit. They don't add much anyway. STOP CRYING!
> 
> I bleed green, but I still welcome async compute and hope it helps out AMD's GPU performance on 7000 series and up.


People are stupid as hell. "Ultra" or "Very High" are the same. Gameworks is just extra pretty graphics if you have the power. Whether that be on game launch or replaying the game years later. Personally, I love Gameworks. I can go back to a game in 3 years with a more powerful card and play on settings even higher than what I originally played at.


----------



## GorillaSceptre

More "valid points" by Postal.. FC4 and WD are now GE titles.

This man taking more L's than Meek Mill..
Quote:


> Originally Posted by *Lex Luger*
> 
> Please correct me if I'm wrong, but don't ALL gameworks titles allow you to turn off the gameworks effects?
> 
> What are you crying about?
> 
> Just turn them OFF if you don't like the performance hit. They don't add much anyway. STOP CRYING!
> 
> I bleed green, but I still welcome async compute and hope it helps out AMD's GPU performance on 7000 series and up.
> 
> I always turn off physx, gameworks, tressfx, and whatever else they come up with because they add HUGE latency and destroy your minimum framerate, even if Im still able to average 60 FPS.


It's not about turning things on or off.. That's pretty damn obvious. It's about Nvidia's black-box method, which inhibits AMD's/developers' ability to optimize effectively, and, as has been pointed out, most GameWorks titles are completely broken. You'd think most would be able to spot the trend by now..


----------



## bigjdubb

_This thread blew up fast! So much unnecessary information, my head almost popped._

I can't wait to see some benchmark numbers on this game. I won't be buying it because it's coming in "episodes", but I really want to see some numbers. It would be nice to have an actual game for people to argue about instead of everyone's theory on how things will work.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *looniam*
> 
> first of all please excuse me for not following EVERY POST you've made.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> and secondly its really ridiculous for nvidia to block their own hardware.
> 
> let's think here . .


Gotta hover over me and select "Follow Member"
I was referencing Nvidia blocking driver initialization when AMD cards are present to avoid "SLI" between say a 980 Ti and Fury X. Pretty sure that's what the original context was.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Dargonplay*
> 
> Thank you for saving me the shame of going through the entirety of such failed attempt of fanboyism.
> It didn't failed as hard as your post though, also that patch came in because of Network Issues, the crashes were coming from the servers not the game, get your facts straight, Single Player was a blast to play even at those times.


The one error in my list of four doesn't do anything to the others on the list, or to illustrate the point that it isn't AMD or Nvidia at fault for bad game performance, but the developer. As for your claims on Netcode being patched in BF4 in that initial stability patch I referenced, nope.

BF4 released on 10-29-2013. In Jan. 2014 they released a stability update because the game was preventing people from playing, one of many updates. The first Netcode update came in June(ish) of 2014, and they continued all the way into 2015. The issues with BF4's Netcode, and how long it was/is a problem, are damn near legendary at this point.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *PostalTwinkie*
> 
> The one error in my list of four doesn't do anything to the others on the list, or to illustrate the point that it isn't AMD or Nvidia at fault for bad game performance, but the developer. As for your claims on Netcode being patched in BF4 in that initial stability patch I referenced, nope.
> 
> BF4 released on 10-29-2013. In Jan. 2014 they released a stability update that was preventing people from playing, one of many updates. The first Netcode update came in June(ish) of 2014, all the way into 2015. The issues with BF4 Netcode, and how long it was/is a problem, is damn near legendary at this point.


Could have sworn it had some bad memory leaks too. Unless that was BF3. Or another game.


----------



## NuclearPeace

Quote:


> Originally Posted by *GorillaSceptre*
> 
> It's not about turning things on or off.. That's pretty damn obvious.


Then I don't see the issue. If an effect, Gameworks or not, doesn't run well on your rig, it's common sense to just turn it off. I don't see why people have to go on internet message boards and whine endlessly that it's NVIDIA's fault that their computer can't handle it.


----------



## DNMock

Quote:


> Originally Posted by *Dargonplay*
> 
> What are you smoking sir? That is certainly something strong. Battlefield 4 is probably one of the best optimized games in recent History, it had NETWORK Problems at release, which doesn't relate to performance problems in this known Universe, thanks for proving my point.


I think that was just a typo. BF3 was an abortion on launch. BF4 was bad, but not terrible, except the single player campaign which was just a broken mess.


----------



## L36

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Could have sworn it had some bad memory leaks too. Unless that was 3. Or another game.


There were memory leak issues but that was due to bad AMD drivers at the time.


----------



## looniam

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Gotta hover over me and select "Follow Member"


fair enough
Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I was referencing Nvidia blocking driver initialization when AMD cards are present to avoid "SLI" between say a 980 Ti and Fury X. Pretty sure that's what the original context was.


look right now i choose to live in the land of magic and unicorns where AMD and NV will cohabitate in peace and harmony.


----------



## bigjdubb

Quote:


> Originally Posted by *looniam*
> 
> look right now i choose to live in the land of magic and unicorns where AMD and NV will cohabitate in peace and harmony.


I am pretty sure it is just their respective users that don't cohabitate in peace and harmony.


----------



## GorillaSceptre

Quote:


> Originally Posted by *NuclearPeace*
> 
> Then I don't see the issue. If an effect, Gameworks or not, doesn't run well on your rig, its common sense to just turn it off. I don't see why people have to go on internet message boards and whine endlessly that it's NVIDIA's fault that their computer cant handle it.


I explain the "issue" in the part of my post that you omitted.. What's the point in replying to one aspect of my post?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *looniam*
> 
> look right now i choose to live in the land of magic and unicorns where AMD and NV will cohabitate in peace and harmony.


----------



## BradleyW

So many posts, so little progress. Here's my understanding. You know, we can go back and forth between the positive and negative attributes of RTG and Nvidia. One thing's certain: Nvidia's current line-up of GPUs is well suited to limited APIs, and their drivers are highly tuned in relation to this. As for RTG, the GCN architecture has a very deep and complex pipeline with a ton of available processing cores/shaders. This is well suited to a low-level API, which will allow the whole GPU to be fed with data properly and optimally. DX11 is very limiting to GCN and stops much of the GPU from ever being utilized. That's why Nvidia wins the benchmarks despite GCN being the far superior architecture in terms of available power.


----------



## PostalTwinkie

Quote:


> Originally Posted by *MoorishBrutha*
> 
> You dudes act like you all don't have a search engine or something. It's well-known that Nvidia is using that 80% to bully developers not to use Async Compute:
> *http://www.dsogaming.com/news/oxide-developer-nvidia-was-putting-pressure-on-us-to-disable-certain-settings-in-the-benchmark/*


Try and keep up...

http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/2120_20#post_24379702

In the post from Kollock, from Oxide, he states...
Quote:


> Originally Posted by *Kollock*
> We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.


There was no pressure from Nvidia. Nvidia stated things weren't working, Oxide thought they were, it went back and forth, and it turns out they really weren't working. So Oxide began to work closely with Nvidia to correct it.


----------



## DNMock

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> People are stupid as hell. "Ultra" or "Very High" are the same. Gameworks is just extra pretty graphics if you have the power. Whether that be on game launch or replaying the game years later. Personally, I love Gameworks. I can go back to a game in 3 years with a more powerful card and play on settings even higher than what I originally played at.


So it's normal for a single lighting effect (godrays) or the hair of a single character (HairWorks) to eat up over 20% of the entire GPU processing budget? If The Witcher 3 were made with the same level of optimization as the GameWorks stuff, you might be able to get Geralt to run around in a blank white environment at 4K 30 FPS if you have 4-way Titan X SLI running.

Heck, Square Enix was doing HairWorks-level detail on a PS3 in their games, so don't tell me it's just super detailed and that amazing. It's optimized like trash, pure and simple.

Just curious, did you buy Arkham Knight on PC and just run at low settings because, hey, in 3 years you might be able to turn it up to high, and praise the quality of work done?


----------



## GorillaSceptre

Quote:


> Originally Posted by *BradleyW*
> 
> So many posts, so little progress. Here's my understanding. You know, we can go back and forth between the positive and negative attributes of RTG and Nvidia. One thing's certain: Nvidia's current line-up of GPUs is well suited to limited APIs. Their drivers are highly tuned in relation to this. As for RTG, the GCN architecture has a very deep and complex pipeline with a ton of available processing cores/shaders. This is well suited to a low-level API, which will allow the whole GPU to be fed with data properly and optimally. DX11 is very limiting to GCN and stops much of the GPU from ever being utilized. That's why Nvidia wins the benchmarks despite GCN being the far superior architecture in terms of available power.


Pretty much this.









I'll say this again since it got buried earlier.. Imagine how AMD must have felt, watching their hardware get mocked on power consumption because all that complex hardware was guzzling watts while no developers were actually using it.. They were so desperate that they eventually went as far as to create their own API.. That's what happens when you pretty much have a monopoly in the form of DirectX..


----------



## mtcn77

Quote:


> Originally Posted by *BradleyW*
> 
> So many posts, so little progress. Here's my understanding. You know, we can go back and forth between the positive and negative attributes of AMD and Nvidia. One thing's certain: Nvidia's current line-up of GPUs is well suited to limited APIs. Their drivers are highly tuned in relation to this. As for AMD, the GCN architecture has a very deep and complex pipeline with a ton of available processing cores/shaders. This is well suited to a low-level API, which will allow the whole GPU to be fed with data properly and optimally. DX11 is very limiting to GCN and stops much of the GPU from ever being utilized. That's why Nvidia wins the benchmarks despite GCN being the far superior architecture in terms of available power.


Please don't resort to canning the discussion with rhetoric. Nvidia does not win the benchmarks; they actually lose the moment DirectX 12 is mentioned. Futuremark's API overhead test is the case in point here. You just cannot issue enough draw calls on Nvidia hardware; there is 2:1 scaling between the Titan X and the 295X2 (13M vs. 24M).


----------



## GorillaSceptre

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Try and keep up...
> 
> 
> http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/2120_20#post_24379702
> 
> 
> 
> In the post from Kollock, from Oxide, he states...
> There was no pressure from Nvidia. Nvidia stated things weren't working, Oxide thought they were, it went back and forth, and it turns out they really weren't working. So Oxide began to work closely with Nvidia.


That was just damage control after the fact.. If you're naive to believe that then go ahead.

The CEO of Oxide even tweeted at Nvidia to "tread lightly".


----------



## NuclearPeace

Quote:


> Originally Posted by *GorillaSceptre*
> 
> ..It's about Nvidia's black-box method which inhibits AMD's/developers' ability to optimize effectively, and as has been pointed out, most GameWorks titles are completely broken. You'd think most would be able to spot the trend by now..


Firstly, anyone (including me) is able to obtain a license to use or modify Gameworks from NVIDIA, including AMD. The whole "black box" shtick isn't just scary-sounding words to get people irrationally scared of something (i.e. "death panels" and "chemtrails"); it's also false.

Secondly, how is it NVIDIA's fault that a game is released in a broken or poor state? Gameworks is a library of graphical effects, and it's not NVIDIA who created the game.


----------



## BradleyW

Quote:


> Originally Posted by *mtcn77*
> 
> Please don't resort to canning the discussion with rhetoric. *Nvidia does not win in the benchmarks. They actually lose the moment Directx 12 is mentioned*. Futuremark's API test is the case in point, here. You just cannot draw sufficiently on Nvidia hardware, there is 2:1 scaling between the Titan X and 295X2(13M>24M).


Yeah, my comment was in regards to DX11 and how it limits GCN. That's a fact.
GCN is technically better. Nvidia's Arch is suited for DX11. DX11 chokes GCN. Nvidia does better in DX11 in general.


----------



## PostalTwinkie

Quote:


> Originally Posted by *GorillaSceptre*
> 
> That was just damage control after the fact.. If you're naive to believe that then go ahead.
> 
> The CEO of Oxide even tweeted at Nvidia to "tread lightly".


Get real! What is the point of even having a conversation if you are going to be dismissive of Oxide themselves just to hold your point up?

Oxide screwed up, spoke big publicly, and then actually ended up with a foot in their mouth. Nvidia said it wasn't working, Oxide disagreed, and later found out that Nvidia was right. So at that point Oxide made the apology, let us all know, and began working to fix it.

To be dismissive of that completely ruins the point of having a conversation.


----------



## clao

Great! Now I am just waiting for Win 7 to support DX12.


----------



## DNMock

Quote:


> Originally Posted by *BradleyW*
> 
> So many posts, so little progress. Here's my understanding. You know, we can go back and forth between the positive and negative attributes of RTG and Nvidia. One thing's certain: Nvidia's current line-up of GPUs is well suited to limited APIs. Their drivers are highly tuned in relation to this. As for RTG, the GCN architecture has a very deep and complex pipeline with a ton of available processing cores/shaders. This is well suited to a low-level API, which will allow the whole GPU to be fed with data properly and optimally. DX11 is very limiting to GCN and stops much of the GPU from ever being utilized. That's why Nvidia wins the benchmarks despite GCN being the far superior architecture in terms of available power.


Hasn't this been common for quite some time?

Nvidia's newest GPUs run great on the newest titles, but after a year or two the same GPU performs like trash. AMD GPUs are slower than their Nvidia counterparts out of the gate, but after a year or two they outperform the Nvidia version in the newest titles.

If I recall, the 780 Ti smashed the 290X when they were both brand new, but the 290X is now on par with it and even beginning to surpass it in the newest titles.

Really, Nvidia should be the choice for people who upgrade every time a new GPU comes out, and AMD should be the choice for those who hold onto their GPU for a few years before upgrading.


----------



## Themisseble

Well, you mean the R9 280X is as fast as a GTX 780 Ti in newer titles?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *DNMock*
> 
> So it's normal for a single lighting effect (godrays) or the hair of a single character (HairWorks) to eat up over 20% of the entire GPU processing budget? If The Witcher 3 had been made with the same level of optimization as the GameWorks stuff, you might be able to get Geralt to run around in a blank white environment at 4K 30 FPS if you have 4-way Titan X SLI running.
> 
> Heck Square-Enix was doing hairworks level of detail on a PS3 in their games, so don't tell me it's just super detailed and that amazing. It's just optimized like trash, pure and simple.
> 
> Just curious, did you buy Arkham Knight on PC and just run at low settings because, hey in 3 years you might be able to turn it up to high, and praise the quality of work done?


I feel like it's not even worth replying to this terrible quality post.



http://www.overclock3d.net/reviews/gpu_displays/batman_arkham_knight_amd_vs_nvidia_performance_review/7


----------



## Themisseble

So every time it will be AMD vs. NVIDIA? Why? Can't you all just make predictions and then wait until actual benchmarks arrive?

But anyway, since this is a thread about HITMAN, async shaders and DX12:

Hitman was always very CPU intensive, so thank god they will use DX12.
As for async shaders, we don't know much about how much performance they will add, so I am very glad this game will actually use them.
I have played HITMAN, but I always got bored in the middle of the game... I hope this one will have a movie-like story.


----------



## GorillaSceptre

Quote:


> Originally Posted by *NuclearPeace*
> 
> Firstly, anyone (including me) is able to obtain a license to use or modify Gameworks from NVIDIA, including AMD. The whole "black box" shtick is just not only scary sounding words to get people to be irrationally scared of something (i.e "death panels" and "chemtrails"), its false.
> 
> Secondly, how is it NVIDIA's fault that a game is released in a broken or poor state? Gameworks is a library of graphical effects and its not NVIDIA who created the game.


I'm sick and tired of the GW arguments, do some research of your own. Or better yet, if AMD's GE garbage starts crippling Nvidia in the future, don't moan about it.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> 
> Get real! What is the point of even having a conversation if you are going to be just dismissive, of Oxide themselves, just to hold your point up?
> 
> Oxide screwed up, spoke big publicly, and then actually ended up with a foot in their mouth. Nvidia said it wasn't working, Oxide disagreed, and later found out that Nvidia was right. So at that point Oxide made the apology, let us all know, and began working to fix it.
> 
> To be dismissive of that completely ruins the point of having a conversation.


A developer says they were pressured by Nvidia into disabling features of their game; Nvidia and the boss of that developer then end up in a back-and-forth on Twitter. The CEO of Oxide then tells Nvidia to "tread lightly". Those are the facts. Then, later down the road, a _spokesperson_ for Oxide says they love Nvidia and everyone else and all is fine. Right on.


As you and the other team green brigade keep reminding everyone, Nvidia has 80% marketshare. Oxide is a small company, you do the math.

If Oxide was wrong, why did Nvidia go work on a driver if it was all Oxide's problem? Where is this "apology" you speak of?


----------



## airfathaaaaa

The funny part is that no one is saying the real thing.

The whole "DX12 works on AMD and doesn't work on Nvidia" story goes back as far as when MS was talking about the Xbox One. DX12 is the evolution of the Xbox One's API, an API built around AMD GCN cards. Whoever thinks otherwise needs to wake up. And it's not like Nvidia didn't have the chance to grab the console market, but their PR department said the console margins are terrible: http://www.extremetech.com/gaming/150892-nvidia-gave-amd-ps4-because-console-margins-are-terrible

Of course, when you read this in 2016 you call BS, and it makes sense; the only thing at that company that works as it's supposed to is the PR team.
On top of that, we know for a fact that Nvidia was talking with MS about DX12 while it was in development, as far back as late 2013. To claim they didn't know about async is just a lie, and a stupid one.

their own words on their own site

"Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC"
http://blogs.nvidia.com/blog/2014/03/20/directx-12/

You see, the internet never forgets, and you can never hide from the truth; eventually karma will hit you.


----------



## GoLDii3

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I feel like it's not even worth replying to this terrible quality post.
> 
> 
> 
> http://www.overclock3d.net/reviews/gpu_displays/batman_arkham_knight_amd_vs_nvidia_performance_review/7


----------



## PontiacGTX

Quote:


> Originally Posted by *BradleyW*
> 
> Yeah, my comment was in regards to DX11 and how it limits GCN. That's a fact.
> GCN is technically better. Nvidia's Arch is suited for DX11. DX11 chokes GCN. Nvidia does better in DX11 in general.


That wouldn't be a problem if AMD wanted to fix it, but if DX12 will improve the game development industry, then it's fine as it is.


----------



## criminal

Quote:


> Originally Posted by *DNMock*
> 
> This is true, GameWorks runs like trash, period. It doesn't matter if you are using an Intel iGPU, an AMD card, or an Nvidia card; you turn on a GameWorks feature and you are gonna get hammered with a 25% reduction in FPS. I run dual T-X GPUs and still turn off most Trashworks features because they destroy my FPS.


Amen.

Quote:


> Originally Posted by *GoLDii3*


I can speak for Fallout 4. Performance is crap with Godrays on. I shouldn't have any issues running that game on my 980, but if I enable Godrays I do. Pisses me off.


----------



## MoorishBrutha

Quote:


> Originally Posted by *ZealotKi11er*
> 
> The difference here, even if A-Sync were AMD-only, is that A-Sync increases performance, hence allowing better effects, while GameWorks allows better effects at the sacrifice of performance. The reason we don't see much DX12 is sadly Nvidia. They are just not ready for it, so they will delay it as much as possible. If Nvidia had GCN as their architecture right now with the same marketshare, you would see DX12 in every game. They have sure shown us how dedicated they are with GameWorks.


I said all of this last year when Ark: Survival Evolved's DX12 patch got delayed. On Nvidia, DX11 did better than DX12.


----------



## BradleyW

Quote:


> Originally Posted by *PontiacGTX*
> 
> that wouldnt be a problem if AMD wanted to fix it, but if DX12 will improve the game developing industry , then is fine as it is


Do you know what RTG would have to do in order to fix the DX11 overhead they suffer? They'd have to develop a heavily modified version of GCN or scrap it altogether. The very structure of their drivers would then have to be rebuilt from the ground up, starting anew. It would take a few years, and they don't have the time or money to do it. And why would they, if low-level APIs may become the standard?


----------



## PontiacGTX

Quote:


> Originally Posted by *BradleyW*
> 
> Do you know what RTG would have to do in order to fix the DX11 overhead they suffer? They'd have to develop a heavily modified version of GCN or scrap it altogether. The very structure of their drivers would then have to be rebuilt from the ground up, starting anew. It would take a few years, and they don't have the time or money to do it. And why would they, if low-level APIs may become the standard?


But they can improve DirectX 11 MT/ST performance; they have been doing so since Win 10 was released.


----------



## BradleyW

Quote:


> Originally Posted by *PontiacGTX*
> 
> But they can improve DirectX 11 MT/ST performance; they have been doing so since Win 10 was released.


WDDM v2.0 was the reason for their slight boost. RTG also made driver improvements. Overall, this gave RTG an increase of at most 2 FPS on average in some CPU-bound titles. This is the best they can do currently.


----------



## Dargonplay

The fact of the matter is, we'll only have a few games using async compute, coming from companies with good relations with AMD. That in itself is sad, because async compute is the future, a better alternative to everything we have; everyone should be using it now, since it's the most efficient shading technique. But sadly, because Nvidia has such a huge marketshare, they have the influence and power to delay progress in this industry as much as they want, until they take their sweet time integrating async compute into Volta.

In the meantime, let more games be praised for x64 tessellation on every character's hair that looks exactly the same as x8 but is implemented like that for reasons you can only imagine, with games like Fallout 4 that look like crap and run like crap.
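For rough scale on the x64-vs-x8 hair tessellation point: triangle count grows roughly with the square of the tessellation factor, so a quick back-of-the-envelope sketch (the patch count here is invented, and real Direct3D tessellation output also depends on the partitioning mode and per-edge factors) looks like this:

```python
def tessellated_tris(base_tris: int, factor: int) -> int:
    """Rough triangle count: each patch subdivides into ~factor**2 triangles."""
    return base_tris * factor ** 2

# A hair mesh of 500 base patches (made-up number, purely for illustration):
low = tessellated_tris(500, 8)    # 32,000 triangles
high = tessellated_tris(500, 64)  # 2,048,000 triangles
print(high // low)                # 64x the geometry work
```

Sixty-four times the geometry for a visual result the eye can't tell apart is the whole complaint in one number.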

This makes me sad.


----------



## Brutuz

Quote:


> Originally Posted by *looniam*
> 
> AMD: "There's no such thing as full support for DX12 today", *Fury X missing DX12 features as Well*
> 
> the more you know™


Except the missing features aren't as important, and as your link says, they're doable through other methods with high FPS.
Quote:


> Originally Posted by *sixor*
> 
> and fallout 4 still uses 2gb of ram and looks like a ps2 game


I want to know what PS2 games you played that had FO4's amount of detail. Don't get me wrong, plenty still look great, but the PS2 is a low-resolution mess most of the time these days (unless you're upping the internal resolution with an emulator).
Sure, it's not amazing looking, but it's still pretty good given the amount typically going on in a Beth game. Plus, it's their first foray into the new consoles and a massive engine update; TESVI should be much more thorough (hopefully) because of all that work on the foundation of both series.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> 
> That is the double standard I am talking about.
> 
> Had this been a GameWorks thread/feature discussion, it would have been flooded with _"GameWorks! Get that crap out of here, ruining and tubing games!"_. Yet, here we are, discussing an ENTIRE article about AMD's Gaming Evolved, and people (you) are trying to say it is just about ASC.
> 
> No, this is an announcement by AMD and IO of their partnership via the Gaming Evolved program, literally. Let me show you....
> This is a Gaming Evolved announcement, and people are going spastic over it, yet with Nvidia they would be burning torches. Further, and I want people to actually think about this...
> 
> ASC v.s. Phsyx: Both are features that one can use, but not the other, so why is one OK?


As far as I can tell, async is part of DX12 and they're optimizing for use of that. So at least nVidia will have better support for it in the future when they get the full DX12 spec, whereas GPU PhysX will always screw over anyone wanting a Radeon.
And nVidia does support async; it just doesn't have dedicated hardware in the GPU for it, and performance suffers greatly as a result. Even TWIMTBP was alright compared to Gameworks, when they just helped developers leverage DX better, even if it focused on areas nVidia was strong in. AMD/ATi could still run the games and get all the features.
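To picture what dedicated async hardware buys, here's a toy frame-timing model (all numbers invented for illustration; real D3D12 queue scheduling is far more involved). Graphics passes rarely saturate the shader cores, and async compute lets compute work soak up that idle capacity instead of running serially after the graphics work:

```python
# Fraction of shader capacity each 1 ms graphics slot actually uses
graphics_load = [0.6, 0.3, 0.8, 0.5]
compute_work = 1.2  # ms of pure compute work (e.g. lighting, post-processing)

# Without async compute: the compute work waits for all graphics to finish
serial_ms = len(graphics_load) + compute_work  # 4 + 1.2 = 5.2 ms

# With async compute: compute fills the idle capacity left in each slot
remaining = compute_work
for used in graphics_load:
    remaining -= min(remaining, 1.0 - used)
async_ms = len(graphics_load) + max(remaining, 0.0)  # 4.0 ms: compute hid inside the frame
```

In this toy frame the idle slack (0.4 + 0.7 + 0.2 + 0.5 = 1.8 ms) exceeds the compute work, so the whole 1.2 ms disappears into the graphics frame; hardware without that scheduling ability pays the serial cost instead.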
Quote:


> Originally Posted by *PostalTwinkie*
> 
> No, they offered it to AMD for literally pennies per GPU sold, to help support the development of it and push it forward. Why would Nvidia be the only one to financially commit to pushing it forward, if AMD is using it as well?


It'd be idiotic for AMD to do. PhysX would become the standard, and nVidia could update it in ways that run horribly on AMD architectures with no real way for AMD to fix the problem. Plus, if it were the standard and all of a sudden nV had to raise the price for whatever reason, AMD would have to either lose support for a tonne of games or just pay it. If they asked AMD/Intel to come on board as co-developers, even with a license fee, it'd be much better, though still not ideal.


----------



## MoorishBrutha

Quote:


> Originally Posted by *Dargonplay*
> 
> The fact of the matter is, we'll have only a few games using Async Computing, coming from companies with good relations with AMD, that on itself is sad because Async Computing is the future, a better alternative to everything we have, everyone should be using it now because it's the superior alternative, but sadly because Nvidia have such a huge marketshare they'll have the influence and power to delay the progress in this industry as much as they want until they take their sweet years time to integrate Async Computing into Volta.
> 
> In the meantime, let more games be praised for X64 tessellation on every character's Hair that looks exactly the same as x8 but it is implemented like that for reasons you can only imagine, with games like Fallout 4 that looks like crap and run like crap.
> 
> This makes me sad.


Ditto. It doesn't matter how much market share Nvidia has; as we just saw with Ubisoft's The Division, the consoles get the biggest priority from developers, and they can run heavy async compute workloads, so async compute will get implemented regardless of Nvidia's blessing.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *GoLDii3*


Yep. Higher settings = lower fps. Just like it's always been.


----------



## ZealotKi11er

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yep. Higher settings = lower fps. Just like it's always been.


Yes, if the IQ were higher, but that was not the case with God Rays.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> The fact of the matter is, we'll have only a few games using Async Computing, coming from companies with good relations with AMD, that on itself is sad because Async Computing is the future, a better alternative to everything we have, everyone should be using it now because it's the most efficient shading technique, but sadly because Nvidia have such a huge marketshare they'll have the influence and power to delay the progress in this industry as much as they want until they take their sweet years time to integrate Async Computing into Volta.
> 
> In the meantime, let more games be praised for X64 tessellation on every character's Hair that looks exactly the same as x8 but it is implemented like that for reasons you can only imagine, with games like Fallout 4 that looks like crap and run like crap.
> 
> This makes me sad.


Don't be sad, dude. DX12 only came out ~6 months ago with W10, and if AMD hadn't had Mantle, MS would still be sitting on the proverbial pot with it. Buy the game and devs won't care who has market share.

Rome wasn't built in a day.


----------



## GoLDii3

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yep. Higher settings = lower fps. Just like it's always been.


Higher settings should mean higher image quality to justify the increased workload.

Please look for yourselves at the comparison between Godrays Ultra and Godrays Low:

http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-002-ultra-vs-low.html
http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-004-ultra-vs-low.html
http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-001-ultra-vs-low.html
http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-006-ultra-vs-low.html


----------



## Dargonplay

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yep. Higher settings = lower fps. Just like it's always been.


I thought that higher settings = higher image quality. At least that's how it's always been, until Gameworks.


----------



## BradleyW

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yep. Higher settings = lower fps. Just like it's always been.


The difference in IQ between low and ultra god rays was around 1% at the cost of 20+ frames.


----------



## looniam

E:

nvrmnd, my bad.


----------



## Dargonplay

Quote:


> Originally Posted by *BradleyW*
> 
> The difference in IQ between low and ultra god rays was around 1% at the cost of 20+ frames.


The 20-25 frame penalty is for Nvidia cards; AMD gets around a 35 frame penalty. And a 1% image quality difference is being optimistic.


----------



## sugarhell

Quote:


> Originally Posted by *GoLDii3*
> 
> Higher settings mean the image quality has to be higher due to the increased workload.
> 
> Please look for yourself the comparison between Godrays Ultra and Godrays Low.
> 
> http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-002-ultra-vs-low.html
> http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-004-ultra-vs-low.html
> http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-001-ultra-vs-low.html
> http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-006-ultra-vs-low.html


It isn't worth the FPS impact for me. But I have bad eyes, so I can't really judge.


----------



## BradleyW

Quote:


> Originally Posted by *Dargonplay*
> 
> the 20-25 frames penalty is for Nvidia cards, AMD gets around 35 frames penalty, and 1% Image Quality Difference is being optimistic.


True dat!


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Dargonplay*
> 
> I thought that Higher Settings = Higher Image quality, at least that's how its always been, until Gameworks.


Yes, at the cost of performance.

Pretty easy to just turn off settings that cause huge impacts, or to lower them if there's no discernible difference in image quality. I really can't believe all the people whining in here instead of using common sense. If Gameworks settings can be turned off, there's no reason to complain.


----------



## BradleyW

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yes, at the cost of performance.
> 
> Pretty easy to just turn off settings that cause huge impacts or to lower them if there's no discernable difference in image quality. Really can't believe all the people whining in here instead of using common sense. If Gameworks settings can be turned off, there's no reason to complain.


But you miss out on good-looking features, which could be implemented far more optimally than GameWorks does.


----------



## Dargonplay

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Pretty easy to just turn off settings that cause huge impacts or to lower them if there's no discernable difference in image quality.


Then why add them in the first place, if the setting doesn't increase image quality and only destroys performance? Probably just so benchmarking sites maxing out games in their benchmarks make competing graphics offerings look bad. I just don't see any other valid reason, given that those effects are utterly useless but can cripple up to 50% of AMD's graphics performance.


----------



## NuclearPeace

Quote:


> Originally Posted by *Dargonplay*
> 
> Then why adding them in the first place? Probably just for benchmarking sites maxing out games in their benchmarks to make competing graphics offerings look bad, I just don't see any other valid reason given the fact those effects are utterly useless but can cripple up to 50% of AMD's graphic performance.


Because some people have really strong rigs that could support the graphic effects and probably would like to get as much as they can out of their cards in terms of eye candy.
Quote:


> Originally Posted by *BradleyW*
> 
> But you miss out on good looking features, which can be implimented far more optimally compared to GameWorks.


True, but it's easier/cheaper just to use the Gameworks library than to spend even more man hours (and thus money) developing your own effects that could just end up buggy and horribly optimized.

Gameworks is the easy way out of making a game look better.


----------



## BradleyW

Quote:


> Originally Posted by *NuclearPeace*
> 
> Because some people have really strong rigs that could support the graphic effects and probably would like to get as much as they can out of their cards in terms of eye candy.


Fallout 4 God rays anyone?


----------



## criminal

Quote:


> Originally Posted by *NuclearPeace*
> 
> Because some people have really strong rigs that could support the graphic effects and probably would like to get as much as they can out of their cards in terms of eye candy.


So Titan X SLI is not a strong rig? I mean, you quoted someone with that configuration who complained about Gameworks, and you tried to use the same logic (or lack thereof, I should say). Where does it stop?

If a Gameworks feature does nothing but cause performance drops without increasing the experience, it is a waste of time and should not even be implemented.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *BradleyW*
> 
> But you miss out on good looking features, which can be implimented far more optimally compared to GameWorks.


That's on the basis that the developers would have invested the resources in the first place to implement said settings.


----------



## BradleyW

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> That's on the basis that the developers would have invested the resources in the first place to implement said settings.


Well of course. How else would we have "optimal" graphics effects in games?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *criminal*
> 
> So Titan X SLI is not a strong rig? I mean you quoted someone with that configuration that complained about Gameworks and you tried to use the same logic (or lack there of I should say). Where does it stop?
> 
> If a Gameworks feature does nothing but cause performance drops without increasing the experience, it is a waste of time and should not even be implemented.


Agree. Those resources could have gone into 1 setting that works instead of multiple broken ones. But when you have a lot of Gameworks features that aren't terrible, it's pretty damn nice to have. That's how I feel about the new Tomb Raider game.


----------



## NuclearPeace

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> That's on the basis that the developers would have invested the resources in the first place to implement said settings.


Agreed.

In this situation, I think it was a bit daft of Bethesda to include more levels of Godray "quality" that don't do anything but hurt FPS, but hey, it's their money and time they spent adding those Gameworks effects. I also don't see how in BF4 someone would feel the need for even more anti-aliasing after enabling 4x MSAA and FXAA, but hey, it's DICE's call if they want to add the resolution scale.


----------



## Dargonplay

Quote:


> Originally Posted by *NuclearPeace*
> 
> Because some people have really strong rigs that could support the graphic effects and probably would like to get as much as they can out of their cards in terms of eye candy.


What eye candy? Have you even looked at Godrays in Fallout 4? There's absolutely nothing there; it doesn't look better, it doesn't even look different, but it eats 20-25 FPS on Nvidia cards and up to 40 FPS on AMD cards.

I have a Fury X and I surely don't lack the horsepower to drive any worthy graphical setting, and I'm still complaining. In fact, everyone complaining about this has a high-end computer; we don't like to waste our resources mindlessly. We need to get something in return for those 40 FPS, for Christ's sake.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *BradleyW*
> 
> Well of course. How else would we have "optimal" graphics effects in games?


What I'm trying to convey is that it most likely costs less money and time to implement an array of GW features than to implement one extra feature themselves.


----------



## NuclearPeace

Quote:


> Originally Posted by *Dargonplay*
> 
> What Eye candy? have you even looked at Godrays in Fallout 4? There's absolutely nothing there, it doesn't look better, it doesn't even look different but that eats 20-25 FPS in Nvidia cards and up to 40 FPS in AMD cards.
> 
> I have a Fury X card and I surely don't lack in horse power to drive any worthy graphical setting and I'm still complaining, in afct everyone who's complaining about this have High End computers, we don't like to waste our resources mindlessly, we need to get something in return for those 40 FPS for Christ Sake.


Does the mere presence of Godrays Ultra offend you or something? If you feel something is useless, don't use it... I don't get the whole crusade about something that is optional.


----------



## Dargonplay

Quote:


> Originally Posted by *NuclearPeace*
> 
> Does the mere presence of Godrays Ultra offend you or something? If you feel something is useless, don't use it... I don't get the whole crusade about something that is optional.


It doesn't offend me. I'm pointing out that this setting does nothing for us gamers and functions exclusively as leverage for benchmarks in Nvidia's favor, just like Gameworks' x64 tessellated Geralt hair.
Quote:


> Originally Posted by *NuclearPeace*
> 
> I also dont see how in BF4 that someone will feel the need for even more anti-aliasing after enabling 4x MSAA and FXAA, but hey its DICE's call if they want to add the resolution scale.


BF4's implementation of those many AA settings is understandable. Some people prefer FXAA because of its efficiency, others MSAA because of its image quality, others SSAA because it's the best AA ever conceived and comes at a tremendous FPS impact. Having those three as separate settings lets you fine-tune the resulting AA, maybe 4x MSAA plus FXAA Low (for a little blur on textures; FXAA does a better job at catching those pesky jaggies but blurs the image, while MSAA is more selective but sharper).
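The relative cost of those modes can be sketched with some back-of-the-envelope pixel math (a simplification: MSAA is counted as shading once per pixel, since only coverage/depth samples are multiplied, while SSAA and resolution scale multiply real shading work):

```python
def shaded_pixels(width, height, res_scale=1.0, ssaa=1):
    # Resolution scale renders at a higher internal resolution and downsamples;
    # SSAA shades every sub-sample. Both multiply actual shading work,
    # unlike MSAA, which mostly multiplies coverage/depth samples.
    return int(width * res_scale) * int(height * res_scale) * ssaa

base = shaded_pixels(1920, 1080)                   # 2,073,600 shaded pixels
scaled = shaded_pixels(1920, 1080, res_scale=1.5)  # 4,665,600 (2.25x the work)
ssaa4 = shaded_pixels(1920, 1080, ssaa=4)          # 8,294,400 (4x the work)
```

That's why stacking 4x MSAA, FXAA, and then resolution scale on top is such a steep curve: the scale slider alone more than doubles the shading load at 150%.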


----------



## NuclearPeace

That's actually really reasonable. Hopefully testers are smart enough to realize that Godrays Ultra is useless and instead opt for the far more reasonable Godrays Low, or perhaps even Off, if they want to make the benchmarks more fair.


----------



## BradleyW

RTG has locked Fallout 4 to x8 tessellation at the driver level to combat the God Ray fiasco.


----------



## EightDee8D

lol, compare tessellation to ACS if you don't want double standards.

And AMD is right here, because at the moment their hardware is the only one that fully supports ACS. That doesn't mean Nvidia is locked out or something, just that they are behind on ACS the same way AMD is behind on tessellation.

"double standards"


----------



## Dargonplay

Quote:


> Originally Posted by *NuclearPeace*
> 
> That's actually really reasonable. Hopefully testers were smart enough to realize that Godrays Ultra is useless and should instead opt for the way more reasonable Godrays Low or perhaps even Off if you want to make the benchmarks more fair.


A thick envelope oftentimes distorts the meaning of "fair".

Benchmark sites, Gameworks or not, always run games maxed out; they don't usually use selective settings.


----------



## PostalTwinkie

Quote:


> Originally Posted by *GorillaSceptre*
> 
> I'm sick and tired of the GW arguments, do some research of your own. Or better yet, if AMD's GE garbage starts crippling Nvidia in the future, don't moan about it.
> A developer says they were pressured into disabling features of their game by Nvidia, Nvidia and the boss of that developer then end up with a back and forth on Twitter. The CEO of Oxide then tells Nvidia to "tread lightly". Those are the facts, then later down the road a _spokesperson_ for Oxide says they love Nvidia and everyone else and all is fine. Right on.
> 
> 
> As you and the other team green brigade keep reminding everyone, Nvidia has 80% marketshare. Oxide is a small company, you do the math.
> 
> If Oxide was wrong then why did Nvidia go work on a driver, if it was all Oxides problem? Where is this "apology" you speak of?


What does the price of tea in China have to do with it? Who cares who said what from each involved party? The facts are what they are! It wasn't working, and it had to be fixed. You can take personal shots at Kollock all you want about what his position at Oxide might be. The fact is, the statements were made, and those statements are still live and active.

If Kollock's management (assuming the handle isn't the CEO's already) had an issue with the statements, they would be down, as they made the rounds to every publication that covers the subject.

They thought it was working, it wasn't, and they had to apologize and backtrack. Things like that happen; no need to get offended by it and push an agenda that isn't there.

Quote:


> Originally Posted by *BradleyW*
> 
> WDDM v2.0 was the reason for their slight boost. RTG also made driver improvements. Overall, this gave RTG an average of a 2 FPS increase at most in some CPU bound titles. This is the best they can do currently.


We are probably approaching the limits of what both sides can do with current APIs as a whole, as we have seen some stagnation in performance and fidelity increases.

EDIT:

Hell, I think we hit that limit a while ago, and have just been getting by via creative tricks they have come up with.


----------



## mtcn77

Do you stop and ask yourselves just why there is "1 CPU" able to run the game above 60 FPS, when the former episode of the same game could run at 60 FPS on more than 50% of CPUs tested? It's not like AMD is getting the shaft alone; the test was done on a 980 Ti, people! Not even a $1000 4960X, only the i7 6700K, is capable of running at 60 FPS. Stop and ask yourselves for a minute what is going on, and why CPUs are getting hampered by a GPU sponsor.


----------



## BradleyW

Quote:


> Originally Posted by *mtcn77*
> 
> Do you stop and ask yourselves just why is there "1 cpu" able to run the game above 60 fps when the former episode of the same game could run at 60p for more than 50% OF CPUS TESTED? Not like it is AMD getting the shaft alone, the test is done on 980 Ti people! Not even a 1000$ 4960X, ONLY i7 6700K is capable of running at 60p. Stop and ask yourselves what is going on for a minute why cpus are getting hampered by a gpu sponsor.


To sell more high-end Intel CPUs like the 6700K?


----------



## Dargonplay

Quote:


> Originally Posted by *mtcn77*
> 
> Do you stop and ask yourselves just why is there "1 cpu" able to run the game above 60 fps when the former episode of the same game could run at 60p for more than 50% OF CPUS TESTED? Not like it is AMD getting the shaft alone, the test is done on 980 Ti people! Not even a 1000$ 4960X, ONLY i7 6700K is capable of running at 60p. Stop and ask yourselves what is going on for a minute why cpus are getting hampered by a gpu sponsor.


More expensive enthusiast CPUs have lower core frequencies, and therefore lower single-thread performance, which leads to lower FPS in games. But I do understand what you're trying to say, and I agree: GameWorks is garbage and it shouldn't exist.

Someone posted a list of games in a failed attempt to prove that GameWorks works wonders, challenging me to pick one game from his list where GameWorks wasn't ruining things. I'd say a more precise test would be to ask for three GameWorks games that weren't utterly broken; I think everyone would have a hard time finding those three.

GPU vendors have no business developing game engines or libraries for game developers; those should come from neutral third parties that gain nothing by hampering one side or the other.


----------



## semitope

To those talking about Gaming Evolved and GameWorks: I'm actually happy when a game is a Gaming Evolved title, because it means it's very likely well optimized. It'll probably even include a benchmark.

GE is not at all the same as what nvidia does. You should listen to what AMD has said about GE and look at how GE titles perform.

It's good news for Hitman. If Tomb Raider had been GE as well, that would have been nice.


----------



## keikei

Quote:


> Originally Posted by *airfathaaaaa*
> 
> the funny part is that no one is talking about the real issue
> 
> the whole "dx12 works on AMD and doesn't work on Nvidia" story goes back as far as when MS was talking about the Xbox One...
> dx12 is the evolution of the API the Xbox One has, an API built around AMD GCN cards... whoever thinks otherwise needs to wake up. and it's not like nvidia didn't have the chance to grab the console market; their PR department said the console margins are terrible http://www.extremetech.com/gaming/150892-nvidia-gave-amd-ps4-because-console-margins-are-terrible
> 
> now of course, reading this in 2016, you obviously call BS, and it makes sense; the only thing that works as it's supposed to at that company is the PR team
> after that, we know for a fact that nvidia was talking with MS about dx12 while it was in development as far back as late 2013... to say they didn't know about async is just a lie, and a stupid one
> 
> their own words on their own site:
> 
> "_Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC_"
> http://blogs.nvidia.com/blog/2014/03/20/directx-12/
> 
> you see, the internet never forgets, and you can never hide from the truth; eventually karma will hit you


Well, apparently it was just that...talk. Good find.


----------



## NightAntilli

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Get real! What is the point of even having a conversation if you are going to be just dismissive, of Oxide themselves, just to hold your point up?
> 
> Oxide screwed up, spoke big publicly, and then actually ended up with a foot in their mouth. Nvidia said it wasn't working, Oxide disagreed, and later found out that Nvidia was right. So at that point Oxide made the apology, let us all know, and began working to fix it.
> 
> To be dismissive of that completely ruins the point of having a conversation.


It was just PR. Everyone who has tested the architectures has confirmed that GCN can do async compute with major performance gains. On paper nVidia's can too, but in practice the benefits of asynchronicity are nullified by the high latency of the required context switch. Implementing it through drivers for nVidia will change how work is scheduled, but it won't make anything faster.
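The context-switch argument above can be illustrated with a toy timing model. The millisecond figures below are invented purely for illustration; real costs depend entirely on the workload and the hardware:

```python
# Toy model of one frame's worth of graphics + compute work.
# All millisecond figures are made up for illustration only.
graphics_ms = 10.0  # time to render the graphics workload
compute_ms = 4.0    # time to run the compute workload
switch_ms = 1.5     # hypothetical cost of each context switch

# True async compute (GCN-style ACEs): compute runs concurrently with
# graphics, so the frame takes roughly as long as the longer of the two.
async_total = max(graphics_ms, compute_ms)

# Serialized execution with context switches (the claimed Maxwell case):
# render, switch to compute, run compute, switch back to graphics.
serial_total = graphics_ms + switch_ms + compute_ms + switch_ms

print(f"async:  {async_total:.1f} ms")   # async:  10.0 ms
print(f"serial: {serial_total:.1f} ms")  # serial: 17.0 ms
```

Under this toy model the switch cost more than eats the overlap the compute work could have hidden, which is the gist of the claim; whether the real penalty is anywhere near that large is exactly what the benchmarks have been arguing about.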


----------



## Lex Luger

The AMD conspiracy theorists really need to cool it when it comes to gameworks. Would you rather nvidia go back to physx only where the option to enable them is grayed out?

Do you have any ACTUAL proof that gameworks titles intentionally cripple AMD hardware even when all gameworks effects are disabled?

If async compute and DX12 give AMD a big advantage over Kepler and Maxwell, I'd be happy for AMD, and I wouldn't shout CONSPIRACY from the rooftops on every thread relating to games and AMD GPUs for the next 5 years like some people on these forums.

YOU HAVE NO PROOF! NONE! ZIP! Deal with it.


----------



## magnek

Quote:


> Originally Posted by *EightDee8D*
> 
> lol, compare tessellation to ACS if you don't want double standards. and amd is right here, because ATM their hardware is the only one that fully supports ACS. that doesn't mean nvidia is locked out or something, just that they are behind on ACS the same way AMD is behind on tessellation.
> 
> "double standards"


You're late to the party


I already pointed this out a while back, but yes, thank you for also seeing the absolute absurdity of comparing ASC with GameWorks.
Quote:


> Originally Posted by *magnek*
> 
> If we really wanted to make an accurate analogy, it would be akin to (over)tessellating games because the geometry engines in AMD hardware have pitiful tessellation performance. I really don't see how ASC is comparable to Gameworks at all.
> 
> As for Gaming Evolved source code not being open, well nVidia worked with CD to change the final game code in Tomb Raider to fix TressFX issues on nVidia GPUs, so I'm not entirely sure about that.


----------



## Unkzilla

A lot of wasted energy here

If we are to believe the performance increases the new GPUs (from both sides) are going to offer, who really cares how current cards run in the future? I'm going to dump my GPU on eBay quick smart and be ready to upgrade, to either side.

By the time these titles release in any meaningful quantity, it's probably going to be a debate of 25 vs 30 fps on current-gen hardware.


----------



## Dargonplay

Quote:


> Originally Posted by *Lex Luger*
> 
> The AMD conspiracy theorists really need to cool it when it comes to gameworks. Would you rather nvidia go back to physx only where the option to enable them is grayed out?
> 
> Do you have any ACTUAL proof that gameworks titles intentionally cripple AMD hardware even when all gameworks effects are disabled?
> 
> If async compute and dx12 give AMD a big advantage over kepler and maxwell, I would be happy for AMD and wouldn't shout from the rooftops, CONSPIRACY, CONSPIRACY on every thread relating to games and AMD gpu's for next 5 years like some of the people on these forums.
> 
> YOU HAVE NO PROOF! NONE! ZIP! Deal with it.


Well, here's a fact: AMD has stated countless times that they can't optimize anything for GameWorks titles during development. Nvidia is the only one who can, and Nvidia doesn't let developers add AMD's own code to optimize the game for Radeon cards. AMD has to make those optimizations in their drivers instead of having them shipped with the game, which means the game has to launch, and only after it's available to everyone can AMD optimize their drivers for it. In every GameWorks title AMD loses the performance war miserably early on but always catches up later; the thing is, from a marketing standpoint, late game doesn't matter.

When a game comes out, every website benchmarks the soul out of it; only a few minor sites benchmark the same game again months or even years after release. Once the early benchmarks for a GameWorks title are out, AMD's reputation is tainted for the rest of that generation, and people see them as the second-tier offering even when they are in fact the best-performing option later on. And when casual players looking to upgrade do see these late-game benchmarks, they often disregard them, because they aren't coming from big sites like Techspot or AnandTech, which only benchmark games when they're fresh on the market.

After seeing Nvidia's record of business decisions, will you really say the idea of nVidia doing this on purpose, gaining market share by hampering AMD, is far-fetched? The tessellation scandal, HairWorks effects, god rays, PhysX, GameWorks... they all have something in common: they hurt AMD more than they hurt nVidia, and they grant nVidia the performance crown when games launch, which in the end is all that matters to newcomers and upgraders who just do a few benchmark searches when deciding on a new video card.
Quote:


> Originally Posted by *Unkzilla*
> 
> who really cares about how current cards are going to run in the future? I'm going to be dumping my GPU on ebay quick smart and will be ready to upgrade - to either side


This right here is what I'm talking about.


----------



## keikei

Quote:


> Originally Posted by *Unkzilla*
> 
> A lot of wasted energy here
> 
> If we are to believe how much performance increase the new GPUs (from both sides) are going to offer, who really cares about how current cards are going to run in the future? I'm going to be dumping my GPU on ebay quick smart and will be ready to upgrade - to either side
> 
> By the time these titles release in any meaningful quantity, its probably going to be a debate of 25 vs 30fps on the current gen hardware


Many current cards support DX12, and early benchmarks show significant performance improvements. To quote the vice prez, 'this is a big deal'.


----------



## mtcn77

Quote:


> Originally Posted by *keikei*
> 
> Many current cards support DX12, and early benchmarks show significant performance improvements. To quote the vice prez, 'this is a big deal'.


That VP9 stream is so real.


----------



## PostalTwinkie

Quote:


> Originally Posted by *NightAntilli*
> 
> It was just PR because everyone that has tested the architectures has confirmed that GCN can do async compute with major performance gains, while on paper nVidia's also can, but in practice the asynchronicity benefits are nullified due to the high delay caused by the required context switch. Implementing it through drivers for nVidia will not do anything other than change the way things are rendered, but not making it any faster.


We aren't talking about whether Nvidia can do it via hardware; we know they can't.

We are talking about the Nvidia-specific code that Oxide thought was working (it was a false positive), which Nvidia pointed out, with a bit of disagreement from Oxide. However, once they verified the false positive, it was fixed.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Dargonplay*
> 
> Well here's a fact, AMD have countless time stated they can't optimize anything for Gameworks titles during the development process, Nvidia is the only one who can and Nvidia doesn't let developers to add AMD's own code to optimize the game for Radeon Cards, AMD have to make these optimizations with their drivers instead of being released with the game, this means the game have to launch and after it is available to everyone AMD have to optimize their drivers for it. In all Gameworks titles AMD always loses miserably the performance war on early game but always catch up at late game, the thing is, for a marketing standpoint... Late game doesn't matter.
> 
> When a game comes out every Web Site benchmark the soul out of it, there are a few (Minor sites) that benchmarks the same game again months, even years after release, once the early benchmarks for Gameworks titles are out, AMD Reputations gets tainted for the remaining of that generation and people would always see it as the 2nd tier offering, even when they are in fact the best performing offering at the time (Late game), also when casual players looking to upgrade see these "Late game benchmarks" they often disregard them because they aren't coming from big sites like Techspot or Anandtech who only benchmark games when they're freshly out in the market.
> 
> After seeing Nvidia's record of business decision will you really say the above is far fetched? The tessellation Scandal, Hairworks Effects, Godrays, PhysX, Gameworks... They all have something in common, they hurt AMD more than they hurt nVidia, and they grant nVidia the performance crown when games are releasing, which in the end is all that matters for newcomers and people looking to upgrade who just do a few benchmark searches when deciding for a new video card.
> This right here is what I'm talking about.


Well said. Even if AMD can optimize, they have to spend more time and money to optimize for Nvidia IP. I'd also much rather have no GameWorks and have PhysX like before.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *BradleyW*
> 
> The difference in IQ between low and ultra god rays was around 1% at the cost of 20+ frames.


I don't even see a 1% IQ difference in those screenshots above. They are literally 100% the same image between god rays low and ultra! Pretty scandalous right there...


----------



## cainy1991

Quote:


> Originally Posted by *Lex Luger*
> 
> The AMD conspiracy theorists really need to cool it when it comes to gameworks. Would you rather nvidia go back to physx only where the option to enable them is grayed out?


Short answer.... Yes.


----------



## Dargonplay

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I don't even see the 1% of IQ difference in those screenshots above. They are literally 100% the same image between godrays low and ultra! Pretty scandalous right there...


Actually, they might as well be called god rays, because you just have to take their existence on faith... Oh wait.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I don't even see the 1% of IQ difference in those screenshots above. They are literally 100% the same image between godrays low and ultra! Pretty scandalous right there...


This is more common than people want to admit.

Very minor, if any, visual changes between the higher levels of settings, yet massive performance impacts. Call of Duty: Ghosts seemed really bad about this; there was no discernible difference between the higher-tier settings, yet going from "High" to "Ultra" resulted in a near 50% performance hit at times.


----------



## Dargonplay

Quote:


> Originally Posted by *PostalTwinkie*
> 
> This is more common than people want to admit.
> 
> Very minor, if any, visual changes between the higher levels of settings, yet massive performance impacts. Call of Duty: Ghosts seemed really bad about this; there was no discernible difference between the higher-tier settings, yet going from "High" to "Ultra" resulted in a near 50% performance hit at times.


Yet another Gameworks title.


----------



## Fyrwulf

Quote:


> Originally Posted by *PostalTwinkie*
> 
> AMD is pushing it as theirs right now, period. So with that comes the same standard as is applied to Nvidia and their "unique" features. Period.
> 
> I wasn't the one that started pushing ASC as something specific to AMD, they did that on their own in the very announcement of the partnership with IO Interactive, the one posted in Post #1 of this thread.


Uh, no. Sorry, that's not how this works. nVidia chose not to support a feature of an open, independent API in their hardware implementation; that's their fault, and given their relative market positions AMD is well within their rights to force-feed crow pie to nVidia for their hubris. nVidia chose to turn PhysX into a closed system by blocking AMD card owners from using it; that is also nVidia's fault, and not okay given their market position.


----------



## looniam

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Dargonplay*
> 
> Well here's a fact, AMD have countless time stated they can't optimize anything for Gameworks titles during the development process, Nvidia is the only one who can and Nvidia doesn't let developers to add AMD's own code to optimize the game for Radeon Cards, AMD have to make these optimizations with their drivers instead of being released with the game, this means the game have to launch and after it is available to everyone AMD have to optimize their drivers for it. In all Gameworks titles AMD always loses miserably the performance war on early game but always catch up at late game, the thing is, for a marketing standpoint... Late game doesn't matter.
> 
> When a game comes out every Web Site benchmark the soul out of it, there are a few (Minor sites) that benchmarks the same game again months, even years after release, once the early benchmarks for Gameworks titles are out, AMD Reputations gets tainted for the remaining of that generation and people would always see it as the 2nd tier offering, even when they are in fact the best performing offering at the time (Late game), also when casual players looking to upgrade see these "Late game benchmarks" they often disregard them because they aren't coming from big sites like Techspot or Anandtech who only benchmark games when they're freshly out in the market.
> 
> After seeing Nvidia's record of business decision will you really say the above is far fetched? The tessellation Scandal, Hairworks Effects, Godrays, PhysX, Gameworks... They all have something in common, they hurt AMD more than they hurt nVidia, and they grant nVidia the performance crown when games are releasing, which in the end is all that matters for newcomers and people looking to upgrade who just do a few benchmark searches when deciding for a new video card.
> This right here is what I'm talking about.
> 
> Well said. Even if AMD can optimize they have to spend more time and money to optimize Nvidia IP. Also I am much better to not have GameWork and have PhsyX like before.
Click to expand...

no, not really well said; all he's done is parrot huddy's claims. you know, the same guy who said GWs source was unavailable even though NV had a download page. the problem with AMD's performance at a game's release is that they never have drivers ready, *and you know this.* any reputable site will turn off proprietary tech in benchmarks; some, like [HC], turn it on only for a comparison.

pro tip:
if a site isn't benching correctly, then don't go there!

my lord, with all the yacking about hairworks in GW, you would think no one knows how to open the CCP, add the TW3 exe to make a profile, and turn down the tessellation to take less of a performance hit than maxwell!

oh wait, no they wouldn't, if all they do is read huddy's FUD . . .

seriously. here is a game that is going to pummel every freaking green card on the planet, and the red team chooses to whine, piss, moan and complain.


----------



## airfathaaaaa

well, you can see the benchmarks of Fallout when it came out versus on patch 1.3....
despite the fact that nvidia once more gimped all of the 7xx cards on 1.3, AMD gained something like 30% on Fury cards, and they didn't really release any more drivers to optimize this game...

so when something like this happens consistently, you can't call it random or dev error (pCARS / Witcher 3 / Fallout / ARK Survival)


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> no not really well said, all he's done is parrot fuddy's huddy's claims; you know the same guy that said GWs was unavailable even though NV had a download page. the problem with AMD's performance on a games release is they never have drivers ready *and you know this.* any reputable site will turn off proprietary tech with some, like [HC], turns them on only for a comparison.
> 
> pro tip:
> if a site isn't benching correctly then don't go there!
> 
> my lord with all the yacking about hairworks in GW, you would think no one knows how to open the CCP, add TW3 exe to make a profile and turn down the tessellation to have less of a performance hit than maxwell!
> 
> *oh wait, no they wouldn't if all they do is read huddy's FUD . . .*
> 
> seriously. here is a game that is going to pummel every freaking green card on the planet and the red team chooses to whine, piss, moan and complain.


All they do is check early benchmarks of GameWorks titles and head straight into Nvidia's market share; the numbers won't lie.

If benchmarking sites have to be selective about the settings they benchmark at, where do they draw the line? People always prefer to bench at MAX settings, a sort of worst-case scenario; it's just how we are.

But I'm sure GameWorks is all fine and dandy, which is why AMD had to waste so many resources making an open-source answer to it. Also, as much as you want to ad hominem Huddy, he hasn't said a word about how early benchmarks affect sales and market share in nVidia's favor thanks to GameWorks; that argument is mine. And even if he had said it, you should address the argument, not the person.

People aren't crying about HairWorks, just GameWorks in general. If you want something that makes people mad, that's god rays; not even Einstein could come up with a theory explaining their existence. People get mad because Nvidia keeps pushing inferior tech when there are obviously better alternatives, alternatives that not only look better but also perform better without hurting the competition.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> All they do is check early benchmarks on Gameworks titles and go straight to Nvidia's markeshare, the numbers wont lie.
> 
> If benchmarking sites have to be selective on the settings of the games they're benchmarkings where do they draw the line? People always prefer to bench the MAX settings, sort of worst case scenario, is just how we are.
> 
> But I'm sure Gameworks is all fine and dandy, which is why AMD had to waste so many resources making an Open Source answer to it, also as much as you want to Ad Hominem Huddy he haven't said a word about how Early benchmarks affect sales or marketshare favoring nVidia thanks to Gameworks, that's mine, and even if he did said it you should address the argument and not the person.
> 
> People ain't crying for Hairworks, just Gameworks in generals, if you want something that make people mad that's Godrays, not even Einstein would come up with a Theory explaining its existence, people get mad because Nvidia is pushing inferior tech when there's obviously better alternatives, alternative that not only look better but also performs better without hurting the competition.


to be blunt, all you've done is complain and parrot/fabricate rubbish. that's why i didn't care to reply directly to you, but to someone i know can be reasonable.

no reputable site enables proprietary tech in graphics card benchmarks; physX has been turned off in metro 2033 for ages. game reviews will give IQ screenshots at various settings, with benchmarks showing the performance difference. and god forbid that gpu manufacturers have to invest time and money to develop and promote their own technology, whether proprietary or open, to improve the IQ settings and the game experience. tbh, sounds like you'd be happy with a console.

and btw, it's not ad hominem when you show _how they were wrong_.


i'll be very clear here; sorry, but i don't care to discuss your rubbish with you.

but feel free to continue your rantings.


----------



## B NEGATIVE

Quote:


> Originally Posted by *PostalTwinkie*
> 
> "Once again"
> 
> Go brush up on the requirements of the various DX 12 tiers and what are optional features and required.
> Well, we have an entire thread going right now about AMD marketing crap, that is very active at the moment. When isn't it "marketing crap" from anyone?
> 
> That fact doesn't change the response to it, and what information is put out there. Hell, one of the first posts in this thread was an outright lie in regard to Nvidia and DX 12 support.
> 
> *Consider this; we aren't the only ones that read this. OCN is a destination for a lot of people looking for information they don't know. If we can't be open, honest, and balanced in our approach to issues, what is the point*?


Exactly this.

Polarised debate that turns into a poop-flinging fest is not the behavior of enthusiasts. It's the behavior of brand hounds......


----------



## kx11

Does that mean multi-GPU support is broken on AMD again?!


----------



## cowie

i do not know if more than half the things said in this thread even belong here, but anyway... if you don't know already, the only reason amd wants things so open source is that they can save time and money by having others do the work for them; it's not for any other reason, like being nice or anything.
I really hope this is the best asc implementation

I mean, there are so many pc games with this fantastic dx12 feature; by all the many benchmarks I have seen, it seems to be the standard

....just wondering when all the other dx12 features will be used.
asc is the new hype game, gw the old hype game

I hope I can still bash people in the head with miscellaneous objects, that's all I want, be it at 30 or 100 fps

* oh f n a, this game is now going to be an episodic series? like every two months a new episode? oh screw that, maybe I will see it next year then.

they have to find a way to ruin everything nowadays, don't they


----------



## provost

Well, at least some of us don't believe what Nvidia is doing is in the best interest of PC gamers. If I remember correctly, Nvidia had plenty to say about 3dfx's proprietary Glide API back when Nvidia was still a baby... lol. And we also know what happened to 3dfx when it refused to fully embrace DirectX at the time. DX12 is where games are headed, and some of us don't care much for proprietary, walled-off systems that benefit the proprietor far more than they benefit us. Let's not forget Intel's proprietary compilers and what they ended up doing to end consumers... lol

On a separate note, I just tried my new Fury (which replaced my quad-Titan setup), and although I only ran a handful of games for a few minutes, the transition has been easy, almost too easy... lol. Just plug and play, other than doing a DDU uninstall of the NV drivers first. Games, including Witcher 3, ran great on the single Fury (albeit for only 5-10 minutes this morning). Looking forward to doing some gaming on it this weekend, if I get the time.


----------



## MoorishBrutha

Quote:


> Originally Posted by *p4inkill3r*
> 
> As much as I dislike nvidia, I think 'doomed' is a bit presumptuous.
> They aren't going anywhere.


I never said they are going anywhere. What I said is that they are doomed, and if they don't have Async Compute Engines in Pascal, they will be doomed sooner rather than later.


----------



## mcg75

Thread closed for cleaning.

When it reopens, there will be no more name calling in the thread or you will be removed from the discussion. That includes "shill" or anything else.

This is an important topic as it could result in the reversal of the discrete gpu market IF true.

So let's stick to having a professional discussion about it.


----------



## mcg75

Reopened.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Dargonplay*
> 
> Yet another Gameworks title.


Textures have nothing to do with GameWorks in the game I referenced, at all.

Once again, someone is blaming a product for something it has ZERO impact on. GameWorks wasn't even a factor in the statement you quoted.

Quote:


> Originally Posted by *Fyrwulf*
> 
> Uh, no. Sorry, that's not how this works. nVidia chose not to support a feature of an open and independent API in their hardware implementation, that's their fault and AMD is well within their rights to force feed crow pie to nVidia for their hubris given their relative market positions. nVidia chose to turn PhysX into a closed system by blocking out AMD card owners from utilizing it, that is also nVidia's fault and not okay given their market position.


Um, yes, yes it is how it works. AMD published the announcement, they chose the words, not me.

Second, Nvidia's hardware decisions from many years ago have nothing to do with ASC now, in terms of it being used. It wasn't "supported," as you say, because years ago there was no need for it; likewise, heavy compute and DX12 still won't matter for a year or two minimum.

Your PhysX comment is also completely lacking in factual basis. Nvidia purchased PhysX; they had a financial investment in it. They did not lock out AMD; in fact they offered to license the technology to AMD for "pennies per GPU shipped," and AMD turned them down. AMD/ATi literally wanted to use PhysX for free, even though Nvidia had purchased it and had a financial interest in it.

If AMD wanted to use a technology that cost money, they needed to contribute to that cost.

As for using Nvidia and AMD cards in one system? That was NEVER officially supported.


----------



## Panzerfury

Quote:


> Originally Posted by *BradleyW*
> 
> But you miss out on good looking features, which can be implimented far more optimally compared to GameWorks.


Did you look at the algorithms used for the GameWorks effects?

If so, I sure would like to know (I'm sure Nvidia would too) how they could be improved.


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> god forbid that gpu manufactures have to invest time and money to develop and promote their own technology, whether proprietary or open, to improve IQ setting in the game experience.


You're right, what was I thinking! God rays improve image quality and the game experience; thanks for proving me wrong. And every game developed with GameWorks offered the best game experience ever.
Quote:


> Originally Posted by *looniam*
> 
> and btw, its not ad hominem when you show _how they were wrong_.


https://en.wikipedia.org/wiki/Ad_hominem

You have only proven your ability to rant; you didn't even touch the argument itself. I can't see how that equals proving someone wrong.
Quote:


> Originally Posted by *looniam*
> 
> i'll be very clear here; sorry, but i don't care to discuss your rubbish with you.


No great loss.
Quote:


> Originally Posted by *looniam*
> 
> to be blunt all you've done is complain and parrot/fabricate rubbish. so that's why i didn't care to reply directly to you but someone i know that can be reasonable.


As if you've done any differently.

Truth is, GameWorks only benefits nVidia. I don't hate nVidia; I just don't like what they're doing to this industry. I haven't seen a single GameWorks title that hasn't ended up a broken mess at release, and my points in previous posts about how early benchmarks of these titles affect AMD's image remain untouched.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> Once again, someone blaming a product for something it has ZERO impact on. GameWorks wasn't even a factor in my statement you quoted.


It is a factor when every other GameWorks title presents the same issues. You said it yourself, and I'll quote you so you remember:
Quote:


> Originally Posted by *PostalTwinkie*
> 
> Very minor, if any, visual changes between the higher levels of settings, yet massive performance impacts. Call of Duty: Ghosts seemed really bad about this; there was no discernible difference between the higher-tier settings, yet going from "High" to "Ultra" resulted in a near 50% performance hit at times.


That's what GameWorks does: god rays, 64x tessellation on people's heads. You say this behavior is very common now, but it wasn't before GameWorks.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> Very minor, if any, visual changes between higher levels of settings, yet massive performance impacts.


This just screams GameWorks and god rays.


----------



## airfathaaaaa

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Textures have nothing to do with GameWorks in the game I referenced, at all.
> 
> Once again, someone blaming a product for something it has ZERO impact on. GameWorks wasn't even a factor in my statement you quoted.
> Um, yes, yes it is how it works. AMD published the announcement, they chose the words, not me.
> 
> Second, Nvidia's hardware decision many years ago has nothing to do with ASC now, in terms of it being used. It wasn't "supported" as you say, because years ago there was no need for it, just like heavy compute and DX 12 still won't matter for a year or two minimum.
> 
> Your PhysX comment is also completely lacking in factual basis. Nvidia purchased PhysX, they had a financial investment in it. They did not lock out AMD, in fact they offered to license the technology to AMD for "pennies per GPU shipped", and AMD turned them down. AMD/ATi literally wanted to use PhysX for free, even in the face of Nvidia purchasing it and having a financial interest in it.
> 
> If AMD wanted to use a technology that cost money, they need to contribute to that.
> 
> As far as using Nvidia and AMD cards in one system? That was NEVER officially supported.


your posts are always amusing
so nvidia NEVER hardcoded PhysX to shut off when their cards detected an AMD card? that was a dream we all saw years ago? probably..... they even left a nice surprise for people running hybrid PhysX setups, making games unplayable by messing up the textures
nvidia offered PhysX months after AMD went with Havok, which by definition is better

also nvidia's hardware decision was about money and only money. in their own blog they said they were working closely with MS on DX12 for 4 years, 2010-2014. they even marketed their cards as fully DX12 cards back then, which we now know is BS as usual


----------



## PostalTwinkie

Quote:


> Originally Posted by *airfathaaaaa*
> 
> your posts are always amusing
> so nvidia NEVER hardcoded physx into their cards for when they detected an amd card? that was a dream we all saw years ago? probably..... they even made a nice suprise to people that used hybrid physx making any game unplayable by messing the textures
> nvidia offered physx months after amd got into havok which by definition is better
> 
> also nvidia hardware decision was about money and only that in their own blog they said they were working closely with ms on dx12 for 4 years 2010-2014 they even marketed their cards as fully dx12 cards back then which now we know that its BS as usual


Did you read what I said? No, better question, what are you even talking about or saying? It doesn't make sense.

I said dual card systems, Nvidia for PhysX, AMD for primary, was never officially supported by Nvidia. Which it wasn't. That is a completely different statement from saying it couldn't be done. Also, are you agreeing with me about AMD not getting PhysX over money? Because, that is why I said AMD didn't get it.

Nvidia wanted money from AMD, AMD didn't want to pay it. Pretty straight forward.

I honestly don't know if you are agreeing with me, or not, or what you just typed. But, I look forward to clarity.


----------



## airfathaaaaa

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Did you read what I said? No, better question, what are you even talking about or saying? It doesn't make sense.
> 
> I said dual card systems, Nvidia for PhysX, AMD for primary, was never officially supported by Nvidia. Which it wasn't. That is a completely different statement from saying it couldn't be done. Also, are you agreeing with me about AMD not getting PhysX over money? Because, that is why I said AMD didn't get it.
> 
> Nvidia wanted money from AMD, AMD didn't want to pay it. Pretty straight forward.
> 
> I honestly don't know if you are agreeing with me, or not, or what you just typed. But, I look forward to clarity.


you said that nvidia never locked out AMD cards, which is not true since they DID lock AMD cards out (though DX12 being hardware agnostic can totally nullify this, as we've already seen with AotS)
you remember when nvidia offered AMD PhysX? it was 7 years ago, months before we started to see the first clues about GCN 1.0. do you seriously think any company would have said yes, considering they would have needed a back end in their hardware for it?
nope
also, considering they chose Havok and actually use it and push it to be literally everywhere, that says a lot about what is going on behind the scenes


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> Truth is, Gameworks only benefits nVidia, I don't hate nVidia I jut don't like what they're doing with this industry, I haven't seen a single Gameworks title that haven't ended up being a broken mess at release, *and my points in my previous posts about how early benchmarks of these titles affects AMD's image are still untouched*.


i guess you don't read what you quote.
Quote:


> Originally Posted by *looniam*
> 
> the problem with AMD's performance on a games release is they never have drivers ready *and you know this.*


cheers.









funny thing, this is the ONLY forum where most of the talk is about gameworks. everyone else is actually discussing async compute and how AMD can use it to their advantage.

it's nvidia's fault ya know.


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> the problem with AMD's performance on a games release is they never have drivers ready and you know this.


If you read anything at all you wouldn't be quoting things that prove my point. AMD can't take part in a GameWorks title's development process, so how do you expect them to have optimizations ready at launch? Oh wait, they don't. I believe you can find the brain power to figure out why, you just got it backwards.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> the problem with AMD's performance on a games release is they never have drivers ready and you know this.
> 
> 
> 
> If you read anything at all you wouldn't be quoting things that prove my point, AMD Can't take part of a Gameworks Title development process, how do you expect them to have optimizations ready at launch? Oh wait, they don't, I believe you can find brain the power to figure out the why, you just got it backwards.

i'm not lacking any brain power since i don't have tunnel vision focused on GW. that comment was about drivers not being ready for ALL games, not just TWIMTBP titles.









whoah, there's a thought, eh?


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> i'm not lacking any brain power since i don't have the tunnel vision focused on GW. that comment was meant drivers not being ready for ALL games not just TWIMTP titles.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> whoah, there's a thought, eh?


Still don't get it apparently, funny how you can't even understand my argument yet think you're answering it.

Driver optimizations are the fallback for when game optimizations aren't possible; it is always better to optimize the game for the GPU than the GPU for the game. ACE is another example of this: nVidia optimizing the game for their GPUs instead of getting their GPUs on par with current technologies.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> Still don't get it apparently, funny how you can't even understand my argument but you think you're answering to it.
> 
> Driver Optimizations are the alternative for when Game Optimizations aren't possible, it is always better to optimize the game for the GPU than the GPU for the game, ACE is another example of this.


i understand what you're saying, you just don't like my answers. i completely agree it's best to have the actual game code, but it's not entirely necessary.

but still, how many new (non-GW) games still don't have Xfire profiles?

what i have said for years is it's time AMD and its enthusiasts stop moping about unfair practices, most of which they can't control, pick themselves up and start swinging back.


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> i completely agree its best to have the actual game code but not entirely necessary.


It is necessary if you want your GPUs to run the game efficiently at launch and not be utterly destroyed in benchmarks. This goes back to:
Quote:


> Originally Posted by *Dargonplay*
> 
> When a game comes out every Web Site will benchmark the soul out of it, once the early benchmarks for Gameworks titles are out, AMD Reputations gets tainted for the remaining of that generation and people would always see it as the 2nd tier offering.


You say that having the actual game code is not entirely necessary; the way I see it, if you want to survive in this market, it is.


----------



## PlugSeven

Quote:


> Originally Posted by *looniam*
> 
> i completely agree its best to have the actual game code but not entirely necessary.


You're just repeating what that guy from nvidia said when asked about it. Do you have a clue how AMD optimizes? If so, please enlighten us.
Quote:


> but still how many new (non GW games) still don't have Xfire profiles?


How many? I dare you to list a few of these non-GW titles without profiles.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> i completely agree its best to have the actual game code but not entirely necessary.
> 
> 
> 
> It is necessary if you want your GPUs to run the game efficiently at launch and not be utterly destroyed on benchmarks, this goes back to:
> Quote:
> 
> 
> 
> Originally Posted by *Dargonplay*
> 
> When a game comes out every Web Site will benchmark the soul out of it, once the early benchmarks for Gameworks titles are out, AMD Reputations gets tainted for the remaining of that generation and people would always see it as the 2nd tier offering.
> 
> 
> You say that having the actual game code is not entirely necessary, the way I see it, if you want to survive on this market it is.

as i said before, any reputable review site will not use proprietary settings in their benchmarks. i can go back to [H]ardOCP's review of far cry 4. they noticed the minute they enabled godrays that AMD cards took a big hit, stated "welp, looks like this is nvidia's stuff" and concluded accordingly.

but it's not a perfect world and not every site will do that. and sure, that can be unfortunate for AMD.

but now in ashes of the singularity AMD is really pummeling nvidia, and in my book that's completely fair. i wish to see hitman do the same. level playing field, it's all good.
Quote:


> Originally Posted by *PlugSeven*
> 
> You're just repeating what that guy from nvidia said when asked about it, do you have a clue on how AMD optimize, if so please enlighten us.
> 
> How many? I dare you to list a few of these non GWs titles without profiles.


a lot more than just one guy, sorry.









so, how's that Xcom 2 crossfire?

darkest dungeon?

(i'm working backwards . . .)

hey! you got a good one for FO4!!! . .but doesn't count since it has GW.









did dirt rally ever get consistent? steam people want to know.









(shall i go on?)


----------



## PlugSeven

Quote:


> Originally Posted by *looniam*
> 
> so, how's that Xcom 2 crossfire?
> 
> darkest dungeon?
> 
> (i'm working backwards . . .)
> 
> hey! you got a good one for FO4!!! . .but doesn't count since it has GW.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> did dirt rally ever get consistent? steam people want to know.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (shall i go on?)


Xcom 2 is Unreal Engine 3.5, which is nvidia's baby really, so the lack of a profile is not surprising








yeah, do go on, try to list stuff that actually needs multi-GPU though.


----------



## looniam

Quote:


> Originally Posted by *PlugSeven*
> 
> Xcom 2 Unreal engine 3.5 which is nvidia's baby really, so the lack of aa profile is not surprising
> 
> 
> 
> 
> 
> 
> 
> 
> yeah do go on, try list stuff that actually needs multi gpu though.


yeah, shoulda known - it's always nvidia's fault. . . silly me.

are you going to pay me?

my time is money.


----------



## PlugSeven

Quote:


> Originally Posted by *looniam*
> 
> yeah shoulda know - its always nvidia's fault. . .silly me.
> 
> are you going to pay me?
> 
> my time is money.


Sorry, I thought you had them at the tips of your fingers, all cued up and ready to go. I'll google some when I can be bothered to look into multi-GPU. Thx for your time


----------



## looniam

Quote:


> Originally Posted by *PlugSeven*
> 
> Sorry, i thought you had them on the tips of your fingers all cued and ready to go. I'll google some when I can be bothered to look into multi gpu. Thx for your time


lol. nope. look i'll admit that i don't have them cued up. what i do have is the impression of AMD users themselves complaining about the lack of support.

now, if at your convenience you can correct that - i am all ears/eyes. (even with a PM)


----------



## Assirra

Quote:


> Originally Posted by *PlugSeven*
> 
> You're just repeating what that guy from nvidia said when asked about it, do you have a clue on how AMD optimize, if so please enlighten us.
> How many? I dare you to list a few of these non GWs titles without profiles.


This very thread is about what AMD said.
By your logic this whole thread should be removed and nobody can talk about it.
If you now say no, that is a double standard.


----------



## mrawesome421

I was and am still blown away by the level design, artistic nature and graphical awesomeness of the last game. It was... a pleasure, to play a game so gorgeous. The gameplay was pretty damn good too, and I enjoyed every minute of it. Can't wait for the next installment.









EDIT: Coming from an AMD platform gamer. If that matters, regarding the topic at hand. Eh..


----------



## PlugSeven

Quote:


> Originally Posted by *looniam*
> 
> lol. nope. look i'll admit that i don't have them cued up. *what i do have is the impression of AMD users themselves complaining about the lack of support.
> *
> now, if at your convenience you can correct that - i am all ears/eyes. (even with a PM)


what can I say? If you're a day-one gamer, stay away from x-fire. The cards are cheap enough to tempt you to run 'em in pairs, just don't expect them to work well all the time.


----------



## PlugSeven

Quote:


> Originally Posted by *Assirra*
> 
> This very thread is about what AMD said.
> By your logic this whole thread should get removed and nobody can talk about it.
> If you now say no, that is double standards.


What? I'll bet you anything you're the only person on these forums who interprets what I said that way. I merely questioned Loon's understanding of how game optimizations work because I felt he was just regurgitating something some nvidia dude said, that's all.


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> lol. nope. look i'll admit that i don't have them cued up. what i do have is the impression of AMD users themselves complaining about the lack of support.
> 
> now, if at your convenience you can correct that - i am all ears/eyes. (even with a PM)


The only people I've seen complaining about multi-GPU are nVidia users, with horrible frame pacing, stuttering and low performance gains compared to Crossfire, which always has a better relative yield in performance. And don't get me started on good ol' Kepler today; nVidia GameWorks is so bad that its rotting effect on performance doesn't even stop at AMD.

To be honest about Crossfire, it does have issues and I've seen people talking about them, but it is a fact that Crossfire does have better performance gains.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> The only people I've seen complaining about Multi GPUs are nVidia's users with horrible pacing, stuttering and low performance gains compared to Crossfire which always have a better relative yield in performance, and don't get me started on good'ol Kepler today, nVidia Gamework's so bad that its rotting effects on performance doesn't even stop on AMD.
> 
> To be honest about Crossfire it does have issues and I've seen people talking about them, but it is a fact that crossfire do have better performance gains.


yep, AMD got their act together when they were thrown under the bus for those same issues.

and since then _when Xfire is supported_ it embarrasses nvidia. however, sometimes it takes a minute to get support *- that's what i was saying.*


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> yep, AMD got there act together when they were thrown under the bus for those same issues.
> 
> and since then _when Xfire is supported_ it embarrasses nvidia. however sometimes it takes a minute to get support *- thats what i was saying.*


So you're saying that Crossfire is now the superior alternative but isn't always available at launch? It might not be 100% of the cases, but I'm pretty sure Gameworks is to blame for at least 90% of them, which comes back to this:
Quote:


> Originally Posted by *looniam*
> 
> i completely agree its best to have the actual game code but not entirely necessary.


Given the logical fact that they need the game code to implement Xfire (let alone to optimize the game), now do you understand why it's necessary?


----------



## DNMock

Quote:


> Originally Posted by *looniam*
> 
> yep, AMD got there act together when they were thrown under the bus for those same issues.
> 
> and since then _when Xfire is supported_ it embarrasses nvidia. however sometimes it takes a minute to get support *- thats what i was saying.*


So much this!

I'd love to go AMD for multi-gpu set-ups, the scaling and performance, when they actually implement it in a game, just blow the doors off Nvidia's SLI.

Unfortunately, it takes forever for them to implement it if they even do at all, and going generic AFR with AMD is worse than going Nvidia with an actual SLI profile.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> So you're saying that Crossfire is now the superior alternative but isn't always available at launch? It might not be 100% of the cases, but I'm pretty sure Gameworks is to blame for at least 90% of them.


i believe 90% to be very heavy-handed but you are 100% correct with the rest. XDMA is beautiful and i wish NV would get off the bridge.


----------



## looniam

now that i see your edit
Quote:


> Originally Posted by *Dargonplay*
> 
> So you're saying that Crossfire is now the superior alternative but isn't always available at launch? It might not be 100% of the cases, but I'm pretty sure Gameworks is to blame for at least 90% of them, *that comes back to this:*
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> i completely agree its best to have the actual game code but not entirely necessary.
> 
> 
> 
> *Given the logical fact that they need the game code to implement Xfire (Let alone for optimizing the game), now you understand why its necessary?*

two years ago a lot of GW code was locked away from devs and in turn AMD. *that is not true today.* it's up to the dev to share or change the code for any specific vendor. if GW is involved, then the dev, under the licensing agreement with NV, needs NV's approval to change the GW code if needed. the dev is free to change and share anything else, as usual.

it's not a perfect world however, and graphics vendors have had to make changes in their drivers after a game's release _long before there was ever GW_.


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> *if GW is involved, then the dev, under the licensing agreement with NV , needs to get NV approval to change the GW code if needed*.


What could possibly go wrong? I can only imagine the politics behind such requests.

Jimmy Dev: We need to optimize this game for AMD's hardware, can we allow AMD to add their code pls?

Nvidia: BOLLOCKS! Jimmy, you can do it later, after all the benchmarks are out and the game has been on the market a week.

Jimmy: But...

Nvidia: BUT NOTHING JIMMY, after all I've done for you...

In all seriousness though, as you say this is far from a perfect world, and I can see how any reason could be used to deny such requests, especially when doing so benefits Nvidia.

AMD is literally at their mercy.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> What could ever go wrong, I can only imagine the politics behind such requests.
> 
> 
> Spoiler: it is funny!
> 
> 
> 
> Jimmy Dev.: We need top Optimize this game for AMD's hardware, can we allow AMD to add their code pls?
> 
> Nvidia: BOLLOCKS! Jimmy, You can do it later after all benchmarks are out and the game have 1 week on the market
> 
> Jimmy: But...
> 
> Nvidia: BUT NOTHING JIMMY.
> 
> 
> 
> In all seriousness thought, as you say this is far from a perfect world, but I do can see how any reason can be used to negate such requests.


well, if it happens regularly - i would think a few devs would have stepped forward and said something. believe it or not, there are a few ethical people in the world. but see, here's the thing: i haven't seen anyone release a steaming pile of GWs who hasn't already released steaming piles before.

hello ubisoft - not that they are the only ones.

E:
catching up on your edit:

so nvidia has 80% market share. so?

the devs are making more money off of consoles. most of the time PCs are an afterthought.


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> i would think that a few devs would have stepped forward and said something. believe or not, there are a few ethical people in the world.


A thick envelope should make sure that doesn't happen; you just think too highly of businessmen.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> A thick envelope should make sure that doesn't happen, you just think too highly of business men.


like i said, with devs making a lot more money on consoles, it would have to be a very, very thick envelope.

but i have to disagree - devs aren't businessmen, publishers are, and they aren't near any code; just pushing to get it done - which leads to shortcuts by the devs.







.


----------



## mtcn77

Quote:


> Originally Posted by *looniam*
> 
> like i said with devs making a lot more money with consoles, it would have to be a very, very thick envelope.
> 
> but i have to disagree - devs aren't businessmen, publishers are and they aren't near any code; just pushing to get it done - which leads to short cuts by the devs.
> 
> 
> 
> 
> 
> 
> 
> .


I concur. Developers are not businessmen. Otherwise we wouldn't have seen the Project Cars announcement for Nintendo being retracted; it does not make any business sense. These guys act passionately about it.


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> like i said with devs making a lot more money with consoles, it would have to be a very, very thick envelope.
> 
> but i have to disagree - devs aren't businessmen, publishers are and they aren't near any code; just pushing to get it done - which leads to short cuts by the devs.
> 
> 
> 
> 
> 
> 
> 
> .


Publishers aren't near code, as in publishers can't decide what happens to the game? Just watch the Bungie/Activision drama over Destiny, it would make for a good Sunday movie.

Big-name publishers have absolute control over what happens with their games. Given the way they operate, would it be surprising if they made deals with Nvidia on the PC market?

The only developers I would trust are indie developers, and I don't see all indie devs running towards GameWorks.
Quote:


> Originally Posted by *mtcn77*
> 
> I concur. Developers are not businessmen. Otherwise, we wouldn't have Project Cars announcements for Nintendo being redacted, does not make any business sense. These guys act passionately about it.


You're talking about the same people who did this:

"No, in this case there is an entire thread in the Project Cars graphics subforum where we discussed with the software engineers directly about the problems with the game and AMD video cards. SMS knew for the past 3 years that Nvidia based PhysX effects in their game caused the frame rate to tank into the sub 20 fps region for AMD users. It is not something that occurred overnight or the past few months. It didn't creep in suddenly. It was always there from day one. Since the game uses GameWorks, then the ball is in Nvidia's court to optimize the code so that AMD cards can run it properly. Or wait for AMD to work around GameWorks within their drivers. Nvidia is banking on taking months to get right because of the code obfuscation in the GameWorks libraries as this is their new strategy to get more customers. Break the game for the competition's hardware and hope they migrate to them. If they leave the PC Gaming culture then it's fine; they weren't our customers in the first place."

Project Cars' engine itself is built around a version of PhysX that simply does not work on AMD cards; performance goes in the toilet if they do not have GPU PhysX turned on. The worst part? The developers knew this would murder performance on AMD cards, but built their entire engine off a technology that simply does not work properly with AMD anyway. The game was built from the ground up to favor one hardware company over another.

You know, this actually reminds me of when Intel threatened OEMs with cutting off supply if they didn't stop selling AMD products. AMD won the lawsuit after decades of Intel stomping on AMD, but the damage was already done; Intel paid a little over 1 billion dollars to AMD. I just wish AMD would not stay on the defensive against companies like these, who always resort to anti-competitive schemes. If they somehow manage success and then wake up and start doing what Nvidia or Intel was doing, boy, that's going to be a dark age for PC gaming, and I wouldn't be able to blame AMD given the circumstances.

It'd be a world where the best components would not matter as much as the influence of the companies behind them.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> Publishers aren't near code as in Publishers can't decide what happens to the game? Just watch the Bunjie/Activision Drama over destiny, it would make up for a good Sunday movie.
> 
> Big name publishers have absolute control over what happens with their games, given the ways they operate, would it be surprising that they make deals with Nvidia?
> 
> The only developers I would trust are Indie developers, and I don't see all Indie devs running towards Gameworks.


real quick steam survey:
http://store.steampowered.com/hwsurvey


very roughly, throw out the intel and "other" numbers - that leaves NV/AMD splitting the rest at about a 2/1 margin.

that's still a BIG chunk (~30%) devs/publishers would be screwing over.

i'm sure that is far from perfect, but i think you're not really giving AMD enough credit in the market - or more like giving nvidia too much.
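
The back-of-envelope renormalization above is easy to sketch. The survey percentages below are made-up placeholders for illustration, not actual Steam survey figures:

```python
# Rough discrete-GPU share estimate from a hardware survey.
# NOTE: these percentages are illustrative placeholders, not real Steam numbers.
survey = {"nvidia": 52.0, "amd": 26.0, "intel": 19.0, "other": 3.0}

# Throw out Intel iGPUs and "other", then renormalize NV vs. AMD.
discrete = {k: v for k, v in survey.items() if k in ("nvidia", "amd")}
total = sum(discrete.values())
shares = {k: round(100 * v / total, 1) for k, v in discrete.items()}

print(shares)  # with these placeholder inputs: {'nvidia': 66.7, 'amd': 33.3}
```

Even with nvidia selling two cards for every AMD card, AMD's renormalized slice stays near a third of the discrete market, which is the "~30% chunk" point above.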


----------



## Dargonplay

Quote:


> Originally Posted by *looniam*
> 
> real quick steam survey:
> http://store.steampowered.com/hwsurvey
> 
> 
> just very roughly throw out the intel and "other" - that leaves NV/AMD splitting the rest at a 2/1 margin.
> 
> thats still a BIG chunk (~30%) dev/publishers would be screwing over.
> 
> i'm sure that is far from perfect but i think you're not really giving AMD enough credit in the market - or more like too much for nvidia.


Nvidia currently has 80% of the total market share.

http://www.forbes.com/sites/jasonevangelho/2015/08/19/nvidia-increases-desktop-gpu-market-share-again-despite-multiple-amd-radeon-releases/#163f0aa5250e

They don't need to screw all games, just big titles, and not so blatantly that it's clear what they're trying to do, just enough to make their offering look more compelling to the masses. Sometimes it is painfully obvious, as when Gameworks first came out, and today, from CDProjectRED:

*"Many of you have asked us if AMD Radeon GPUs would be able to run NVIDIA's HairWorks technology - the answer is yes! However, unsatisfactory performance may be experienced as the code of this feature cannot be optimized for AMD products."
*
Then Nvidia goes on to say things like:

*"GameWorks improves the visual quality of games running on GeForce for our customers. It does not impair performance on competing hardware.
Demanding source code access to all our cool technology is an attempt to deflect their performance issues.*

What makes matters worse is that Nvidia will insist that CDProjectRED, or any developer using Gameworks, not use AMD features, and if they do, it's decided on a case-by-case basis.

It doesn't matter if they have 20% or 90% of the PC market share; if AMD were the ones doing this I would stand against them as well. Gameworks is middleware that should simply not exist.


----------



## mcg75

Quote:


> Originally Posted by *Dargonplay*
> 
> What makes matters worse is that Nvidia wont let CDProjectRED or any developer using Gameworks to use TressFX/PureHair nor any AMD Feature code.
> 
> It doesn't matter if they have 20% or 90% of PC Marketshare, if AMD were the ones doing this I would stand against them as well, Gameworks is Middleware that should simply not exist.


GTA V has some Gameworks features and some AMD-sourced features as well. So it can be, and has been, done.

Saying Gameworks should not exist is nonsense as well. Some features like HBAO+ are really good and AMD has no trouble with it. Godrays are nice as well, although I agree 110% with everybody who says low vs. high settings bring almost no benefit except an fps penalty. PhysX and Hairworks can go in the garbage though.

Here's hoping AMD's new software suite gets some use from developers without AMD needing to pour money into it.


----------



## looniam

Quote:


> Originally Posted by *Dargonplay*
> 
> Nvidia currently have 80% of total Marketshare.
> 
> http://www.forbes.com/sites/jasonevangelho/2015/08/19/nvidia-increases-desktop-gpu-market-share-again-despite-multiple-amd-radeon-releases/#163f0aa5250e
> 
> They don't need to screw all games, just big titles, and not so much they'd leave clear what they're trying to do, just enough to make their offering look more compelling to the masses, sometimes it is painfully obvious what they're trying to do, when Gameworks first came out, and today, from CDProjectRED
> 
> *"Many of you have asked us if AMD Radeon GPUs would be able to run NVIDIA's HairWorks technology - the answer is yes! However, unsatisfactory performance may be experienced as the code of this feature cannot be optimized for AMD products. Radeon users are encouraged to disable NVIDIA HairWorks if the performance is below expectations."
> *
> Then Nvidia goes to say things like
> 
> *"GameWorks improves the visual quality of games running on GeForce for our customers. It does not impair performance on competing hardware.
> Demanding source code access to all our cool technology is an attempt to deflect their performance issues.*
> 
> What makes matters worse is that Nvidia wont let CDProjectRED or any developer using Gameworks to use TressFX/PureHair nor any AMD Feature code.
> 
> It doesn't matter if they have 20% or 90% of PC Marketshare, if AMD were the ones doing this I would stand against them as well, Gameworks is Middleware that should simply not exist.


you know that forbes article is showing an nvidia slide, right? and i am sure it represents sales including OEM shipments of little GT 720 cards too - hardly gaming sales.

forget the PR war in the press: benchmark sites did NOT turn on hairworks! AMD placed where you'd expect when reasonably compared to other games that didn't have any gameworks. the exact placement varied among benches, but the 780/970/290 all placed within a few fps of each other, the 290x/780ti/980 the same, with the 980ti/titan x in the lead. (mind you, the fury x wasn't out yet.)

and what happened soon after? an AMD user found that importing the exe to make a profile in CCC let you turn the tessellation down to x8 - which i might add made HW run better than on kepler.

nothing to see here move along.

*really this is a damn thread about hitman and async compute, but now 50%+ of the posts are about GWs!* jesus! nvidia doesn't have to bury AMD, *the red team is already doing it for them!*


----------



## ZealotKi11er

Quote:


> Originally Posted by *mcg75*
> 
> GTA V has some Gameworks features and some AMD sourced features as well. So it can and has been done.
> 
> Gameworks should not exist is nonsense as well. Some features like HBAO+ are really good and AMD has no trouble with it. Godrays are nice as well although I agree 110% with everybody that says low vs high settings bring almost no benefit except fps penalty. PhysX and Hairworks can go in the garbage though.
> 
> Here's hoping AMD's new software suite gets some use from developers without AMD needing to pour money into it.


If done properly, to enhance graphics and not to control performance, GW can be good, but that's what GPUOpen is for.


----------



## Dargonplay

Quote:


> Originally Posted by *mcg75*
> 
> Here's hoping AMD's new software suite gets some use from developers without AMD needing to pour money into it.


Hopefully GPUOpen will have the success it needs to bring healthy competition to this industry, although I'm not sure it will. When using GPUOpen, developers have to pour resources into implementing a feature in their game, while with Nvidia Gameworks they can have an Nvidia rep doing it; that's something any developer will consider when choosing one over the other.

I agree with you, PhysX and Hairworks are garbage but I go as far as saying Gameworks is unnecessary, we could have HBAO+ without the need of a locked down shady development suite, just like old times.


----------



## ZealotKi11er

If it was some other game I would be more excited. AMD should have done this with TR and not let Nvidia take it.


----------



## airfathaaaaa

i haven't really seen, to this date, a game running purely on gameworks.. 99% of them have some gw and some havok in them


----------



## MadRabbit

I don't get it. Why are some people so passionate about defending gameworks? Even in this same topic it has been proven that GW actually has a performance impact even on nVidia's own cards, with little to no visual improvement.

Anyway, this won't change before AMD can actually get a little bigger chunk of the market.


----------



## cowie

Quote:


> Originally Posted by *MadRabbit*
> 
> I don't get it. Why are some people so passionate about defending gameworks? Even in this same topic it has been proven that GW actually has an impact even on nVidia's own cards with little to none visual improvement.
> 
> Anyway, this won't change before AMD can actually get a little bigger chunk of the market.


it seems to be the same people all the time, maybe they should stop trying to argue over opinion.
I like gimpworks sometimes, I really liked nv PhysX too
bigger chunk???....well I would hope so









but I still will not overlook the *fact* that this game will be broken up into a series, and that's a no-no in my book, I don't care what it has or looks like.
that alone is worse than gw's or ge combined.


----------



## caswow

Quote:


> Originally Posted by *cowie*
> 
> it seems to be the same people all the time, maybe they should stop trying to argue over opinion.
> I like gimpworks sometimes, I really liked nv PhysX too
> bigger chunk???....well I would hope so
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> but I still will not overlook the *fact* that this game will be broken up into a series, and that's a no-no in my book, I don't care what it has or looks like.
> that alone is worse than gw's or ge combined.


it depends on how much content each episode will have and how much it will cost. there are games that made it work well, but then there is trine 3...


----------



## ZealotKi11er

Quote:


> Originally Posted by *airfathaaaaa*
> 
> i haven't really seen, to this date, a game running purely on gameworks.. 99% of them have some gw and some havok in them


PCars.


----------



## NuclearPeace

Quote:


> Originally Posted by *MadRabbit*
> 
> I don't get it. Why are some people so passionate about defending gameworks?


Because some people seem to want to annihilate Gameworks in its entirety and I don't want that. There are some Gameworks effects that are excellent, such as HBAO+. Even Godrays "low" is pretty good looking.


----------



## ZealotKi11er

Quote:


> Originally Posted by *NuclearPeace*
> 
> Because some people seem to want to annihilate Gameworks in its entirety and I don't want that. There are some Gameworks effects that are excellent, such as HBAO+. Even Godrays "low" is pretty good looking.


They are not needed. Developers can make their own effects.


----------



## Majin SSJ Eric

Crysis 3 is still the best looking game I've ever played and Crytek didn't need GW to make it...


----------



## ZealotKi11er

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Crysis 3 is still the best looking game I've ever played and Crytek didn't need GW to make it...


Nvidia thinks that by using GW they are helping developers push graphics. I don't see anything from GW that benefits gamers or developers, only Nvidia.


----------



## Sleazybigfoot

Quote:


> Originally Posted by *PlugSeven*
> 
> Depends on what you mean by performance, are you talking raw fps or just a more immersive experience? A world of difference here.


I know that game has a big performance boost, but most people refuse to see that game as a proper benchmark because of the AMD partnership and alpha state, but I guess this game will receive the same treatment.


----------



## cowie

Quote:


> Originally Posted by *Sleazybigfoot*
> 
> I know that game has a big performance boost, but most people refuse to see that game as a proper benchmark because of the AMD partnership and alpha state, but I guess this game will receive the same treatment.


nah, this game is a lot different than that first bs farce/it's a fps, not turn-based or whatever you call that.
secondly, they were full of crap on a lot of what they said and just tried to sell their game and drum up publicity. I would never trust the devs on that game at all.

hitman has pretty much spoken for itself since it started, and even though amd always runs better on it I couldn't care less.

I hear it's 10 bucks an expansion or whatever you want to call it. I will not do that; I will wait till they are all out, I don't care even if it's next year....let's all just stop a second: as someone said, if the game has all these bells and whistles that doesn't make it a great game....if I can smash somebody in the head then hide the body, or take down a chopper or two, without all the bs and frequent cut scenes, it will be ok in my book


----------



## NuclearPeace

Quote:


> Originally Posted by *ZealotKi11er*
> 
> They are not needed. Developers can make their own effects.


The whole reason why developers go to solutions such as Gameworks is that doing it themselves would take a lot more effort, time, and therefore money.

Crysis 3 looks great, but the game itself couldn't hold a candle to the original. Star Wars Battlefront 2015 also looks amazing, but the game itself is pretty bland. I'd rather developers take shortcuts on graphics by using Gameworks/Gaming Evolved stuff and then spend more time on making the gameplay amazing.


----------



## caswow

Quote:


> Originally Posted by *NuclearPeace*
> 
> *The whole reason why developers go to solutions such as Gameworks is that doing it themselves would take a lot more effort, time, and therefore money.*
> 
> Crysis 3 looks great, but the game itself couldn't hold a candle to the original. Star Wars Battlefront 2015 also looks amazing, but the game itself is pretty bland. I'd rather developers take shortcuts on graphics by using Gameworks/Gaming Evolved stuff and then spend more time on making the gameplay amazing.


just for the record, what part of gameworks helped any gameworks-sponsored title? certainly not batman or any other heavily gameworks-sponsored title, don't you agree? i mean, amd must be a freaking saint to have all GE games pretty well optimized, or is it just that all of the developers that have GE sponsoring are better than all others?


----------



## NuclearPeace

There's only a handful of Gaming Evolved titles, so the number of games that suck that GE will be found in will naturally be smaller than with Gameworks, which is by far more popular.

It's dumb logic to blame Gameworks for a whole game performing badly, or just not being great in general. Are you guys even trying to be unbiased? If Gameworks is supposed to make games terrible, I guess The Witcher 3 and GTA5 missed that memo.


----------



## zealord

Quote:


> Originally Posted by *NuclearPeace*
> 
> The whole reason why developers go to solutions such as Gameworks is because doing it themselves would take a lot more effort, time, and therefore money.
> 
> Crysis 3 looks great, but the game itself couldn't hold a candle to the original. Star Wars Battlefront 2015 also looks amazing, but the game itself is pretty bland. *I'd rather developers take shortcuts on graphics by using Gameworks/Gaming Evolved stuff and then spend more time on making the gameplay amazing*.


That reminds me of the developer meeting I attended where we all shook hands, had that evil smirk on our faces and said "huehuehue let's put all our efforts into graphics and make gameplay bad on purpose, because developing a game is an easy process where we can just shift our _gameplay-people_ to _better-graphics-people_."


----------



## DrFPS

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Nvidia thinks that by using GW they are helping developers push graphics. I don't see anything from GW that benefits gamers or developers, only Nvidia.


How do you know? Have you tried it?


----------



## Ghoxt

Quote:


> Originally Posted by *MoorishBrutha*
> 
> I think some Nvidia shill said so, but a lot of benchmarks, not just one, saw bad performance from Nvidia due to the *lack* of Async Compute Engines in their GPUs. It's confirmed via white papers that none of Nvidia's cards (Maxwell, Kepler, Fermi) have Async Compute Engines built inside them.


I personally purchased Ashes, not just heard about someone's cousin's statement regarding Ashes and Nvidia performance, and had excellent performance with Nvidia, but I'm not up on how much faster AMD might have been on that "benchmark" cough game.

TBH though I'm running T-X SLI at 1500 so everything seems fast for what its worth.

Ultimately it turned out to be a bridge to nowhere, as no AAA game we all play took advantage of async DX12 last year, so it was another paper launch in practice. Yet so many hang their hat on async because they all heard about it being good "in practice". We'll have to wait until we see it fully. I have no doubt it can be great, but how much work is it to implement the parallel processing into developers' workflow? We'll see.


----------



## CBZ323

Unfortunately AMD has yet to deliver on all their amazing promises at trade shows. When the product or feature finally comes out, it tends to be a "meh", only compensated by lower prices (Freesync, CPUs, GPUs, Mantle)

I wouldnt be surprised if they flop on this one too, even with what looks like no competition from Nvidia.

Trust me, I would love for AMD to finally get a solid lead and get some real competition going, but after so many letdowns I will only expect a "meh" until proven wrong.


----------



## kx11

when it comes to promises AMD is the best

i mean TrueAudio is where it's at


----------



## ZealotKi11er

Quote:


> Originally Posted by *kx11*
> 
> when it comes to promises AMD is the best
> 
> i mean TrueAudio is where it's at


It just shows that devs don't care. The tech is there. GW is in games because Nvidia gives money to devs to use those tools.


----------



## Assirra

Quote:


> Originally Posted by *kx11*
> 
> when it comes to promises AMD is the best
> 
> i mean TrueAudio is where it's at


Didn't they promise better audio or something like that over a year ago?
I remember it being an actually big deal for me, but when I went shopping for a new card it sorta fell into the dark abyss.


----------



## Dargonplay

Quote:


> Originally Posted by *CBZ323*
> 
> Unfortunately AMD has yet to deliver on all their amazing promises at trade shows. When the product or feature finally comes out, it tends to be a "meh", only compensated by lower prices (Freesync)


I'd take FreeSync out. FreeSync is superior to Nvidia's solution in almost every way while being... free. It's got less latency when not using Vsync, and the lower FreeSync threshold is now irrelevant, meaning the only important factor is the upper range.

I have a BenQ XL2730Z, and FreeSync has never failed me; it's been working flawlessly since day 1.

Mark my words, G-Sync is going to die in less than 2 years; besides a pissing contest, there's no reason for Nvidia not to support FreeSync.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Dargonplay*
> 
> I'd take FreeSync out. FreeSync is superior to Nvidia's solution in almost every way while being... free. It's got less latency when not using Vsync, and the lower FreeSync threshold is now irrelevant, meaning the only important factor is the upper range.
> 
> I have a BenQ XL2730Z, and FreeSync has never failed me; it's been working flawlessly since day 1.
> 
> Mark my words, G-Sync is going to die in less than 2 years; besides a pissing contest, there's no reason for Nvidia not to support FreeSync.


Sorry, no.

FreeSync doesn't have less latency than G-Sync, that is a lie from AMD that was crushed. FreeSync also had many issues up through launch, including Ghosting problems. The G-Sync module does several things, that AMD is trying to duplicate via drivers.

Your post clearly shows you have read ZERO into VRR and how it functions. Please, stop spreading fud.

EDIT:

Oh, and FreeSync wasn't free, by any means. It was "free" in the sense that AMD didn't have to put much into it, and left fixing it and getting it running up to the hardware manufacturers of the panels and control boards. Unlike Nvidia, who sold the G-Sync module at cost, and actually took the time to tune it to each display that opted to use it.

EDIT 2:

Has FreeSync come a long way since? Yes! Is VRR as a whole amazing? Yes! Thankfully, no matter what happens to G-Sync as we know it, it started the VRR race. So in the end, we all win. Hopefully it does become, as a whole, an open standard in all displays.

VRR is one of the most significant developments in gaming and display tech in damn near forever. So even having another G-Sync v FreeSync discussion is pointless and does nothing to move it forward. Just like spreading misinformation.


----------



## RushTheBus

Isn't asynchronous compute simply supported by DX12, rather than a requirement? It's merely one way of doing things. Everyone goes to defcon 1 every time this stuff comes up. I can't see a developer, especially one releasing a new installment of the very IP that put them on the map, implementing something that would cause the game to implode on 80% of the potential install base.

I have to agree, to a degree, that there is a double standard going on when it comes to how AMD and Nvidia are perceived. I think what so many of you forget is that AMD, like Nvidia, is a for-profit company beholden to its shareholders. It's not an NGO distributing humanitarian aid to vulnerable populations. Let's also be honest here: AMD is and has been in pretty horrendous financial straits. From a business perspective they are a disaster right now, and that's not projected to change for some months (and that's generous). The point is, if the market share numbers were reversed, I think the discussion would be very different.

Hopefully I'm not being too obtuse. I really think this thing is at a much higher level than DX12 or async compute. Seems many have fallen for the way AMD has spun all of this.

Sent from my iPhone using Tapatalk


----------



## Dargonplay

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Sorry, no.
> 
> FreeSync doesn't have less latency than G-Sync, that is a lie from AMD that was crushed. FreeSync also had many issues up through launch, including Ghosting problems. The G-Sync module does several things, that AMD is trying to duplicate via drivers.
> 
> Your post clearly shows you have read ZERO into VRR and how it functions. Please, stop spreading fud.
> 
> EDIT:
> 
> Oh, and FreeSync wasn't free, by any means. It was "free" in the sense that AMD didn't have to put much into it, and left fixing it and getting it running up to the hardware manufacturers of the panels and control boards. Unlike Nvidia, who sold the G-Sync module at cost, and actually took the time to tune it to each display that opted to use it.
> 
> EDIT 2:
> 
> Has FreeSync come a long way sense? Yes! Is VRR as a whole amazing? Yes! Thankfully, no matter what happens to G-Sync as we know it, it started the VRR race. So in the end, we all win. Hopefully it does become, as a whole, an open standard in all displays.
> 
> VRR is one of the most significant developments in gaming and display tech in damn near forever. So even having another G-Sync v FreeSync discussion is pointless and does nothing to move it forward. Just like spreading misinformation.


Are you sure I'm the one spreading FUD? Your fanboyism doesn't let you see clearly... Again.

https://www.youtube.com/watch?v=MzHxhjcE0eQ

The only thing that Nvidia G-Sync does better is Vsync, so I beg your pardon? Don't get mad at me for using facts when forming my opinion, unlike you, who only takes Nvidia's PR statements.

For anyone that's lazy, just go to minute 11:40. Please keep treating Nvidia's statements as if they came from the Bible; it's very amusing.

Also, FreeSync is FREE. I can get a perfectly working FreeSync 1440p 144Hz 1ms Acer XG270HU for $300 (last Amazon sale). How much does the G-Sync equivalent cost? Even when using the same panel it never gets below $500; it's FREE where it matters.

Also, you talk about AMD FreeSync having issues at launch as if G-Sync came to market flawlessly. The FUD is strong, as strong as the fanboyism.

Oh, and don't mind the fact that thanks to FreeSync monitors like the BenQ XL2730Z, we do have proper 144Hz strobing, thanks to not having G-Sync handling it.

EDIT: I'm eagerly waiting for your response to this; you always back off when you are proven wrong, three times on this thread alone. I want to see how you discredit LinusTechTips, or how 45FPS is the only thing that matters for VRR.


----------



## mtcn77

Quote:


> Originally Posted by *Dargonplay*
> 
> I'd take FreeSync out, FreeSync is superior to Nvidia in almost every way while being... Free. Its got less Latency when not Using Vsync and lower FreeSync threshold is now irrelevant, meaning the only important factor is the upper range.
> 
> I have a BenQ XL2730Z, FreeSync have never failed me, its been working flawlessly since day 1.
> 
> Mark my words, Gsync is going to die in less than 2 years, besides a pissing contest there's no reason for Nvidia to not support FreeSync.


Yeah, unless the main motivation of G-Sync was to compete in the scaler industry, FreeSync has surpassed it in the consumer space with the addition of HDMI compatibility, imo. AMD has successively announced a boatload of extended display tiers, too. The monitor manufacturers will readily take advantage of this _'free marketing'_ gesture of goodwill.


----------



## degenn

So much rabid frothing at the mouth in this thread.... should be closed tbh.

Carry on....


----------



## p4inkill3r

Quote:


> Originally Posted by *CBZ323*
> 
> Unfortunately AMD has yet to deliver on all their amazing promises at trade shows. When the product or feature finally comes out, it tends to be a "meh", only compensated by lower prices (Freesync, CPUs, GPUs, Mantle)


Mantle was not 'meh' in the least, but if you still feel that way, you can email Khronos and let them know they're wasting their time with Vulkan.








Freesync is also not a 'meh' product; on the contrary, it is a very exciting technology and has been well received by consumer and manufacturers alike. Their GPUs are in a great place and Polaris is going to be strong as well.
The CPU division is betting the field on Zen, but even their offerings now have a place.

Nevertheless, none of that matters to some.


----------



## Assirra

The double standards are out again and it is hilarious.
Apparently Mantle created both DX12 AND Vulkan, but there's no mention of G-Sync introducing variable refresh rate to standard desktops.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Dargonplay*
> 
> Are you sure I'm the one spreading FUD? Your fanboyism doesn't let you see clearly... Again.
> 
> https://www.youtube.com/watch?v=MzHxhjcE0eQ
> 
> The only thing that Nvidia Gsync does better is Vsync, so I beg your pardon? Don't get mad at me for using facts when forming my opinion, unlike you who only take Nvidia's PR statements.
> 
> For anyone that's Lazy just go to minute 11:40, please keep considering Nvidia's statements like if they were coming from the Bible, is very amusing.
> 
> Also, FreeSync is FREE. I can get a perfectly working FreeSync 1440p 144Hz 1ms Acer XG270HU for $300 (last Amazon sale). How much does the G-Sync equivalent cost? Even when using the same panel it never gets below $500; it's FREE where it matters.
> 
> Also you talk about AMD FreeSync having issues at launch like if Gsync came flawlessly to the market, the FUD is strong, as strong as the fanboyism.
> 
> Oh, and don't mind the fact that thanks to FreeSync Monitors like the BenQ XL2730Z we do have complete 144Hz Strobing thanks to not having Gsync doing it.


Look, you can put up Linus videos all you want, that only further shows that you actually haven't looked into VRR at all.

It isn't even a discussion worth having with you, especially since you can't go more than a line without trying to make personal insults. Your posts in this thread have gone from oddly opinionated, to just outright wrong at this point.

Do yourself a favor, and educate yourself on the technology you are commenting on. I would recommend you don't start with Linus.

EDIT:

Oh, and the "free", technically, was from a licensing standpoint. However, per hardware reps on this very forum, there very much was cost involved in getting FreeSync to market, as AMD laid the entirety of that process on them, aside from a validation process, which was actually revamped to help prevent the myriad of issues FreeSync had.


----------



## Dargonplay

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Look, you can put up Linus videos all you want, that only further shows that you actually haven't looked into VRR at all.
> 
> It isn't even a discussion worth having with you, especially since you can't go more than a line without trying to make personal insults. Your posts in this thread have gone from oddly opinionated, to just outright wrong at this point.
> 
> Do yourself a favor, and educate yourself on the technology you are commenting on. I would recommend you don't start with Linus.


How predictable: the only third party doing benchmarks on VRR is the one you can't trust. Nvidia's word is the only absolute truth, right?

Right there you have a test in a controlled environment using reliable techniques to measure input lag to the millisecond, with analog electric bulbs and 1000 FPS cameras. If you don't take that seriously, then your scientific method is flawed. Just because it's Linus doesn't mean it's wrong, as you seem to imply, but it doesn't surprise me; when you can't fight the proof, your only choice is to discredit its sources.

Also, I'm not getting personal. You said I was spreading FUD and misinformation; I showed facts and a third-party benchmark that Proved You Wrong, and you discredit that for no other reason than "Linus lol", so it begs the question: what are you doing?

Do me a favor and educate me, please. I want to see you try.


----------



## Defoler

Quote:


> Originally Posted by *MoorishBrutha*
> 
> You do understand that Nvidia lied to their 980 Ti customers about fully supporting DX12 while knowing they didn't have one of the main features of DX12 - *Async Compute*?
> 
> Ashes of Singularity wasn't a fluke; there was a reason why AMD smoked Nvidia in those benchmarks.


You mean "smoked" using updated drivers vs no drivers? You do remember what happened when Nvidia brought out updated drivers, right? There was a nice 180 in terms of performance.

Btw, AMD are doing the same thing with transference calculations.
No one is rising up with pitchforks about that for some reason.

The 980 Ti will fully support the entire DX12 API. That is what they promised. I'm not sure why they would lie about it if it ends up being true.

Quote:


> Originally Posted by *MoorishBrutha*
> 
> I think some Nvidia shill said so, but a lot of benchmarks, not just one, saw bad performance from Nvidia due to the *lack* of Async Compute Engines in their GPUs. It's confirmed via white papers that none of Nvidia's cards (Maxwell, Kepler, Fermi) have Async Compute Engines built inside them.

The queue engine is in software, but the warp schedulers are in hardware. So they have part of the overall compute engine in hardware.
Since AMD's "lead" in async is gone, regardless of Nvidia doing it in software, currently it doesn't seem to be an issue.

More likely, an early alpha version of the game shows up using optimised AMD drivers but not-yet-working Nvidia drivers, AMD calls "we are the winners!", and a couple of months later an updated driver comes from Nvidia which again removes AMD's lead.
AMD are very quick to declare things based on alpha or beta versions in order to establish a "win", regardless of the end results on release date.
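For anyone lost in the queue-engine talk above: "async compute" just means the GPU accepts compute work on a separate queue so it can overlap with graphics work instead of waiting in line behind it. A toy CPU-side sketch of why that helps (hypothetical timings; threads stand in for GPU queues; this is an analogy, not real GPU code):

```python
import threading
import time

def busy(seconds):
    # Stand-in for a GPU workload; sleep models a fixed chunk of work
    time.sleep(seconds)

GRAPHICS_S = 0.05  # hypothetical graphics-queue work per frame
COMPUTE_S = 0.03   # hypothetical compute work (e.g. lighting, physics)

# Serial: compute waits behind graphics in a single queue
t0 = time.perf_counter()
busy(GRAPHICS_S)
busy(COMPUTE_S)
serial = time.perf_counter() - t0

# "Async": compute runs on its own queue, overlapping the graphics work
t0 = time.perf_counter()
worker = threading.Thread(target=busy, args=(COMPUTE_S,))
worker.start()
busy(GRAPHICS_S)
worker.join()
overlapped = time.perf_counter() - t0

print(f"serial: {serial * 1000:.0f} ms, overlapped: {overlapped * 1000:.0f} ms")
```

With these made-up numbers the overlapped frame takes roughly the longer of the two workloads instead of their sum; the argument in the thread is about whether scheduling that overlap in software (as on Maxwell) costs enough to erase the gain.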


----------



## mtcn77

Quote:


> Originally Posted by *Dargonplay*
> 
> Are you sure I'm the one spreading Fud? You're fanboyism doesn't let you see clearly... Again.
> 
> https://www.youtube.com/watch?v=MzHxhjcE0eQ
> 
> The only thing that Nvidia does better is Vsync, so I beg your pardon? Don't get mad at me for using facts when forming my opinion, unlike you who only take Nvidia's PR statements.
> 
> For anyone that's Lazy just go to minute 11:40, please keep considering Nvidia's statements like if they were coming from the Bible, is very amusing.
> 
> Also FreeSync its FREE, I can get a FreeSync perfectly Working 1440p 144Hz 1ms Acer XG270HU for 300$ (Last Amazon Sale), how much do the Gsync equivalent cost? even when using the same Panel it never gets below 500$, its FREE where it matters.


I wonder if Linus' conclusion is too hasty and whether he might have mistaken an error of the panel for one of the involved protocol. The later G-Sync panel, the PG279Q, is not calibrated to work any faster at 165 Hz than at its 144 Hz refresh rate. Could the results be confounded by the pixel switching times? Here is TFT Central's own assessment of the situation:
Quote:


> One area which wasn't quite as good though was the response times. We tested these again at 165Hz and compared them to our measurements we had taken at the optimum 144Hz refresh rate earlier. As a reminder, we found that as you increase the refresh rate from 60Hz to 144Hz, the response times improved as you went. The response times and overdrive impulse are dynamically controlled by the G-sync module it seems, and influenced by the active refresh rate. We hoped for a further improvement with the boost to 165Hz but actually the opposite was the case.
> The response times were slightly slower overall at 165Hz than they had been at 144Hz. The average G2G was now 6.0ms instead of 5.2ms at 144Hz. This translated to a small amount of increased motion blur, but we're talking very very slight. This is arguably offset anyway by the slight improvement in motion clarity brought about by the higher frame rate / higher refresh rate.
> ...
> So it seems that at the overclocked 165Hz refresh rate the overdrive impulse is not being applied as aggressively and so response times are a little slower.


[Source]
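To put TFT Central's numbers in perspective, compare the measured grey-to-grey response times against the frame interval at each refresh rate (simple arithmetic on the figures quoted above):

```python
# Frame interval in milliseconds at each refresh rate
interval_144 = 1000 / 144   # ~6.94 ms per frame
interval_165 = 1000 / 165   # ~6.06 ms per frame

# TFT Central's measured average grey-to-grey response times
g2g_144 = 5.2  # ms at 144 Hz
g2g_165 = 6.0  # ms at 165 Hz (overdrive applied less aggressively)

# At 144 Hz the panel finishes its transitions with ~1.7 ms to spare;
# at 165 Hz the measured G2G nearly fills the entire frame interval.
print(f"144 Hz: {g2g_144} ms G2G vs {interval_144:.2f} ms frame")
print(f"165 Hz: {g2g_165} ms G2G vs {interval_165:.2f} ms frame")
```

That squares with the quote's conclusion: the slight extra blur at 165 Hz is largely offset by the shorter frame time.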


----------



## Defoler

Quote:


> Originally Posted by *p4inkill3r*
> 
> Mantle was not 'meh' in the least, but if you still feel that way, you can email Khronos and let them know they're wasting their time with Vulkan.
> 
> 
> 
> 
> 
> 
> 
> 
> Freesync is also not a 'meh' product; on the contrary, it is a very exciting technology and has been well received by consumer and manufacturers alike. Their GPUs are in a great place and Polaris is going to be strong as well.
> The CPU division is betting the field on Zen, but even their offerings now have a place.
> 
> Nevertheless, none of that matters to some.


Mantle is dead.
Vulkan is alive only because someone else took the rains off AMD.
Freesync is not new. It was "stolen" from the already existing eDP.
Their GPUs place is great if losing market and money means a great place. The 300 series ended up being a bust, and the promised fiji and HBM as a "top end killer" ended up also being just a "meh" card. They are not marketing geniuses, and instead of attacking nvidia, they should concentrate on their own products.

Claims about polaris being strong, unless a crystal ball is involved, lets wait and see. I hope it does. AMD's track record is not great though.
And zen, has been promised for about 1.5 years now. it is only going to come out at the end of 2016. That is a long long time and many things can happen until then. The industry are not holding their breath right now.


----------



## Dargonplay

Quote:


> Originally Posted by *mtcn77*
> 
> I wonder if Linus' conclusion is too hasty and might have mistaken the error of the panel for the involved protocol. The later G-Sync panel PG279Q is not calibrated to work at 165 Hz faster than its 144 Hz refresh rate. Could the results be confounded by the pixel switching times? Here is Tft Central's own assessment of the situation:
> [Source]


The G-Sync monitor used in the test was a PG278Q, which has the exact same panel as the BenQ XL2730Z, to avoid variables; both work at a maximum refresh rate of 144Hz and share the exact same features and characteristics, except for the VRR technology.

I actually looked it up, because if it had been the PG279Q then the test would have been flawed.


----------



## EightDee8D

seems like for some, AMD is there just to spit on. doesn't matter what they do; AMD does nothing positive.


----------



## mtcn77

Quote:


> Originally Posted by *Dargonplay*
> 
> The G-Sync monitor used in the test was a PG278Q, which has the exact same panel as the BenQ XL2730Z, to avoid variables; both work at a maximum refresh rate of 144Hz and share the exact same features and characteristics, except for the VRR technology.
> 
> I actually looked it up, because if it had been the PG279Q then the test would have been flawed.


Still, the brand might be the confounder; BenQ might have succeeded where Asus couldn't.


----------



## MadRabbit

Quote:


> Originally Posted by *Defoler*
> 
> Mantle is dead.
> Vulkan is alive only because someone else took the rains off AMD.
> Freesync is not new. It was "stolen" from the already existing eDP.
> Their GPUs place is great if losing market and money means a great place. The 300 series ended up being a bust, and the promised fiji and HBM as a "top end killer" ended up also being just a "meh" card. They are not marketing geniuses, and instead of attacking nvidia, they should concentrate on their own products.
> 
> Claims about polaris being strong, unless a crystal ball is involved, lets wait and see. I hope it does. AMD's track record is not great though.
> And zen, has been promised for about 1.5 years now. it is only going to come out at the end of 2016. That is a long long time and many things can happen until then. The industry are not holding their breath right now.


Mantle is dead.
This doesn't make any sense?
Got any actual proof that AMD stole anything? Like a court document of some sort? Or is it just the wet dream of some fanboy on the internet again?
Again, proof? Strangely enough, AMD's quarterly reports say just the opposite:
Quote:


> GPU ASP increased sequentially and year-over-year primarily due to a higher AIB channel ASP.


Just because you and some others keep screaming "Rebrand" doesn't mean it's a bust.
With the last point, I do agree.


----------



## Dargonplay

Quote:


> Originally Posted by *mtcn77*
> 
> Still, the brand might be the confounder, BenQ might have succeeded what Asus couldn't achieve.


We would have to check TFTCentral performance graphs and make a direct Comparison between both the PG278Q and the BenQ XL2730Z using Extreme Overdrive presets, I'd like to think Linus would have taken this into account.

EDIT: I'll EDIT this post with the performance graphs of both monitors for comparison.

EDIT 2 The Graphs:

This is the BenQ XL2730Z Performance



And this is the PG278Q



As you can see, their overdrive performance is identical, which eliminates it as a variable.

Then you have TFTCentral Input Lag Comparisons (Different to Pixel Response Times but equally important) and what you see here might surprise you as much as me.



The PG278Q's input lag is exactly the same as the BenQ XL2730Z's; the only difference is that the BenQ XL2730Z is faster at signal processing, while the PG278Q is faster on the response-time side, but the end result is the same.

This should be enough to validate Linus' tests and rule out the possibility of performance variables between brands.


----------



## mtcn77

Quote:


> Originally Posted by *Dargonplay*
> 
> We would have to check TFTCentral performance graphs and make a direct Comparison between both the PG278Q and the BenQ XL2730Z using Extreme Overdrive presets, I'd like to think Linus would have taken this into account.
> 
> EDIT: I'll EDIT this post with the performance graphs of both monitors for comparison.
> 
> EDIT 2 The Graphs:
> 
> This is the BenQ XL2730Z Performance
> 
> 
> 
> And this is the PG278Q
> 
> 
> 
> As you can see their performance is identical regarding overdrive which eliminates this as a variable.
> 
> Then you have TFTCentral Input Lag Comparisons (Different to Pixel Response Times but equally important) and what you see here might surprise you as much as me.
> 
> 
> 
> The PG278Q is actually FASTER than the BenQ XL2730Z with less input lag, meaning that the FreeSync Results should have the difference in input lag subtracted to it, making it even faster than it already is, but again, I'd like to think Linus have taken all of this into account, if he didn't, then AMD FreeSync Wins even harder.


Yes, however we need the precise benchmarks at 200 Hz to know that, as Asus is working faster at 144 Hz contrary to 165 Hz. That was the confounder I was speaking of.


----------



## Dargonplay

Quote:


> Originally Posted by *mtcn77*
> 
> Yes, however we need the precise benchmarks at 200 Hz to know that, as Asus is working faster at 144 Hz contrary to 165 Hz. That was the confounder I was speaking of.


I don't think I understand. The monitor in the test is the PG278Q, which is only capable of 144 Hz; it doesn't do 165 Hz, just like the BenQ one. They also have the exact same panel, which means they should behave the same at any given refresh rate.
Quote:


> Originally Posted by *mtcn77*
> 
> as Asus is working faster at 144 Hz contrary to 165 Hz


This is happening to a different monitor, unrelated to anything used in the test, so I don't follow you there.

Maybe you're implying that G-Sync's latency performance decreases as FPS goes up on all G-Sync monitors, not just the 165 Hz one. If that's so, then I don't know what to think. This COULD explain why G-Sync was faster than FreeSync at 45 FPS with V-Sync off, and why FreeSync totally destroyed G-Sync at higher FPS and every other setting. Maybe hardware overhead? That would line up with what AMD said about how hardware-based VRR could prove slower than software-based VRR.

I would love to see more detailed Benchmarks comparing G-Sync VS FreeSync.


----------



## Assirra

Quote:


> Originally Posted by *EightDee8D*
> 
> seems like for some AMD is there just for them to spit on it. doesn't matter what they do, AMD does nothing positive.


Yeah, because the general view of Nvidia here is that they're angels. Oh wait.

A game runs badly on AMD? Nvidia's fault.
A game runs badly on Nvidia? Nvidia's fault.

Mantle failed on its own, but some of its technologies are getting used in later APIs? AMD/Mantle was solely responsible for both DX12 AND Vulkan.
Nvidia made G-Sync and started the VRR revolution for desktops? They were just greedy! G-Sync will die! Long live FreeSync! (which AMD claimed responsibility for, even though all they did was whine at VESA and did none of the actual work)

I swear people think AMD is a charity company for the good of the people or some nonsense.


----------



## mtcn77

Quote:


> Originally Posted by *Dargonplay*
> 
> I don't think I understand, I mean, the Monitor in the test is the PG278Q which is only capable of 144Hz, it doesn't do 165Hz, just like the BenQ one, also they both have the exact same panel, which means they should behave just the same at any given refresh rate.
> This is happening to a different monitor, unrelated to anything used in the test, so I don't follow you there.


Ah, I was misinterpreting the 200 Hz chart as if the panel had been overclocked; my point was that something similar to the PG279Q's defaulting overdrive might be happening with the former panel.
So, Nvidia wins at 144 Hz with V-Sync, and AMD wins at 45 Hz with V-Sync and at 144-200 Hz without vertical synchronization enabled. Pretty interesting results, to be honest. What I don't understand now is how Linus ruled out the GPUs as a potential cause of latency.







Our fellow @TranquilTempest has shared some insight with me in the past: the method by which a frame limit is set makes a difference for frame lag.


----------



## Dargonplay

Quote:


> Originally Posted by *mtcn77*
> 
> Ah, I was misinterpreting the 200 Hz chart as if the panel was overclocked with the point being that a similar consequence to PG279Q's defaulting overdrive might be happening with the former panel.
> So, Nvidia wins at 144 Hz V-Sync & AMD wins at 45 Hz V-Sync & @144-200 Hz without vertical synchronization enabled - pretty interesting results to be honest. What I don't understand now, is how Linus has segregated the gpus as potential cause of latency.
> 
> 
> 
> 
> 
> 
> 
> Our fellow @TranquilTempest has shared some insight with me in the past that the method as to how a frame limit is set makes a difference for frame lags.


1) FreeSync has the least latency compared to G-Sync when both have V-Sync disabled: up to 12 ms less.

2) G-Sync has the least latency compared to FreeSync when both have V-Sync enabled (except at around 45 FPS): up to 8 ms less.

3) FreeSync has the least latency, destroying G-Sync, when running a game at 200 FPS (ideal for input lag) with V-Sync disabled on both: 14 ms less on average.

4) At 45 FPS, G-Sync has the least latency compared to FreeSync with V-Sync disabled on both, with an average of 19 ms less latency than FreeSync.

5) At 45 FPS, FreeSync has the least latency compared to G-Sync with V-Sync enabled on both.

Something worthy of consideration. Regarding GPU latency, don't quote me on this, but I remember benchmarks putting Nvidia's and AMD's latency performance about equal; that was a long time ago and things could have changed, so I don't know. I'm interested in hearing what TranquilTempest shared, though; I don't see how different forms of frame limiting make a difference for frame lag, unless it's V-Sync, which is itself a delay.


----------



## EightDee8D

Quote:


> Originally Posted by *Assirra*
> 
> Yea because the general view of Nvidia here is as angels, oh wait.
> A game runs bad on AMD? nvidia fault
> A game runs bad on nvidia? nvidia fault


that's why AMD has 80% market share, right? oh wait.
Quote:


> mantle failed on its own but some of the technologies are getting used in further API? AMD/mantle was solely responsible for both DX12 AND Vulkan


not solely but part of it.
Quote:


> Nvdia made Gsync and started the VRR revolution for desktops? They were just greedy! Gsync will die! long live Freesync! (which AMD claimed the responsibility for even tough all they did was whine at VESA and did nothing of the actual work)


So you are saying VRR worked on AMD GPUs just by whining? By magic, or by Jen-Hsun's jacket?

Look, I'm not saying AMD does nothing wrong; they obviously fail at executing properly. But that doesn't mean they never did anything positive: GDDR5, HBM, tessellation, MFAA, cheaper VRR. Heck, they are still improving first-gen GCN GPU performance while the so-called TITAN is forgotten by Nvidia. But for some people, none of that counts.









Just look at this thread: people comparing PhysX, instead of tessellation, to DX12's async compute, and accusing others of double standards. I mean, seriously?


----------



## Klocek001

Nvidia will be stalling async implementation for as long as possible, until they launch Volta. Of that I'm sure.


----------



## Defoler

Quote:


> Originally Posted by *MadRabbit*
> 
> Mantle is dead.
> This doesn't make any sense?
> Got any actual proof that AMD stole anything? Like an court document or some sort? Or just a wet dream of some fanboy on the internet again?
> Again, proof? Strange enough that AMD's quarterly reports say just the other thing


Have you read about eDP? Do you even know what eDP is? Do you know how it works?
Have you read about adaptive sync in DP? Have you read how it works?

Regardless of your somewhat childish writing and understanding, I did not mean stolen as in they took something tangible. They took the eDP adaptive-frequency feature from VESA, suggested adding it to DP, and called it their own, like they'd found a new continent or something.


----------



## MadRabbit

Quote:


> Originally Posted by *Defoler*
> 
> Have you read about eDP? Do you even know what eDP is? Do you know how it works?
> Have you read about adaptive sync in DP? Have you read how it works?
> 
> Regardless of your somewhat childish write and understanding, I did not mean stolen as they took actual something. They just took the eDP adaptive frequency feature from vesa, and suggested it to add to DP, and called it their own, like they found a new continent or something.


So they did not steal anything. Next time, before calling me childish, how about using the correct words for things.

You're the one who throws around big words in every AMD related topic, not me. So, call me what ever you wish, not that I really care.


----------



## Defoler

Quote:


> Originally Posted by *Klocek001*
> 
> nvidia will be stalling async implementation for as long as it's possible until they launch volta,that I'm sure.


I'm sure you have inside information on Nvidia's decisions and choices, have already reviewed Pascal, and know exactly whether it is going to work or not, or whether software implementation of queue management via drivers is going to give Nvidia less performance than AMD's implementation in hardware.

Or maybe not.
Since there are zero async benchmarks outside of beta/alpha games without mature drivers, that argument is irrelevant until an actual game using async is on sale and we have real performance numbers from it.


----------



## Defoler

Quote:


> Originally Posted by *MadRabbit*
> 
> So they did not steal anything. Next time, before calling me childish, how about use the correct words for things.
> 
> You're the one who throws around big words in every AMD related topic, not me. So, call me what ever you wish, not that I really care.


Taking a tech that was already implemented, suggesting it for another implementation, and calling it their own: yes, in my eyes they "stole" it by calling it their own, claiming they made "FreeSync" when all they did was make a suggestion to VESA.

And yes, your fixating on it is childish at the end of the day. You do not read the whole context; you fixate on a single word and attack the whole post through that word, which is still pretty much correct.


----------



## MadRabbit

steal: verb (used with object); stole, stolen, stealing.

1. to take (the property of another or others) without permission or right, especially secretly or by force.

I don't care what you call it. Unless you can make them re-do the dictionary you're still accusing them of stealing which they have not done.

I did read your post, from start to finish. And most of the "points" you brought out are, let's say, false; one makes no sense to begin with. So how about you don't try to pin on me an impression you came up with yourself, like the notion that AMD stole something. If you keep on like that, call up VESA and let them sue AMD. Until then, use the right word: copied.

Also, you said AMD didn't do any work whatsoever and just complained to VESA, and that's why they got it. Can you humor us a little more?


----------



## Klocek001

Quote:


> Originally Posted by *Defoler*
> 
> I'm sure you have inside information into nvidia decisions and choices, already reviewed pascal and know exactly if it is going to work or not, or whether software implementation of queue management via drivers is going to give nvidia less performance than AMD's implementation via hardware.
> 
> Or maybe not.
> Since there are zero benchmarks of async outside of beta/alpha version games without any mature drivers, that argument is irrelevant until an actual async using game is being sold and we have actual performance from it.


I never meant it to be about software vs. hardware performance; you failed to comprehend. I'm sure Nvidia are thrilled about how much work they've got coming to implement async via software, rather than not using it at all.


----------



## Tivan

Quote:


> Originally Posted by *Defoler*
> 
> Taking a tech already implemented, suggest said tech to another implementation, and calling it their own, yes, in my eyes, they "stole" it by calling it their own, claiming that they made "freesync" just for their suggestion to vesa.
> 
> And yes, you fixating no it, is childish in the end of the day. Because you do not read by the whole context, but fixate on a certain word and attack the whole post by that certain word, which is still pretty much correct.


If you want to water down the term "stealing" this much, then rejoice: the history of art and science is a history of thieves.

And AMD definitely made FreeSync. It's their implementation of support for the adaptive-sync standard they helped push on the standalone-monitor side. FreeSync is not a general term for variable-refresh-rate tech without proprietary modules; it's just one implementation of that tech, exactly like G-Sync for laptops is. Same thing.

And you don't get a moral high horse for adding complexity to the procedure with an external module (vs the internal solution on 'Freesync for Games'-compatible AMD cards), by the way. The idea is what counts to me.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Defoler*
> 
> Mantle is dead.
> Vulkan is alive only because someone else took the reins from AMD.
> Freesync is not new. It was "stolen" from the already existing eDP.
> Their GPUs place is great if losing market and money means a great place. The 300 series ended up being a bust, and the promised fiji and HBM as a "top end killer" ended up also being just a "meh" card. They are not marketing geniuses, and instead of attacking nvidia, they should concentrate on their own products.
> 
> Claims about polaris being strong, unless a crystal ball is involved, lets wait and see. I hope it does. AMD's track record is not great though.
> And zen, has been promised for about 1.5 years now. it is only going to come out at the end of 2016. That is a long long time and many things can happen until then. The industry are not holding their breath right now.


So what you are saying is that because VESA had their own version of adaptive sync, and then AMD made one (as did Nvidia), that means AMD stole it? Man, that logic is cringeworthy.


----------



## Klocek001

Quote:


> Originally Posted by *Tivan*
> 
> If you want to water down the term stealing this much, then rejoice, the history of art and science is a history of thiefs.


end of this part of the discussion.
move on.


----------



## sugarhell

Quote:


> Originally Posted by *airfathaaaaa*
> 
> so what you are saying is because vesa had a version of freesync as their own and then amd came and made one as also nvidia that means they stole it? man that logic is so cringeworthy


Nah you mean the whole post is cringe


----------



## christoph

Quote:


> Originally Posted by *MoorishBrutha*
> 
> Nvidia told everyone that the 980ti is the first FULL supported DX12 card. Well, one of the features of DX12, Async Compute, is not natively supported by the 980Ti or any of Nvidia's cards for that matter so Nvidia falsely advertised........*once again*.


but they did say later that Nvidia hardware MIGHT NOT BE FULLY supporting DX12


----------



## kx11

Quote:


> Originally Posted by *christoph*
> 
> but they did say later that Nvidia hardware MIGHT NOT BE FULLY supporting DX12


We all know Nvidia will brand Pascal GPUs with "native DX12 support", so no one believed that Kepler/Maxwell got native DX12 support.


----------



## p4inkill3r

Quote:


> Originally Posted by *sugarhell*
> 
> Nah you mean the whole post is cringe


Par for the course, unfortunately.


----------



## airfathaaaaa

Quote:


> Originally Posted by *christoph*
> 
> but they did say later that Nvidia hardware MIGHT NOT BE FULLY supporting DX12


To this day, Nvidia has never said anything like this.
The closest thing to such a statement was the do's and don'ts paper they released concerning DX12 game development.


----------



## Colossus1090

Quote:


> Originally Posted by *GoLDii3*
> 
> Higher settings mean the image quality has to be higher due to the increased workload.
> 
> Please look for yourself the comparison between Godrays Ultra and Godrays Low.
> 
> http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-002-ultra-vs-low.html
> http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-004-ultra-vs-low.html
> http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-001-ultra-vs-low.html
> http://images.nvidia.com/geforce-com/international/comparisons/fallout-4/fallout-4-god-rays-quality-interactive-comparison-006-ultra-vs-low.html


Holy cow, how can those fps drops be justified between the two? I'm playing with them off


----------



## christoph

Quote:


> Originally Posted by *airfathaaaaa*
> 
> to this day nvidia never said something like this
> the only thing close enough as a statement was the do and donts paper they released concerning dx12 games development


Yes they did; they just said it so quietly that no one noticed. But when Nvidia card owners begin to complain, they can pull that card out, so no more complaints, like with the 970's little issue...


----------



## MadRabbit

Quote:


> Originally Posted by *christoph*
> 
> yes they did


Mind refreshing my memory since I really can't remember it either.


----------



## christoph

Quote:


> Originally Posted by *MadRabbit*
> 
> Mind refreshing my memory since I really can't remember it either.


it was posted here on OCN and YOU were commenting there too, but don't worry let me find the link to it


----------



## MadRabbit

Quote:


> Originally Posted by *christoph*
> 
> it was posted here on OCN and YOU were commenting there too, but don't worry let me find the link to it


Please do







I can't remember Nvidia actually saying anything like that. All I remember is exactly what he said: the do's and don'ts paper they posted.


----------



## Themisseble

Quote:


> Originally Posted by *kx11*
> 
> we all know Nvidia will brand Pascal GPUs with " native DX12 support " so no one believed that kepler\maxwell got native dx12 support


Pascal will have native, but not full support of DX12. DX12 features will develop over time.


----------



## Kollock

Quote:


> Originally Posted by *Tivan*
> 
> https://www.youtube.com/watch?v=6MAWl3YzsTE
> 
> neither the slow traveling rockets, nor the flying things, have the glowing effect present on the 390X, in this video, which uses 'Nvidia driver version 358.50'
> 
> If that's what you mean.
> 
> edit:
> this:
> 
> 
> and this effect:
> 
> 
> Just pointing out that this is what people seem concerned about, not trying to prove Nvidia doesn't have it or anything, but doesn't seem present here, and maybe on more driver/game versions. I don't know if this was fixed with current day patch/driver or is going to be addressed in any way.


To clarify, was this effect gone in a specific version of the NV driver? That effect has already been rewritten with a more optimal and better-looking version, so this will be a moot point anyway. The newer effect hasn't gone out the door just yet, though.


----------



## sugarhell

Quote:


> Originally Posted by *Kollock*
> 
> To clarify, was this effect gone in a specific version of the NV driver? This effect has already been rewritten with a more optimal and better looking version so this will be a mute point anyway. This newer effect hasn't gone out the door just yet though.


This was 5 days ago




It seems the same to me as the above screenshots. It lacks some particles and lighting.


----------



## Kollock

Quote:


> Originally Posted by *sugarhell*
> 
> This was 5 days ago
> 
> 
> 
> 
> It seems the same to me as the above screenshots. It lacks some particles and lightning


Hmm... well, I suppose we could investigate it but that effect has already been completely changed, just not out yet in public. So it's probably a waste of time for us at this point to investigate. I haven't seen any noticeable differences between AMD and NVidia myself.

I'm very curious about what the Hitman folks are doing. At any rate, we've got some big D3D12 updates coming very shortly. I'd say more, but I'm probably already in trouble.


----------



## Kand

The best implementation of Async compute is no implementation at all.


----------



## provost

Quote:


> Originally Posted by *Kollock*
> 
> Hmm... well, I suppose we could investigate it but that effect has already been completely changed, just not out yet in public. So it's probably a waste of time for us at this point to investigate. I haven't seen any noticeable differences between AMD and NVidia myself.
> 
> I'm very curious about what the Hitman folks are doing. At any rate, we've got some big D3D12 updates upcoming very shortly. I'd say more, but I'll probably already in trouble


Thanks for the update. The way I look at this market, Microsoft is going to be in the driver's seat as far as where developers' focus will be in utilizing DX12, async compute, etc. If Microsoft (Intel?) want to cross-market disparate platforms, then that's exactly what will happen. The graphics card companies can either get on this bus or risk being left in the dust with whatever proprietary pipe dream they may have. Perhaps my assessment is a little melodramatic... lol, but please feel free to give us a holistic view of the state of affairs from your viewpoint, if you would be kind enough to share.


----------



## sugarhell

Quote:


> Originally Posted by *Kand*
> 
> The best implementation of Async compute is no implementation at all.


SO salty.


----------



## infranoia

Quote:


> Originally Posted by *Kand*
> 
> The best implementation of Async compute is no implementation at all.


You're in luck! The industry has an option for you.


----------



## spyshagg

Not using async is like keeping graphics technology from having its own industrial revolution.

Just because GPU-Z says your GPU is at 100% usage does not mean it's true; it's actually false. Technologies such as async get the true utilization much closer to 100%, giving you free performance.

No amount of love for a corporation should ever, EVER stop the industry from moving forward.


----------



## mtcn77

Quote:


> Originally Posted by *Kand*
> 
> The best implementation of Async compute is no implementation at all.


Why, _doesn't 40% performance increase under stress conditions have any significance at all_?
Quote:


> Originally Posted by *spyshagg*
> 
> Not using async is like keeping the graphics technology from having its own industrial revolution.
> 
> Just because gpu-z says your gpu is at 100% usage, does not mean its true. Its actually false. Technologies such as async get the true usage much closer to 100%, giving you free performance.
> 
> No amount of love for a corporation should ever, EVER stop the industry from moving forward.


----------



## Assirra

Is it really that big of a difference?
I can understand some but 40% seems borderline crazy.
If it truly is like that, it cannot come fast enough.


----------



## mtcn77

Quote:


> Originally Posted by *Assirra*
> 
> Is it really that big of a difference?
> I can understand some but 40% seems borderline crazy.
> If it truly is like that, it cannot come fast enough.


Essentially not that big of a deal if you are going to test at 800x600 for hilarity, but those are not the moments when GPUs are stressed. The Magicka developer's word is good enough for me; I think Second Son's developer told the same story to the press. 33 ms down to 27 ms on average, plus another 10 ms saved at really heavy loads, is great progress, imo. I still don't know how we test the '10 ms' part, though: whether to count it as "33 ms > 23 ms" or as "43 ms > 33 ms" is anyone's guess. For me, I stick to sensationalism: 43% is a bigger rate than 30%, so I cross my fingers for the best.
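To sanity-check those percentages (my own arithmetic, not from any benchmark): converting frame times to frame-rate gains shows where the 43% and 30% figures come from, and why the two readings of the "10 ms" saving differ.

```python
def fps_gain(before_ms: float, after_ms: float) -> float:
    """Percentage frame-rate gain implied by a frame-time reduction."""
    return (before_ms / after_ms - 1) * 100

# Average case quoted above: 33 ms -> 27 ms per frame
print(f"{fps_gain(33, 27):.0f}%")  # 22% more frames per second

# Reading 1: the extra 10 ms comes off the 33 ms average (33 -> 23)
print(f"{fps_gain(33, 23):.0f}%")  # 43%

# Reading 2: the 10 ms comes off a heavier 43 ms frame (43 -> 33)
print(f"{fps_gain(43, 33):.0f}%")  # 30%
```

So the ambiguity in the post is real: the same 10 ms saving reads as either a 43% or a 30% frame-rate gain depending on the baseline.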


----------



## Tivan

Quote:


> Originally Posted by *Kollock*
> 
> To clarify, was this effect gone in a specific version of the NV driver? This effect has already been rewritten with a more optimal and better looking version so this will be a mute point anyway. This newer effect hasn't gone out the door just yet though.


It seems to be gone in all versions people found videos of here (nvidia driver 355.98 and 358.50), but nice to hear you got an improved version incoming! Since the effect looks really cool


----------



## STEvil

Quote:


> Originally Posted by *Assirra*
> 
> Is it really that big of a difference?
> I can understand some but 40% seems borderline crazy.
> If it truly is like that, it cannot come fast enough.


Quote:


> Originally Posted by *mtcn77*
> 
> Essentially not that big of a deal, if you are going to test at 800x600 for hilarity, but these are not moments when gpus are stressed. Magicka developer's word is good enough for me. I think Second Son's developer told the same story to the press. 33 ms > 27 ms on average, plus 10 ms at really heavy loads is a great progress, imo. I still don't know how we test out the '10 ms' part, though - do we count "33 ms > 23 ms", or "43 ms > 33 ms" is anyone's guess. For me, I stick to sensationalism: 43% is a bigger rate than 30% in total, so I cross my fingers for the best.


I think the performance difference will be similar to the addition of T&L back in the day, or somewhere on that level. It might take a while, or even go further as the next architectures are built.


----------



## mtcn77

You know, if we round the number up a little to 1.429, the Fury has just enough shaders to end up 25% more powerful than the Fury X. Let the sensation begin...
Note: wow, it doesn't even need more rounding; I just did the math again, and it comes out to 5120 exactly.
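For context, the arithmetic behind that claim (my own check, assuming the Fury's 3584 and the Fury X's 4096 stream processors): the ~1.429x figure is almost exactly 10/7, which lands on 5120 on the nose.

```python
fury_shaders = 3584    # R9 Fury stream processors
fury_x_shaders = 4096  # R9 Fury X stream processors

# The ~1.429x async gain quoted in the thread is almost exactly 10/7
gain = 10 / 7  # ≈ 1.4286

effective = fury_shaders * gain
print(effective)              # 5120.0, exactly
print(fury_x_shaders * 1.25)  # 5120.0 -> 25% above the Fury X's count
```

So an idealized 10/7 speedup would put the Fury's "effective" shader count exactly 25% above the Fury X's, which is the sensational reading in the post.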


----------



## KyadCK

Quote:


> Originally Posted by *Kand*
> 
> The best implementation of Async compute is no implementation at all.


Soooooo you just hate efficiency then? It really isn't an AMD only thing forever, it just is for now.
Quote:


> Originally Posted by *Assirra*
> 
> Is it really that big of a difference?
> I can understand some but 40% seems borderline crazy.
> If it truly is like that, it cannot come fast enough.


Ya, I can see it. Any time your GPU is busy doing other tasks it can do some compute on the side at no penalty. Think of it as Hyper Threading on steroids, it may as well be.
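To make that "Hyper-Threading on steroids" picture concrete, here is a toy model (my own illustration, not anything from the thread; the g and c workloads are hypothetical, and real GPU scheduling is far more complex): serially, compute work queues up behind graphics, while an idealized async scheduler hides it entirely inside graphics stalls.

```python
def frame_time_serial(graphics_ms: float, compute_ms: float) -> float:
    """Compute runs only after graphics finishes (no async compute)."""
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms: float, compute_ms: float) -> float:
    """Idealized async: compute fills idle gaps in the graphics queue."""
    return max(graphics_ms, compute_ms)

g, c = 25.0, 8.0  # hypothetical per-frame workloads, milliseconds
print(frame_time_serial(g, c))  # 33.0 ms without async
print(frame_time_async(g, c))   # 25.0 ms with ideal overlap: "free" perf
```

Real gains sit somewhere between these two extremes, since compute can only hide in the portions of the frame where shader units would otherwise sit idle.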


----------



## Defoler

Quote:


> Originally Posted by *KyadCK*
> 
> Soooooo you just hate efficiency then? It really isn't an AMD only thing forever, it just is for now.


Even though Nvidia can only implement the queueing in drivers right now, the actual compute is done on their GPU, just as with AMD.
Since Nvidia's last driver fix for Ashes to actually implement it, we have seen Nvidia being just as competitive at it.


----------



## Kand

Quote:


> Originally Posted by *KyadCK*
> 
> Soooooo you just hate efficiency then? It really isn't an AMD only thing forever, it just is for now.


Windows 10. Actually.


----------



## BeerPowered

AMD: Foiled by Nvidia again
Nvidia: [Evil Laugh] You will never beat Gameworks
AMD: Just you wait.....

[6 Months Later]

AMD: This is Async Compute
Nvidia: [Darth Vader voice] Nooooooooooooooo!
AMD:Yes We win
Nvidia: [Puts large sums of cash on the table]
Nvidia: Calling all Dev's. Free Money if you don't use A-Sync Compute
AMD: Dang it we can't compete with that

[2 Months Later]

AMD: Hey Square. I know we can't buy you off with money but I think we got something better!
Square: Okay
AMD: [Brings Booth Babes to Square Offices]
Square: _kanpai!_ A-Sync compute is awesome
Nvidia: Dang we let one slip beyond our grasp


----------



## spyshagg

Quote:


> Originally Posted by *Kollock*
> 
> Hmm... well, I suppose we could investigate it but that effect has already been completely changed, just not out yet in public. So it's probably a waste of time for us at this point to investigate. I haven't seen any noticeable differences between AMD and NVidia myself.
> 
> I'm very curious about what the Hitman folks are doing. At any rate, we've got some big D3D12 updates upcoming very shortly. I'd say more, but I'll probably already in trouble


I understand your point, but because benchmarks, analysis, and conclusions were made based on that version, the graphical discrepancy matters at that point in time.

I don't know how it went unnoticed; people are not looking at image quality anymore. No one expects mistakes from the past (lowering image quality) to be made again in 2015, but they were, and we should be aware of it for our own good.

Losing a company such as AMD to this type of dishonesty would be criminal and everybody's loss.


----------



## airfathaaaaa

So what's next? Will Nvidia pull another 3DMark and render the game in 2D?


----------



## Serios

Quote:


> Originally Posted by *Defoler*
> 
> Even though nvidia can only implement the queue it in drivers right now, the actual compute is done on their GPU just like with AMD.
> *Since the last drivers fix from nvidia for ashes to actually implement it, we have seen nvidia being just as competitive on it*.


I've seen no mention of that, you are just assuming.


----------



## NightAntilli

Quote:


> Originally Posted by *Defoler*
> 
> Even though nvidia can only implement the queue it in drivers right now, the actual compute is done on their GPU just like with AMD.
> Since the last drivers fix from nvidia for ashes to actually implement it, we have seen nvidia being just as competitive on it.


This is not true. The driver only brought nVidia's performance of DX12 up to DX11 level, since DX11 was faster in the earlier versions.

nVidia's current hardware cannot do concurrent graphics + compute asynchronously, while AMD's can. This has been tested extensively.


----------



## Serios

Yeah, I did a quick search just to be sure. If Nvidia had enabled async in AotS, logically all the sites would have benchmarked it and talked about it, but nothing. Also, I haven't seen an Nvidia driver whose release notes mention they have enabled that feature in AotS.


----------



## cowie

Quote:


> Originally Posted by *spyshagg*
> 
> I understand your point, but because benchmarks and analysis and conclusions were made based on that version, the graphical discrepancy matters in that point in time.
> 
> I don't know how it went unnoticed. People are not looking at image quality anymore. No one expects mistakes from the past (lowering image quality) to be made again in 2015, but they were and we should be aware for our own good.
> 
> *Losing a company such as AMD to this type of dishonesty would be criminal and everybody's loss*.


You don't have to answer, but what do you mean?

That game is not even out yet; it's a beta. For all we know a lot of graphics might be cut out to make it easier to run.
How many times has a game dev done this? Like a million times. I am not saying this game will, but I would not put it past any game maker in this day and age.
That game does not run too well on anything anyway, judging by the higher-res benches.


----------



## KyadCK

Quote:


> Originally Posted by *Defoler*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Soooooo you just hate efficiency then? It really isn't an AMD only thing forever, it just is for now.
> 
> 
> 
> Even though nvidia can only implement the queue it in drivers right now, the actual compute is done on their GPU just like with AMD.
> Since the last drivers fix from nvidia for ashes to actually implement it, we have seen nvidia being just as competitive on it.
Click to expand...

With more CPU load that doesn't need to exist, and with pausing the GPU to do them, and with one of the lightest ASync loads. This very thread is about it being used more.
Quote:


> Originally Posted by *Kand*
> 
> Quote:
> 
> 
> 
> Originally Posted by *KyadCK*
> 
> Soooooo you just hate efficiency then? It really isn't an AMD only thing forever, it just is for now.
> 
> 
> 
> Windows 10. Actually.
Click to expand...

So other APIs do not exist? Apple's Metal, Vulkan?

Huh. I've been edumakated.
Quote:


> Originally Posted by *Serios*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Defoler*
> 
> Even though nvidia can only implement the queue it in drivers right now, the actual compute is done on their GPU just like with AMD.
> *Since the last drivers fix from nvidia for ashes to actually implement it, we have seen nvidia being just as competitive on it*.
> 
> 
> 
> I've seen no mention of that, you are just assuming.
Click to expand...

The new driver allows the cards to accept the DX12 commands associated with using async queues.

It then proceeds to half-ass it on the CPU: load the work into a queue at the driver level, pause the GPU, context-switch, process it, and context-switch back to the graphics load.

It's a typical software solution to a hardware problem, until they add the hardware.
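As a toy illustration of why that matters, here is a back-of-the-envelope timing model (every number below is invented for the sketch, not measured): with hardware async the compute work can hide under otherwise-idle graphics time, while a driver-level fallback serializes the two loads and pays for the context switches on top.

```python
# Toy frame-time model; every number here is invented for illustration.
graphics_ms = 12.0   # time to execute the frame's graphics command lists
compute_ms = 4.0     # time for the async compute work (post-processing etc.)
switch_ms = 0.25     # assumed cost of one GPU context switch

# Hardware async compute (ACE-style): compute fills otherwise-idle units
# while graphics runs, so in the best case it hides completely.
concurrent_ms = max(graphics_ms, compute_ms)

# Driver-level fallback as described above: pause graphics, context-switch
# to compute, run it, context-switch back. The loads serialize and the
# two switches are pure overhead.
serialized_ms = graphics_ms + compute_ms + 2 * switch_ms

print(concurrent_ms)  # 12.0
print(serialized_ms)  # 16.5
```

In this best-case model the fallback loses 4.5 ms per frame; real gains depend entirely on how much idle capacity the graphics workload actually leaves for compute to overlap with.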
Quote:


> Originally Posted by *NightAntilli*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Defoler*
> 
> Even though nvidia can only implement the queue it in drivers right now, the actual compute is done on their GPU just like with AMD.
> Since the last drivers fix from nvidia for ashes to actually implement it, we have seen nvidia being just as competitive on it.
> 
> 
> 
> This is not true. The driver only brought nVidia's performance of DX12 up to DX11 level, since DX11 was faster in the earlier versions.
> 
> nVidia's current hardware cannot do concurrent graphics + compute asynchronously, while AMD's can. This has been tested extensively.
Click to expand...

^ That.


----------



## spyshagg

Quote:


> Originally Posted by *cowie*
> 
> you don't have to answer but what do you mean?


Hi

AMD is in a fragile position without there being a good reason for it. Their graphics cards' performance should warrant a much stronger market position, yet it's not happening. Why?

Market perception, perhaps. And this market is easily swayed by reviews, opinions, etc., as it should be! But it seems that every time AMD's market perception is brought down, Nvidia has a finger in it. And they are willing to go dirty, big time. GameWorks was a stroke of genius. It's nothing but a cancer, but it's being spread as a cure for games, for the benefit of gamers. And by gamers I mean everyone who buys their currently available hardware, not their 2-year-old hardware. And now we have async, a DX12-spec feature in its own right, capable of providing double-digit performance gains when supported. Maxwell can't do it natively, so what do they do? Fix the game with drivers! As it turns out, image quality was brought down, and nobody noticed when it mattered. But that is OK!!! It's OK because performance reviews and articles were already written! The market was already swayed when it mattered. The resulting perception? Nvidia only needed drivers! Its async performance is on par with AMD. Job done, Nvidia.

Only our money can keep companies honest. Spend it on fallacies, tricks and dishonesty, and soon that will be the only choice on the market.


----------



## Kollock

Quote:


> Originally Posted by *KyadCK*
> 
> With more CPU load that doesn't need to exist, and with pausing the GPU to do them, and with one of the lightest ASync loads. This very thread is about it being used more.
> So other APIs do not exist? Apple's Metal, Vulkan?
> 
> Huh. I've been edumakated.
> The new driver allows them to be capable of accepting the DX12 command associated with using them.
> 
> It then proceeds to half-ass it on the CPU, load it into the queue at the driver level, pause the GPU, context switch, process it, and context switch back to graphical load.
> 
> It's a typical software solution for a hardware problem until they add the hardware.
> ^ That.


Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.


----------



## escksu

Quote:


> Originally Posted by *Kollock*
> 
> Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.


Does NV hardware support Async compute?


----------



## EightDee8D

Quote:


> Originally Posted by *escksu*
> 
> Does NV hardware support Async compute?


Probably not at the same level AMD does.


----------



## p4inkill3r

Quote:


> Originally Posted by *escksu*
> 
> Does NV hardware support Async compute?


Rhetorical question, right?


----------



## Kollock

Quote:


> Originally Posted by *KyadCK*
> 
> With more CPU load that doesn't need to exist, and with pausing the GPU to do them, and with one of the lightest ASync loads. This very thread is about it being used more.
> So other APIs do not exist? Apple's Metal, Vulkan?
> 
> Huh. I've been edumakated.
> The new driver allows them to be capable of accepting the DX12 command associated with using them.
> 
> It then proceeds to half-ass it on the CPU, load it into the queue at the driver level, pause the GPU, context switch, process it, and context switch back to graphical load.
> 
> It's a typical software solution for a hardware problem until they add the hardware.
> ^ That.


Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.
Quote:


> Originally Posted by *escksu*
> 
> Does NV hardware support Async compute?


I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.


----------



## Dargonplay

Quote:


> Originally Posted by *spyshagg*
> 
> Hi
> 
> "AMD is in a fragile position without there being a good reason for it. Their graphics cards performance should warrant a much stronger market position. Yet its not happening. Why?
> Market perception perhaps. And this market is easily swayed by reviews opinions etc, as it should! but it seems that every time AMD market perception is brought down, nvidia as finger on it. And they are willing to go dirty big time. Gameworks was a stroke of genius. Its nothing but a cancer, but its being spread as a cure for games for the benefit of gamers. And by gamers I mean everyone who buys their currently available hardware, not their 2 year old hardware. And now, we have async. A dx12 spec feature in its own right. Capable of providing double digits performance gains when supported. Maxwell cant do it natively, so what do they do? fix the game with drivers! as it turns out, image quality was brought down and nobody noticed when it mattered. But that is OK!!! Its OK because performance reviews and articles were already written! the market was already swayed when it mattered. The resulting perception? = nvidia only needed drivers! its async performance is on PAR with AMD. Job done nvidia.
> 
> Only our money can keep companies honest. spend it on fallacies tricks and dishonesty and soon this will be the only choice on the market.


I said it before and I'll quote myself:

_AMD have countless times stated they can't optimize anything for GameWorks titles during the development process; Nvidia is the only one who can, and Nvidia must approve a developer's decision to add AMD's own code on a case-by-case basis to optimize the game for Radeon cards. AMD have to make these optimizations in their drivers instead of having them released with the game, because most of the time the approval comes late in the development process or not at all. This means the game has to launch greatly favoring Nvidia's architecture.

In all GameWorks titles AMD always loses the performance war miserably in the early game but always catches up in the late game. The thing is, from a marketing standpoint... the late game doesn't matter.

When a game comes out, every website will benchmark the soul out of it. Once the early benchmarks for GameWorks titles are out, AMD's reputation gets tainted for the remainder of that generation, and people always perceive them as the cheap offering, even when they are in fact the best-performing offering at the time (late game).

Also, when casual players are looking to upgrade, they only check benchmarks from Techspot, AnandTech or the like, who only benchmark games when they're freshly out on the market and never revisit them with new benchmarks after AMD is done optimizing the game in their drivers, thanks to Nvidia GameWorks and their shady business model.

The tessellation scandal, HairWorks effects, god rays, PhysX, GameWorks... they all have something in common: they hurt AMD more than they hurt Nvidia, and they grant Nvidia the performance crown when games are releasing, which in the end is all that matters for newcomers and people looking to upgrade, who just do a few benchmark searches when deciding on a new video card.
This right here is what I'm talking about._

We GAMERS have to stop GameWorks. If we don't, I promise you, we'll end up with games being available only on one company's hardware and locked out for the competition. I could absolutely accept AMD getting hammered by a superior product, a solid choice from Nvidia that wins solely on its performance and features, rather than one carried by shady business decisions that hurt gamers to earn more money and make their sometimes inferior products look better. I loathe what they are doing, and if we don't stop them we will face the dark ages of gaming. Boycotts have greater power when combined with a restrictive wallet.

All those people wishing AMD gone, none of them understand how this industry, or any industry, would change under a monopoly; history has proven that such a change is never for the better.
Quote:


> Originally Posted by *Kollock*
> 
> Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.


The performance difference you're seeing with new drivers is caused by "Nvidia's game optimizations", which is a funny way to describe lowering graphics settings through drivers.


----------



## 3DVagabond

Quote:


> Originally Posted by *Dargonplay*
> 
> I said it before and I'll quote my self
> We GAMERS have to stop Gameworks, if we don't I promise you, we'll end up with games being available only in one Company's hardware and locked for the competition, I absolutely understand AMD Getting hammered down by a superior product, one that destroys AMD's offering, a solid choice in the market coming from Nvidia that wins solely for its performance and supreme features and not carried by shady business decisions that hurt gamers to earn more money and make their sometimes inferior products look better, I loathe what they are doing and if we don't stop them we will face the dark ages of Gaming, Boycotts have greater power when combined with a restrictive wallet.
> 
> All those people wishing AMD to be gone, none of them understand how this industry, any industry would be changed when there's a monopoly, history has proven that this change is never for the better.


All people have to do is, 1) stop preordering and 2) don't buy the first week of release. That would be enough of a boycott to get their attention. People won't wait a week though.


----------



## christoph

Quote:


> Originally Posted by *spyshagg*
> 
> Hi
> 
> AMD is in a fragile position without there being a good reason for it. Their graphics cards performance should warrant a much stronger market position. Yet its not happening. Why?
> Market perception perhaps. And this market is easily swayed by reviews opinions etc, as it should! but it seems that every time AMD market perception is brought down, nvidia as finger on it. And they are willing to go dirty big time. Gameworks was a stroke of genius. Its nothing but a cancer, but its being spread as a cure for games for the benefit of gamers. And by gamers I mean everyone who buys their currently available hardware, not their 2 year old hardware. And now, we have async. A dx12 spec feature in its own right. Capable of providing double digits performance gains when supported. Maxwell cant do it natively, so what do they do? fix the game with drivers! as it turns out, image quality was brought down and nobody noticed when it mattered. But that is OK!!! Its OK because performance reviews and articles were already written! the market was already swayed when it mattered. The resulting perception? = nvidia only needed drivers! its async performance is on PAR with AMD. Job done nvidia.
> 
> Only our money can keep companies honest. spend it on fallacies tricks and dishonesty and soon this will be the only choice on the market.


this^


----------



## provost

Quote:


> Originally Posted by *3DVagabond*
> 
> All people have to do is, 1) stop preordering and 2) don't buy the first week of release. That would be enough of a boycott to get their attention. People won't wait a week though.


Hmmm, well, I went a step further and ended up getting FO4 and Just Cause 3 (before I bought this new Fury card) on the console. Having a blast in my spare time, and may soon purchase Rise of the Tomb Raider on a console too, or maybe on PC, provided I am absolutely sure that any performance degradation related to Gimpworks has been completely mitigated... Lol. And I don't really believe in pre-orders anyway...


----------



## Mahigan

Quote:


> Originally Posted by *Kollock*
> 
> Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.
> I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.


Now this I've got to see... Will Maxwell execute graphics + compute commands concurrently, or will "asynchronous compute" simply mean that there is no defined order in which compute commands are executed?

AMD appear to stress that performance gains are best achieved through concurrent execution of graphics and compute commands, whereas asynchronous compute doesn't really mean that in the computer-science world.

Currently the prevailing conclusion (sourced from across the web) has been:

Nvidia executes asynchronous compute + graphics code synchronously under DX12. Nvidia supports async compute through Hyper-Q in CUDA, but Hyper-Q doesn't support the additional wait conditions of barriers (a DX12 requirement). So no, there is no async compute + graphics for Fermi, Kepler or Maxwell under DX12 currently.

Let me explain. Microsoft have introduced additional compute queues into 3D apps with their DX12 API:

- Graphics queue for primary rendering tasks
- Compute queue for supporting GPU tasks (lighting, post-processing, physics, etc.)
- Copy queue for simple data transfers

Command lists from a specific queue are still executed synchronously, while those in different queues can execute asynchronously (i.e. concurrently and in parallel). What does asynchronous mean here? It means that the order of execution of each queue in relation to another is not defined: workloads submitted to these queues may start or complete in a different order than they were issued. Fences and barriers apply only to their respective queue. When the workload in one queue is blocked by a fence, the other queues can still be running and submitting work for execution. If synchronization points between two or more queues are required, they can be defined and enforced using fences.
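To make those queue-and-fence semantics concrete, here is a minimal toy scheduler in Python (threads standing in for hardware queues; the pass names and the `CommandQueue` class are invented for illustration, and nothing here is actual D3D12 API): each queue executes its own command lists in order, different queues run concurrently, and a fence forces an ordering point between two queues.

```python
import threading
from queue import Queue

class CommandQueue:
    """Toy model of a D3D12-style queue: in-order within the queue,
    concurrent with respect to other queues."""
    def __init__(self, name, log, log_lock):
        self.name = name
        self.log = log              # shared record of executed command lists
        self.log_lock = log_lock
        self.commands = Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def _run(self):
        while True:
            item = self.commands.get()
            if item is None:          # sentinel: shut the queue down
                return
            kind, payload = item
            if kind == "signal":      # fence signal: mark the fence reached
                payload.set()
            elif kind == "wait":      # fence wait: blocks only this queue
                payload.wait()
            else:                     # "exec": run a command list
                with self.log_lock:
                    self.log.append((self.name, payload))

    def execute(self, label):
        self.commands.put(("exec", label))

    def signal(self, fence):
        self.commands.put(("signal", fence))

    def wait(self, fence):
        self.commands.put(("wait", fence))

    def flush(self):
        self.commands.put(None)
        self.worker.join()

log, log_lock = [], threading.Lock()
gfx = CommandQueue("graphics", log, log_lock)
comp = CommandQueue("compute", log, log_lock)

fence = threading.Event()           # stand-in for a D3D12 fence
gfx.execute("shadow pass")
gfx.signal(fence)                   # graphics signals after the shadow pass
comp.wait(fence)                    # compute blocks until that fence...
comp.execute("ambient occlusion")   # ...then runs, concurrently with gfx
gfx.execute("main pass")            # may finish before or after the AO job

gfx.flush()
comp.flush()
```

The only guaranteed orderings are within each queue and across the fence: "shadow pass" precedes both "main pass" and "ambient occlusion", while "main pass" versus "ambient occlusion" is deliberately undefined, which is exactly the asynchrony described above.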

Similar features have been available under OpenCL and CUDA for some time. The fences and signals under DX12 map directly to a subset of the event system under OpenCL and CUDA. Under DX12, however, barriers have additional wait conditions. These wait conditions are not supported by either OpenCL or CUDA; instead, a write-through of dirty buffers needs to be explicitly requested. Therefore asynchronous compute + graphics under DX12, though similar to asynchronous compute under OpenCL and CUDA, requires explicit feature support.

These new queues are also different from the classic graphics queue. While the classic graphics queue can be fed with compute commands, copy commands and graphics commands (draw calls), the new compute and copy queues can only accept compute and copy commands respectively, hence their names.

For Maxwell, compute and graphics can't be active at the same time under DX12 currently; it is theorized that this is because there is only a single front-end unit (the command processor) rather than access to ACEs as well. Copy commands, however, can run concurrently with graphics and compute commands thanks to the inclusion of more than one DMA engine in Maxwell. We see this when looking at how Fable Legends executes its various queues. What Nvidia would need, in order to execute graphics and compute commands asynchronously, is to add support for the additional barrier wait conditions in their Hyper-Q implementation. Why? This would expose the additional execution units under Hyper-Q. The Hyper-Q interface used for CUDA's concurrent execution supports asynchronous compute, as we see in DX11 + PhysX titles (the Batman Arkham series, for example). Hyper-Q is, however, not compatible with the DX12 API as of the time of writing (for the reasons mentioned above). If it were compatible, there would be a hardware limit of 31 asynchronous compute queues and 1 graphics queue (as AnandTech reported).

So, all that to say: if you fence often, you can get Nvidia hardware to run the asynchronous compute + graphics code synchronously. You also have to make sure you use large batches of short-running shaders; long-running shaders would complicate scheduling on Nvidia hardware and introduce latency. Oxide, because they were using AMD-supplied code, ran into this problem in Ashes of the Singularity (according to posts over at overclock.net).

Since AMD are working with IO on the Hitman DX12 path, you can be sure that the DX12 path will be AMD-optimized. That means less fencing and longer-running shaders.

For Hitman, Nvidia basically have to work with IO as well in order to add a vendor-ID-specific DX12 path (like we saw Oxide do). It's probably not worth it, seeing as Nvidia have little to gain from DX12 over DX11. AMD, however, will likely suffer from a CPU bottleneck under Hitman's DX11 path (as they do under Rise of the Tomb Raider's DX11 path). AMD have a lot to gain from working with developers on coding and optimizing a DX12 path.

So to summarize,

Nvidia do not support async compute + graphics under DX12 at this time, and perhaps never will. Hitman's DX12 path may run like crap on Nvidia hardware unless Nvidia convince IO Interactive to code a vendor-ID-specific path and supply IO with optimized short-running shaders: basically the same thing Nvidia did with Oxide for Ashes of the Singularity (if memory serves me right). Since Nvidia have little to gain from moving from DX11 to DX12, it's best for them not to waste time and money helping IO code a vendor-ID-specific path.

AMD will suffer performance issues when running the Hitman DX11 path, due to a CPU bottleneck brought on by the lack of support for DX11 multi-threaded command lists. AMD have everything to gain by assisting IO Interactive in the implementation of a DX12 path. Asynchronous compute is just an added bonus on top of the removal of the CPU bottleneck that plagues AMD GCN in DX11 titles.


----------



## mtcn77

This came up on NX Gamer recently which I expected RedTechGaming to have brought up earlier, but anyway: _Asynchronous Shaders are better for gpu tetris._


----------



## ValSidalv21

Quote:


> Originally Posted by *Dargonplay*
> 
> The Performance Difference you're seeing with new drivers is caused by "Nvidia's game optimizations" which is a funny way to name lowering the graphic settings through drivers.


I think I'll take the Oxide developer's word over yours on that one. If he says there are no noticeable differences between AMD and Nvidia, then I'm inclined to believe him. He should know best, right?

On the other hand, you don't seem to care that AMD optimizes tessellation down if it feels there's too much, do you? Yup, because tessellation is evil, just like GameWorks.


----------



## Serios

Quote:


> Originally Posted by *ValSidalv21*
> 
> I think I'll take the Oxide's developer word over yours on that one. If he says there are no noticeable differences between AMD and Nvidia then I'm inclined to believe him. He should know best, right?
> 
> On the other hand you don't seem to care that AMD optimizes tessellation if it feels it's to much, do you? Yup, because tessellation is evil, just like GameWorks


That tessellation thing is like lowering settings from ultra to high, with minimal or no visual impact at all, while gaining a few fps.
Sometimes more tessellation than needed is used in Nvidia-sponsored games, and it affects performance without improving visual quality.
I bet Nvidia users would like a tessellation slider in the GeForce drivers.


----------



## mcg75

Quote:


> Originally Posted by *Serios*
> 
> That tessellation thing is like lowering settings from ultra to high whit minimal visual impact or no visual impact at all while gaining a few fps.
> Sometimes more tessellation than needed is used in Nvidia sponsored games and it affects performance without improving visual quality.
> I bet Nvidia users would like a tessellation slider in Geforce Drivers.


I'd like to have it in the game itself rather than drivers. Fallout 4 is a good example of how to do it properly. It has off, low, medium, high and ultra for Godrays.

Low provides 90% of the visuals that ultra does with only an 8% drop in fps. Ultra almost cuts fps in half when enabled.


----------



## airfathaaaaa

Quote:


> Originally Posted by *mcg75*
> 
> I'd like to have it in the game itself rather than drivers. Fallout 4 is a good example of how to do it properly. It has off, low, medium, high and ultra for Godrays.
> 
> Low provides 90% of the visuals that ultra does with only an 8% drop in fps. Ultra almost cuts fps in half when enabled.


I wonder what it would look like at 100%. Probably like a game developed by J.J. Abrams.


----------



## sugarhell

@Mahigan Vulkan is out and it's a hard launch.

And it seems that nvidia actually lacks the hardware for async compute.

http://vulkan.gpuinfo.org/displayreport.php?id=2

As you can see in the above link, they support only a single queue.


----------



## spyshagg

Quote:


> Originally Posted by *ValSidalv21*
> 
> I think I'll take the Oxide's developer word over yours on that one. If he says there are no noticeable differences between AMD and Nvidia then I'm inclined to believe him. He should know best, right?
> 
> On the other hand you don't seem to care that AMD optimizes tessellation if it feels it's to much, do you? Yup, because tessellation is evil, just like GameWorks


What do your eyes tell you? You have them; look at the pictures. No matter what the devs say, if it's visible then it's happening.


----------



## cowie

Quote:


> Originally Posted by *spyshagg*
> 
> What do your eyes tell you? you have them, look at the pictures. No matter what devs say, if its visible then its happening.


I don't mean to sound snooty, but did you see the guy that is a dev say that's an old build?
They will look the same when it's all said and done (whenever that is).
Do you realize the numbers at 1080p with max settings?

If most are holding on to async compute as a last straw, I feel bad for you guys.

Let's just wait till these games really come out before anyone gets all "factoid and looky here". I think we should save it for when we see things on the open market, as we should do for everything game/hardware related.


----------



## Tivan

Quote:


> Originally Posted by *cowie*
> 
> I don't mean to sound snooty but did you see the guy that is a dev say that's an old build?
> they will look the same when its all said and done(whenever that is)
> do you realize the numbers at 1080p with max settings?
> .././
> 
> If most are holding on to Async Compute as a last straw I feel bad for you guys
> 
> lets just wait till these games really come out before anyone gets all "factoid and looky here" I think we should save it till when we see things in the open market,as we should do for everything game/hardware related


Only problem: people sometimes treat beta benchmarks as somehow representative of how the game is going to run. Which is silly, but at least in this case it's exceedingly clear that we'll have to wait for the full release to get useful benchmarks.


----------



## mtcn77

Quote:


> Originally Posted by *cowie*
> 
> I don't mean to sound snooty but did you see the guy that is a dev say that's an old build?
> they will look the same when its all said and done(whenever that is)
> do you realize the numbers at 1080p with max settings?
> .././
> 
> *If most are holding on to Async Compute as a last straw I feel bad for you guys
> *
> lets just wait till these games really come out before anyone gets all "factoid and looky here" I think we should save it till when we see things in the open market,as we should do for everything game/hardware related


None more so than I, if you will accept negative scaling.


----------



## spyshagg

Quote:


> Originally Posted by *cowie*
> 
> I don't mean to sound snooty but did you see the guy that is a dev say that's an old build?
> they will look the same when its all said and done(whenever that is)
> do you realize the numbers at 1080p with max settings?
> .././


discussions are what forums are for

I answered that in post: http://www.overclock.net/t/1590939/wccf-hitman-to-feature-best-implementation-of-dx12-async-compute-yet-says-amd/350_50#post_24895157

It matters because conclusions were already made based on those versions. The devs can say it doesn't matter all they want. It happened; it's factual and palpable. Words were written and published on it.

And on a side note, I wouldn't side with any games developer, for obvious reasons. Nor should you. Games are a leisure to us but a business to them, hence GameWorks being "sponsored" into most games. They still want AMD users' money, so you will never, ever witness a dev publicly admit they were willing to benefit one side in exchange for cash.


----------



## ValSidalv21

Quote:


> Originally Posted by *Serios*
> 
> That tessellation thing is like lowering settings from ultra to high whit minimal visual impact or no visual impact at all while gaining a few fps.
> Sometimes more tessellation than needed is used in Nvidia sponsored games and it affects performance without improving visual quality.
> I bet Nvidia users would like a tessellation slider in Geforce Drivers.


No matter how minimal its visual impact is, it basically makes all comparisons apples to oranges. Sure, there are a few bad examples like Crysis 2, but normally more tessellation improves visual quality.
Controlling tessellation levels in the driver is a good option to have, I agree. But it's the user who should decide how much tessellation is too much, not AMD, or Nvidia for that matter.
I believe the default driver option for tessellation is "AMD Optimized". So basically it's not what the game developers intended; it's what AMD decides is "optimal".
Quote:


> Originally Posted by *spyshagg*
> 
> What do your eyes tell you? you have them, look at the pictures. No matter what devs say, if its visible then its happening.


Eyes can often be deceiving.

Unless the information comes from a reputable source or I experience it first hand, I won't believe a random YouTube video over a dev's word.


----------



## Ha-Nocri

Quote:


> Originally Posted by *ValSidalv21*
> 
> No matter how minimal it's visual impact is, it basically makes all comparisons apples to oranges. Sure, there are few bad examples like Crisis 2, but normally more tessellation improves visual quality.
> Controlling tessellation levels in driver is a good option to have, I agree. But it's the user who should decide how much tessellation is too much, not AMD or Nvidia for that matter.
> I believe the default driver option for tessellation is "AMD Optimized". So basically it's not what the game developers intended, It's what AMD decides is "optimal".
> Eyes can often be deceiving
> 
> 
> 
> 
> 
> 
> 
> 
> Unless the information comes from a reputable source or I experience it first hand, I wont believe a random youtube video over a dev's word.


Well, since NV did over-tessellate in a few games, AMD couldn't leave it as it was. Not only it was hurting performance on AMD cards, it was also doing so for NV cards, but not as bad.... for no visual gains, like cape in one Batman game, dog fur CoD Ghost, water in Crysis 2, god rays in FO4 and who knows where else. In witcher 3 also so much tessellation for Gerald's hair is clearly wasting resources for almost no visual gains. And they give AMD no option to optimize it for their cards.

I see no problem in what AMD is doing. Problem is NV.
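For what it's worth, the driver override being argued about here is, conceptually, just a cap on the tessellation factor the application requests. A minimal sketch of that idea (function and parameter names are hypothetical, not the actual driver API; 64 is the Direct3D 11 maximum tessellation factor):

```python
def apply_tessellation_override(requested_factor, driver_limit=None):
    """Mimic a driver-level tessellation override (hypothetical sketch).

    requested_factor: factor the game asks for (D3D11 allows up to 64).
    driver_limit: user/driver cap, e.g. 16; None means 'application controlled'.
    """
    HW_MAX = 64  # maximum tessellation factor in Direct3D 11
    factor = min(requested_factor, HW_MAX)
    if driver_limit is not None:
        factor = min(factor, driver_limit)
    return factor

# A game requesting 64x tessellation on geometry nobody can see:
print(apply_tessellation_override(64))      # application controlled -> 64
print(apply_tessellation_override(64, 16))  # driver capped at 16x -> 16
```

The whole debate above is really about who should own `driver_limit`: the developer, the vendor, or the user.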


----------



## mcg75

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Well, since NV did over-tessellate in a few games, AMD couldn't leave it as it was. Not only was it hurting performance on AMD cards, it was also doing so on NV cards, just not as badly... and for no visual gain: the cape in one Batman game, dog fur in CoD Ghosts, water in Crysis 2, god rays in FO4 and who knows where else. In The Witcher 3, that much tessellation for Geralt's hair is also clearly wasting resources for almost no visual gain. And NV gives AMD no option to optimize it for their cards.
> 
> I see no problem in what AMD is doing. Problem is NV.


Fallout 4 god rays and Witcher 3 hair can be turned off in game.

We don't need driver tricks from anyone. We need the game developers to give us the option if we want to run these features. The last few games, that has been the case. Farcry 4 and Dying Light also had Gameworks features that could be turned completely off.


----------



## Ha-Nocri

Quote:


> Originally Posted by *mcg75*
> 
> Fallout 4 god rays and Witcher 3 hair can be turned off in game.
> 
> We don't need driver tricks from anyone. We need the game developers to give us the option if we want to run these features. The last few games, that has been the case. Farcry 4 and Dying Light also had Gameworks features that could be turned completely off.


Tell that to the sites that do benchmarks. GPU performance verdicts are based on those tests.


----------



## Klocek001

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Well, since NV did over-tessellate in a few games, AMD couldn't leave it as it was. Not only was it hurting performance on AMD cards, it was also doing so on NV cards, just not as badly... and for no visual gain: the cape in one Batman game, dog fur in CoD Ghosts, water in Crysis 2, god rays in FO4 and who knows where else. In The Witcher 3, that much tessellation for Geralt's hair is also clearly wasting resources for almost no visual gain. And NV gives AMD no option to optimize it for their cards.
> 
> I see no problem in what AMD is doing. Problem is NV.


you amd die hard fans have got to be the most boring bunch of ppl on the planet.
you're just gonna keep talking this until nvidia optimizes amd drivers for them.


----------



## Ha-Nocri

Quote:


> Originally Posted by *Klocek001*
> 
> you amd die hard fans have got to be the most boring bunch of ppl on the planet.
> you're just gonna keep talking this until nvidia optimizes amd drivers for them.


For a start, they could give AMD the code to work with and optimize around.


----------



## Mahigan

Quote:


> Originally Posted by *sugarhell*
> 
> @Mahigan Vulkan is out and it's a hard launch.
> 
> And it seems that Nvidia actually lacks the hardware for async compute.
> 
> http://vulkan.gpuinfo.org/displayreport.php?id=2
> 
> As you can see in the link above, they support only one queue family


I saw that when comparing..
GTX 980:
http://vulkan.gpuinfo.org/displayreport.php?id=1

Fury:
http://vulkan.gpuinfo.org/displayreport.php?id=12

This hints that Nvidia does support "asynchronous compute" in the sense of a non-defined order of execution for compute workloads, but not Asynchronous Compute as it was introduced to us by AMD's Mantle and then DX12 (meaning concurrent execution of compute + graphics workloads).

Since the performance boost comes from concurrent compute + graphics execution, it would seem Nvidia has nothing, or very little, to gain from its asynchronous compute implementation.


The situation remains unchanged. As GPUView showed, Nvidia GPUs throw all of their work into the same queue (the graphics, or 3D, queue).

Fable Legends DX12:
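The queue-family listings on vulkan.gpuinfo.org boil down to one check: does the device expose a compute-capable queue family separate from the graphics one? A rough sketch of that logic follows; the flag bit values are from the Vulkan specification, but the two report shapes below are illustrative stand-ins, modeled loosely on the linked reports rather than copied from them:

```python
# Vulkan queue flag bits (values per the Vulkan specification)
VK_QUEUE_GRAPHICS_BIT = 0x1
VK_QUEUE_COMPUTE_BIT  = 0x2
VK_QUEUE_TRANSFER_BIT = 0x4

def has_dedicated_compute_queue(queue_families):
    """True if any family offers compute without graphics, i.e. compute
    work can be submitted to the GPU on its own queue."""
    return any(f["flags"] & VK_QUEUE_COMPUTE_BIT
               and not f["flags"] & VK_QUEUE_GRAPHICS_BIT
               for f in queue_families)

# Illustrative shapes, not exact copies of the gpuinfo reports:
gtx_980 = [{"flags": 0x7, "count": 16}]      # one graphics+compute+transfer family
fury    = [{"flags": 0x7, "count": 1},       # graphics family
           {"flags": 0x6, "count": 2},       # compute+transfer families
           {"flags": 0x4, "count": 2}]       # transfer (DMA) family

print(has_dedicated_compute_queue(gtx_980))  # False
print(has_dedicated_compute_queue(fury))     # True
```

Note the caveat implicit in the post above: a separate queue family in the API report says nothing by itself about whether the hardware actually executes those queues concurrently.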


----------



## Dargonplay

Quote:


> Originally Posted by *ValSidalv21*
> 
> I think I'll take the Oxide's developer word over yours on that one. If he says there are no noticeable differences between AMD and Nvidia then I'm inclined to believe him. He should know best, right?
> 
> On the other hand you don't seem to care that AMD optimizes tessellation if it feels it's to much, do you? Yup, because tessellation is evil, just like GameWorks


You know it's a sad day when people believe the word of money-driven businessmen over their own ocular organs, which managed to evolve for millions of years just to have their input disregarded and outweighed by some Nvidia spokesman.
Quote:


> Originally Posted by *Klocek001*
> 
> you amd die hard fans have got to be the most boring bunch of ppl on the planet.
> you're just gonna keep talking this until nvidia optimizes amd drivers for them.


Nvidia should actually learn to optimize for their own cards because even that is hard for them apparently. The best of Kepler (GTX 780Ti or Titan) is being hammered by a GTX 970 while the GTX 960 beats the GTX 780, we're not AMD Fanboys nor boring, we're just aware.


----------



## mcg75

Quote:


> Originally Posted by *Dargonplay*
> 
> Nvidia should actually learn to optimize for their own cards because even that is hard for them apparently. The best of Kepler (GTX 780Ti or Titan) is being hammered by a GTX 960 LOL, we're not AMD Fanboys nor boring, we're just not stupid.


Let's not get into labeling people; it derails what we're trying to accomplish with discussion.

The latest TPU testing has the 780 Ti being 60% faster at 1080p, 68.5% faster at 1440p and 69% faster at 4K than the 960.

https://www.techpowerup.com/reviews/ASUS/GTX_980_Ti_Matrix/23.html

I'm all for giving crap where deserved but we've seen too many beta tests being thrown around here as end results lately.

I can't find a single released game where a 960 "hammers" a 780 Ti. Can you shed some light on what is leading you to state as such?
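The percentages cited above are just relative average-fps ratios. For clarity, the arithmetic (the fps values below are invented for illustration, not TPU's data):

```python
def percent_faster(fps_a, fps_b):
    """How much faster card A is than card B, in percent."""
    return (fps_a / fps_b - 1.0) * 100.0

# e.g. a card averaging 80 fps vs one averaging 50 fps:
print(round(percent_faster(80, 50), 1))  # 60.0
```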


----------



## poii

http://www.amd4u.eu/hitman/where-to-buy.html

AMD gives away a Hitman copy with every R9 390, R9 390X, FX-6xxx and FX-8xxx Processor, in the EU that is. No idea about NA or other countries.


----------



## Dargonplay

Quote:


> Originally Posted by *mcg75*
> 
> Let's not get into labeling people, it derails what were trying to accomplish with discussion.
> 
> The latest TPU testing has the 780 Ti being 60% faster at 1080p, 68.5% faster at 1440p and 69% faster at 4K than the 960.
> 
> https://www.techpowerup.com/reviews/ASUS/GTX_980_Ti_Matrix/23.html
> 
> I'm all for giving crap where deserved but we've seen too many beta tests being thrown around here as end results lately.
> 
> I can't find a single released game where a 960 "hammers" a 780 Ti. Can you shed some light on what is leading you to state as such?


Well, the GTX 960 still matches and surpasses the 780, a Kepler card, which is absolute bollocks. The 970 in several games does hammer the 780 Ti and the GTX Titan, and that shouldn't happen.





Ironically, Project Cars is one of the most Gameworks-intensive games out there; check how the GTX 960 matches or beats the GTX 780 while the 970 looks down on the best of Kepler.

Also in Fallout 4 the trend continues.



A GTX 960 beating a GTX 780 and a GTX 970 hammering the best of Kepler, including the Titan. Tell me that this isn't wrong.

The same seems to happen in The Witcher 3, with the 970 hammering the Titan and the rest of Kepler, and the GTX 960 slightly behind the 780.



A GTX 970 beating the Titan by almost 50%, that's disgusting, let alone the fact that it's also beating the 295X2 for some reason, and the GTX 960 is 3 frames away from the Titan.


----------



## atomhard

Quote:


> Originally Posted by *Kollock*
> 
> I can confirm that the latest shipping DX12 drivers from NV do support async compute.


Quote:


> Originally Posted by *Mahigan*
> 
> Now this I've got to see...


plx don't.... I don't want to see you jumping off the bridge


----------



## airfathaaaaa

Quote:


> Originally Posted by *atomhard*
> 
> plx don't.... I don't want to see you jumping off the bridge


from what? being bored waiting for a true statement from nvidia?


----------



## Themisseble

Quote:


> Originally Posted by *airfathaaaaa*
> 
> from what? being bored waiting for a true statement from nvidia?


Maxwell cannot run async shaders. Who cares? That NVIDIA lied? That a few games will run better on an AMD Fury X?... Nobody even cared when the GTX 970 turned out to be unable to use its whole 4GB of VRAM. *Nobody!*
People still recommend the GTX 970 over the R9 390, or the GTX 980 over the Fury/Nano, and people are actually buying these cards. So who will care? *Nobody!* (_Oh, I forgot, a few people on a forum will care about it.. and that's it_)

People are still buying NVIDIA over AMD just because of *PhysX!* Well, Just Cause 3 runs PhysX on the CPU only!

*That's a true story.*


----------



## Assirra

Quote:


> Originally Posted by *Themisseble*
> 
> Maxwell cannot run async shaders. Who cares? That NVIDIA lied? That a few games will run better on an AMD Fury X?... Nobody even cared when the GTX 970 turned out to be unable to use its whole 4GB of VRAM. *Nobody!*
> People still recommend the GTX 970 over the R9 390, or the GTX 980 over the Fury/Nano, and people are actually buying these cards. So who will care? *Nobody!* (_Oh, I forgot, a few people on a forum will care about it.. and that's it_)
> 
> People are still buying NVIDIA over AMD just because of *PhysX!* Well, Just Cause 3 runs PhysX on the CPU only!
> 
> *That's a true story.*


People don't buy Nvidia because of PhysX, lol. That is just a silly statement.
People buy Nvidia because they are simply the bigger brand. When people come to me I will usually recommend an Nvidia card. Why? Because I have experience with those cards, and in all those years it has never been bad. Then those people do the same, and it is a nice chain. Combine that with the ton of games Nvidia teams up with, and yeah.
Brand recognition is important, and 80% of the market shows it.


----------



## Yvese

Quote:


> Originally Posted by *Assirra*
> 
> People don't buy Nvidia because of physx lol. That is just a silly statement.
> People buy Nvidia because they are simply the bigger brand. When people come to me i will usually recommend an nvidia card. Why? Because i have experience with those and in all those year it has never been bad. Then those people do the same and it is a nice chain. Combine that with a ton of games that nvidia groups up with and yea.
> Brand recognition is important and 80% of the market shows it.


Instead of being part of the problem you should be trying to solve it by recommending cards based on what's actually better and not the name on the box.

The 390 is better than the 970. If more people recommended it over the 970, we wouldn't have a bunch of people complaining that their cards stutter on max settings because they only have 3.5GB of usable VRAM.


----------



## mcg75

Quote:


> Originally Posted by *Dargonplay*
> 
> Well, the GTX 960 still match and surpass the 780 Kepler card, which is absolutely bollocks, also the 970 in several games do hammers the 780Ti and GTX Titan, that shouldn't happen.
> 
> Ironically, Project Cars is one of the most Gameworks intensive games out there, check how the GTX 960 Match or beat the GTX 780 while the 970 seems to look down the best of Kepler.


Project Cars is certainly a special case. And yes, initially it was terrible, and Roy Taylor had some special accusations for Slightly Mad Studios on Twitter. Magically, a couple of days later the post was removed and Roy and Slightly Mad were best friends again, with Roy promising they were going to work together to make it better.

http://www.hardocp.com/article/2016/01/25/xfx_r9_380_dd_black_edition_oc_4gb_review/5

So now here we have the 380 beating the 960 in Project Cars in a 1080p same exact settings scenario. What's important about this is that they also compared GTA V and the 380 beats the 960 in that game as well by the same margin.

So who's at fault here? It appears the promise Roy made about AMD working with SMS paid off. AMD worked with Rockstar on GTA V as well. The fact is we don't know for sure who's doing what here. Is the developer or AMD telling us fibs? I don't have proof to condemn either one. The only thing we can take from this is that when AMD and the developer do work together, things get fixed.
Quote:


> Originally Posted by *Dargonplay*
> 
> Also in Fallout 4 the trend continues.
> 
> A GTX 960 beating a GTX 780 and a GTX 970 hammering the best of Kepler including the Titan, tell me that this isn't wrong.


Fair point. It is wrong. But let's dissect this a little further by looking at the other settings Tom's ran in that test.

You're showing us God Rays at Ultra. We already know Maxwell is better than Kepler at tessellation, which is how God Rays work.

Tom's dropped from Ultra to High and the GTX 780 went from 1 fps behind the 960 to 14 fps ahead.

I'm not really sure I trust Tom's numbers there though. One thing that stuck out to me was the Titan Black and the 780 Ti Windforce numbers. They are the same card with the Titan clocked 31 MHz higher. At high the Titan is 8 fps higher and at ultra, the Titan is only 2 fps higher. That doesn't make any sense.

If you look at the TPU review for the 980 Ti Matrix, the 780 Ti is eating the 970 for breakfast in fps.
Quote:


> Originally Posted by *Dargonplay*
> 
> The same seems to happen with The Witcher 3 with the 970 Hammering the Titan and the rest of Kepler, with the GTX 960 slightly behind the 780.
> 
> a GTX 970 beating the Titan for almost 50% more performance, that's disgusting, let alone the fact that its also beating the 295x2 for some reason, the GTX 960 is 3 Frames away from the Titan.


Let's look at some other conclusions for the Witcher 3 with patches and driver updates.

http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/18.html

The 960 ends up behind the 780 at all resolutions and the 970 is just slightly ahead of 780 Ti at all resolutions in that one game.

At release, the 980 was slightly faster than 780 Ti and the 970 was slightly slower. Since then the 980 has jumped a little further ahead of the 780 Ti due to a combination of driver maturity and tess rendering advantages. That will also mean the 970 is going to inch closer to the 780 Ti as well.

http://www.techpowerup.com/reviews/ASUS/GTX_980_Ti_Matrix/23.html

So in TPU's latest test with a wide range of games, the 970 ties the 780 Ti at 1080p and loses at 1440p and 4K.

So when we determine what is beating what, it's really best to have a bunch of games in the loop to make that decision. Using 1 or 2 games to make a conclusion will often show things that may be true at that sole point in time but fixed later.


----------



## Mahigan

Quote:


> Originally Posted by *mcg75*
> 
> Project Cars is certainly a special case. And yes, initially it was terrible and Roy Taylor had some special accusations for Slightly Mad Studios on Twitter. Magically a couple days later the post was removed and Roy and Slightly Mad were all best friends again with Roy promising they were going to work together to make it better.
> 
> http://www.hardocp.com/article/2016/01/25/xfx_r9_380_dd_black_edition_oc_4gb_review/5
> 
> So now here we have the 380 beating the 960 in Project Cars in a 1080p same exact settings scenario. What's important about this is that they also compared GTA V and the 380 beats the 960 in that game as well by the same margin.
> 
> So who's at fault here? It appears the promise Roy made about AMD working with SMS paid off. AMD worked with Rockstar on GTA V as well. The fact is we don't know for sure who's doing what here. Is the developer or AMD telling us fibs. I don't have proof to condemn either one. The only thing we can take from this is that when AMD and the developer do work together, things get fixed.
> Fair point. It is wrong. But let's dissect this a little further by looking at the other settings Tom's ran in that test.
> 
> You're showing us God Rays at Ultra. We already know Maxwell is better than Kepler at tessellation, which is how God Rays work.
> 
> Tom's dropped from Ultra to High and the gtx 780 went from 1 fps behind the 960 to 14 fps ahead.
> 
> I'm not really sure I trust Tom's numbers there though. One thing that stuck out to me was the Titan Black and the 780 Ti Windforce numbers. They are the same card with the Titan clocked 31 MHz higher. At high the Titan is 8 fps higher and at ultra, the Titan is only 2 fps higher. That doesn't make any sense.
> 
> If you look at the TPU review for the 980 Ti Matrix, the 780 Ti is eating the 970 for breakfast in fps.
> Let's look at some other conclusions for the Witcher 3 with patches and driver updates.
> 
> http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/18.html
> 
> The 960 ends up behind the 780 at all resolutions and the 970 is just slightly ahead of 780 Ti at all resolutions in that one game.
> 
> At release, the 980 was slightly faster than 780 Ti and the 970 was slightly slower. Since then the 980 has jumped a little further ahead of the 780 Ti due to a combination of driver maturity and tess rendering advantages. That will also mean the 970 is going to inch closer to the 780 Ti as well.
> 
> http://www.techpowerup.com/reviews/ASUS/GTX_980_Ti_Matrix/23.html
> 
> So in TPU's latest test with a wide range of games, the 970 ties the 780 Ti at 1080p and loses at 1440p and 4K.
> 
> So when we determine what is beating what, it's really best to have a bunch of games in the loop to make that decision. Using 1 or 2 games to make a conclusion will often show things that may be true at that sole point in time but fixed later.


Tessellation isn't that much better going from Kepler to Maxwell:



What is optimized is shader occupancy and efficiency. Each SMM under Maxwell is far more powerful than each SMX under Kepler. Maxwell v2 contains the fp32 compute optimizations that 20nm Maxwell was supposed to have (now split into two SKUs: Maxwell v2 on 28nm and Pascal on 16nm FinFET+). These result in 35% more fp32 performance per core. So while a GTX 780 has a theoretical 4 Tflops, a Maxwell-based GPU only needs around 2.6 Tflops to match it compute-wise. A GTX 960 pulls around 2.4 Tflops. Pretty close. How?
Quote:


> The end result is that in an SMX the 4 warp schedulers would share most of their execution resources and work out which warp was on which execution resource for any given cycle. But on an SMM, the warp schedulers are removed from each other and given complete dominion over a far smaller collection of execution resources. No longer do warp schedulers have to share FP32 CUDA cores, special function units, or load/store units, as each of those is replicated across each partition. Only texture units and FP64 CUDA cores are shared.


(Hint: Pascal will push this further and change the way fp64 CUDA cores are shared thus boosting fp64 performance).
Quote:


> Moving on, along with the SMM layout changes NVIDIA has also made a number of small tweaks to improve the IPC of the GPU. The scheduler has been rewritten to avoid stalls and otherwise behave more intelligently. Furthermore by achieving higher utilization of their existing hardware, NVIDIA doesn't need as many functional units to hit their desired performance targets, which in turn saves on space and ultimately power consumption.


Add the various compression algorithms found on Maxwell v2 and you have the technology to make a low-to-mid-range 900-series card compete with a high-end 780-series card under certain conditions.

GeForce GTX 780
Pixel Fill rate: 43
Texel Fill rate: 173
Memory Bandwidth: 288 GB/s

GeForce GTX 960
Pixel Fill rate: 38
Texel Fill rate: 75
Memory Bandwidth: 112 GB/s

Memory bandwidth?
Quote:


> While on the subject of performance efficiency, NVIDIA has also been working on memory efficiency too. From a performance perspective GDDR5 is very powerful, however it's also very power hungry, especially in comparison to DDR3. With GM107 in particular being a 128-bit design that would need to compete with the likes of the 192-bit GK106, NVIDIA has massively increased the amount of L2 cache they use, from 256KB in GK107 to 2MB on GM107. This reduces the amount of traffic that needs to cross the memory bus, reducing both the power spent on the memory bus and the need for a larger memory bus altogether.
> 
> Increasing the amount of cache always represents an interesting tradeoff since cache is something of a known quantity and is rather dense, but it's only useful if there are memory stalls or other memory operations that it can cover. Consequently we often see cache implemented in relation to whether there are any other optimizations available. In some cases it makes more sense to use the transistors to build more functional units, and in other cases it makes sense to build the cache. After staying relatively stagnant on their cache sizes for so long, it looks like the balance has finally shifted and the cache increase makes the most sense for NVIDIA.


I think that tessellation culling (saving cache and bandwidth), coupled with the fp32 performance optimizations, the larger 2MB L2 cache, and the pixel fill rate plus pixel compression algorithms, explains the differences between Kepler, which lacks these, and Maxwell v2.
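The per-core efficiency argument can be put into numbers. Taking the figures claimed above as given (roughly 35% more fp32 throughput per core for Maxwell over Kepler, and the theoretical Tflops of each card), a sketch of the comparison, not an official spec:

```python
def effective_tflops(raw_tflops, per_core_uplift=1.35):
    """Scale raw fp32 throughput by an assumed per-core efficiency
    factor (1.35 = the ~35% Maxwell-over-Kepler uplift claimed above)."""
    return raw_tflops * per_core_uplift

gtx_780_raw = 4.0   # theoretical fp32 Tflops (Kepler, baseline efficiency)
gtx_960_raw = 2.4   # theoretical fp32 Tflops (Maxwell v2)

# "Kepler-equivalent" throughput of the 960 under that assumption:
print(round(effective_tflops(gtx_960_raw), 2))  # 3.24
```

Under that assumption the 960 closes most of its 67% raw-Tflops deficit against the 780 without fully matching it, which fits the "under certain conditions" caveat: delta compression and the larger L2 have to make up the rest.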

Oh, and part of the Gameworks agreement is that the developer needs Nvidia's consent to work with AMD, and consent on what the dev can communicate to AMD (the Gameworks black box). This consent is often given late, too close to launch, forcing AMD to miss launch-day driver optimizations and end up looking bad all over the media. That image stays with consumers, affecting AMD graphics sales and the Radeon brand as a whole. AMD often fixes problems after launch, but the damage is already done.

Someone else already communicated this and it is the truth.


----------



## magnek

@mcg75:

As you guessed, when he said 780 Ti he likely meant the 780. If so there are two more examples I can remember off the top of my head:

*Call of Duty: Advanced Warfare*


*Far Cry 4*


None of them have the 960 "stomping" the 780, but it does come dangerously close (within 10%) to the 780, when usually the 780 is at least a good 20-25% ahead of the 960.


----------



## Assirra

Quote:


> Originally Posted by *Yvese*
> 
> Instead of being part of the problem you should be trying to solve it by recommending cards based on what's actually better and not the name on the box.
> 
> The 390 is better than the 970. If more people recommended it over the 970 then we wouldn't have a bunch of people complaining their cards stutter on max settings because they only have 3.5GB of vram


Part of the problem?
What problem exactly?

I really don't find this too hard to understand.
For myself, I do extensive research and see what is actually best. If troubles come from going a new route, I take them with it. At the time of purchase this card was the best for my liking.

For other people, I tend to recommend the brands I use myself and have a good opinion of. I am not going to throw somebody into a jungle of god-knows-what when that person knows less about this whole subject than I do; otherwise they wouldn't need me to recommend a card in the first place. Imagine if I recommended a different brand and that person had nothing but trouble, then what? I'm not going to risk someone else's hardware for that.

If that makes me "part of the problem" so be it.


----------



## mcg75

Quote:


> Originally Posted by *Mahigan*
> 
> Tessellation isn't that much better going from Kepler to Maxwell:
> 
> 
> 
> What is optimized is shader occupancy and efficiency. Each SMM under Maxwell is far more powerful than each SMX under Kepler. Maxwell v2 contains the fp32 compute optimizations that 20nm Maxwell was supposed to have (now split into two SKUs: Maxwell v2 on 28nm and Pascal on 16nm FinFET+). These result in 35% more fp32 performance per core. So while a GTX 780 has a theoretical 4 Tflops, a Maxwell-based GPU only needs around 2.6 Tflops to match it compute-wise. A GTX 960 pulls around 2.4 Tflops. Pretty close.
> 
> Add the various compression algorithms found on Maxwell v2 and you have the technology to make a low-to-mid-range 900-series card compete with a high-end 780-series card under certain conditions.
> 
> I think that tessellation culling (saving cache and bandwidth) coupled with fp32 performance optimizations explains the differences between Kepler, which lacks these, and Maxwell v2.


Mahigan, your understanding of how these things work puts the vast majority of us to shame. Thank you for the insight.

Big Maxwell vs. big Kepler still shows about a 40% improvement in that test, though. Like anything these companies claim, the end result ends up being less than what marketing states. 40% certainly isn't the 3x claimed.
Quote:


> Originally Posted by *Mahigan*
> 
> Oh and part of the gameworks agreement is that the developer needs the consent of nvidia to work with AMD and the consent on what can be communicated by the dev to AMD (gameworks black box). This consent is often given late or too close to launch. Forcing AMD to miss the launch driver optimizations and end up looking bad all over the media. This image stays with consumers affecting AMD graphics sales and the Radeon brand as a whole. AMD often fix problems after the launch though but the damage is already done.
> 
> Someone else already communicated this and it is the truth.


This is for actual Gameworks-labeled games, correct?

For a title such as GTA V, AMD worked with the developer despite the presence of Gameworks features. But it wasn't a Gameworks game. Same goes for Fallout 4.

Is that why titles with Gameworks features but don't fall under the Gameworks banner seem to allow AMD to optimize quicker?


----------



## Mahigan

Quote:


> Originally Posted by *mcg75*
> 
> Mahigan, your understanding of how these things work puts the vast majority of us to shame. Thank you for the insight.
> 
> Big Maxwell vs big Kepler still has about a 40% improvement in that test though. Just like anything these companies claim, the end result still ends up being less than what marketing states it to be. 40% certainly isn't the 3x claimed.
> This is for actual Gameworks labeled games correct?
> 
> For a title such as GTA V, AMD worked with the developer despite the presence of Gameworks features. But it wasn't a Gameworks game. Same goes for Fallout 4.
> 
> Is that why titles with Gameworks features but don't fall under the Gameworks banner seem to allow AMD to optimize quicker?


NVIDIA marketing tends to multiply the various architectural speedups together in a linear fashion.
So if, say, memory performance increased 3-fold and fp64 2-fold, they will claim the new architecture is 6 times faster.

It makes no sense, but that's marketing.

As for your comments on Gameworks, it makes sense, though neither AMD, developers, nor Nvidia have commented on that aspect. The mud-slinging has focused on the Gameworks banner, probably because a banner title is running Nvidia IP from start to finish. Other titles carry Gameworks effects without the banner: the dev's own in-house code spiced up with Gameworks middleware. AMD and the dev can work on optimizing that in-house code without explicit permission from Nvidia.
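That marketing arithmetic is easy to illustrate: multiplying independent component speedups together overstates the whole-workload gain, which is bounded by how much of the runtime each component actually occupies (an Amdahl's-law-style combination; the runtime fractions below are invented for illustration):

```python
def marketing_speedup(component_speedups):
    """Multiply every component speedup together (the misleading way)."""
    total = 1.0
    for s in component_speedups:
        total *= s
    return total

def realistic_speedup(parts):
    """Amdahl-style combination: each (fraction, speedup) pair is the
    share of runtime that component occupies and how much it improved."""
    new_time = sum(frac / s for frac, s in parts)
    return 1.0 / new_time

# Memory 3x faster, fp64 2x faster:
print(marketing_speedup([3, 2]))  # 6.0
# ...but if memory is 40% of runtime, fp64 is 20%, and the rest is unchanged:
print(round(realistic_speedup([(0.4, 3), (0.2, 2), (0.4, 1)]), 2))  # 1.58
```

So "6x" on the slide can be well under 2x on a real workload, exactly the gap between claimed and measured gains noted earlier in the thread.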


----------



## Dargonplay

Quote:


> Originally Posted by *Mahigan*
> 
> Tessellation isn't that much better going from Kepler to Maxwell:
> 
> 
> 
> What is optimized is shader occupancy and efficiency. Each SMM under Maxwell is far more powerful than each SMX under Kepler. Maxwell v2 contains the fp32 compute optimizations that 20nm Maxwell was supposed to have (now split into two SKUs: Maxwell v2 on 28nm and Pascal on 16nm FinFET+). These result in 35% more fp32 performance per core. So while a GTX 780 has a theoretical 4 Tflops, a Maxwell-based GPU only needs around 2.6 Tflops to match it compute-wise. A GTX 960 pulls around 2.4 Tflops. Pretty close. How?
> (Hint: Pascal will push this further and change the way fp64 CUDA cores are shared thus boosting fp64 performance).
> Add the various compression algorithms found on Maxwell v2 and you have the technology to make a low-to-mid-range 900-series card compete with a high-end 780-series card under certain conditions.
> 
> GeForce GTX 780
> Pixel Fill rate: 43
> Texel Fill rate: 173
> Memory Bandwidth: 288 GB/s
> 
> GeForce GTX 960
> Pixel Fill rate: 38
> Texel Fill rate: 75
> Memory Bandwidth: 112 GB/s
> 
> Memory bandwidth?
> I think that tessellation culling (saving cache and bandwidth), coupled with fp32 performance optimizations, the larger 2MB L2 cache, and pixel fill rate plus pixel compression algorithms, explains the differences between Kepler, which lacks these, and Maxwell v2.
> 
> Oh and part of the gameworks agreement is that the developer needs the consent of nvidia to work with AMD and the consent on what can be communicated by the dev to AMD (gameworks black box). This consent is often given late or too close to launch. Forcing AMD to miss the launch driver optimizations and end up looking bad all over the media. This image stays with consumers affecting AMD graphics sales and the Radeon brand as a whole. AMD often fix problems after the launch though but the damage is already done.
> 
> Someone else already communicated this and it is the truth.


Complete, stunning and eye-opening explanation; I don't know anyone who could have said it better.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Dargonplay*
> 
> a GTX 970 beating the Titan for almost 50% more performance, that's disgusting, let alone the fact that its also beating the 295x2 for some reason, the GTX 960 is 3 Frames away from the Titan.


To be fair, reviewers always bench at stock clocks, which are very low on the Titan. Mine will do 1320MHz, which would more than make up that difference (though it would still lose to a max-OC 970). No question Kepler driver optimizations are nonexistent at this point, which is pretty lame...


----------



## anti-clockwize

Quote:


> Originally Posted by *PlugSeven*
> 
> ?A world of difference here.


Why does the game look so much better and more detailed on the 390X in this video?
Are the effects that don't appear on the 980 driven by async compute and therefore not rendered at all? (They don't appear in the DX11 or DX12 runs at either quality setting; it's clearest at the back of the planes when the close-up shows them flying past.)


----------



## Lass3

Quote:


> Originally Posted by *anti-clockwize*
> 
> Why does the game look so much better and more detailed on the 390X in this video?
> Are the effects that do not appear on the 980 driven by Async Compute and therefor don't get rendered at all? (They don't appear on the dx11 or dx12 runs at either quality settings - most clear at the back of the planes when they do a close-up of them flying past.)


Looks like the 980 has a grid overlay; it seems like a post filter was applied. I have no clue who the uploader is.

Looking forward to seeing IQ comparisons from reliable sources once the game is actually out.


----------



## Defoler

Quote:


> Originally Posted by *Serios*
> 
> I've seen no mention of that, you are just assuming.


No. Actual benchmarks showed Nvidia doing just as well as AMD in Ashes.
You not seeing it does not invalidate its existence.
Go look it up instead of just assuming it didn't happen.


----------



## Defoler

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> To be fair, the reviewers always bench at stock clocks which are very low on Titan. Mine will do 1320mhz which would more than make up that difference (though would still lose to a max OC 970). No question Kepler driver optimizations are nonexistent at this point which is pretty lame...


Apples to apples.
If you compare a non-OC card to an OCed card that costs three times as much, the comparison is worthless.
For the price of a Titan you could buy two 970s, water blocks and a loop, OC the hell out of them, and still have change.
Would that comparison be more fair? An air-cooled Titan with a moderate OC vs. a water-cooled, max-OC 970 SLI?

I think the main reason Kepler optimisation is lacking is that Nvidia wants you to buy Maxwell. AMD's GCN is still based on the first generation, so optimisations affect both new and old cards; Maxwell and Kepler are not as similar, so time spent optimising for Kepler while they are working on Maxwell is time spent, not money made.

Also don't forget that the latest cards from AMD, the 300 series, are basically a 200-series rebrand, so optimisations for the 300 series also benefit the 200 series.


----------



## Defoler

Quote:


> Originally Posted by *Dargonplay*
> 
> Nvidia should actually learn to optimize for their own cards because even that is hard for them apparently. The best of Kepler (GTX 780Ti or Titan) is being hammered by a GTX 970 while the GTX 960 beats the GTX 780, we're not AMD Fanboys nor boring, we're just aware.


TBH, why should they?
Are they still selling Kepler cards? Does optimising for Kepler make them money?
They are in the business of selling cards.

The AMD 200 series gets some extra optimisation because the 300 series is a rebrand. If it weren't, I doubt that performance increase would have happened on AMD. This is also eating into AMD sales, with people sticking to the 200 series. Which is another reason why the 900 series is doing so well for nvidia.


----------



## Serios

Quote:


> Originally Posted by *Defoler*
> 
> No. Actual benchmarks showed nvidia doing just as good as AMD in ashes.
> You not seeing it does not invalidate it's existence.
> Go look it up instead of just assuming it didn't happen.


No, we actually had confirmation a couple of posts back that, at this moment, async compute is not activated for Nvidia cards, so stop the nonsense.


----------



## mtcn77

Quote:


> Originally Posted by *Defoler*
> 
> TBH, why should they?
> Are they still selling kepler cards? Does it give them money to optimise for kepler?
> They are in the business of selling cards.
> 
> AMD 200 series gets some extra optimisation because 300 is a rebrand. If it wasn't, I doubt that increase performance would have happen on AMD. This is also eating up AMD sales, people sticking to the 200 series. Which is another reason why the 900 series is doing so well for nvidia.


The Asynchronous Shading stuff will increase texture utilization, and the extra RAM will consequently prove useful. Vulkan is such an adrenaline shot for Radeons that we are currently failing to recognize its full scope, I assume.


----------



## spyshagg

Quote:


> Originally Posted by *mcg75*
> 
> Fallout 4 god rays and Witcher 3 hair can be turned off in game.
> 
> We don't need driver tricks from anyone. We need the game developers to give us the option if we want to run these features. The last few games, that has been the case. Farcry 4 and Dying Light also had Gameworks features that could be turned completely off.


Would be a sight to behold to start having these options in games:

Please choose graphical settings:
- Tessellation - ON
- Maximum tessellation on a flat wall in the middle of the street - ON
- Maximum tessellation on a sea of water hidden under the terrain - ON

Reviewers' response to these settings: "Makes a world of difference! Truly shines on nvidia hardware."


----------



## Defoler

Quote:


> Originally Posted by *Serios*
> 
> No, we actually had confirmation a couple of posts back that at this moment async compute is not activated for Nvidia cards, so stop the nonsense.


You can relax. You are wrong, as usual. There have been several discussions on this in the past. AMD saying that async is not possible on Nvidia is, of course, the most reliable source of information regarding nvidia.








Quote:


> Originally Posted by *mtcn77*
> 
> The Asynchronous Shading stuff will increase texture utilization and extra ram will consecutively prove useful. Vulkan is such an adrenaline shot for Radeons which we are currently failing to recognize the full scope of, I assume.


Not saying that it won't. I'm just stating the obvious: selling cards doesn't mean supporting old cards indefinitely. Especially as the tech moves forward while the old cards do not.
We already have Pascal on the way. What next? Expect nvidia to still support and increase performance on Fermi, or Microsoft to bring DX12 to Vista?
Quote:


> Originally Posted by *spyshagg*
> 
> Would be a sight to behold to start having these options in games:
> 
> Please chose Graphical settings:
> - Tesselation - ON
> - Maximum Tesselation on a flat wall in the middle of the street - ON
> - Maximum Tesselation on a sea of water hiding under the terrain - ON
> 
> Reviewers response to these settings: "makes a world of a difference! Truly shines on nvidia hardware."


When the 5000 series came out from AMD, tessellation was the best thing since sliced bread. Everyone wanted tessellation, and it did make a huge difference in textures and in how the game looked (a paved way would actually be a paved way instead of just a texture of one). Since then AMD's tessellation performance has gone downhill, so now it is useless?

The amount of tessellation can be controlled by the developer. Many end up reducing it so as not to hurt AMD cards.


----------



## airfathaaaaa

Defoler, you do realise that even the Vulkan benchmark program exposes the truth about this, eh? I don't get why you are in such denial about it; even your beloved company said it: https://developer.nvidia.com/dx12-dos-and-donts

They simply can't do async compute in the form that is presented in DX12/Vulkan.


----------



## mtcn77

Quote:


> Originally Posted by *Defoler*
> 
> Not saying that it won't. I'm just stating the obvious that selling cards doesn't mean supporting old cards for infinity. Especially as the tech moves forward, while the old cards do not.
> We already have pascal on the way. What next? expect nvidia to still support and increase performance on fermi or microsoft to bring DX12 to vista?


Incredulity and slippery-slope arguments won't help Nvidia if AMD gains a wider support base for their exclusive performance benefits. Feigning ignorance will only strengthen AMD's hand.


----------



## Serios

Quote:


> Originally Posted by *Defoler*
> 
> You can relax. You are wrong as usual. There have been several discussions on this in the past. AMD saying that async is not possible in Nvidia is of course the most reliable source of information regarding nvidia
> 
> 
> 
> 
> 
> 
> 
> .


Async compute is not activated in AotS for Nvidia GPUs at this time, let that sink in.
Also, Nvidia can't do async compute the same way AMD can, so we probably won't see a performance boost for Nvidia cards.


----------



## spyshagg

Quote:


> Originally Posted by *Defoler*
> 
> When the 5000 series came out from AMD, tessellation was the best thing since sliced bread. Everyone wanted tessellation, and it did make a huge difference in textures and how the game acts (a paved way would actually be a paved way instead of just a texture of one). Since then AMD performance in tessellation went down hill, so it is now useless?
> 
> Amount of tessellation can be controlled by the developer. Many end up reducing it to not hurt AMD cards.


It's being used as a weapon. If AMD's weakness were some other effect, do you honestly believe nvidia wouldn't tweak GameWorks to exploit that instead? I say they would.

These examples show clear intent. And it's only Crysis 2. There are many, many other examples.

http://techreport.com/review/21404/crysis-2-tessellation-too-much-of-a-good-thing/2

The tessellated sea. This is OK.


The tessellated sea being kept rendered underneath the level. This is not OK.


This mundane wall:




Is being rendered like this:


----------



## mcg75

Quote:


> Originally Posted by *spyshagg*
> 
> Would be a sight to behold to start having these options in games:
> 
> Please chose Graphical settings:
> - Tesselation - ON
> - Maximum Tesselation on a flat wall in the middle of the street - ON
> - Maximum Tesselation on a sea of water hiding under the terrain - ON
> 
> Reviewers response to these settings: "makes a world of a difference! Truly shines on nvidia hardware."


While we can't control what some questionable reviewers say, the high-res texture pack that's a required install with the DX11 patch was responsible for the visual wow factor, not tessellation.

But we can run the game in DX9 mode to eliminate the poor performance, regardless of which brand of card is installed.

The game was developed for DX9 and patched with a DX11 mode and tessellation 4 months after the release date.

Was Crytek so eager to put DX11 features in that they didn't consider the usefulness or consequences of where they added them?

Or did Nvidia just tell them to splash it everywhere to ruin performance across the board?

I assume a little bit of both is true.


----------



## Kollock

Quote:


> Originally Posted by *Defoler*
> 
> You can relax. You are wrong as usual. There have been several discussions on this in the past. AMD saying that async is not possible in Nvidia is of course the most reliable source of information regarding nvidia
> 
> 
> 
> 
> 
> 
> 
> 
> Not saying that it won't. I'm just stating the obvious that selling cards doesn't mean supporting old cards for infinity. Especially as the tech moves forward, while the old cards do not.
> We already have pascal on the way. What next? expect nvidia to still support and increase performance on fermi or microsoft to bring DX12 to vista?
> When the 5000 series came out from AMD, tessellation was the best thing since sliced bread. Everyone wanted tessellation, and it did make a huge difference in textures and how the game acts (a paved way would actually be a paved way instead of just a texture of one). Since then AMD performance in tessellation went down hill, so it is now useless?
> 
> Amount of tessellation can be controlled by the developer. Many end up reducing it to not hurt AMD cards.


FWIW, the algorithm for DX11 tessellation is pretty poor, and it isn't commonly used by developers. You can see how evident that is in the wireframe screenshots here. A great presentation on it is this: http://www.gdcvault.com/play/1020038/Advanced-Visual-Effects-with-DirectX. I wouldn't recommend using stock tessellation. You could create a compute shader that tessellated, produced better-looking results, and was faster.


----------



## jmcosta

If I remember correctly, Crysis 2 performed ~10% worse on AMD GPUs in both DX9 and DX11. Yes, it overused tessellation for sure, but I don't think that was the main problem; whoever started that claim was probably just fanboy talk trying to blame the opposing brand.

http://www.tomshardware.com/reviews/crysis-2-directx-11-performance,2983-5.html

Anyway, about Hitman: we will see if it gets some good framerate gains with this implementation. For now it's just marketing to me.


----------



## BradleyW

I had 6970 CFX and GTX 580 SLI during the Crysis 2 fiasco, so I did my own testing. Maxed out, no AA, 1080p.

The 6970s dropped as low as 25 FPS during large scenes and fights.
The 580s kept 80+ on average.

I then tested single GPU, and the 580 was around 60% faster than the 6970 in real-world testing.


----------



## kx11

AMD seems to be all over this game


----------



## BradleyW

The last Hitman game was an utter embarrassment to the series. I hope this one is done right! At a guess, I'd say it will be.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> To be fair, the reviewers always bench at stock clocks which are very low on Titan. Mine will do 1320mhz which would more than make up that difference (though would still lose to a max OC 970). No question Kepler driver optimizations are nonexistent at this point which is pretty lame...


That is very true, but 1300MHz+ is not common for Big Kepler. Most people were in the ~1200MHz range, so figure ~10-20% more performance than the benchmarks show. If you factor in overclocks, my 290X OCed can beat a Fury and match a Fury X in a lot of games. If only it were that simple. The thing is that the GTX 980 Ti should not be 2x faster than the original Titan, but in a lot of new games it is. Very crazy if you think about it, since on the memory side it only improved 20-30%.


----------



## kx11

welp Beta is here


----------



## GorillaSceptre

That internet.. I get 4Mb/s(yes, _Bit_) if I'm lucky..


----------



## kx11

Quote:


> Originally Posted by *GorillaSceptre*
> 
> 
> 
> 
> 
> 
> 
> 
> That internet.. I get 4Mb/s(yes, _Bit_) if I'm lucky..


It's not full speed for some weird reason; usually I get 35MB/s DL speed.


----------



## maltamonk

Looks like they have added it to certain cards and cpus.
http://videocardz.com/58259/hitman-free-with-radeon-graphics-cards


----------



## bigjdubb

Quote:


> Originally Posted by *kx11*
> 
> welp Beta is here


Sweet. Hopefully this means we will start seeing some performance data sooner than I expected.


----------



## Dargonplay

Quote:


> Originally Posted by *Defoler*
> 
> TBH, why should they?
> Are they still selling kepler cards? Does it give them money to optimise for kepler?
> They are in the business of selling cards.
> 
> AMD 200 series gets some extra optimisation because 300 is a rebrand. If it wasn't, I doubt that increase performance would have happen on AMD. This is also eating up AMD sales, people sticking to the 200 series. Which is another reason why the 900 series is doing so well for nvidia.


Because it's the non-douchebag thing to do. A GTX 960 beating a GTX 780 and closing in on a Titan seems fine to you? Personally, as someone who likes to upgrade every 2-3 generations, I wouldn't go with Nvidia again, given that AMD cards aren't being dumped in the trash can after just one year on the market.

Also, AMD isn't only optimizing Hawaii because of the rebrands; they are also optimizing Tahiti and Pitcairn. Just look at the increasingly impressive performance those cards can put out today compared to last year, or the year before.

And yes, it can make them money. There are people like me who are aware enough to see the value in a card that lasts longer than a year without losing performance in newer titles, people who won't have the confidence to upgrade to another Nvidia card until this gets fixed. Nvidia's desire to save money by not optimizing their older cards, to force people to upgrade, could backfire. They are in the business of selling graphics cards, not Kleenex wipes, and the GPU business needs the confidence of its customers, the great majority of whom don't upgrade every year.
Quote:


> Originally Posted by *Defoler*
> 
> You can relax. You are wrong as usual. There have been several discussions on this in the past. AMD saying that async is not possible in Nvidia is of course the most reliable source of information regarding nvidia


I don't know how often you're wrong, but Nvidia can't do async compute because they simply don't have the ACE hardware. Saying they can is like saying AMD cards can do PhysX: PhysX is offloaded to the CPU on AMD GPUs just as ACE commands are offloaded to the CPU on Nvidia cards, and both implementations suck.








Quote:


> Originally Posted by *Defoler*
> 
> Since then AMD performance in tessellation went down hill, so it is now useless?
> 
> Amount of tessellation can be controlled by the developer. Many end up reducing it to not hurt AMD cards.


You're not thinking clearly. Tessellation is a great feature; a sea full of tessellation beneath the ground is useless, Geralt's x64 tessellated hair that looks exactly the same as x8 is useless, the over-tessellated fur is useless. Developers don't reduce tessellation for AMD hardware. Wrong again: AMD's "tessellation optimized" settings are done through drivers, and even then they can't optimize for GameWorks effects; you have to do that manually. I hope you can spot the difference.


----------



## criminal

Quote:


> Originally Posted by *Dargonplay*
> 
> Because its the non-douchebag thing to do. A GTX 960 beating a GTX 780 and closing up to a Titan seems fine to you? Personally as I like to upgrade every 2-3 generations I wouldn't go with Nvidia again given that AMD cards aren't being dumped in the trash can after just 1 year of being released.
> 
> Also AMD isn't only optimizing Hawaii for the rebrands, they are also optimizing Tahiti and Pitcairn, just watch the increasingly impressive performance those cards can output today compared to last year, or the year before.
> 
> And yes, it can give them money, there are people like me who are aware enough to see the value on a card that last longer than a year without decreasing performance on newer titles, people who won't have the confidence on upgrading to another Nvidia card in the future until this gets fixed, Nvidia's desire of saving money by choosing to not optimize their older cards to force people to upgrade could backfire, they are in the business of selling Graphics Cards not Kleenex wipes, the GPU Business needs the confidence of its customers where the great majority of them don't upgrade every year.
> I don't know how often you're wrong but Nvidia can't do Async Computing because they simply don't have the ACE Hardware, saying they can is like saying AMD Cards can do PhysX, PhysX is offloaded to the CPU on AMD GPUs as ACE Commands are offloaded to the CPU on Nvidia cards, both implementations sucks.
> 
> 
> 
> 
> 
> 
> 
> 
> You're not thinking clearly. Tessellation is a great feature, a sea beneath the ground full of Tessellation is useless, Gerald's x64 Tessellation hair that looks exactly the same as X8 is useless, the over Tesselleted fur is useless, Developers don't reduce tessellation for AMD Hardware, Wrong again, AMD's "Tessellation Optimized Settings" do through drivers and even then they can't optimize for Gameworks effect, you'll have to do it manually. I hope you can spot the difference.


Part of my urge to upgrade is more want than need. I like getting new hardware. Having said that, when I upgrade this time I fully plan to go AMD. I have had my 980 for 13 months and I have the stupid upgrade itch.


----------



## Assirra

Quote:


> Originally Posted by *kx11*
> 
> it's not full speed for some weird reason
> 
> 
> 
> 
> 
> 
> 
> , usually i get 35mb/s DL speed


What kind of crazy line do you got?


----------



## magnek

Dang, and I thought I was pimpin' running a 150M line and getting 21 MB/s on Steam.









#firstworldproblems


----------



## GorillaSceptre

Quote:


> Originally Posted by *kx11*
> 
> it's not full speed for some weird reason
> 
> 
> 
> 
> 
> 
> 
> , usually i get 35mb/s DL speed


Do you mean 35MB/s? If so then that is insane.. Those speeds you posted already exceed 35Mb/s.

I can't imagine what it's like to not have to wait 3 days to download a game..


----------



## Fyrwulf

Just downloaded a 130MB update to Steam in under a minute...


----------



## Dargonplay

Quote:


> Originally Posted by *Fyrwulf*
> 
> Just downloaded a 130MB update to Steam in under a minute...


I can do that in 10 seconds, and I still think my connection is slow. I wish Comcast would offer 1 Gigabit speeds in Miami; they only offer 2 Gigabit for $300 on top of my already expensive $100 TV service. It's either 100Mbps (the one I've got) or 2 Gigabit, with nothing in between. It's dumb.

Anyway, we went a little off topic here.


----------



## Lex Luger

So.......does the beta have a dx12 renderer? Any news yet?


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *kx11*
> 
> AMD seems to be all over this game


What case is that? Its really nice!


----------



## cainy1991

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> What case is that? Its really nice!


Looks like an inwin 909 to me.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *cainy1991*
> 
> Looks like an inwin 909 to me.


Yep, that certainly looks like it. Thank you good sir! +1


----------



## Potatolisk

I wonder what the gains from async shading are in this beta: FPS (async on) / FPS (async off). But I guess players don't have control over that. In the AotS beta 2 it's about 35/30.


----------



## Mahigan

Quote:


> Originally Posted by *Potatolisk*
> 
> I wonder how are gains from async shading in this beta. FPS(async on)/FPS(async off). But I guess players don't have control over that. In AotS beta 2 it's about 35/30.


Considering AMD want to use Hitman as a show off title for the technology, they'll likely move dynamic lighting over to Async (10% boost), Post Processing (up to 45%) and maybe physics (30%). So a healthy boost.


----------



## Dargonplay

Quote:


> Originally Posted by *Mahigan*
> 
> Considering AMD want to use Hitman as a show off title for the technology, they'll likely move dynamic lighting over to Async (10% boost), Post Processing (up to 45%) and maybe physics (30%). So a healthy boost.


I don't know how or if this is possible, but could ACE Engines be used to Offload Nvidia PhysX from the CPU to the ACE modules? Not saying that Nvidia would allow it, just asking if its technically possible to run PhysX on an ACE.


----------



## Mahigan

Quote:


> Originally Posted by *Dargonplay*
> 
> I don't know how or if this is possible, but could ACE Engines be used to Offload Nvidia PhysX from the CPU to the ACE modules? Not saying that Nvidia would allow it, just asking if its technically possible to run PhysX on an ACE.


Yes.

TressFX/3.0/PureHair as well.


----------



## Dargonplay

Quote:


> Originally Posted by *Mahigan*
> 
> Yes.
> 
> TressFX/3.0/PureHair as well.


Now I see why Nvidia wants to delay the widespread use of async compute.

If Pascal doesn't sport ACEs, AMD is going to hammer Nvidia for the first time in several generations, IF developers actually start using it. With DirectX 12 reducing API overhead and improving CPU performance across all cores, combined with async compute offloading all the physics, post-processing and dynamic lighting from the CPU, I can see how AMD's multi-core CPUs could make a huge comeback with Zen.

This is shaping up to be one of the most interesting times for gaming in the past decade, one where even Gimpworks will be good for the consumer, with its effects being offloaded to the ACEs. It might actually end up benefiting both AMD and Nvidia equally.


----------



## mtcn77

Quote:


> Originally Posted by *Mahigan*
> 
> Considering AMD want to use Hitman as a show off title for the technology, they'll likely move dynamic lighting over to Async (10% boost), *Post Processing (up to 45%)* and maybe physics (30%). So a healthy boost.


I don't like this sort of calculation. Going from 158/245 FPS (post-processing on/off) to 230/245 FPS (post-processing with asynchronous shaders on/off) is not a 46% improvement; the post-processing deficit shrinks from 87 FPS to 15 FPS, a 5.8x (580%) reduction.
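For what it's worth, both readings can be checked with quick arithmetic on the figures quoted above (a throwaway sanity check using the hypothetical 158/245 and 230/245 numbers, not measured data):

```python
# FPS figures quoted above (hypothetical): scene without post-processing,
# with post-processing (async off), and with post-processing (async on).
no_pp, pp_off, pp_on = 245.0, 158.0, 230.0

# Reading 1: raw FPS gain on the post-processed scene (the "45%" figure)
fps_gain = pp_on / pp_off - 1

# Reading 2: shrinkage of the post-processing deficit (87 FPS -> 15 FPS)
deficit_before = no_pp - pp_off
deficit_after = no_pp - pp_on
shrinkage = deficit_before / deficit_after

print(round(fps_gain * 100, 1), deficit_before, deficit_after, round(shrinkage, 1))
# prints: 45.6 87.0 15.0 5.8
```

So the same benchmark run supports either a "~46% faster" or a "~5.8x smaller cost" headline, depending on which quantity you count from.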


----------



## Dudewitbow

Quote:


> Originally Posted by *Dargonplay*
> 
> If Pascal doesn't sport ACE AMD is going to hammer Nvidia down for the first time in several generations IF Developers actually start using it:


For clarity, they already use it; it's usually just removed in the DX11 port if the game was a console port (as consoles basically require it to maximize performance on early/mid-range hardware). It's a matter of convincing developers to use the same programming logic they used on the original console version. This is of course partially tied to the adoption rate of Windows 10, as the functions are only usable on DX12 until developers start pushing Vulkan.

On the flipside, I do wish AMD would up their tessellation performance on the 14nm GPUs regardless. Both companies would benefit greatly from learning a bit from the other's performance trends, so that their products don't look as shoddy in niche uses.


----------



## Dargonplay

Quote:


> Originally Posted by *Dudewitbow*
> 
> For clarity, they already use it, its usually just removed in the DX11 port if it was a console port (as consoles basically are required to use it to maximize performance on early/mid entry hardware). Its a matter of basically convincing developers to use the same programming logic they used on the original console version. This of course is partially tied to the adoption rate of Windows 10 as the functions are only usable on DX12 until developers start pushing Vulkan.
> 
> On the flipside, I do wish AMD would up their tessellation performance regardless on the 14nm gpus. Both companies would benefit greatly by learning a bit from the others performance trends so that their product doesn't look as shody in niche uses.


It is my understanding that Nvidia do support async compute, but by offloading all the queue commands to the CPU, which doesn't grant them any improvement in performance; it just allows them to match DirectX 11 performance when forced to use async compute.

What I mean by "supporting ACEs" is that they actually include the hardware in their GPUs for Pascal. Otherwise they'll get hammered by AMD, since all of their physics, post-processing and dynamic lighting could be offloaded to the ACEs, freeing up resources on both the GPU and CPU and thus gaining a huge performance boost.
Quote:


> Originally Posted by *mtcn77*
> 
> I don't like this sort of calculation. 158/245 FPS(Post processing On/Off) to 230/245 FPS(Postprocessing with Asynchronous Shaders On/Off) is not a 46% improvement; it is 5800% improvement.


I think he means ACEs could be used to entirely offload all processing related to one of those effects to make full use of ACEs modules, not offloading the 3 at the same time.


----------



## Mahigan

Quote:


> Originally Posted by *Dargonplay*
> 
> It is my understanding that Nvidia do support Async Computing but offloading all the que Commands on the CPU, which doesn't grant them any improvement in performance, just allow them to match DirectX 11 performance when forced to use Async Computing .
> 
> What I mean by "Supporting ACE" is that they actually include the hardware in their GPUs for Pascal, otherwise they'll get hammered by AMD since all of their Physics, Post Processing and Dynamic Lighting could be offloaded to the ACEs allowing them to enjoy of freed up resources on the GPU and CPU, thus gaining a huge performance boost.
> I think he means ACEs could be used to entirely offload all processing related to one of those effects to make full use of ACEs modules, not offloading the 3 at the same time.


I'll start off with Asynchronous compute,



The top portion is how NVIDIA execute asynchronous code. When NVIDIA Kepler/Maxwell execute asynchronous compute code, they execute it like you see in the DirectX 11 diagram above, with some exceptions. When the graphics queue is executing a compute command, the compute queue can also concurrently execute a compute command. Copy commands can also be executed concurrently on NVIDIA hardware. What NVIDIA don't support is mixing compute with graphics; thus NVIDIA's current hardware does not support asynchronous compute + graphics (which can significantly boost performance). AMD behave like the DirectX 12 portion of this image and support all forms of concurrent execution.

The DirectX12 portion of the image above is what AMD can do. Notice that you have 3 lines of commands running concurrently to one another. Those 3 lines represent the 3 queues (Graphics/3D, Copy and Compute).

It is the same under Vulkan:


Where you see a divergence is on the CPU side. Intel, AMD and NVIDIA all split the command buffer listing (DirectX run time or red bar) across various cores under DirectX12 as such:

They also all split the light blue bar (DirectX driver).

Under DX11,

There's another difference here: whereas AMD GCN executes the DirectX runtime under DX11 like the DirectX 11 shot above, NVIDIA do not. NVIDIA support DX11 multi-threaded command listing while AMD do not, or at least not as effectively.

DirectX 11 multi-threaded command listing basically works like this: batches of commands are pre-recorded on multiple CPU cores, and the primary CPU thread simply plays back the pre-arranged and pre-computed command lists to the NVIDIA driver. The NVIDIA driver compiler orders them into grids and schedules them for execution.

Basically NVIDIA is able to split that red bar, in the DirectX11 shot above, across many CPU threads under DX11. AMD does not or doesn't do as good a job. NVIDIA also already employ a multi-threaded DirectX11 driver (pale blue bar).
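The mechanism being described (DX11 deferred contexts) can be sketched with ordinary threads standing in for the driver. This is a toy model: the function names are illustrative, not D3D API calls.

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_list(draw_calls):
    """Worker thread: pre-record a batch of commands (a 'deferred context')."""
    return [f"draw({d})" for d in draw_calls]

def play_back(command_lists):
    """Primary thread: replay the pre-recorded lists to the driver in order."""
    submitted = []
    for cl in command_lists:
        submitted.extend(cl)
    return submitted

# Draw calls split across three "cores"; recording happens in parallel,
# but playback on the primary thread preserves submission order.
batches = [[0, 1], [2, 3], [4, 5]]
with ThreadPoolExecutor(max_workers=3) as pool:
    lists = list(pool.map(record_command_list, batches))

print(play_back(lists))  # every draw call reaches the driver, in order
```

The expensive part (recording) parallelizes; only the cheap replay stays on the primary thread, which is why a driver that supports this hits the single-thread wall much later.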

This is why NVIDIA don't gain much from DX12 over DX11 performance-wise and AMD gain a lot. AMD GCN hammers the primary CPU thread under DirectX 11, leading to a CPU bottleneck (which is why AMD GCN catches up to NVIDIA hardware as the in-game resolution increases, shifting to a GPU-bound rather than CPU-bound situation). Vulkan will eventually highlight the same behaviour for AMD as the API matures: more performance than under DX11, and similar performance to NVIDIA give or take (NVIDIA's new driver will improve things a bit by allowing concurrent execution of compute commands).

The end result is that AMD GCN gains performance from the get go by running DirectX12 over DirectX11.

If you throw Asynchronous Compute + Graphics into the mix, AMD gain even more performance. How? Asynchronous compute + Graphics significantly lowers frame times (frame latency) thus boosting performance. Asynchronous Compute + Graphics also raises GPU utilization thus minimizing idling resources.

The thing with GCN is that resources are almost always idling. The architecture is highly parallel.

So yeah, AMD's DX11 implementation is inferior to NVIDIA's. There's no denying that. Vulkan and DX12, however, are based on ideas spawned by the Mantle API and thus, like Mantle, are really tailored to AMD hardware as far as performance goes.

Want to see this multiple queue execution in action? GPUView allows you to do just that.

Here's the Fable Legends Fly by demo running on a TitanX. (Note: There are very little Asynchronous work loads in the Fable Legends Fly by test but the released version will include spell effects and more):


Notice the Compute queue is pretty much empty?

Now the same test on a Fury:


Get the idea?

NVIDIA will be able to run asynchronous compute in the sense of running compute commands in the compute queue concurrently with compute commands in the graphics (3D) queue, but not compute commands running concurrently with graphics commands (you can see that in the Fable Legends screenshot above). This is what NVIDIA call "Asynchronous Compute". Kollock, an Oxide developer, mentioned that support for this was recently added to NVIDIA's driver but requires an NVIDIA-specific implementation. However, the real performance gains are to be had from concurrently executing compute and graphics tasks. This is something current NVIDIA architectures are incapable of doing.

Conclusion:
- What AMD mean by Asynchronous Compute is not what NVIDIA mean.
- NVIDIA do not support concurrent executions of Compute + Graphics commands.
- GCN has idling resources from being a very wide architecture. Exploiting those resources through Asynchronous compute can lead to significant performance improvements.
- NVIDIA has little to gain performance-wise on their current architectures under DX12/Vulkan.
- AMDs DirectX11 implementation is inferior and hammers the primary CPU thread leading to a CPU bottleneck (Rise of the Tomb Raider highlighting this).

And that's that.
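The frame-time argument above can be put in toy form: if compute must wait for graphics, a frame costs roughly the sum of the two workloads; if compute can hide behind idle graphics resources, it costs closer to the larger of the two. The numbers and the linear overlap model below are made up for illustration; real GPUs overlap only partially.

```python
def frame_time_serial(graphics_ms, compute_ms):
    # No async compute + graphics: compute waits for graphics to finish
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_ms, overlap=1.0):
    # 'overlap' = fraction of compute work that can hide behind
    # otherwise-idle graphics-pipeline resources (1.0 = perfect overlap)
    hidden = min(compute_ms * overlap, graphics_ms)
    return graphics_ms + compute_ms - hidden

g, c = 12.0, 4.0                            # hypothetical ms per frame
print(frame_time_serial(g, c))              # 16.0 ms  (~62 FPS)
print(frame_time_async(g, c))               # 12.0 ms  (~83 FPS, perfect overlap)
print(frame_time_async(g, c, overlap=0.5))  # 14.0 ms  (partial overlap)
```

This is also why a wide architecture with lots of idling units (GCN, in Mahigan's description) benefits more: there is simply more idle capacity to hide the compute work in.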

Now as for the second question...

Depending on the compute resources idling, say on Fiji, you could offload all three.

Maybe even on Hawaii.

The Fable Legends fly by demo only executed the foliage using Asynchronous compute, the shipping version will also be adding the spell effects and other effects to the mix.


----------



## mtcn77

Quote:


> Originally Posted by *Dargonplay*
> 
> It is my understanding that Nvidia do support Async Computing but offloading all the que Commands on the CPU, which doesn't grant them any improvement in performance, just allow them to match DirectX 11 performance when forced to use Async Computing .
> 
> What I mean by "Supporting ACE" is that they actually include the hardware in their GPUs for Pascal, otherwise they'll get hammered by AMD since all of their Physics, Post Processing and Dynamic Lighting could be offloaded to the ACEs allowing them to enjoy of freed up resources on the GPU and CPU, thus gaining a huge performance boost.
> I think he means ACEs could be used to entirely offload all processing related to one of those effects to make full use of ACEs modules, not offloading the 3 at the same time.


It is actually AMD's citation; however, I still think we should be counting from the performance deficit it incurs (1 − n). Compute is latency-bound, AFAIK, and we should consider the reciprocal of the apparent performance value to get the right idea.


----------



## kx11

Quote:


> Originally Posted by *Assirra*
> 
> What kind of crazy line do you got?


Fibre optic (I think).


----------



## kx11

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Do you mean 35MB/s? If so then that is insane.. Those speeds you posted already exceed 35Mb/s.
> 
> I can't imagine what it's like to not have to wait 3 days to download a game..


12 months ago I was on an ADSL line with 500 kb/s DL, so I know how you feel, bro.

Now it's a 300 mb connection with 35 mb/s DL and 77 mb/s UP.


----------



## BradleyW

Quote:


> Originally Posted by *Mahigan*
> 
> *Conclusion:
> - What AMD mean by Asynchronous Compute is not what NVIDIA mean.
> - NVIDIA do not support concurrent execution of Compute + Graphics commands.
> - GCN has idling resources from being a very wide architecture. Exploiting those resources through Asynchronous Compute can lead to significant performance improvements.
> - NVIDIA has little to gain performance-wise on their current architectures under DX12/Vulkan.
> - AMD's DirectX 11 implementation is inferior and hammers the primary CPU thread, leading to a CPU bottleneck (Rise of the Tomb Raider highlights this).
> *


Completely agree. I've been saying some of these points since 2009, hence the reason I joined this forum: to talk about CPU limitation. Sadly, most people around here still won't believe that overhead is a thing...
It's just good to see someone else who is able to come to the correct conclusions.


----------



## Clocknut

I highly doubt Pascal has hardware ACEs.

ACEs were only highlighted very recently; by then the Pascal design had already been done for years.


----------



## Dargonplay

Quote:


> Originally Posted by *Clocknut*
> 
> I highly doubt Pascal has hardware ACEs.
> 
> ACEs were only highlighted very recently; by then the Pascal design had already been done for years.


If that's the case, then even if Nvidia have 40% more RAW performance than AMD cards, they probably just lost the performance battle this generation... UNLESS they actually make GameWorks incompatible with async compute, leading most developers to refrain from implementing it in any meaningful way.


----------



## jezzer

Not sure; I only remember Nvidia asking the devs of the game to disable some DX12 features because it made their cards look bad.. lol.

I am sure those features were 12.1 and will, if not already, be patched driver-wise.

Secretly I hope it can't be patched and the Maxwell cards with supposed 12.1 feature-level support are missing the tech, which is what AMD claimed a while ago.
If so, I'm definitely getting a refund this summer for my 1-year-old cards and buying a new... Nvidia card

But yeah.. Drivers.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Dargonplay*
> 
> If that's the case, even if Nvidia have 40% more RAW performance than AMD cards they probably just lost the performance battle this generation... UNLESS they actually make Gameworks incompatible with Async Computing leading to most developers refraining on implementing it in any meaningful way.


Would be interesting if that ended up being the real purpose behind GameWorks...


----------



## Fyrwulf

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Would be interesting if that ended up being the real purpose behind game works...


No way the FTC wouldn't go after them if that happened. There are certain things you just can't do as a monopoly.


----------



## jezzer

Something to read about ACE on Maxwell if interested

https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/


----------



## Dargonplay

Quote:


> Originally Posted by *jezzer*
> 
> Something to read about ACE on Maxwell if interested
> 
> https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/


"I'm a third party analyzing other third party's analysis. I could be completely wrong in my assessment of other's assessments







"

"due to the workload (1 large enqueue operation) the GCN benches are actually running "serial" too (which could explain the strange ~40-50ms overhead on GCN for pure compute). So who knows if v2 of this test is really a good async compute test?"

This definitely needs more testing; there's a lot of conflicting information, and as some people point out, that test is flawed. I'm not advocating one way or the other at this point; I'd just love to see the unbiased truth of this.

Also, everything points to Nvidia not having native hardware support for async compute in any meaningful way on Maxwell. Nvidia and Oxide themselves confirmed that Nvidia is implementing a software solution for async compute, which wouldn't be necessary if they had the actual hardware, and which negates any performance advantage.

http://www.guru3d.com/news-story/nvidia-will-fully-implement-async-compute-via-driver-support.html

As Mahigan said:

- Maxwell 2: Queues in Software, work distributor in software (context switching), Asynchronous Warps in hardware, DMA Engines in hardware, CUDA cores in hardware.

- GCN: Queues/Work Distributor/Asynchronous Compute Engines (ACEs/Graphics Command Processor) in hardware, Copy (DMA engines) in hardware, CUs in hardware.

Maxwell can't take advantage of the Asynchronous Warps in its hardware without a proper hardware work distributor and queue scheduler. With a software solution, a queue is created in software, which then sends commands to the software work distributor (context switching) before they reach the Async Warps and are finally sent down the normal pathway to the DMA engines and CUDA cores.

AMD handles this differently and far more efficiently: the queue buffers, work distributor and async compute units are all in hardware, so when a new command arrives it gets sent to the DMA engines and CUs almost instantly.

At least that's what my little understanding of this can grasp.
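As a back-of-the-envelope sketch of that argument (the numbers here are made up for illustration, not measured driver behavior), paying a scheduling cost in software on every command adds up quickly compared to dispatching from hardware queues:

```python
def total_dispatch_us(n_commands, per_command_us, scheduling_us):
    """Time to hand n commands to the execution units when each one
    pays a fixed scheduling cost before it can be dispatched."""
    return n_commands * (per_command_us + scheduling_us)

N = 1000  # hypothetical number of compute dispatches in a frame

# Hardware scheduling (GCN-style ACEs): tiny per-command overhead.
hw = total_dispatch_us(N, per_command_us=1.0, scheduling_us=0.1)

# Software scheduling (driver-side queue + context switch): the same
# work, but every command pays the software overhead first.
sw = total_dispatch_us(N, per_command_us=1.0, scheduling_us=5.0)

print(hw, sw)  # hw = 1100.0 us, sw = 6000.0 us
```

The point isn't the exact figures; it's that a per-command software cost scales with the number of commands, which is exactly where DX12 workloads are heaviest.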


----------



## christoph

Quote:


> Originally Posted by *Dargonplay*
> 
> If that's the case, even if Nvidia have 40% more RAW performance than AMD cards they probably just lost the performance battle this generation... UNLESS they actually make Gameworks incompatible with Async Computing leading to most developers refraining on implementing it in any meaningful way.


It's not raw performance;

Nvidia has always disabled effects so it can have more FPS.

It's like when you play a 4 GB movie file vs. the 8 GB movie file: you can ALMOST tell the difference. The 4 GB file runs at 1200 kbps while the 8 GB one runs at 2400 kbps, and of course the 2400 kbps file needs more power to be played back...

My friend and I just compared the image quality of a game (Metal Gear V) between an Nvidia 960 and my AMD 390; same computer, we just swapped the video card to test the game, and the game looks way better on my 390. Do I have to add that it was my friend's idea? He was an Nvidia fanboy.


----------



## Dargonplay

Quote:


> Originally Posted by *christoph*
> 
> is not raw performance;
> 
> Nvidia has always disable effects so it can have more FPS
> 
> is like when you play a 4 GB movie file vs the 8 GB movie file, you ALMOST can tell the difference, the 4GB file is running at 1200 kbps while the 8 GB is at 2400 kbps, of course the 2400 kbps file needs more power to be played back...
> 
> My friend and I've just compare the image quality of a game (metal gear v) between the Nvidia 960 vs my AMD 390, same computer, we just swap the video card to test the game and the game looks way better in my 390, do I have to add that it was my friends idea? he was a Nvidia fanboy


I think you are referring to Nvidia's Maxwell delta color compression, which is indeed a compression technique that lowers quality to favor performance, so you are right. It's an architectural feature and it cannot be turned off, sadly, at least not on current drivers; correct me if I'm wrong.


----------



## Themisseble

Quote:


> Originally Posted by *jezzer*
> 
> Something to read about ACE on Maxwell if interested
> 
> https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/


He shows examples of async (compute + graphics), compute-only and graphics-only.
His example shows:

Compute only - 11 ms until reaching more than 30 threads.
Graphics only - 17 ms
Async - I can't see 17 ms... it starts at 27-28 ms (17 + 11).

Where is the async compute?

R9 390X:
Compute only - 52 ms
Graphics only - 27 ms
Async - 52 ms

... so where are the async shaders on the GTX 980 Ti?

So the GTX 980 Ti can run async compute, but there is no advantage?
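Those figures fit a simple model where the 980 Ti serializes the two workloads while the 390X overlaps them. A quick (admittedly crude) check using the numbers quoted above:

```python
# Times (ms) quoted above from the Reddit async compute test.
gtx_980ti = {"graphics": 17, "compute": 11, "observed_async": 28}
r9_390x   = {"graphics": 27, "compute": 52, "observed_async": 52}

def serial_ms(card):
    # No overlap: the queues drain one after the other.
    return card["graphics"] + card["compute"]

def concurrent_ms(card):
    # Full overlap: the frame costs roughly the longer queue.
    return max(card["graphics"], card["compute"])

print(serial_ms(gtx_980ti))    # 28 -> matches the 980 Ti's observed time
print(concurrent_ms(r9_390x))  # 52 -> matches the 390X's observed time
```

So on this model, the 980 Ti's async result is exactly what serialized execution would predict, while the 390X's matches full overlap.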


----------



## Kana-Maru

Quote:


> Originally Posted by *Dargonplay*
> 
> I think you are referring to Nvidia's Maxwell Delta Color Compression, which is indeed a compression technique that lowers quality to favor performance so you are right. This is an architecture feature and it cannot be turned off sadly, at least not on current drivers, correct me if I'm wrong.


Didn't AMD use that type of compression a long time ago? The difference was that AMD's didn't kill visual quality.


----------



## christoph

Quote:


> Originally Posted by *Dargonplay*
> 
> I think you are referring to Nvidia's Maxwell Delta Color Compression, which is indeed a compression technique that lowers quality to favor performance so you are right. This is an architecture feature and it cannot be turned off sadly, at least not on current drivers, correct me if I'm wrong.


That's right; I had to explain it like that because only a few people recall that "technique".

And yes, they used it, but I think it was with the HD 3000 series. At that time I was using Nvidia, so I saw no difference; then I switched back to AMD with the HD 4000 series and the image looked way better; then back to Nvidia, and now back to AMD...

I've always liked AMD more than Nvidia, but I've always bought what I could afford... Both brands are good and have their "features", so I like both, but definitely when it comes to prices, well, you know...


----------



## Disharmonic

Actually, Delta Color Compression is lossless and is used by both Maxwell and GCN 1.2.


----------



## Dargonplay

Quote:


> Originally Posted by *Themisseble*
> 
> He shows example of async (compute + graphics), compute only and graphics only.
> His example shows
> 
> Compute only - 11ms until reaching more than 30 threads.
> Graphics only - 17ms
> asyncs - I cant see 17ms.... its starts with 27-28ms (17+11).
> 
> Where is async compute?
> 
> R9 390x
> Compute only - 52ms
> Graphics only - 27ms
> asyncs - 52ms
> 
> ... so where are async shaders on GTX 980TI?
> 
> So GTX 980Ti can run async compute, but there is no advantages?


Basically it can only do one thing or the other, not both at the same time like GCN; each time an async compute operation begins, a graphics operation stops. With properly supporting hardware, both can happen at the same time, which is where the huge performance gains come from.


----------



## infranoia

I have always noticed AMD's superior IQ in A/B tests, but those are fighting words to those who can't see it.

Especially when framerate and image latency are given higher perceived value than IQ. That valuation of motion over IQ just isn't my personal preference; IQ impacts me more.

Will be very interesting to see what corners are cut with the new architectures, what with the focus on low-latency VR.


----------



## criminal

Quote:


> Originally Posted by *infranoia*
> 
> I have always noticed AMD's superior IQ in A/B tests, but those are fighting words to those who cannot.
> 
> Especially when framerate and image latency is a higher perceived value than IQ. That valuation of motion over IQ just isn't my personal preference-- IQ impacts me more.
> 
> Will be very interesting to see what corners are cut with the new architectures, what with the focus on low-latency VR.


I have never been able to compare side by side, but I have heard this many times. I will hopefully get to see first hand when Polaris is released.


----------



## mcg75

Quote:


> Originally Posted by *criminal*
> 
> I have never been able to compare side by side, but I have heard this many times. I will hopefully get to see first hand when Polaris is released.


I've switched twice before; AMD's default colors are more saturated. To some, this is better image quality.

Look at it logically. AMD/Nvidia have had zero issues slinging mud at each other in the past for cheating.

But in all the things brought up by Roy Taylor or Richard Huddy about Nvidia doing underhanded things, cheating on image quality has never been mentioned.

If there was something to this, they'd be all over it.


----------



## Dargonplay

Quote:


> Originally Posted by *mcg75*
> 
> I've switched twice before, AMD's default colors are more saturated. To some this is better image quality.
> 
> Look at it logically. AMD/Nvidia have had zero issues slinging mud at each other in the past for cheating.
> 
> But in all the things brought up by Roy Taylor or Richard Huddy about Nvidia doing underhanded things, cheating on image quality has never been mentioned.
> 
> If there was something to this, they'd be all over it.


That's what I always think when I hear these claims. Maybe they're not riding this because they're doing something worse themselves? idk.


----------



## Forceman

Quote:


> Originally Posted by *mcg75*
> 
> I've switched twice before, AMD's default colors are more saturated. To some this is better image quality.


That's been my experience as well.

If there was something to be found, someone would have found it by now. If not a tech site, then at least some Reddit user.


----------



## Themisseble

Quote:


> Originally Posted by *mcg75*
> 
> I've switched twice before, AMD's default colors are more saturated. To some this is better image quality.
> 
> Look at it logically. AMD/Nvidia have had zero issues slinging mud at each other in the past for cheating.
> 
> But in all the things brought up by Roy Taylor or Richard Huddy about Nvidia doing underhanded things, cheating on image quality has never been mentioned.
> 
> If there was something to this, they'd be all over it.


But there is a problem.
NVIDIA is actually making it so with GeForce Experience.
http://www.geforce.com/geforce-experience
I tried a GTX 970 and an R9 390, and with Nvidia GeForce Experience the GTX 970 was faster... but I noticed the image wasn't that great, though performance was better and smoother. In-game I maxed everything out, yet GeForce Experience "fixed" it for me. At first I thought it was just that the GTX 970 handled this game better, but it wasn't... even a TITAN X showed much worse quality than a Fury X.

Anyway, I uninstalled it and suddenly the GTX 970 had the same image, but it was "way" slower.


----------



## sugarhell

Quote:


> Originally Posted by *Themisseble*
> 
> But there is a problem.
> NVIDIA is actually making it so with Geforce experience.
> http://www.geforce.com/geforce-experience
> I tried GTX 970 and GTX 390 and with nvidia Geforce experience GTX 970 was faster... but I noticed that image is not that great, but performance was better and smoother. Well in-game I maxed out everything yet Geforce experience fixed it for me. First I though that was only, because GTX 970 would handle this game better but it wasnt... even TITAN X showed much worse quality than Fury X.
> 
> Anyway I uninstalled it and suddenly GTX 970 had same image, but it was "way" slower.


Tell me this is trolling right?


----------



## Themisseble

Quote:


> Originally Posted by *sugarhell*
> 
> Tell me this is trolling right?


Nope, it's true.

Well, I will find you the forum... where they ...


----------



## Kana-Maru

I had about 3 years with my GTX 670s in 2-way SLI. I had many months of playing Middle-earth: Shadow of Mordor and Batman: Arkham Knight, and years on Crysis 3, Hitman, Metro and Tomb Raider. When I switched to the Fury X I *instantly* noticed the better image quality, and so did my gf.


----------



## infranoia

To me it's apparent, but I wouldn't even try to convince anyone who doesn't immediately see it. The IQ differences are either subtle to you or completely obvious, depending on your tolerance for such things. Ultimately though, to each his own.

That said, there are far more examples of "hey, what's that all about?" in AMD's favor, than there have been in Nvidia's favor, as far as IQ goes.

https://youtu.be/3MGLQWlXo0Q?t=2m47s

I guess we're veering a bit off-topic. I think what is on-topic is whether IQ will suffer further as a way for Nvidia to approach parity with GCN, if async compute actually results in a huge performance delta (which I still have my doubts it will). Speculation on top of speculation. Must be OCN.


----------



## criminal

Quote:


> Originally Posted by *mcg75*
> 
> I've switched twice before, AMD's default colors are more saturated. To some this is better image quality.
> 
> Look at it logically. AMD/Nvidia have had zero issues slinging mud at each other in the past for cheating.
> 
> But in all the things brought up by Roy Taylor or Richard Huddy about Nvidia doing underhanded things, cheating on image quality has never been mentioned.
> 
> If there was something to this, they'd be all over it.


Quote:


> Originally Posted by *Forceman*
> 
> That's been my experience as well.
> 
> If there was something to be found, someone would have found it by now. If not a tech site, then at least some Reddit user.


I don't believe that Nvidia is doing anything shady in reducing textures quality or image quality to get better performance. I was just stating that I have never seen the quality difference first hand. But I do believe it is as both of you say in that it is color saturation differences.
Quote:


> Originally Posted by *infranoia*
> 
> To me it's apparent, but I wouldn't even try to convince anyone who doesn't immediately see it. The IQ differences are either subtle to you or completely obvious, depending on your tolerance for such things. Ultimately though, to each his own.
> 
> That said, there are far more examples of "hey, what's that all about?" in AMD's favor, than there have been in Nvidia's favor, as far as IQ goes.
> 
> https://youtu.be/3MGLQWlXo0Q?t=2m47s
> 
> I guess we're veering a bit off-topic-- I think what is on-topic is if IQ will suffer further as a way for Nvidia to approach parity with GCN, if ASC actually results in a huge performance delta (which I still have my doubts it will). Speculation on top of speculation. Must be OCN.


Watching that video it seems that it depends on the scene which one looks better in my opinion. I wouldn't label either one as looking "better".


----------



## Themisseble

Quote:


> Originally Posted by *criminal*
> 
> I don't believe that Nvidia is doing anything shady in reducing textures quality or image quality to get better performance. I was just stating that I have never seen the quality difference first hand. But I do believe it is as both of you say in that it is color saturation differences.


Really?

Of course there is no difference if all settings are maxed out. But GeForce Experience needs to be uninstalled, or maybe you can set max quality in there. With GeForce Experience on its defaults you will never have the same image as AMD.

10%, right?

GeForce Experience is the worst app ever made. It didn't allow me ....


----------



## infranoia

Quote:


> Originally Posted by *criminal*
> 
> I don't believe that Nvidia is doing anything shady in reducing textures quality or image quality to get better performance. I was just stating that I have never seen the quality difference first hand. But I do believe it is as both of you say in that it is color saturation differences.
> Watching that video it seems that it depends on the scene which one looks better in my opinion. I wouldn't label either one as looking "better".


To my eye, it's not saturation. It's the texture definition and the quality/precision of the anisotropic filtering. Also consider the shader definition of the smoke and fire; it's blurry on the 980 Ti.

I see that in other titles as well, when comparing the 290x to this 960.


----------



## criminal

Quote:


> Originally Posted by *Themisseble*
> 
> really?
> 
> Ofcourse there is no difference if all settings are maxed out. But Geforce experience need to be uninstalled or maybe, if you set max quality in there. But With Geforce experience on default you will never have same image as AMD.
> 
> Geforce experience is worse app ever made. It didnt allow me ....


I don't use Geforce Experience recommended settings, so...


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> For my eye, it's not saturation. It's the texture definition and the quality / precision of the anisotropic filter. Also consider the shader definition of the smoke and fire. It's blurry on the 980Ti.
> 
> I see that in other titles as well, when comparing the 290x to this 960.


Of course there's also the possibility that the "performance" texture setting in the respective control panels have different optimizations. Setting high quality should eliminate that factor.

The incidents I can remember were Sleeping Dogs, where Nvidia wasn't rendering some lights on some driver sets, and then the bug in Battlefield 4 where the in-game texture settings got stuck on low (which briefly caused a firestorm of controversy here).


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> *Of course there's also the possibility that the "performance" texture setting in the respective control panels have different optimizations. Setting high quality should eliminate that factor.*
> 
> The incidents I can remember were Sleeping Dogs, where Nvidia wasn't rendering some lights on some driver sets, and then the bug in Battlefield 4 where the in-game texture settings got stuck on low (which briefly caused a firestorm of controversy here).


Yeah, that's fine, but are review sites doing that? If they are benchmarking with lower image quality because of Nvidia's driver, then you can throw the FPS results out the window.

If i had a recent Nvidia card i would check myself.


----------



## Assirra

About the whole IQ thing:
this argument has popped up a couple of times, yet it feels strange that no tech site has ever actually researched it with that in mind.
Yes, we got some videos, but who knows what settings those were; it might be something in a control panel, a setting, or who knows what.

I really feel some tech site/tech YouTuber should investigate it with only that in mind.
If it's true, it's at least official and I know I will take it into account for my next purchase; if it isn't, then that argument will eventually die off.


----------



## Tivan

Quote:


> Originally Posted by *Assirra*
> 
> About the whole IQ thing.
> This argument has been popping up a couple times yet it feels strange that there is no tech site that actually ever did research it with that mind.
> Yes we got some videos but who knows what settings those were, might be something in a control panel, an setting or who knows what.
> 
> I really feel some techsite/tech youtuber should just investigate it with only that in mind.
> If it is true, it is at least official and i know i will take it into account for my next purchase, if it ain't then that argument will eventually die off.


Tech sites care about things that have outside traction. Like new releases. Investigative journalism doesn't pay.
Sad, but it can't be helped, unless we introduce something like an unconditional basic income (for journalists).


----------



## f1LL

The guys at Gamers Nexus seem very neutral, and I think they might be interested in investigative stuff.

Their YouTube: https://www.youtube.com/channel/UChIs72whgZI9w6d6FhwGGHA
Their homepage: http://www.gamersnexus.net/

P.S. I'm not affiliated with them in any way, so this is not meant as promotion. I just like watching their videos on YouTube.


----------



## Forceman

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Yeah, that's fine, but are review sites doing that? If they are bench-marking with lower image quality because of Nvidia's driver, then you can throw the fps results out the window.
> 
> If i had a recent Nvidia card i would check myself.


I think I checked it last summer when this came up then (because of the BF4 video) and it didn't make squat for a difference in FPS, at least on a 960/290X. But I didn't see a quality difference either, so who knows. There's a thread about it somewhere.

Edit: Well, my 5 minute testing showed jack-all for difference. Using JC2 (only thing I had handy with a built-in benchmark) High Performance was 78.22 FPS, High Quality was 77.29 FPS, and Quality (the default) was 77.62 FPS on a GTX 960. On the 290X it is Performance 97.07, Standard 95.59, and High 94.19. Different settings and resolution obviously, but a slightly larger split percentage-wise.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Forceman*
> 
> I think I checked it last summer when this came up then (because of the BF4 video) and it didn't make squat for a difference in FPS, at least on a 960/290X. But I didn't see a quality difference either, so who knows. There's a thread about it somewhere.
> 
> Edit: Well, my 5 minute testing showed jack-all for difference. Using JC2 (only thing I had handy with a built-in benchmark) High Performance was 78.22 FPS, High Quality was 77.29 FPS, and Quality (the default) was 77.62 FPS on a GTX 960.


Yup, this was the thread i think: http://www.overclock.net/t/1563386/texture-filtering-quality-thread-amd-gcn-vs-nvidia-maxwell-and-kepler#post_24126882 TL;DR = Probably nothing, but still inconclusive.

Gregster noted a 7 fps decrease when he set the driver to prefer max quality. The problem with that thread is that no one had a standard for testing; they just posted screens. We need to know whether person X was using GE while person Y wasn't, etc. Maybe it was just a bug in that title, but if no one tests further, we'll never know.

Do you change your driver settings through GE? It's been a while so i can't remember.
Quote:


> Originally Posted by *f1LL*
> 
> The guys at Gamers Nexus seem very neutral and I think they might be interested in investigative stuff.
> 
> Their Youtube: https://www.youtube.com/channel/UChIs72whgZI9w6d6FhwGGHA
> Their HP: http://www.gamersnexus.net/
> 
> p.s. I'm not affiliated with them in any way, so this not meant as promotion. I just like to watch their videos on youtube.


Yeah i enjoy their stuff too.

Hopefully they check it out.


----------



## christoph

I mentioned this color compression just to have something to keep in mind when it comes to choosing which brand to buy, not to start a war over who's cheating. Why?

Because, like our friend above said, the perception of saturated or compressed colors differs from user to user. One can only claim that:

AMD is over-saturating colors to make the image look better, and Nvidia's delta color compression is lossless, which is why it looks pale...

or

Nvidia is compressing colors to get better performance, and AMD's color compression is lossless, which is why it looks brighter...


----------



## sugarhell

Quote:


> Originally Posted by *christoph*
> 
> I've mentioned this color compression just to have something to have in mind when in comes to choose which brand to buy, not to start a war who's cheating or not, why??
> 
> cuz, like our friend above said, the definition of saturating colors or compressing colors is different to the eyes of every user, one can only claim that,
> 
> AMD is over saturating colors to make it look better so Nvidia delta color compression is Lossles that's why it look pale...
> 
> or
> 
> Nvidia's compressing color to have better performance and AMD's color compressing is Lossles that's why it looks brighter...


This has nothing to do with color compression.

By default, the RGB palette in AMD's drivers is more saturated than Nvidia's. A lot of users actually try to replicate the same colors, but it's quite difficult to match them exactly.

IQ differences are always there, but the untrained eye can't see them, so they are irrelevant.


----------



## Forceman

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Yup, this was the thread i think: http://www.overclock.net/t/1563386/texture-filtering-quality-thread-amd-gcn-vs-nvidia-maxwell-and-kepler#post_24126882 TL;DR = Probably nothing, but still inconclusive.
> 
> Gregster noted a 7fps decrease when he set the driver to prefer max quality. The problem with that thread is that no one had a standard for testing, they just posted screens. We need to know if person X was using GE while person Y wasn't etc. Maybe it was just a bug in that title, but if no one tests further then we'll never know.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Do you change your driver settings through GE? It's been a while so i can't remember.


I'd be interested to see his control panel settings, because he says, "Guys, I did one video with the default Nvidia settings, which is 'let the 3D application decide', and then did another with 'prefer max quality', and this showed there to be an IQ difference." But there is no option for "let the 3D application decide" or "prefer max quality" in the texture filtering options; it's just High Performance, Performance, Quality, and High Quality. So who knows what all it was changing.


----------



## Mahigan

Many users have mentioned the LOD bias, but that driver feature no longer works on NVIDIA; setting it to its lowest or highest value does nothing.


----------



## Dargonplay

Yes, we need a big tech site to investigate this matter, as I've been able to notice the change myself. Although our experiences are all subjective and anecdotal, we all agree that AMD has different image quality compared to Nvidia; some say better, others just different, but we agree on the fact that it is indeed different.

I'd like to see a review on this and on how exactly the image changes between cards. It would also be amazing if they could measure it; maybe a tool to measure polygons or textures applied? The filter intensity? We really need some focused research!


----------



## Majin SSJ Eric

I have to say, having owned both dual Titan and dual 7970 setups that I never really noticed any significant IQ differences between the two platforms TBH. I run all my games with the slider set to max quality rather than performance in the CP and games like Crysis 3 and Metro LL look absolutely stunning on my Titans. FWIW.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I have to say, having owned both dual Titan and dual 7970 setups that I never really noticed any significant IQ differences between the two platforms TBH. I run all my games with the slider set to max quality rather than performance in the CP and games like Crysis 3 and Metro LL look absolutely stunning on my Titans. FWIW.


I do not think it's possible to tell unless you look for it. I did notice different colors when I used a 560M with my PC monitor.


----------



## Kana-Maru

Quote:


> Originally Posted by *infranoia*
> 
> To me it's apparent, but I wouldn't even try to convince anyone who doesn't immediately see it. The IQ differences are either subtle to you or completely obvious, depending on your tolerance for such things. Ultimately though, to each his own.
> That said, there are far more examples of "hey, what's that all about?" in AMD's favor, than there have been in Nvidia's favor, as far as IQ goes.
> 
> https://youtu.be/3MGLQWlXo0Q?t=2m47s


From where the video starts you can CLEARLY see that AMD has superior quality. I'm not sure what anyone is missing, but those blurry areas on the 980 Ti are extremely noticeable. It looks almost like a console title. I'm hoping I'm not the only one that notices the difference in quality.


----------



## Dargonplay

Quote:


> Originally Posted by *Kana-Maru*
> 
> From where the video starts you can CLEARLY see that AMD has superior quality. I'm not sure what anyone is missing, but those blurry areas on the 980 Ti are extremely noticeable. It looks almost like a console title. I'm hoping I'm not the only one that sees the difference in quality.


Yes, it's very easy to spot. Upping the texture filtering to max would probably make AMD and Nvidia equal, but then again we'd need proper benchmarks and measurements for this. TECHSPOT/TOM'S HARDWARE, IF YOU'RE READING THIS... DO IT!


----------



## infranoia

The problem with IQ analysis is that it's just like the American political system. You vote along party lines. So that takes Tom's Hardware out, along with most other so-called journalists in the GPU industry.

The effect is so subtle to gamers who have already made their choice that it's hardly an article you could hang your hat on.

You risk angering the 80% gorilla in the room for a largely subjective and engine-dependent conclusion.

Point being, I don't really need an article to tell me that AMD has better IQ, I see it every day flipping between Hawaii and Maxwell-- but I would still question that article's conclusions, whatever they are. While it's obvious to me, I wouldn't assume that everyone has the same set of criteria when they look at game visuals. If your cortex is wired differently than mine, right on man.

But it is interesting that if you Google AMD vs. Nvidia IQ, nearly every single hit is questioning why AMD has better visuals. That's interesting in itself. Maybe that IQ article needs to start with a crowdsourced image comparison.


----------



## looniam

Quote:


> Originally Posted by *Mahigan*
> 
> Many users have mentioned the LOD bias but that driver feature no longer works in NVIDIA. Setting it to its lowest or highest setting does nothing.


it works, if you know what you're doing.











----------



## gamervivek

Quote:


> Originally Posted by *mcg75*
> 
> I've switched twice before, AMD's default colors are more saturated. To some this is better image quality.
> 
> Look at it logically. AMD/Nvidia have had zero issues slinging mud at each other in the past for cheating.
> 
> But in all the things brought up by Roy Taylor or Richard Huddy about Nvidia doing underhanded things, cheating on image quality has never been mentioned.
> 
> If there was something to this, they'd be all over it.


Accusations about image quality do go around. The last big furore was AMD being blamed for poor AF, but it ended up blowing up in Nvidia's face when it was found that Nvidia had a lower LoD and textures were getting blurred in the distance.









An image quality thread ended up on reddit too today.

https://www.reddit.com/r/Amd/comments/46issb/just_switched_from_green_to_red_and_immediately/


----------



## mcg75

Quote:


> Originally Posted by *infranoia*
> 
> To me it's apparent, but I wouldn't even try to convince anyone who doesn't immediately see it. The IQ differences are either subtle to you or completely obvious, depending on your tolerance for such things. Ultimately though, to each his own.
> 
> That said, there are far more examples of "hey, what's that all about?" in AMD's favor, than there have been in Nvidia's favor, as far as IQ goes.
> 
> https://youtu.be/3MGLQWlXo0Q?t=2m47s


You can see it there.

How about this one though? Watch in 4K with no AA applied. That first video uses FXAA. I wonder if how each camp applies FXAA has something to do with this because I'm not seeing it in the 4K video. I noticed several spots the Fury X was more blurred. But I'm 99.9% sure it's a combination of being in motion and compression artifacts.


----------



## ZealotKi11er

Quote:


> Originally Posted by *mcg75*
> 
> You can see it there.
> 
> How about this one though? Watch in 4K with no AA applied. That first video uses FXAA. I wonder if how each camp applies FXAA has something to do with this because I'm not seeing it in the 4K video. I noticed several spots the Fury X was more blurred. But I'm 99.9% sure it's a combination of being in motion and compression artifacts.


Asked 3 people who have no idea what the Fury X and GTX 980 Ti are, and they picked the left side.


----------



## Kollock

Quote:


> Originally Posted by *infranoia*
> 
> The problem with IQ analysis is that it's just like the American political system. You vote along party lines. So that takes Tom's Hardware out, along with most other so-called journalists in the GPU industry.
> 
> The effect is so subtle to gamers who have already made their choice that it's hardly an article you could hang your hat on.
> 
> You risk angering the 80% gorilla in the room for a largely subjective and engine-dependent conclusion.
> 
> Point being, I don't really need an article to tell me that AMD has better IQ, I see it every day flipping between Hawaii and Maxwell-- but I would still question that article's conclusions, whatever they are. While it's obvious to me, I wouldn't assume that everyone has the same set of criteria when they look at game visuals. If your cortex is wired differently than mine, right on man.
> 
> But it is interesting that if you Google AMD vs. Nvidia IQ, nearly every single hit is questioning why AMD has better visuals. That's interesting in itself. Maybe that IQ article needs to start with a crowdsourced image comparison.


Personally, I think it's (mostly) red herring. Or green herring, depending on your viewpoint. A blue herring might actually exist though.

First, there are few places where you could cheat and not violate the WHQL rules. The most likely culprits for differences would be precision on various shader instructions. I've seen a few notable things in my career, but the biggest offender was Intel, where they would do things like use only a half-precision exponent function. This was quite a while ago though; all the new shader instructions are typically specified to within 1 ULP. The other area you might see some fuzziness is perhaps some slightly lossy compression, or bits of precision being shaved off in the texture filtering, though likely not enough to be noticeable. Remember, most of your frame is being calculated by shaders these days. It's not like it used to be.

Second, we have a mode in Ashes where we can AFR across an NV and AMD card (to be released soon). Everyone was expecting that precision differences would cause a very noticeable flicker. But the reality is that it's not all that noticeable, even to experts. If you had big image difference across cards, I'd expect to see a very obvious strobe effect which we don't see.
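The strobe Kollock describes is also easy to test for numerically. A rough sketch (hypothetical, pure Python) that compares frame-to-frame change against every-other-frame change; in alternating-vendor AFR, a large lag-1 to lag-2 ratio would indicate the two cards are producing visibly different output:

```python
def strobe_ratio(frame_means, eps=1e-6):
    """Ratio of consecutive-frame change to every-other-frame change.

    frame_means: per-frame average brightness (one float per frame).
    A high ratio means adjacent frames differ much more than frames two
    apart -- the alternating pattern you'd expect if the two GPUs in an
    AFR pair rendered visibly different images.
    """
    lag1 = [abs(b - a) for a, b in zip(frame_means, frame_means[1:])]
    lag2 = [abs(b - a) for a, b in zip(frame_means, frame_means[2:])]
    mean1 = sum(lag1) / len(lag1)
    mean2 = sum(lag2) / len(lag2)
    return mean1 / max(mean2, eps)

# Alternating output: odd and even frames visibly differ -> large ratio.
print(strobe_ratio([0.50, 0.60, 0.50, 0.60, 0.50, 0.60]))
# Steady output: no flicker signal -> ratio near zero.
print(strobe_ratio([0.55, 0.55, 0.55, 0.55, 0.55, 0.55]))
```

This ignores genuine scene motion, so in practice you'd want a static camera, but it illustrates why a real cross-vendor difference should have been obvious in Kollock's AFR mode.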


----------



## kx11

not a bad beta , 60fps solid @ 1440p through the whole level

amazing graphics


----------



## brandonb21

does the beta use dx12 or is it not implemented yet?


----------



## Lex Luger

No dx 12 in the beta. Im guessing dx12 wont be added for months, if at all, just like all the other dx12 fake outs. So much for amd wonder api, 50 plus pages of useless drivel.


----------



## BradleyW

Quote:


> Originally Posted by *Lex Luger*
> 
> No dx 12 in the beta. Im guessing dx12 wont be added for months, if at all, just like all the other dx12 fake outs. So much for amd wonder api, 50 plus pages of useless drivel.


DX12 and full Async support will be ready on final release in my opinion. This I'm willing to bet on.


----------



## infranoia

Quote:


> Originally Posted by *Kollock*
> 
> [ . . . ]
> Second, we have a mode in Ashes where we can AFR across an NV and AMD card (to be released soon). Everyone was expecting that precision differences would cause a very noticeable flicker. But the reality is that it's not all that noticeable, even to experts. If you had big image difference across cards, I'd expect to see a very obvious strobe effect which we don't see.


Now this surprises me, because there is *definitely* a difference between the two in the output, even if it's just color saturation and filter precision. This suggests that the issue is primarily the post-GPU interface and the quality of the output modules.

DX12 AFR actually strobes between final, composited GPU outputs (and doesn't just offload primitives)? I guess yeah, that's AFR. Then I would expect to see that strobe in the output, unless it all comes down to output hardware.


----------



## coelacanth

Quote:


> Originally Posted by *Kollock*
> 
> Personally, I think it's (mostly) red herring. Or green herring, depending on your viewpoint. A blue herring might actually exist though.
> 
> First their are few places were you could cheat and not violate the WHQL rules. The most likely culprits for differences would be precision on some various shader instructions. I've seen a few notable things in my career, but the biggest offender was Intel where they would do things like only use half precise exponent function. This was quite a while ago though, all the new shader instructions are specified typically within 1 ULP. The other area you might see some fuzziness is perhaps some slightly lossy compression or bits of precision being shaved off in the texture filtering, however likely not enough to be notable. Remember, most of your frame is being calculated by shaders these days. It's not like it used to be.
> 
> *Second, we have a mode in Ashes where we can AFR across an NV and AMD card (to be released soon). Everyone was expecting that precision differences would cause a very noticeable flicker. But the reality is that it's not all that noticeable, even to experts. If you had big image difference across cards, I'd expect to see a very obvious strobe effect which we don't see.*


I was wondering about that when I read that you could use both Nvidia and AMD cards in concert on Ashes. Thank you for your observation.

There are some noticeable differences in the Star Wars Battlefront videos. If that game were NV/AMD AFR capable, I can see the possibility of some more noticeable flickering.


----------



## kx11

some gameplay from the beta

https://www.youtube.com/watch?v=iLTWNY8kNzg


----------



## ZealotKi11er

True DX12 will come out once the game can't run DX11 anymore. Will probably take years. If they are still building to have DX11 usable then they are not taking advantage of DX12.


----------



## zealord

Quote:


> Originally Posted by *kx11*
> 
> some gameplay from the beta
> 
> https://www.youtube.com/watch?v=iLTWNY8kNzg


I thought it would look much better. This is probably ultra settings since that dude is running 2x Titan X, isn't it?

Don't get me wrong it looks good enough, but it doesn't amaze me. It looks a bit bland and stiff.

As long as it's fun I don't care much, but if Hitman really supports DX12 and is going to run well, I wish it would look like a graphical behemoth to show off what DX12 is capable of.


----------



## fewness

Do we have benchmark data yet?


----------



## Dargonplay

Quote:


> Originally Posted by *kx11*
> 
> some gameplay from the beta
> 
> https://www.youtube.com/watch?v=iLTWNY8kNzg


I noticed a Huge FPS Drop between minute 1:24 and 1:27, with a Titan X... in SLI.


----------



## Kana-Maru

Quote:


> Originally Posted by *Dargonplay*
> 
> I noticed a Huge FPS Drop between minute 1:24 and 1:27, with a Titan X... in SLI.


I actually noticed a drop before then. Check from 1:16 when he walks to the back of the bathroom and turns......stutter.
I don't want to spoil the game for myself so I'm not going to watch the entire vid.


----------



## kx11

SLI is off since this game doesn't have multi-GPU support and it's a beta


----------



## Tideman

Quote:


> Originally Posted by *kx11*
> 
> SLi is off since this game doesn't have a multi-gpu support and it's a beta


Yeah that seems to be the case. I tried setting AFR2 in nv control panel which resulted in 99% usage on both gpus but fps in game was almost half that of single card for some reason. No chance I'm maxing this game at 1440p with a single 980, so I hope SLI will be supported come release.


----------



## bigjdubb

Is the beta running DX12?


----------



## kx11

Quote:


> Originally Posted by *bigjdubb*
> 
> Is the beta running DX12?


nope dx11

dx12 is 4 months away ( i think )


----------



## fewness

No CrossFire either, just tested.


----------



## Tivan

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Asked 3 people which have no idea what Fury X and GTX 980 Ti are and they picked the left side.


No surprise, considering the left side has less greyed-out shadows. It just looks more juicy.

Though it's subtle for the most part, and actually mostly just the shadows. As far as I can tell from this.

Might even be a competitive gaming optimization to get more info out of dark areas, or something, on the Nvidia side. I mean it does remind me of the infographics regarding benq black equalizer.

If anyone knows what exactly it is supposed to do, that'd be cool.


----------



## infranoia

Quote:


> Originally Posted by *Tivan*
> 
> No surprise, considering the left side has less greyed out shadows. Just looks more juicy.
> 
> Though it's subtle for the most part, and actually mostly just the shadows. As far as I can tell from this.
> 
> Might even be a competitive gaming optimization to get more info out of dark areas, or something, on the Nvidia side. I mean it does remind me of the infographics regarding benq black equalizer.
> 
> If anyone knows what exactly it is supposed to do, that'd be cool.


The overall effect is more depth to the image. I see more shadow detail and a better anisotropic (distance) filter, and yeah, saturation-- all of it adds a greater 3D depth to the scene.

I see this additional shadow detail on AMD even in Gameworks titles like Dying Light and Batman, although no one's done such a precision job at an A/B comparison as that guy did with his Battlefront videos.

But as @Kollock suggests, all of this could be engine-dependent, if mixed-vendor AFR isn't showing shadow detail and anisotropic strobing with AotS.


----------



## Cybertox

Quote:


> Originally Posted by *kx11*
> 
> some gameplay from the beta
> 
> https://www.youtube.com/watch?v=iLTWNY8kNzg


I was expecting it to look better than that...


----------



## coelacanth

Quote:


> Originally Posted by *Cybertox*
> 
> I was expecting it to look better than that...


Under promise, over deliver is better than the reverse.

Witcher 3 and The Division both downgraded from the previews and it cost them some goodwill.


----------



## bigjdubb

Quote:


> Originally Posted by *Tivan*
> 
> No surprise, considering the left side has less greyed out shadows. Just looks more juicy.
> 
> Though it's subtle for the most part, and actually mostly just the shadows. As far as I can tell from this.
> 
> Might even be a competitive gaming optimization to get more info out of dark areas, or something, on the Nvidia side. I mean it does remind me of the infographics regarding benq black equalizer.
> 
> If anyone knows what exactly it is supposed to do, that'd be cool.


Quote:


> Originally Posted by *infranoia*
> 
> The overall effect is more depth to the image. I see more shadow detail and a better anisotropic (distance) filter, and yeah, saturation-- all of it adds a greater 3D depth to the scene.
> 
> I see this additional shadow detail on AMD even in Gameworks titles like Dying Light and Batman, although no one's done such a precision job at an A/B comparison as that guy did with his Battlefront videos.
> 
> But as @Kollock suggests, all of this could be engine-dependent, if mixed-vendor AFR isn't showing shadow detail and anisotropic strobing with AotS.


Quote:


> Originally Posted by *mcg75*
> 
> You can see it there.
> 
> How about this one though? Watch in 4K with no AA applied. That first video uses FXAA. I wonder if how each camp applies FXAA has something to do with this because I'm not seeing it in the 4K video. I noticed several spots the Fury X was more blurred. But I'm 99.9% sure it's a combination of being in motion and compression artifacts.


I know this is all off topic, but can you guys tell me what causes this? It looks like motion blur on a monitor, but it's a video capture so it can't be from a monitor. And the Fury usually has better fps yet still has this blur thing. I dunno what it is or what it's called. I only did a capture of the first scene I noticed it in, but it shows up throughout the video.

Video is in my dropbox along with the screen captures of Premiere Pro (what I used to look at the video frame by frame).



https://www.dropbox.com/sh/k5yeec6acncscry/AAA0dkXhaqrz0ZgriT0H_VTda?dl=0


----------



## sugarhell

Quote:


> Originally Posted by *bigjdubb*
> 
> I know this is all off topic but can you guys tell me what causes this. It looks like motion blur on a monitor but it's a video capture so it can't be from a monitor. And the fury usually have better fps yet still has this blur thing. I dunno what it is or what it is called. I only did a capture of the first scene I noticed it in but it shows up throughout the video.
> 
> Video is in my dropbox along with the screen captures of Premiere Pro (what I used to look at the video frame by frame).
> 
> 
> 
> https://www.dropbox.com/sh/k5yeec6acncscry/AAA0dkXhaqrz0ZgriT0H_VTda?dl=0


It's not the same frame exactly?


----------



## bigjdubb

It's not, that's why I had to make the video. The right-hand side doesn't do it on any frame, so it isn't a matter of the blurred frames being out of sync.


----------



## infranoia

Quote:


> Originally Posted by *bigjdubb*
> 
> I know this is all off topic but can you guys tell me what causes this. It looks like motion blur on a monitor but it's a video capture so it can't be from a monitor. And the fury usually have better fps yet still has this blur thing. I dunno what it is or what it is called. I only did a capture of the first scene I noticed it in but it shows up throughout the video.
> 
> Video is in my dropbox along with the screen captures of Premiere Pro (what I used to look at the video frame by frame).
> 
> 
> 
> https://www.dropbox.com/sh/k5yeec6acncscry/AAA0dkXhaqrz0ZgriT0H_VTda?dl=0


It's just motion blur. Try this: go full screen, then step through the video with the right arrow key. You'll land on 980Ti motion blur examples more often than the Fury X.
Quote:


> Originally Posted by *bigjdubb*
> 
> It's not, thats why I had to make the video. The right hand side doesn't do it on any frame so it isn't a matter of the blurred frames being out of sync


Not a true statement.





There are a lot more examples of that. It's just inter-frame blur and both GPUs have it. If blur is completely off at the engine level, then it's probably YouTube H.264 shenanigans, if the frames are out-of-sync between the two.


----------



## kx11

Quote:


> Originally Posted by *Cybertox*
> 
> I was expecting it to look better than that...


it's a beta with closed training levels. AFAIK the game has open-world gameplay, so this beta doesn't show the final product

it surely runs better than The Division


----------



## infranoia

This beta really hit with a whimper. I really thought it would come roaring out of the gate with DX12 like AotS, what with all the hype.

No word on when DX12 will go in?


----------



## bigjdubb

Quote:


> Originally Posted by *infranoia*
> 
> It's just motion blur. Try this: go full screen, then step through the video with the right arrow key. You'll land on 980Ti motion blur examples more often than the Fury X.
> Not a true statement.


That is exactly what I did, the 19 frames I put in drop box are consecutive.

It's a YouTube video so it isn't exactly a good base to start with. I just thought it was weird that it was showing up on one side of the video and not the other. There are certainly places where it shows up on both sides; it's the places where it only shows on one that made me curious.


----------



## tweezlednutball

Beta is DX11 and no crossfire support...


----------



## fewness

Seems like ROTTR's SLI bits can partially rescue it....this is 4K all settings at highest in the crowd....


----------



## kx11

i can't see crap in that picture , upload a bigger one or make the OSD font @ 200% scale


----------



## fewness

Quote:


> Originally Posted by *kx11*
> 
> i can't see crap in that picture , upload a bigger one or make the OSD font @ 200% scale


or you could just click the top picture and enlarge?


----------



## CBZ323

The beta runs very smoothly for me, but the mechanics feel pretty barebones and the AI a bit dumb to say the least. Animations look more like Blood money than Absolution (not a good thing).

Hopefully it's just the Beta and we see a lot more polish on release.


----------



## Majin SSJ Eric

I thought the character models looked really good and the action of dragging the body around was nice and realistic. Considering this is just a beta, I'm cautiously optimistic...


----------



## Tideman

Quote:


> Originally Posted by *fewness*
> 
> Seems like ROTTR's SLI bits can partially rescue it....this is 4K all settings at highest in the crowd....


That works a lot better, thanks for the info. Now I can play on max instead of high and I get about 80-100 outside the ship but in certain spots on the ship it does tank way down to the low 30s for me.

At least I can get a rough idea of how my SLI 980s will perform before release.


----------



## headd




----------



## ZealotKi11er

Quote:


> Originally Posted by *headd*


Another example of Kepler getting no support. The GTX 980 is 23% faster than the GTX 780 Ti here. At the GTX 980's launch the difference was 5%.


----------



## huzzug

Quote:


> Originally Posted by *ZealotKi11er*
> 
> > Originally Posted by *headd*
> 
> Another example of Kepler getting no support. GTX980 is 23% faster than GTX780 Ti. At launch of GTX980 there was 5% difference.

More like the 780 holding its own against the 7970.


----------



## tweezlednutball

Quote:


> Originally Posted by *huzzug*
> 
> More like the 780 holding its own against the 7970.


Nvidia getting beat up pretty bad when you put it like that. The 7970 is two years older than the 780.


----------



## headd

When the GTX 780 was introduced it was ~20% faster than the 7970 GHz.


----------



## mtcn77

I suppose Nvidia cards will be relegated to the way-back-when lane in this title. Anything up to the "GTX 960" is limited to low textures. Next-level Watch Dogs.


----------



## mcg75

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Another example of Kepler getting no support. GTX980 is 23% faster than GTX780 Ti. At launch of GTX980 there was 5% difference.


The 290 was 20% faster than the 7970 ghz at 1080p at launch. Source.

This test has the 290 42% faster than the 7970 ghz.

So this means AMD is not supporting the 7970 ghz anymore.

Or perhaps this is just a beta.


----------



## ZealotKi11er

Quote:


> Originally Posted by *mcg75*
> 
> The 290 was 20% faster than the 7970 ghz at 1080p at launch. Source.
> 
> This test has the 290 42% faster than the 7970 ghz.
> 
> So this means AMD is not supporting the 7970 ghz anymore.
> 
> Or perhaps this is just a beta.


That's just the stock R9 290 sucking so much.


----------



## mtcn77

Quote:


> Originally Posted by *mcg75*
> 
> The 290 was 20% faster than the 7970 ghz at 1080p at launch. Source.
> 
> This test has the 290 42% faster than the 7970 ghz.
> 
> *So this means AMD is not supporting the 7970 ghz anymore.
> *
> Or perhaps this is just a beta.


How did we reach that conclusion? It could be the native performance gap between them, now that all the overhead has been cured for the most part.


----------



## Assirra

Considering the 980 there runs at 1100 and mine at 1500, I should be able to get to 60fps.
Quote:


> Originally Posted by *mtcn77*
> 
> How did we reach to that conclusion? It could be the native performance gap between them now that all overhead has been cured for the most part.


But isn't this exactly the same argument people use against Nvidia when they compare current to past generation cards?
You cannot enter a nvidia thread or you got some clown saying "just wait till your card get nerfed with the next generation".


----------



## mcg75

Quote:


> Originally Posted by *ZealotKi11er*
> 
> That just R9 290 stock sucking so much.


So, you really don't have any answer to why then.
Quote:


> Originally Posted by *mtcn77*
> 
> How did we reach to that conclusion? It could be the native performance gap between them now that all overhead has been cured for the most part.


That is NOT a conclusion. Nobody in their right mind makes any conclusions off 1 piece of data especially when it's a beta. That's the whole point.

So looking at that chart again, 780 Ti is 26% faster than 780. It was only 16% faster at launch.

That must mean Nvidia is picking and choosing which Kepler cards to support then.

Or, it's just a beta.


----------



## mtcn77

Quote:


> Originally Posted by *mcg75*
> 
> So, you really don't have any answer to why then.
> That is NOT a conclusion. Nobody in their right mind makes any conclusions off 1 piece of data especially when it's a beta. That's the whole point.


Then, this is a false dichotomy since you, yourself, have disclaimed one of them. Anyway, I'm sure you have noticed the joke^. Let me make that clearer.









Spoiler: Medium textures









Spoiler: LOW TEXTURES, LOLOLOLOLOL


----------



## zealord

290(X) / 390(X) cards looking pretty good. Hawaii has aged pretty well


----------



## Themisseble

Lol, GTX 660 vs 7870... lol .


----------



## mtcn77

Quote:


> Originally Posted by *Themisseble*
> 
> Lol, GTX 660 vs *7770*7870... lol .


You mean, this?


----------



## magnek

Quote:


> Originally Posted by *headd*


How does the Titan X end up being ~10% faster than 980 Ti at 1080p? I know it's got 9% more shaders and TMUs, but you never see 100% scaling in real life. Plus this is at 1080p with FXAA.

Instead of blaming the beta, I say the test results are questionable.
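The arithmetic behind that sanity check is simple enough to write down. A quick sketch (Python; the 3072 vs 2816 shader counts are the publicly listed specs, the helper name is hypothetical):

```python
def unit_scaling_gap(units_fast, units_slow, efficiency=1.0):
    """Upper-bound performance gap from execution-unit count alone.

    efficiency=1.0 assumes perfect scaling, which real games never hit;
    clocks are treated as equal, roughly true for Titan X vs 980 Ti.
    """
    return (units_fast / units_slow - 1) * efficiency

# Titan X (3072 shaders) vs 980 Ti (2816 shaders):
gap = unit_scaling_gap(3072, 2816)
print(f"{gap:.1%}")  # ~9.1% even at 100% scaling
```

So a measured ~10% gap at 1080p would exceed even the perfect-scaling ceiling, which is exactly why those results look questionable.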


----------



## Ha-Nocri

It seems, more and more as new games come out, that the 280X is close to the 780 in performance... and the 780 used to go head-to-head against the 290


----------



## ZealotKi11er

Quote:


> Originally Posted by *Ha-Nocri*
> 
> It occurs, more and more, as new games come out, that 280X is close to performance of 780... and 780 used to go head-to-head against 290


The stock GTX 780 is too slow. You can OC it for an extra 30% performance. An OCed GTX 780 will keep up with an OCed GTX 970.


----------



## sugarhell

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Stock GTX780 is too slow. You can OC it for extra 30% performance. A OCed GTX780 will keep up with a OCed GTX970.


I highly doubt the second claim.


----------



## philhalo66

i wonder if grandpa 580 will see any benefits at all. probably not lol


----------



## PontiacGTX

Quote:


> Originally Posted by *philhalo66*
> 
> i wonder if grandpa 580 will see any benefits at all. probably not lol


In DirectX 12? Not even with improved drivers... and in this game with current benchmark results, even a 660 can't keep up with an HD 7850...


----------



## cowie

Quote:


> Originally Posted by *sugarhell*
> 
> I highly doubt it for the second.


In a game like BF4 it gets about 20%, and that's with only about 75 more on the clocks.

It's so far OT that I won't bother with many links; besides my own testing, I've just looked at a few games I played when I used to run 780s.

Most stock-voltage 780s will game at 1150 or so, depending on the cooling they come with.


----------



## Ha-Nocri

Here Fury X is winning in every scene against 980ti (1440p):


----------



## infranoia

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Here Fury X is winning in every scene against 980ti (1440p):


DudeRandom84 comes through again with the IQ comparisons. What is up with that first shot, at the 15 second mark? Look at the shadow detail, the mesh detail-- the 980Ti looks terrible.


----------



## Ha-Nocri

Quote:


> Originally Posted by *infranoia*
> 
> DudeRandom84 comes through again with the IQ comparisons. What is up with that first shot, at the 15 second mark? Look at the shadow detail, the mesh detail-- the 980Ti looks terrible.


Yeah, I don't know. Seems the fog is thicker and closer, might be weather difference


----------



## BradleyW

I've seen poorer IQ on Nvidia cards in a small range of games through personal testing. This includes the likes of BF Hardline and Crysis 2.


----------



## cowie

Quote:


> Originally Posted by *infranoia*
> 
> DudeRandom84 comes through again with the IQ comparisons. What is up with that first shot, at the 15 second mark? Look at the shadow detail, the mesh detail-- the 980Ti looks terrible.


doh


----------



## cowie

Quote:


> Originally Posted by *BradleyW*
> 
> I've seen poorer IQ on Nvidia cards a small range of games through personal testing. This includes the likes of BF Hardline and Crysis 2.


I have vids that prove otherwise. You have any?


----------



## Kuivamaa

Quote:


> Originally Posted by *magnek*
> 
> How does the Titan X end up being ~10% faster than 980 Ti at 1080p? I know it's got 9% more shaders and TMUs, but you never see 100% scaling in real life. Plus this is at 1080p with FXAA.
> 
> Instead of blaming the beta, I say the test results are questionable.


It is pclab, you can ignore it.


----------



## Fyrwulf

Is anybody actually surprised that AMD cards are better in Dx12? _Really_?


----------



## BradleyW

Quote:


> Originally Posted by *Fyrwulf*
> 
> Is anybody actually surprised that AMD cards are better in Dx12? _Really_?


Beta is running on DX11. DX12 should be ready on release, with even larger performance boosts over Nvidia.


----------



## cowie

still cant wait to try this game thou


----------



## Fyrwulf

Quote:


> Originally Posted by *BradleyW*
> 
> Beta is running on DX11. DX12 should be ready on release, with even larger performance boosts over Nvidia.


Okay, that surprises the hell out of me.


----------



## Ha-Nocri

Where? Shadows look the same to me


----------



## cowie

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Where? Shadows look the same to me


you are right
but whatever, I don't care. I just wanna smash a vase over some guy's head and then take his clothes


----------



## Fyrwulf

Quote:


> Originally Posted by *cowie*
> 
> wow after looking at this vid a few more times if you guys cant see the lack of shadow detail on the fury
> 1 your are blind
> 2 you are a fanboy
> 
> well that's one way amd can get in the black hahahahahaha why don't they have an even bigger lead with shadows like that?
> 
> still cant wait to try this game thou


Dunno what you're talking about. I just watched the video again, on a 720p laptop screen, and the light levels are roughly equal. Are you sure _you're_ not the fanboy?


----------



## infranoia

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Yeah, I don't know. Seems the fog is thicker and closer, might be weather difference


It's more than that, unless the fog goes all the way to the camera. The shadow detail and overall contrast is just not good on Maxwell. Most of the middle scenes look okay, but the first one and the last scene with the jet in the hangar just look flat.

Like I said, you can argue details forever-- everyone has their preference-- but there are just too many examples of IQ in GCN's favor. The overall effect is a real-world depth, whereas the 980Ti just looks like flat computer graphics. But that's just me and my picky sense of IQ.


----------



## cowie

Quote:


> Originally Posted by *Fyrwulf*
> 
> Dunno what you're talking about. I just watched the video again, on a 720p laptop screen, and the light levels are roughly equal. Are you sure _you're_ not the fanboy?


no you are right I just got some really good stuff
editing now it was my yt settings for flash ...I am such a dumba5s


----------



## cowie

Quote:


> It's more than that, unless the fog goes all the way to the camera. The shadow detail and overall contrast is just not good on Maxwell. Most of the middle scenes look okay, but the first one and the last scene with the jet in the hangar just look flat.
> 
> Like I said, you can argue details forever-- everyone has their preference-- but there are just too many examples of IQ in GCN's favor. The overall effect is a real-world depth, whereas the 980Ti just looks like flat computer graphics. But that's just me and my picky sense of IQ.


I am not that picky but I agree at least on this game from what I see in the vid


----------



## Assirra

Quote:


> Originally Posted by *BradleyW*
> 
> I've seen poorer IQ on Nvidia cards a small range of games through personal testing. This includes the likes of BF Hardline and Crysis 2.


Here we go again...


----------



## looniam

Quote:


> Originally Posted by *Assirra*
> 
> Quote:
> 
> 
> 
> Originally Posted by *BradleyW*
> 
> I've seen poorer IQ on Nvidia cards a small range of games through personal testing. This includes the likes of BF Hardline and Crysis 2.
> 
> 
> 
> Here we go again...
Click to expand...

it's the cycle of life.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *mcg75*
> 
> The 290 was 20% faster than the 7970 ghz at 1080p at launch. Source.
> 
> This test has the 290 42% faster than the 7970 ghz.
> 
> So this means AMD is not supporting the 7970 ghz anymore.
> 
> Or perhaps this is just a beta.


To be fair, the 7970 came out in December 2011 whereas the 780 launched in May 2013...


----------



## dubldwn

Not sure if anyone else looked at this but my 5820K is sleepwalking through the beta. Maybe it's the tiny levels. Dropping voltage more often than not. 980 Ti running full tilt though. 1.8GB VRAM used max. Hopefully this all goes up with a large board. Turning down shadows dramatically increases fps. This game doesn't look better than Absolution right now but it should be fun.


----------



## Tideman

Quote:


> Originally Posted by *dubldwn*
> 
> Not sure if anyone else looked at this but my 5820k is sleepwalking through the beta. Maybe it's the tiny levels. Dropping voltage more often than not. 980ti running full tilt though. 1.8GB vram used max. Hopefully this all goes up with a large board. Turning down shadows dramatically increases fps. This game doesn't look better than Absolution right now but it should be fun.


I seem to get about 30-40% usage avg on all cores (5930K), will have to double check later..

I've hit 3.6GB VRAM max on highest settings at 1440p (but with supersampling at default 1.0).

Also it seems to be the mirrors that are causing the insane framerate drops for the most part. This needs to be fixed, it's awful. No issue with the second level though (I get 100+fps there).


----------



## mcg75

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> To be fair, the 7970 came out in December 2011 whereas the 780 launched in May 2013...


We don't need to be fair because we all know the statement I made is not true. It is sarcasm because of another statement made earlier.

AMD hasn't dropped support for the 7970.

The entire point was that drawing a conclusion from a single piece of data that's in beta form doesn't work.

If you look at the numbers top to bottom, there's multiple places where they don't look right for cards on both sides.


----------



## PlugSeven

Quote:


> Originally Posted by *mcg75*
> 
> We don't need to be fair because we all know the statement I made is not true. It is sarcasm because of another statement made earlier.
> 
> AMD hasn't dropped support for the 7970.
> 
> The entire point was that drawing a conclusion from a single piece of data that's in beta form doesn't work.
> 
> If you look at the numbers top to bottom, there's multiple places where they don't look right for cards on both sides.


This is not the first time such a conclusion has been drawn now is it? It's all part of the Kepler gimpage.


----------



## mcg75

Quote:


> Originally Posted by *PlugSeven*
> 
> This is not the first time such a conclusion has been drawn now is it? It's all part of the Kepler gimpage.


The problem is that the "Kepler gimpage" has never had valid data to back up what's being said.

980 was 7.5% faster than the 780 Ti at launch in 4K and 1080p.

The 980 is now 14% faster than the 780 Ti at 4K and 1080p.

Source. Source.

Hardware Canucks specifically tested trying to find this phenomenon but had no choice but to conclude it not true. Source.

The 980 is a year and a half old. Exactly how is it unreasonable to find an extra 6.5% of performance vs old cards during that lifetime?
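The margin arithmetic above is easy to sanity-check. A minimal sketch, with hypothetical FPS figures invented only to reproduce the quoted percentages (they are not taken from the linked reviews):

```python
# How a "7.5% at launch vs 14% now" comparison is derived from review FPS.

def margin_pct(fps_a: float, fps_b: float) -> float:
    """Relative advantage of card A over card B, in percent."""
    return (fps_a / fps_b - 1.0) * 100.0

# Hypothetical launch-day review: 980 at 43 fps, 780 Ti at 40 fps
launch = margin_pct(43.0, 40.0)
# Hypothetical recent review: 980 at 57 fps, 780 Ti at 50 fps
now = margin_pct(57.0, 50.0)

print(f"launch margin: {launch:.1f}%")      # 7.5%
print(f"current margin: {now:.1f}%")        # 14.0%
print(f"drift: {now - launch:.1f} points")  # 6.5 points
```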


----------



## ZealotKi11er

Quote:


> Originally Posted by *mcg75*
> 
> The problem being the "Kepler gimpage" has never had valid data to back up what's being said.
> 
> 980 was 7.5% faster than the 780 Ti at launch in 4K and 1080p.
> 
> The 980 is now 14% faster than the 780 Ti at 4K and 1080p.
> 
> Source. Source.
> 
> Hardware Canucks specifically tested trying to find this phenomenon but had no choice but to conclude it not true. Source.
> 
> The 980 is a year and a half old. Exactly how is it unreasonable to find an extra 6.5% of performance vs old cards during that lifetime?


The problem there is the games they tested. The problem is with new games released after The Witcher 3.


----------



## iLeakStuff

GTX Titan X vs Fury X. Both overclocked


----------



## Xuper

bohhhh, 1400 vs 1150, funny that at 4K the Titan X loses to the Fury X despite being overclocked to 1400MHz.


----------



## zealord

Interesting benchmarks. I am surprised to see that the $650 Fury X beats the $1000 Titan X at 4K by more than 10%.

The Fiji cards sadly don't overclock well, but I wonder what the benchmark would look like with both cards at 1400MHz core clock (I know you can't overclock the Fury X to 1400MHz)


----------



## ZealotKi11er

Quote:


> Originally Posted by *zealord*
> 
> Interesting benchmarks. I am surprised to see that the 650$ Fury X does beat the 1000$ Titan X at 4K by more than 10%.
> 
> The Fiji cards sadly don't overclock well, but I wonder what the benchmark would look like with both cards at 1400 mhz core clock (I know you can't overclock the Fury X to 1400mhz)


Nope. You are looking at MAX fps. Middle is Avg. I think it's fair to say the Titan X is faster because of min fps.


----------



## czin125

What if they tested it at 1200/600 for the Fury X?


----------



## Fyrwulf

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Nope. You are looking at MAX fps. Middle is Avg. I think it fair to say Titan X is faster because of min fps.


Why do momentary dips matter more than momentary spikes? I think it's funny how the goalpost is constantly moving in favor of nVidia on these forums.


----------



## Klocek001

Quote:


> Originally Posted by *Fyrwulf*
> 
> Why do momentary dips matter more than momentary spikes? I think it's funny how the goalpost is constantly moving in favor of nVidia on these forums.


why do min fps matter more than max fps? are you serious?

I like the oc'd comparison, but 1400MHz is weak for Maxwell v2; most aftermarket 980 Ti's come at 1400MHz. They should've tested at least @1500MHz for Maxwell, most break it without extra voltage.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Fyrwulf*
> 
> Why do momentary dips matter more than momentary spikes? I think it's funny how the goalpost is constantly moving in favor of nVidia on these forums.


You want the least amount of fps variance. Yeah, Nvidia just has the better product right now, especially under DX11.
Quote:


> Originally Posted by *Klocek001*
> 
> why do min fps matter more than max fps ? are you serious ?
> 
> I like the oc'd comparison, but 1400MHz is weak for Maxwell v2, most aftermarket 980Ti's come at 1400MHz. They should've tested at least @1500MHz for maxwell, most break it without extra voltage.


This is a Titan X, hence the reference cooler. 1400MHz is very realistic. It will match a 1500MHz GTX 980 Ti no problem.


----------



## Klocek001

Fury X with the same problems as always. The difference grows bigger the more stress is on the CPU. That would probably mean that for a person like me, who usually turns off most of "ultra" settings to get better fps, 980Ti will run this game much better than Fury X.


----------



## cowie

who cares what card runs it better, sadly it does not look any better than the first few that came out and it looks very similar on the consoles.
as long as the gameplay is about the same as HM2 or AB it's all good with me, you don't need a lot of fps for this game to be enjoyable.

it sucks that on a few 650 dollar cards it runs doggy and does not look all fancy at all

so just give me gameplay before I go get a console and never look back to pc gaming again


----------



## caswow

Quote:


> Originally Posted by *ZealotKi11er*
> 
> You want the least amount of fps variance. Yeah Nvidia just has the better product right now especially under DX11.
> This is a Titan X hence reference cooler. 1400MHz is very realistic. It will match a 1500MHz GTX 980 Ti no problem.


I am sorry but you only consider fps charts worthy if they are a curve and not just 3 numbers each. it could have been the first screen...


----------



## zealord

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Nope. You are looking at MAX fps. Middle is Avg. I think it fair to say Titan X is faster because of min fps.


Oh you are right. Second time today I read something wrong. I am starting to get worried









Still impressive results for a $650 Fury X@1150 versus a $1000 Titan X@1400 imho.


----------



## ZealotKi11er

Quote:


> Originally Posted by *zealord*
> 
> Oh you are right. Second time today I read something wrong. I am starting to get worried
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Still impressive results for a $650 Fury X@1150 versus a $1000 Titan X@1400 imho.


But the GTX 980 Ti exists. Also it's losing at 1440p and 1080p. Should not be a problem in DX12 though. Lower CPU overhead will eliminate the problem and async will increase the advantage.


----------



## zealord

Quote:


> Originally Posted by *ZealotKi11er*
> 
> But GTX980 Ti exists. Also its losing 1440p and 1080p. Should not be a problem in DX12 though. CPU Overhead will eliminate the problem and ASync will increase the advantage.


Yeah I am eagerly awaiting Hitman DX12 benchmarks. We can finally compare DX11 and DX12 then in a real game. I don't expect too much of it, but 10-15% should be reasonable


----------



## iLeakStuff

Considering that the GTX Titan X with a massive OC is slightly below the Fury X at 4K, it's safe to assume the Fury X will be faster than the 980 Ti when both are stock.


----------



## Themisseble

Quote:


> Originally Posted by *iLeakStuff*
> 
> Considering that GTX Titan with a massive OC is slightly below Fury X at 4K, its safe to assume Fury X will be faster than 980Ti when both are stock.


This is still DX11, no Async... So what if Fury X gains 10-20%? Does that mean that Fury will be faster than the TITAN X?


----------



## mtcn77

Quote:


> Originally Posted by *Themisseble*
> 
> This is till DX11, *no Async*... So what Fury X will gain 10-20%? Does that mean that Fury will be faster than TITAN X?


Don't count on that, imo. AMD has been head over heels in Directx 12 with this title.


----------



## Forceman

Quote:


> Originally Posted by *mtcn77*
> 
> Don't count on that, imo. AMD has been head over heels in Directx 12 with this title.


If this is going to be AMD's big DX12 showcase title (finally, after crowing about it for 9 months), why did they release a DX11 beta? Seems like a bit of a red flag to me.


----------



## mcg75

Quote:


> Originally Posted by *ZealotKi11er*
> 
> The problem there is games they tested. The problem is with new games after Witcher 3.


TPU's test that I linked contains 9 games that released after The Witcher 3.

If you take the data from those 9 games released after The Witcher 3, the margin is 16.7% instead of 14%.

Of those 9 games' results, COD's margin is huge compared to the others and Fallout's is too small compared to the others.

Drop those and use the other 7 games released after The Witcher 3 and the margin is 15.3%.

This "gimping" has never been backed up by valid data plus we have valid data that shows otherwise.

Where's the proof???
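The drop-the-outliers averaging described above can be sketched like this. The per-game margins below are hypothetical, chosen only so the aggregates land on the quoted 16.7% and 15.3% figures:

```python
# Averaging per-game margins, with and without the two outliers
# (a COD-like spike and a Fallout-like dip). All numbers are invented.

def mean_margin(margins, drop_outliers=False):
    vals = sorted(margins)
    if drop_outliers and len(vals) > 2:
        vals = vals[1:-1]  # discard the single largest and smallest margin
    return sum(vals) / len(vals)

margins = [15.0, 16.0, 14.5, 15.5, 16.5, 14.0, 15.6, 38.2, 5.0]  # 9 games, %

print(round(mean_margin(margins), 1))                      # all 9 games: 16.7
print(round(mean_margin(margins, drop_outliers=True), 1))  # 7 games: 15.3
```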


----------



## Assirra

Quote:


> Originally Posted by *mcg75*
> 
> TPU's test that I linked contains 9 games that release after the Witcher 3.
> 
> If you take the data from those 9 games released after Witcher 3, the margin is 16.7% instead of 14%.
> 
> Of those 9 games results, COD's margin is huge compared to the others and Fallout's is too small compared to the others.
> 
> Drop those and use the other 7 games released after Witcher 3 and the margin is 15.3%
> 
> This "gimping" has never been backed up by valid data plus we have valid data that shows otherwise.
> 
> Where's the proof???


The whole "gimping" thing came from one driver that indeed killed Kepler performance, which Nvidia fixed in the next version.
Before that it was never even mentioned, and now it won't go away because people have no clue what they are talking about.

There is a difference between performance getting worse over time (gimping) and simply no longer optimizing for older-generation GPUs (what Nvidia is doing).

Numerous people have investigated this allegation, including someone on these very forums, and they all came to the conclusion that it is bogus.

Not that that will stop people from spreading misinformation in every single thread where Nvidia is even mentioned.


----------



## mtcn77

Quote:


> Originally Posted by *Forceman*
> 
> If this is going to be AMD's big DX12 showcase title (finally, after crowing about it for 9 months), why did they release a DX11 beta? Seems like a bit of a red flag to me.


So, do I still need to form a reply to your loaded question? I don't know how to properly address you in this situation, I'm afraid.


----------



## MonarchX

Quote:


> Originally Posted by *mtcn77*
> 
> Don't count on that, imo. AMD has been head over heels in Directx 12 with this title.


Why is that? So far all we got was the DirectX 11 BETA, not the DirectX 12 Preview or whatever newer version. I am certain AMD will make this game run better on its hardware than on NVidia hardware, just like they did with AoS.

Don't forget, AMD's DirectX 11 drivers suck and under-utilize CPUs, and do not generally perform as well as NVidia cards on average. With DirectX 12 the tide will turn, because there is more need for developers to optimize the game and less need for better drivers, since everything will happen at a lower level.


----------



## Klocek001

apparently it's better to have to wait 1-2 years for your card to reach its desired level of performance rather than buy a product that runs at full potential from launch.
next time remember to buy a car whose engine takes 200.000 km to break in.


----------



## EightDee8D

at least there are games running Mantle and AMD is trying to push DX12, even though they don't have money. why isn't Nvidia pushing DX12? they have money and so-called full hardware support. GameNotWorks isn't giving any performance boost. seems to me all that talk about working on DX12 for 4 years was BS, as their hardware doesn't even support async compute, and they are just waiting for the fully supported Pascal GPU, after which they will push DX12 + GameStillNotWorks.


----------



## Klocek001

Quote:


> Originally Posted by *EightDee8D*
> 
> amd is trying to push dx12, even though they don't have money. why nvidia isn't pushing for dx12 ?


I think DX12 is a Microsoft creation to be used by game developers. It's not an AMD/Nvidia-created API, and neither of these companies makes games either.


----------



## mtcn77

Quote:


> Originally Posted by *Klocek001*
> 
> apparently it's better to have to wait 1-2 years for your card to reach its desired level of performance rather than buy a product that runs at full potential from launch.
> *next time remember to buy a car whose engine takes 200.000 km to break in.*


This actually occurs in real life, as the piston bore diameter enlarges over the lifetime of the engine.


----------



## ZealotKi11er

Yeah, with everything that we have seen so far, Nvidia's main problem is not pushing DX12. With their market dominance, DX12 would have been here by now. The fact that we have GimpWorks in so many games shows the $ and persuasive skills of Nvidia are top notch.


----------



## Klocek001

The 290X took on the 780 Ti when both were EOL already.
For some this is good, for some this is bad. There are many perspectives. I would much rather have my 980 Ti's performance squeezed to the last drop since launch; I'm gonna upgrade to a Pascal HBM2 card anyway, no matter if my current card would or wouldn't age well.


----------



## infranoia

290x was competitive at launch for the price. It's competitive now as a screaming deal. Any whinging about "not meeting its launch potential" is a pretty desperate-sounding spin.


----------



## Kana-Maru

Have people forgotten how the 290X was constantly sold out and selling as high as $600-$700? It was only $549 when it released. Superior hardware. Mining was crazy.


----------



## infranoia

Quote:


> Originally Posted by *Kana-Maru*
> 
> Have people forgot how the 290X was constantly sold out and was selling as high as $600-$700. It was only $549 when it released. Superior hardware. Mining was crazy.


I guess I was lucky; I got mine in a launch-day sale at $549. And yeah, when I wasn't playing BF4 and Thief I was mining the blazes out of it.


----------



## provost

So, just following up on that car analogy someone noted earlier, a better car analogy may be as follows:

One dealer has slick marketing programs, and spends more on marketing new cars than on providing on going maintenance support for the cars it sells. This dealer wants to make as much money as possible upfront, and wants to keep selling new car models every few months. A buyer walks in, after seeing all the flashy marketing and claims of a superior engine, etc., and pays a premium for the car.

A few months later, the buyer sees that the not-so-flashy cars sold by the not-so-flashy dealer next door for a lower cost are now either matching or outperforming his "superior engine car".

The irate buyer goes back to his flashy dealer and asks what's up; the dealer tells him, look it, if you are not happy with the relative performance of your flashy car, I suggest you sell it on the open market to someone who is a low information buyer, and hasn't really figured out how this dealer's car business works, but at the end of the day, you, Mr. Buyer are still taking the residual risk on the resale value, as I (the dealer) don't have a straight up trade-in program for you for the newest flashy car that I am marketing now.

Dealer then proceeds to tell the buyer not to worry, the dealer's marketing program is extremely good, and there are plenty of other un-savvy buyers out there for a secondary resale market for the buyer's car. He further advises the buyer to hurry up and get rid of the car, before the newer car model comes out and everyone else gets in on the joke.

The "savvy" buyer has gotten maybe a handful of AAA races (games) out of his few-months-old car, and he now has to spend the time and energy to find a greater fool in the open market to buy his existing flashy car before the jig is up. If the buyer is lucky, and as long as the market remains inefficient, the buyer may be able to sell his car at a decent price, and all in all he may end up paying x amount of $$$ rent/month for a few months of "superior" fun, all the while assuming the residual risk of the resale value up front.

I guess we can say, different strokes for different folks:

If someone wants to rent a card while assuming the resale residual risk = buy Nvidia
If someone wants to buy a card and keep it for a while = buy AMD

This is how I see it anyway…lol


----------



## Klocek001

Quote:


> Originally Posted by *provost*
> 
> So, just following up on that car analogy someone noted earlier, a better car analogy may be as follows:
> 
> One dealer has slick marketing programs, *and spends more on marketing new cars than on providing on going maintenance support for the cars it sells*.


this is where we should stop reading.

according to your elaborate logic, every next used-Nvidia-card buyer is proportionally more stupid.

Nvidia GPUs will run new games very well; AMD's last-gen cards age very well since AMD optimizes them, as they haven't had a big architectural change like Kepler->Maxwell. that's all there is to it.


----------



## infranoia

Quote:


> Originally Posted by *Klocek001*
> 
> [ ... ]
> 
> they haven't had a big architectural change like Kepler->*Pascal*, all there's to it.


Yikes, are you bringing imaginary unicorns into a logic fight?


----------



## Klocek001

Quote:


> Originally Posted by *infranoia*
> 
> Yikes, are you bringing imaginary unicorns into a logic fight?


you could have just pointed out I made a mistake.


----------



## infranoia

Quote:


> Originally Posted by *Klocek001*
> 
> you could have just pointed out I made a mistake.


No sweat, but that would have been an assumption on my part. There's usually a fine line between mistake and misdirection on OCN. Thanks for the clarification.

By the way, your point is completely valid. If GCN isn't a well-worn glove for AMD by now, there would be hell to pay. They are obviously still finding new ways of exploiting it in DX11, not to even mention DX12.


----------



## Ha-Nocri

Quote:


> Originally Posted by *EightDee8D*
> 
> at least there are games running mantle and amd is trying to push dx12, even though they don't have money. why nvidia isn't pushing for dx12 ? they have money and so called full hardware support. gamenotworks isn't giving any performance boost. seems to me all that talk about working on dx12 for 4 years was bs as their hardware doesn't even support ASC and they are just waiting for fully supported pascal gpu and after that they will push for dx12+gamestillnotworks.


nVidia holding back the industry. Shocking


----------



## kx11

this link here claims you can run the beta with 98% SLI usage

http://www.dsogaming.com/news/hitman-beta-sli-fix-found-offering-96-scaling-on-two-nvidia-graphics-cards/
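For what it's worth, "scaling" in that headline is just the relative gain from adding the second card. A quick sketch with hypothetical FPS numbers (not taken from the article):

```python
# What "96% SLI scaling" means: how much of the second GPU's theoretical
# doubling actually shows up in the measured frame rate.

def sli_scaling_pct(fps_single: float, fps_dual: float) -> float:
    """Extra performance from the second card, as a share of one card."""
    return (fps_dual / fps_single - 1.0) * 100.0

# Hypothetical: one card at 50 fps, two cards at 98 fps
print(round(sli_scaling_pct(50.0, 98.0), 1))  # 96.0 -> "96% scaling"
```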


----------



## Unkzilla

Just out of curiosity - assuming that you have a game with a good DX12 implementation (with async compute) - what are people anticipating the performance difference to be between the AMD and Nvidia cards?


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *mcg75*
> 
> The problem being the "Kepler gimpage" has never had valid data to back up what's being said.
> 
> 980 was 7.5% faster than the 780 Ti at launch in 4K and 1080p.
> 
> The 980 is now 14% faster than the 780 Ti at 4K and 1080p.
> 
> Source. Source.
> 
> Hardware Canucks specifically tested trying to find this phenomenon but had no choice but to conclude it not true. Source.
> 
> The 980 is a year and a half old. Exactly how is it unreasonable to find an extra 6.5% of performance vs old cards during that lifetime?


Well, AMD continues to find more performance in Tahiti year after year, and that's a GPU nearly 2 years OLDER than GK110. Surely if Nvidia found 7% more performance in the small-die 980 over the last year and a half, they could have found SOMETHING in the much larger GK110 over that time??? I mean, it's been years since my Titans gained anything at all from drivers...


----------



## Xuper

Quote:


> Originally Posted by *kx11*
> 
> this link here claims you can run the beta with 98% SLI usage
> 
> http://www.dsogaming.com/news/hitman-beta-sli-fix-found-offering-96-scaling-on-two-nvidia-graphics-cards/


I think it's AFR, right? How about stutter?


----------



## kx11

Quote:


> Originally Posted by *Xuper*
> 
> I think it's AFR , right ? How about Stutter?


tried it and lost 15fps with AFR2, although setting AFR to AUTOSELECT boosts the performance a bit; the beta was running between 40fps and 55fps with AUTOSELECT while AFR2 was pushing 18 to 25fps


----------



## Serios

Quote:


> Originally Posted by *Klocek001*
> 
> I think dx12 is a microsoft creation to be used by game developers. Not an amd/nvidia created api, neither of these companies makes games too.


Microsoft is not very involved with PC games, so we need Nvidia, AMD and Intel to push superior tech like DX12.
The funny thing is Nvidia was all over DX12 when AMD was still deciding what to do with Mantle, but after the first DX12 demo was released things changed.


----------



## degenn

I don't think Nvidia cares at this point tbh. By the time DX12 is actually relevant and there are 10 or more native "AAA" DX12 games on the market, both Nvidia & AMD will be on their new architectures (post-Pascal/Polaris) with full-on DX12 support. This is of course barring major delays in the manufacturing process, which is entirely possible.

In the meantime, developers are more than welcome to, and capable of, using DX12 if they so choose. If they don't, then blame them... not a GPU manufacturer. The mere fact that this Hitman game is being touted as DX12 should be more than enough evidence for anyone claiming the contrary.


----------



## Blameless

Quote:


> Originally Posted by *mcg75*
> 
> Fallout 4 god rays and Witcher 3 hair can be turned off in game.
> 
> We don't need driver tricks from anyone. We need the game developers to give us the option if we want to run these features. The last few games, that has been the case. Farcry 4 and Dying Light also had Gameworks features that could be turned completely off.


Simple on or off options really don't cut it.

When I was playing Witcher 3, I ran with HairWorks enabled, but at nowhere near the tessellation level CDPR/Nvidia specified. The result was that I had all the subjective visual quality of it being enabled and less than a third of the standard performance hit.

Same IQ, and my reference R9 290 (non-X) runs a large portion of TWIMTBP titles better than my GTX 780, precisely because several of these titles use excessive levels of tessellation that I cannot tune downwards on my NVIDIA GPUs.
Quote:


> Originally Posted by *Dargonplay*
> 
> I think you are referring to Nvidia's Maxwell Delta Color Compression, which is indeed a compression technique that lowers quality to favor performance so you are right. This is an architecture feature and it cannot be turned off sadly, at least not on current drivers, correct me if I'm wrong.


NVIDIA's delta color compression is lossless. It has zero effect on image quality.

Most GPUs have been using lossless color/texture compression for a very long time, it just keeps getting better (at the cost of more transistors).
Quote:


> Originally Posted by *Kana-Maru*
> 
> Didn't AMD use that type of compression a long time ago. The difference was that AMD didn't kill visual quality.


Everyone uses color/texture compression and they are all generally lossless implementations.

Maxwell and Tonga simply have newer and better versions than past architectures.
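The "lossless" point above is worth unpacking: delta coding stores differences between neighbouring values, which compress well when neighbours are similar, yet decodes back bit-exactly. A toy 1-D sketch of the round-trip property (real GPU delta color compression operates on 2-D pixel blocks in fixed-function hardware; this is only an illustration):

```python
# Store the first value plus successive differences; decoding reproduces
# the input exactly, so no image quality is lost.

def delta_encode(values):
    deltas = [values[0]]
    for prev, cur in zip(values, values[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

scanline = [200, 201, 201, 203, 202, 202, 205]  # neighbouring pixel values
encoded = delta_encode(scanline)                # [200, 1, 0, 2, -1, 0, 3]
assert delta_decode(encoded) == scanline        # bit-exact round trip
```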
Quote:


> Originally Posted by *mcg75*
> 
> You can see it there.
> 
> How about this one though? Watch in 4K with no AA applied. That first video uses FXAA. I wonder if how each camp applies FXAA has something to do with this because I'm not seeing it in the 4K video. I noticed several spots the Fury X was more blurred. But I'm 99.9% sure it's a combination of being in motion and compression artifacts.


The amount of artifacting and blur I'm seeing from 70mbps VP9 totally dwarfs what I can tell about the GPUs from this video.
Quote:


> Originally Posted by *Fyrwulf*
> 
> Why do momentary dips matter more than momentary spikes? I think it's funny how the goalpost is constantly moving in favor of nVidia on these forums.


Because a momentary dip is a pause or stutter, while a momentary spike will generally be invisible.

Still, it's generally a good idea to toss out the real outliers (say the worst 0.02% of frame times) when doing minimum FPS comparisons.
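That trimming idea is straightforward to implement. A sketch, assuming frame times are recorded in milliseconds and using the 0.02% trim mentioned above (the frame-time numbers are made up):

```python
# Report minimum FPS only after discarding the very worst frame times,
# so a single hitch doesn't define the whole benchmark result.

def trimmed_min_fps(frame_times_ms, trim_fraction=0.0002):
    ordered = sorted(frame_times_ms)                      # longest frames last
    keep = len(ordered) - int(len(ordered) * trim_fraction)
    worst_kept_ms = ordered[keep - 1]                     # worst surviving frame
    return 1000.0 / worst_kept_ms

# Hypothetical run: 10,000 frames at ~16.7 ms with two severe hitches
times = [16.7] * 9998 + [100.0, 250.0]
print(round(trimmed_min_fps(times, 0.0), 1))     # naive min FPS: 4.0
print(round(trimmed_min_fps(times, 0.0002), 1))  # trimmed min FPS: 59.9
```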


----------



## cowie

Quote:


> Originally Posted by *mtcn77*
> 
> Don't count on that, imo. AMD has been head over heels in Directx 12 with this title.


AMD was pretty much head over heels about DX11 too, how did that work out for them

http://www.theinquirer.net/inquirer/news/1399999/dx11-amd-weapon

dx11 actually added visuals to the games; as you can see, dx12 adds performance on hardware that can take advantage of it

I wish we could talk about the game/engine more and not all this bs amd v NVidia garbage


----------



## Themisseble

Quote:


> Originally Posted by *cowie*
> 
> Amd was pretty much head over heals about dx11 too how did that work out for them
> 
> http://www.theinquirer.net/inquirer/news/1399999/dx11-amd-weapon
> 
> dx11 actually added visuals to the games as you can see dx12 adds performance on hardware that can take advantage of it
> 
> I wish we could talk about the game/engine more and not all this bs amd v NVidia garbage


- The HD 5870 and HD 6970 were great cards... don't know why people bought NVIDIA.
AMD created GCN to win consoles... and as you can see GCN is still great in newer games. That's why Kepler cannot fight against GCN.

I can tell you that in DX12 NVIDIA has higher CPU overhead than AMD. Yes, they will fix it... but for the last few years DX11 has actually been running great on GCN, except that CPU overhead is destroying it.
- DX12 will help AMD fix this problem. I don't know why AMD didn't actually fix the DX11 CPU driver overhead themselves.

Finally CPU overhead is moving away... but still NVIDIA has the better weapon. It's simple: NVIDIA has more money, which allowed them to force GameWorks.


----------



## cowie

Quote:


> Originally Posted by *Themisseble*
> 
> - HD 5870 and HD 6970 were great card... dont know why people bought NVIDIA.
> AMD created GCN to win consoles.... and as you can see GCN is still great in newer games. Thats why Kepler cannot fight against GCN.
> 
> I can tell you that in DX12 NVIDIA has higher CPU overhead than AMD. Yes the will fix it... but last few years DX11 is actually running great on GCN, but CPU overhead is destroying it.
> - DX12 will help AMD to fix this problem. I dont know why AMD did actually fix DX11 cpu driver overhead by themselves.
> 
> Finnaly CPU overhead is moving away ... but still NVIDIA has better weapon. Its simple NVIDIA has more money, which allowed them to force Gameworks.


I had about 5 5870's and the drivers were very bad for a long time, and CF drivers really sucked at that time... I got two 5970's for quad (benching) and aged about 10 years in two months.
it was about a year later that I could enjoy the 1 card I kept









what helped amd was that with DX12 it is now the game developer's responsibility to manage the CPU and GPU -

I think Kepler does fine, and GCN just helped amd catch up, not surpass Maxwell by any means.

the show has just begun sadly, and knowing NVidia they will probably have some other dx12 feature they will push, so there might be more to cry about in the next few months besides gw's.

but anyway, how do you like this game and the lack of any major improvements to the engine


----------



## Themisseble

Quote:


> Originally Posted by *cowie*
> 
> I had about five 5870's, and the drivers were very bad for a long time; CF drivers really sucked at that time... I got two 5890's for quad (benching) and aged about 10 years in two months.
> It was about a year later that I could enjoy the one card I kept.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I think Kepler does fine, and GCN just helped AMD catch up, not surpass by any means.
> 
> The show has just begun, sadly, and knowing NVidia they will probably have some other DX12 feature they will push, so there might be more to cry about in the next few months besides GW's.
> 
> But anyway, how do you like this game and the lack of any major improvements to the engine?


Well, I do not agree with you.
The 7870 is clearly destroying the GTX 660. Try out the latest games: Battlefront, Fallout 4, etc.


----------



## cowie

http://www.anandtech.com/show/6276/nvidia-geforce-gtx-660-review-gk106-rounds-out-the-kepler-family/10

But at launch it was no contest; the 660 Ti crapped all over it.

The 7xxx was not very good at tessellation, so AMD then added the slider in CCC on their cards at that time, not to mention no PhysX, or very little of it, which was in a lot of games at that time.

There were a lot of reasons to buy 6xx cards over the 7xxx cards way back in the day.








All my 79xx cards died within two years; how does yours still live? My Matrix 7970 card was dead in two months, and I'm still waiting for Asus to get back to me in an email, lol. That card was 600 bucks.









But gladly for you and other old-card AMD owners, you don't have to buy any cards for a while. That's good for you but bad for AMD... hey, maybe that's why 80% of the market buys NV cards: the AMD card guys are so content they don't have to?

I don't really want to go into any AMD vs. NV junk,

or 7xxx vs. 6xx debates; I would not even bother at this point.

So how about that game engine?


----------



## Kana-Maru

Quote:


> Originally Posted by *Blameless*
> 
> NVIDIA's delta color compression is lossless. It has zero effect on image quality.
> Most GPUs have been using lossless color/texture compression for a very long time, it just keeps getting better (at the cost of more transistors).
> Everyone uses color/texture compression and they are all generally lossless implementations.


Yeah, but my point was that AMD created the tech first and used it many years before Nvidia started to use it. I guess that could be added to AMD's innovations list? The point was that AMD doesn't kill their quality in hopes of increasing FPS to win "benchmarking" tests like Nvidia does through their drivers. The lack of quality on an Nvidia GPU is noticeable when you run AMD and Nvidia GPUs side by side, for sure, well at least in my eyes and others'.

The issue I'm seeing is that most benchmark websites need to start forcing quality on both AMD and Nvidia GPUs, and if they do you'll see a drop in FPS on Nvidia GPUs, actually forcing it on Nvidia cards from what I've been seeing lately. It's been tested by some gamers, including myself, and Nvidia FPS drops when you actually force Quality over Performance. Will anyone actually do this?... Heck no, Nvidia pays tons for advertisement. They pay me nothing, which is why I'll call both companies out when they try lame crap. Even if they did pay, they would cancel my checks, because I'll call a spade a spade any day [I'm looking at you, Gameworks and unnecessary x64 tessellation]. Lately it's been one company in general [green team].


----------



## cowie

Sooo, how are you guys liking this game engine? Please do tell.
Same old, or what's new?

Any major improvements in graphics over a, what, 4-5 year old engine?

Talk about gimpworks; this game looks the same but just plays slower than past Hitmans?

Please do tell.


----------



## Themisseble

7870 vs GTX 660
BF3 - on launch = GTX 660 clearly a winner
BF3 - after 6 months = 7870 wins by 0-5%.
BF4 - on launch = 7870 wins 0-5%
BF4 - after 6 months = 7870 wins by 10-15%.

Battlefront
7870 = GTX 680

FC4
7870 = GTX 680

Fallout 4
7870 = GTX 680

RoTR
7870 = GTX 680

Witcher 3
7870 = GTX 680

CoD Black Ops 3
7870 > GTX 770

DL
7870 = GTX 670

Rainbow six siege
7870 = GTX 680

and so on... how well will kepler do when pascal gets out?


----------



## cowie

^That's a layman's 6xx card; all of mine totally whooped (yes, I said it) any single-core 7xxx in any game, AMD ringer or gimpworks titles. You just have to know what you are doing; the 6xx cards can overclock your doors off.








How about them apples? Don't try that on your 7xxx; that junk would be in the garbage within days.

But I am glad you went on topic and started talking about the game.


----------



## Themisseble

Hmm, Hitman is beautiful and it is running great.
Async might help AMD get a few % of performance, while Kepler performance in DX12 is really bad. I really want to see a DX12-only game.

Buy yourself a Kepler and a GCN card, then do some benchmarks for comparison...


----------



## Defoler

Quote:


> Originally Posted by *mtcn77*
> 
> Incredulity and slippery-slope arguments won't help Nvidia if AMD gains a wider support base for their exclusive performance benefits. Bluffing ignorance will only strengthen AMD's hand.


Doesn't AMD need to gain market share and actually have better performance in games, instead of promises about games that aren't even released?
AMD promised us performance gains in the past, which turned out to fall apart over and over again. How about less talk and more actual performance gains?

The cold hard truth is that better GPUs, better numbers and better visuals through gameworks have been beneficial to nvidia. Thinking that only AMD will gain wider support, and that nvidia are going to let themselves be dragged behind in performance, is a poor argument. Do you really think nvidia plan to stay behind in performance and DX12 performance?

There are a total of zero DX12 games on the market. When there is more than one, we can talk about actual DX12 performance. Same with Vulkan.
Quote:


> Originally Posted by *spyshagg*
> 
> Its being used as a weapon. If AMD weakness was some other effect, would you honestly believe nvidia wouldn't tweak gameworks to exploit it instead? I say they would.


I will tell you a little secret, but shhhh, don't tell anyone.
Nvidia are not standing behind a developer holding their hands while they write the code.
If a developer decides not to run tessellation on an unseen underwater sea, a tessellation algorithm will not run on it.

Blaming nvidia for that is just one more random conspiracy theory. The same goes for a developer setting a tessellation level of 64 instead of 4 or 10 or 16.

This is also the ugly two-faced truth about many OCN members.
Let's say that nvidia can't run any async compute (even though that is a lie; they can, to a certain level).
Are you going to blame AMD for running a high amount of async in a game even when it doesn't need to, just so they can cause issues for nvidia, or are you going to blame nvidia for not being able to run it?
People here will of course go for the latter, even though they say the exact opposite when it comes to AMD being the so-called "victim".


----------



## Serios

Quote:


> Originally Posted by *Themisseble*
> 
> Hmm, Hitman is beautiful and it is running great.
> Async might help AMD get a few % of performance, while Kepler performance in DX12 is really bad. I really want to see a DX12-only game.
> 
> Buy yourself a Kepler and a GCN card, then do some benchmarks for comparison...


Yeah, Kepler is close to a disaster in DX12.
The 680 and 770 have aged very badly, and it's not like things are going to improve.
I suspect the 780 will be closer than ever to a 280X in DX12.


----------



## Serios

Quote:


> Originally Posted by *cowie*
> 
> ^That's a layman's 6xx card; all of mine totally whooped (yes, I said it) any single-core 7xxx in any game, AMD ringer or gimpworks titles. You just have to know what you are doing; the 6xx cards can overclock your doors off.
> 
> 
> 
> 
> 
> 
> 
> 
> How about them apples? Don't try that on your 7xxx; that junk would be in the garbage within days.
> 
> But I am glad you went on topic and started talking about the game.


I sense a lot of salt.


----------



## Defoler

Quote:


> Originally Posted by *Kana-Maru*
> 
> Yeah, but my point was that AMD created the tech first and used it many years before Nvidia started to use it. I guess that could be added to AMD's innovations list? The point was that AMD doesn't kill their quality in hopes of increasing FPS to win "benchmarking" tests like Nvidia does through their drivers. The lack of quality on an Nvidia GPU is noticeable when you run AMD and Nvidia GPUs side by side, for sure, well at least in my eyes and others'.


This myth is as old as the FX cards.
Numerous sites have tried to bash one company or the other about it. Besides small differences in contrast or color vibrance, image quality is similar in both color reproduction and image quality settings.
Most of the time, "my eyes and others" means "I think there is a difference, hence there must be one, and even if I can't pinpoint it exactly, it is there and I feel it in my bones".

This is the same as when AMD claimed that Mantle ran better in BF4 than DX11. Later it was found that they had some image quality bugs which made Mantle look a bit washed out. When this was fixed and Mantle gave the same image quality, suddenly the DX11 vs Mantle performance gap disappeared.
Did AMD try to rig the game to get better numbers? I don't know. But doesn't it make you wonder?


----------



## Serios

Quote:


> Originally Posted by *Defoler*
> 
> There are a total of zero DX12 games on the market. When there is more than one, we can talk about actual DX12 performance. Same with Vulkan.
> I will tell you a little secret, but shhhh, don't tell anyone.
> Nvidia are not standing behind a developer holding their hands while they write the code.
> If a developer decides not to run tessellation on an unseen underwater sea, a tessellation algorithm will not run on it.


You sure do like to speculate a lot.
Anyway, why wouldn't a developer want their game to run great on slower hardware? There are only advantages: positive reviews, more potential customers, and so on.
Using useless amounts of tessellation helps Nvidia and hammers AMD, so let's use our logic.


----------



## EightDee8D

Quote:


> Originally Posted by *Defoler*
> 
> This myth is as old as the FX cards.
> Numerous sites have tried to bash one company or the other about it. Besides small differences in contrast or color vibrance, image quality is similar in both color reproduction and image quality settings.
> Most of the time, "my eyes and others" means "I think there is a difference, hence there must be one, and even if I can't pinpoint it exactly, it is there and I feel it in my bones".
> 
> This is the same as when AMD claimed that Mantle ran better in BF4 than DX11. Later it was found that they had some image quality bugs which made Mantle look a bit washed out. When this was fixed and Mantle gave the same image quality, *suddenly the DX11 vs Mantle performance gap disappeared.*
> Did AMD try to rig the game to get better numbers? I don't know. But doesn't it make you wonder?


Total BS. Not only was that image quality bug fixed; performance also increased later with drivers + game patches, especially the CTE build.


----------



## criminal

Quote:


> Originally Posted by *cowie*
> 
> ^That's a layman's 6xx card; all of mine totally whooped (yes, I said it) any single-core 7xxx in any game, AMD ringer or gimpworks titles. You just have to know what you are doing; the 6xx cards can overclock your doors off.
> 
> 
> 
> 
> 
> 
> 
> 
> How about them apples? Don't try that on your 7xxx; that junk would be in the garbage within days.
> 
> But I am glad you went on topic and started talking about the game.


Just wow.


----------



## provost

My view of DX12 is that it is Microsoft's initiative, not AMD's, but GCN just happens to be better at it than whatever Nvidia has right now. And I am all for it; I dislike any GPU maker's middleware (drivers, Boost 2.0, gimpworks or whatever) stealing performance from GPU customers just to sell it back to them at a premium by controlling performance through the middling middleware.

The ride was good while it lasted, as GPU makers sequentially increased GPU prices while realizing better yields on the same 28nm node. Nvidia has had five years of head start on DX12, yet it chose to keep milking 28nm and DX11, and instead of reinvesting those dollars in the right R&D for the discrete GPU segment for the benefit of its customers, it chose to spend the excess profits on low-probability/high-growth initiatives and tried to build its own version of a consumer-unfriendly proprietary "ecosystem", which is breaking down, as it was based on faulty assumptions.

If Nvidia believed that its customers would rise up in anger at Microsoft because they so much enjoyed being milked by Nvidia over the last few years, then it was dead wrong. Just as 3dfx was wrong about fighting DirectX and directed its efforts towards fruitless initiatives before it went out.

Edit: I never said that AMD couldn't have done a better job with DX11, but it just happens to have the arch advantage over Nvidia right now because of DX12 being adopted by MS. How AMD maximizes this leverage to the benefit of consumers, we will find out. But I do believe that if a company does not do what is right for its customers, it will lose in the long run no matter what.


----------



## Klocek001

That's fantastic: now it's Nvidia's fault that AMD has been locked at 1000MHz for 5 years.
That's just laughable stuff. Nvidia high-end cards break 1500MHz on air cooling with no extra voltage, while the Fury X needs a CLC to keep 1050MHz stable and hardly overclocks at all without adding voltage. But you know, Nvidia's responsible for that.

Why don't you and AMD get a room instead of doing it in public.


----------



## provost

I would, if AMD had any hot girls..lol
The reality is the only friendly name that I recognize on the AMD side on this forum also happens to be a fellow recovering Nvidia customer..


----------



## mcg75

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Well, AMD continues to find more performance in Tahiti year after year, and that's a GPU nearly 2 years OLDER than GK110. Surely if Nvidia found 7% more performance in a small-die 980 over the last year and a half, they could have found SOMETHING in the much larger GK110 over that time??? I mean, it's been years since my Titans gained anything at all with drivers...


Agreed on the AMD part.

Looking at it logically, the 7% that Maxwell has gained since release isn't really very impressive and is also eclipsed by AMD's gains. We haven't seen any performance increases for Maxwell for a long time either.

Given that Kepler has been around since early 2012, perhaps there isn't anything more to be gained after the first couple years. Looking at Maxwell would seem to confirm that.


----------



## mtcn77

Quote:


> Originally Posted by *cowie*
> 
> *AMD was pretty much head over heels about DX11 too; how did that work out for them?*
> 
> http://www.theinquirer.net/inquirer/news/1399999/dx11-amd-weapon
> 
> dx11 actually added visuals to the games as you can see dx12 adds performance on hardware that can take advantage of it
> 
> I wish we could talk about the game/engine more and not all this bs amd v NVidia garbage


- Pretty good, actually.
You know I still keep my '_DX11_' GPU, right? Last time I checked, performance in Mad Max was still where it was in Fear 3 in its heyday. How many green GPU addicts still consider theirs recommendable as a casual day-to-day GPU, again? The next GPU I turn up for might be Polaris, which I consider a good enough performance leap. Please don't say I should have gone Maxwell; Nvidia have their own chronic issues, which I will touch on in a moment.
Quote:


> Originally Posted by *Defoler*
> 
> Doesn't AMD need to gain market and actually have better performance in games instead of promises over not even released games?
> AMD promised us performance gains in the past, which turned out to fall apart over and over again. How about less talk and more actual performance gains?
> 
> The cold hard truth is that better GPU and better numbers and better visuals through gameworks has been beneficial to nvidia. And thinking that only AMD will game wider support and thinking that nvidia are going to let themselves be dragged behind in performances, shows poor arguments. *Do you really think nvidia plan to stay behind in performance and DX12 performance?*
> 
> There are a total of zero DX12 in the market. When there is more than 1, we can talk about DX12 actual performance. Same with vulkan.


Funny thing is, you are resorting to a straw man. Their performance shows clear violations of their supposed conformance with DirectX 12.
There are clear benefits to DirectX 12, such as defrauding yourself _less_ when these companies sell you back your own "CPU cores" at a less than optimal rate.
Nvidia currently 'allows' you to use 2 of your cores and everyone is gunning for them; with DirectX 12, and Mantle for the most part, AMD users are reclaiming 6 cores of their CPUs. Now, I understand not everyone is a fan of MOAR COARS, yet you cannot oversimplify the whole userbase into i3 consumers either. I honestly have nothing against Intel; they are doing great for themselves and keeping it coy, but I want to do the same for myself. I have 4 cores and I want to use them... and use them well.
You might think this exclusivity game is still up for grabs for these technology giants, but it isn't. 2 cores and 20,000-draw-call scenes are history. Nvidia is doing the best for itself with its in-house 'curated' software development programme, but that is for its sole benefit, not for breaking new frontiers. They have standardized game development into the same NPC-count, lighting and scenery games, and you are spectating it. Either you are too old or too young, but if you happened to live in my generation, there were some quite original ideas in computer gaming. The hardware was present; the adopters, however, were not attuned to it. The challenge was the carrot, and great new ideas and developers sprang forth from the opportunity. In fact, the whole RPG scene started with the first developer who could show mastery of the underlying hardware.
What GW does is the polar opposite of this. Every time a standard game shows up, an original idea's chances fall short. There is nothing new, and all it has ever produced is miserly games with overly enthusiastic adopters. It is not a negotiable discussion, either: the games speak for themselves, while the developers could be doing so much more. I say this because without the developer's discretion, there is no character to the game coming from the developer's effort to surmount the ordeal. I say 400,000-draw-call scenes and AT LEAST 6 cores is some much-needed progress that will allow the genius of developers to show up in their titles. I'm done with the same game genres and story narratives; nothing but FPS and TPS titles are getting the go-ahead.
In-game visuals quoted as film grade are where gaming in 2016 should be headed, not the stagnant comments you make, imo.
Note: @cowie,
Quote:


> In reality, most of Battle of Naboo would (according to both Brad Wardell and some of his co-workers) come down to *the engine* itself. If you're running a very optimized LOD (Level of Detail) system, fully leveraging the power of the hardware and so on, then 50K would probably be a good enough approximation. Ideally, you're still rather shy of what you'd need, and indeed Stardock's lead artist believes you'd be better off with 300,000 batches.


[Source]


----------



## caswow

Quote:


> Originally Posted by *cowie*
> 
> ^That's a layman's 6xx card; all of mine totally whooped (yes, I said it) any single-core 7xxx in any game, AMD ringer or gimpworks titles. You just have to know what you are doing; the 6xx cards can overclock your doors off.
> 
> 
> 
> 
> 
> 
> 
> 
> How about them apples? Don't try that on your 7xxx; that junk would be in the garbage within days.
> 
> But I am glad you went on topic and started talking about the game.


Yeah, instead of a 7870 I got a 660, because why not, the Nvidia hype train was at its fullest at the time. Guess what: since 2001 this was the worst buy ever in my GPU history. No card was worse or aged as badly as this card. You can be sure as hell I am not going Nvidia next round, NO MATTER WHAT. It doesn't matter if AMD sucks up to two times the power for the same perf; it certainly won't be Nvidia in any case.


----------



## provost

Quote:


> Originally Posted by *mcg75*
> 
> Agreed on the AMD part.
> 
> Looking at it logically, the 7% that Maxwell has gained since release isn't really very impressive and is also eclipsed by AMD's gains. We haven't seen any performance increases for Maxwell for a long time either.
> 
> Given that Kepler has been around since early 2012, perhaps there isn't anything more to be gained after the first couple years. Looking at Maxwell would seem to confirm that.


You are right, MCG, and perhaps some may take comfort in the fact that "misery loves company"…









But, looking at it through a skeptic's lens, this would appear to be a negative development for Nvidia customers broadly speaking, and specifically for Kepler and Maxwell customers. It reinforces the notion that Nvidia really doesn't care about ongoing optimizations for its customers after the initial sale, as long as the benchmarks remain favorable during the sales cycle. It would also appear that Nvidia has already EOL'd these cards from an optimization viewpoint as it prepares for its next sales cycle for Pascal. At least, this would make perfect business sense based on getting a return on each dollar of labor spent; better to spend the same labor dollars working on drivers for the new set of products rather than supporting what is already "cashed in"… lol


----------



## magnek

Well, if Pascal is a flop, then nVidia will have no choice but to keep optimizing for Maxwell (or at least keep up the facade of doing so) rather than gimping it harder.


----------



## Themisseble

Quote:


> Originally Posted by *caswow*
> 
> Yeah, instead of a 7870 I got a 660, because why not, the Nvidia hype train was at its fullest at the time. Guess what: since 2001 this was the worst buy ever in my GPU history. No card was worse or aged as badly as this card. You can be sure as hell I am not going Nvidia next round, NO MATTER WHAT. It doesn't matter if AMD sucks up to two times the power for the same perf; it certainly won't be Nvidia in any case.


Well, the 7870 has better efficiency than the GTX 660... but nobody cared about that when they were released.

The R7 370, 7850, R9 270X, 7870 and 7790 all have great efficiency.
The GTX 950 uses 10W less than the R7 370, but it is around 5-10% faster.

A stock R9 290 with a custom cooler is also quite efficient. I don't know why AMD put a 512-bit memory bus on it... it takes +50W more than 256-bit.


----------



## p00q

Quote:


> Originally Posted by *Themisseble*
> 
> I don't know why AMD put a 512-bit memory bus on it... it takes +50W more than 256-bit.


Probably because it needed that. The extra 4GB vRAM on the new cards however...


----------



## mcg75

Quote:


> Originally Posted by *Themisseble*
> 
> 7870 vs GTX 660
> BF3 - on launch = GTX 660 clearly a winner
> BF3 - after 6 months = 7870 wins by 0-5%.
> BF4 - on launch = 7870 wins 0-5%
> BF4 - after 6 months = 7870 wins by 10-15%.
> 
> Battlefront
> 7870 = GTX 680
> 
> FC4
> 7870 = GTX 680
> 
> Fallout 4
> 7870 = GTX 680
> 
> RoTR
> 7870 = GTX 680
> 
> Witcher 3
> 7870 = GTX 680
> 
> CoD Black Ops 3
> 7870 > GTX 770
> 
> DL
> 7870 = GTX 670
> 
> Rainbow six siege
> 7870 = GTX 680
> 
> and so on... how well will kepler do when pascal gets out?


Those numbers completely contradict what TPU's database says about those cards. What is the source? Do they retest with new drivers on the same system regularly?

The 7870 was 7-8% faster than the GTX 660 across the board in TPU's 660 launch review. Source.

The 270X is the successor to the 7870 and is 9-10% faster. Source.

The 270x and the 770 were tested very recently and the 770 was 27% faster. Source.

So the 770 is 26-27% faster than a card that is already 9-10% faster than the 7870.

Yet you are telling us they are equal.

We can see the 280x (7970) getting really close to the 780 and we can see the 290x catching the 780 Ti in TPU's numbers. But the 7870 as fast as 680 or 770? Not even close.
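The compounding argument above can be sketched as chained ratios. This is only an illustration of the arithmetic; the percentages are the approximate figures quoted in this post, not fresh benchmark data.

```python
# Chaining relative-performance ratios from the TPU figures cited above.
# All numbers are the rough percentages quoted in the post, used purely
# to illustrate how the ratios compound.

def chain(*ratios):
    """Multiply relative-performance ratios (1.10 means '10% faster')."""
    result = 1.0
    for r in ratios:
        result *= r
    return result

# 270X is ~9-10% faster than the 7870; 770 is ~27% faster than the 270X.
r270x_vs_7870 = 1.095
r770_vs_270x = 1.27

r770_vs_7870 = chain(r270x_vs_7870, r770_vs_270x)
print(f"GTX 770 vs HD 7870: ~{(r770_vs_7870 - 1) * 100:.0f}% faster")  # ~39% faster
```

In other words, the two quoted gaps multiply rather than add, which is why the post concludes the 770 and 7870 can't plausibly be equal.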


----------



## dubldwn

Quote:


> Originally Posted by *Tideman*
> 
> I seem to get about 30-40% usage avg on all cores (5930K), will have to double check later...
> 
> I've hit 3.6gb vram max on highest settings at 1440p (but with supersampling at default 1.0).
> 
> Also it seems to be the mirrors that are causing the insane framerate drops for the most part. This needs to be fixed, it's awful. No issue with the second level though (I get 100+fps there).


I maxed everything out at 1440p, including shadows (second level), and yeah, my 5820K is just snoozing. Load vcore for me is usually 1.296V. It did up the VRAM usage to 2.8GB.

In GTA5 I'm at max clocks/vcore pretty much the whole time. This is just an observation in the context of a beta, small levels, and the promise of DX12 using the CPU even less.


----------



## Ha-Nocri

970 vs 390. Big win for 390:


----------



## ZealotKi11er

Quote:


> Originally Posted by *Ha-Nocri*
> 
> 970 vs 390. Big win for 390:


Where is the CPU overhead myth?


----------



## Assirra

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Where is the CPU overhead myth?


Come on now, are you saying the thing that has been accepted as fact for so long is a myth?
Look two posts above you; this game barely uses the CPU at all.




7:25: From there the "myth" starts.


----------



## NightAntilli

Quote:


> Originally Posted by *mcg75*
> 
> Those numbers completely contradict what TPU's database says about those cards. What is the source? Do they retest with new drivers on the same system regularly?
> 
> 7870 was 7-8% faster than gtx 660 at launch across the board in TPU's 660 launch review. Source.
> 
> The 270X is the successor to the 7870 and is 9-10% faster. Source.
> 
> The 270x and the 770 were tested very recently and the 770 was 27% faster. Source.
> 
> So the 770 is 26-27% faster than a card that is already 9-10% faster than the 7870.
> 
> Yet you are telling us they are equal.
> 
> We can see the 280x (7970) getting really close to the 780 and we can see the 290x catching the 780 Ti in TPU's numbers. But the 7870 as fast as 680 or 770? Not even close.


What about Fallout 4? The following is based on gamegpu.ru;


----------



## ZealotKi11er

Quote:


> Originally Posted by *Assirra*
> 
> Come on now, you are now saying the thing has been a fact for so long is a myth?
> Look 2 posts above you, this game barely uses CPU at all.
> 
> 
> 
> 
> 7:25: From there starts the "myth".


I know that. I think the big thing is that AMD can improve it and the game can take that into account. There should not be CPU overhead as visible as there is with AMD at 1080p unless the game just asks for too many draw calls. I think it is all down to software: you can tune Nvidia to match AMD in DX12, and tune AMD to match Nvidia in DX11. The problem is that in both cases there will be a lot of work involved.


----------



## FrenzyBee

Quote:


> Originally Posted by *NightAntilli*
> 
> What about Fallout 4? The following is based on gamegpu.ru;


Nvidia performance was fixed in the full release of patch 1.3


----------



## rage fuury

PCGameshardware benchmarks with various cards ( overclocked NVidia cards in this test, too) and a frame rate analysis:
http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/Benchmark-1186919/


----------



## Themisseble

Quote:


> Originally Posted by *rage fuury*
> 
> PCGameshardware benchmarks with various cards ( overclocked NVidia cards in this test, too) and a frame rate analysis:
> http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/Benchmark-1186919/


PCGH is not a good site.


----------



## mcg75

Quote:


> Originally Posted by *Themisseble*
> 
> PCGH is not a good site.


Neither is a site that has a 7870 equal to the 680/770.


----------



## Themisseble

Quote:


> Originally Posted by *mcg75*
> 
> Neither is a site that has a 7870 equal to the 680/770.


You are out of your mind.
TPU is still benchmarking old games just to show NVIDIA Kepler doing better. And you also picked an old review.


----------



## mtcn77

Quote:


> Originally Posted by *mcg75*
> 
> Neither is a site that has a 7870 equal to the 680/770.


Talk about bias.
I have some benchmarks, if you're interested in censoring Hwbot.


----------



## Klocek001

What the hell's going on with those frametimes on the 970/980 in the last part of the benchmark? They seem fine for the most part, then boom... It doesn't seem to happen on the 980 Ti, which seems to have even better frametimes/framerate than the Fury X.


----------



## Themisseble

Look:

75% R9 270X
81% GTX 770
*81 × 100 / 75 = 108%*

So good.
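The arithmetic above, spelled out: a performance index gives each card's score as a percentage of some reference card, and dividing the two indices gives the cards' speed relative to each other. The 75/81 figures are just the numbers quoted in this post.

```python
# Converting performance-index percentages into a relative speedup.
# If an index puts the R9 270X at 75% and the GTX 770 at 81% of the same
# reference card, the 770's speed relative to the 270X is the ratio.

def relative(index_a, index_b):
    """Return card A's performance as a percentage of card B's."""
    return index_a * 100 / index_b

print(relative(81, 75))  # 108.0 -> the 770 is only ~8% faster here
```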


----------



## mcg75

Quote:


> Originally Posted by *NightAntilli*
> 
> What about Fallout 4? The following is based on gamegpu.ru;


The results gamegpu got there have never been corroborated by any other source.

One of two things happened. That patch was replaced by another 4 days after it was released because of issues. Or they mucked up the settings.

Either way, with the official patch, Nvidia performance was back to where it should be and AMD gained a lot as well.


----------



## delboy67

Quote:


> Originally Posted by *Themisseble*
> 
> PCGH is not a good site.


Fully agree; pclab is another one, I can never replicate their results on similar hardware.


----------



## criminal

Quote:


> Originally Posted by *Themisseble*
> 
> Look:
> 
> 75% R9 270X
> 81% GTX 770
> *81 × 100 / 75 = 108%*
> 
> So good.


Meanwhile the 770's original competitor (280x) is now 19% faster.


----------



## zealord

Quote:


> Originally Posted by *rage fuury*
> 
> PCGameshardware benchmarks with various cards ( overclocked NVidia cards in this test, too) and a frame rate analysis:
> http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/Benchmark-1186919/


Thanks for sharing. PCGH and Computerbase are the sites I trust the most.

Computerbase is probably my number 1 because their interactive graphs are amazing, but sadly they don't benchmark many games on release, rather hardware when it comes out.

I wonder why no one else is doing interactive graphs. I love the direct comparison between two cards in percentage form.


----------



## Themisseble

Quote:


> Originally Posted by *zealord*
> 
> Thanks for sharing. PCGH and Computerbase are the sites I trust the most.
> 
> Computerbase is probably my number 1 because their interactive graphs are amazing, but sadly they don't benchmark many games on release, rather hardware when it comes out.
> 
> I wonder why no one else is doing interactive graphs. I love the direct comparison between two cards in percentage form.


PCGH is the worst site ever.


----------



## Klocek001

Quote:


> Originally Posted by *Themisseble*
> 
> PCGH is the worst site ever.


mhm.


----------



## mcg75

Quote:


> Originally Posted by *mtcn77*
> 
> Talk about a bias.
> I have some benchmarks if you're interested in censoring Hwbot.


There is a stark difference between being biased and simply wanting the truth.

The truth is we can see where the 290x catches the 780 Ti and the 7970 has almost completely closed the gap on the 780.

If something is true, we accept it as true and move on. When it's not true, it should be disproven.

The problem with you is that you always have to see red win, regardless of whether the facts support it.


----------



## criminal

Quote:


> Originally Posted by *mcg75*
> 
> There is a stark difference between being biased and simply wanting the truth.
> 
> The truth is we can see where the 290x catches the 780 Ti and the 7970 has almost completely closed the gap on the 780.
> 
> If something is true, we accept it as true and move on. When it's not true, it should be disproven.
> 
> *The problem with you is you always have to see red win regardless of the facts support it as being true or not*.


QFT


----------



## Themisseble

Quote:


> Originally Posted by *mcg75*
> 
> There is a stark difference between being biased and simply wanting the truth.
> 
> The truth is we can see where the 290x catches the 780 Ti and the 7970 has almost completely closed the gap on the 780.
> 
> If something is true, we accept it as true and move on. When it's not true, it should be disproven.
> 
> The problem with you is you always have to see red win regardless of the facts support it as being true or not.


You mean the 7970 is closing the gap on the GTX 780 Ti, and the R9 290X is closing the gap on the GTX 980.


----------



## sugarhell

Quote:


> Originally Posted by *Themisseble*
> 
> You mean 7970 is closing gap on GTX 780Ti and R9 290X is closing gap on GTX 980.


Chill out dude. That means a 7970 is close to 290x performance.


----------



## Themisseble

Quote:


> Originally Posted by *sugarhell*
> 
> Chill out dude. That means a 7970 is close to 290x performance.


I am calm. But PCGH really is the worst site for comparing GPU performance... lately basically all sites are putting out really bad reviews. Where are the IQ comparisons?

Just please realize that all new NVIDIA titles run badly on Kepler. That's the plain truth.


----------



## Klocek001

Turning everything upside down based on three beta benchmarks really takes an AMD fan at their best.
Quote:


> Originally Posted by *Themisseble*
> 
> Where is IQ comparison? etc.


Yeah, where? That would spare this forum a lot of problems.


----------



## Themisseble

Quote:


> Originally Posted by *Klocek001*
> 
> you gotta be an amd fan to turn everything upside down based on three beta benchmarks.


Sure... I'm the BIGGEST AMD fanboy! My friend wanted to trade me his GTX 680 for my R9 270X... and then I noticed why. So yes, I am glad that I bought AMD GCN. I just knew about NVIDIA's two-year cycle...

Basically he is playing GTA V, Battlefront, FC4, FO4, Dirt Rally and new games...

I am not saying that the GTX 680 is bad; it kicks my R9 270X around in BF4. But what I am saying is that NVIDIA doesn't support Kepler as it should anymore.


----------



## mtcn77

Quote:


> Originally Posted by *mcg75*
> 
> There is a stark difference between being biased and simply wanting the truth.
> 
> The truth is we can see where the 290x catches the 780 Ti and the 7970 has almost completely closed the gap on the 780.
> 
> If something is true, we accept it as true and move on. When it's not true, it should be disproven.
> *
> The problem with you* is you always have to see red win regardless of the facts support it as being true or not.


There, another fundamental attribution bias.


----------



## sugarhell

Quote:


> Originally Posted by *Themisseble*
> 
> Sure.. I AM BIGGEST AMD fanboy!... MY friend wants to switch with me GTX 680 for R8 270X.... and then I noticed why. So yes I am glad that I bought AMD GCN. I just knew nvidias 2 year cycle ...
> 
> basically he is playing GTA V, battlefront, FC4, FO4, Dirt Rally and new games...


Both are kinda bad now. The 270X is, what, how many years old?

Stick with the 680 until the new GPUs come out.


----------



## Klocek001

Quote:


> Originally Posted by *sugarhell*
> 
> Both are kinda bad now. 270x is like how many years old ?
> 
> Stick with the 680 until the new gpus


Yeah, it's like 25 vs 27 fps @1080p medium settings, so who gives a damn.
I sold my 7870 GHz (~270X) back in early 2014 because it was already too weak for 1080p/high settings.
It's 2016, so comparing GPUs from 2012 is kind of silly.


----------



## Ha-Nocri

What is with Kepler? Can't even run the game @1440p or what?!


----------



## Klocek001

The 280X 3G can't either, so I guess it's a VRAM issue.


----------



## Themisseble

Quote:


> Originally Posted by *sugarhell*
> 
> Both are kinda bad now. 270x is like how many years old ?
> 
> Stick with the 680 until the new gpus


Well, I am not giving my Radeon away for a Kepler.


----------



## mtcn77

Quote:


> Originally Posted by *sugarhell*
> 
> Both are kinda bad now. 270x is like how many years old ?
> 
> Stick with the *680* until the new gpus


Quote:


> Originally Posted by *Themisseble*
> 
> Well I am not giving my radeon away for kepler.


lol, he has 270X - another forum user with green sight correspondence bias.


----------



## sugarhell

Quote:


> Originally Posted by *mtcn77*
> 
> lol, he has 270X - another forum user with green sight correspondence bias.


lol i am an amd fanboy rofl


----------



## Themisseble

Quote:


> Originally Posted by *Klocek001*
> 
> yeah it's like 25 vs 27 fps @1080p medium settings so who gives a damn.
> I sold my 7870 GHz (~270X) back in early 2014 cause it was already too weak for 1080p/High settings.
> It's 2016 so comparing 2011 GPUs is kind of silly.


No, it is not. GCN is in the PS4/XB1, and that's why GCN will have far better support than Kepler.


----------



## Themisseble

Quote:


> Originally Posted by *mtcn77*
> 
> lol, he has 270X - another forum user with green sight correspondence bias.


Well, I also had a GTX 660 for some time. Does that make me an NVIDIA fanboy?

For real, it's really bad to see the GTX 660 falling behind the R9 270X by like 30%. Not cool.

http://www.techspot.com/review/917-far-cry-4-benchmarks/page2.html

R9 270X: 57 FPS
GTX 660: 41 FPS
Both were similarly priced.
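Side note on the "like 30%": the percentage depends on which card you take as the baseline. A quick sketch using the TechSpot numbers above (the `pct_faster` helper is mine, just for illustration):

```python
def pct_faster(a, b):
    """How much faster card A is than card B, in percent."""
    return (a / b - 1.0) * 100.0

r9_270x, gtx_660 = 57.0, 41.0  # Far Cry 4 FPS from the TechSpot review above

# With the 660 as the baseline, the 270X is ~39% faster...
print(f"270X vs 660: +{pct_faster(r9_270x, gtx_660):.0f}%")  # +39%
# ...but with the 270X as the baseline, the 660 is only ~28% slower.
print(f"660 vs 270X: {pct_faster(gtx_660, r9_270x):.0f}%")   # -28%
```

So "falling behind by like 30%" is roughly right either way, but the exact figure depends on the direction of the comparison.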


----------



## mtcn77

Quote:


> Originally Posted by *Themisseble*
> 
> Well I also had GTX 660 for some time. Does that make me NVIDIA fanboy?


Not any more.


----------



## zealord

I am glad PCGH are doing so many benchmarks lately. I hope they benchmark Hitman again once DX12 support comes out, with a direct comparison. Should be very interesting.


----------



## mcg75

Quote:


> Originally Posted by *Themisseble*
> 
> You are out of your mind.
> TPU is still benchmarking old games just to show NVIDIA kepler doing better. And also you picked old review


The test I linked was the last time they actually tested the 270x. If you look at the test you linked, the 270x doesn't appear in any of the game numbers, just the end results.

Despite that, let's use your numbers. As stated earlier, the 270x was 10% faster than the 7870 when they were tested together at launch, which I previously linked.

The 770 is 8% faster in the test you showed.

That's still a pretty big gap. Even reverting to the 680 from the 770, we'd still have a 15% gap.

What would be fair in this case would be to state that the 7870 has closed the gap between it and the 770 by almost 50%.

It would not be fair to call them equal.


----------



## daviejams

https://www.youtube.com/watch?v=fq6GyUzyuJQ&feature=share

Has nobody posted the Digital Foundry 970 vs 390 test?

Pretty clear


----------



## mcg75

Quote:


> Originally Posted by *daviejams*
> 
> https://www.youtube.com/watch?v=fq6GyUzyuJQ&feature=share
> 
> Nobody posted the digital foundry 970 vs 390 test ?
> 
> Pretty clear


Was posted in this thread 10 hours ago.

390 gives the 970 a pretty good beating.


----------



## magnek

I have to laugh at the people saying PCGH isn't trustworthy. Yeah sure, a site that actually labels the 970 as "3.5+0.5G" and was one of the first to actually do any frametime testing on the 970 is somehow nVidia biased. Goddamn.


----------



## looniam

Untrustworthy site = benchmarks don't show what I want.

News flash:
EVERY site has flaws.


----------



## criminal

Quote:


> Originally Posted by *magnek*
> 
> I have to
> 
> 
> 
> 
> 
> 
> 
> at the people saying PCGH isn't trustworthy. Yeah sure a site that actually labels the 970 as "3.5+0.5G" and was one of the first to actually do any frametime testing on the 970 is somehow nVidia biased. Goddamn.


Those two are part of the bad AMD crowd. Just like the bad Nvidia crowd members, they can't be taken seriously and are best ignored.
Quote:


> Originally Posted by *looniam*
> 
> untrustworthy site=benchmarks don't show what i want.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> news flash:
> EVERY site has flaws.


Yep... lol. Well unless it is Tom's Hardware. They just suck. Well they do to me anyway.


----------



## ValSidalv21

Quote:


> Originally Posted by *Themisseble*
> 
> I am calm. But PCGH is really the worst site for comparing GPU performance... lately basically all sites are showing really bad review. Where is IQ comparison? etc.
> Just please realize all NVIDIA new titles are running bad on kepler = pure truth.


You like to talk about IQ comparisons, but you call PCGH "the worst site" when they are one of the few that actually do them.

Check out their Dirt Rally review here.

Ah, we have IQ comparisons:
[AMD vs NVIDIA screenshots]

What's that? NVIDIA having better IQ? No way!!! Blasphemy!!! Must be GimpWorks' fault!!!

Oh wait... it's a Gaming Evolved title.

So AMD are apparently aware of the issue and are going to fix it in drivers. Well, I don't know if they did, but it sure took a big performance hit going from 15.12 to 16.1.

What we have in the end is NVIDIA winning both performance and IQ in a Gaming Evolved title. Who would have guessed, with all the performance and IQ gimping.


----------



## criminal

Quote:


> Originally Posted by *ValSidalv21*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> You like to talk about IQ comparison but you call PCGH "the worst site" when they are one of the few that actually does that
> 
> 
> 
> 
> 
> 
> 
> 
> Check out their Dirt Rally review here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ah we have IQ comparisons:
> AMD  NVIDIA
> 
> What's that? NVIDIA having better IQ?
> 
> 
> 
> 
> 
> 
> 
> No way!!! Blasphemy!!! Must be GimpWorks fault!!!
> 
> Oh wait... it's a Gaming Evolved title
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So AMD are apparently aware of the issue and are gonna fix it in drivers. Well I don't know if they did but it sure took a big performance hit going from 15.12 to 16.1:
> 
> 
> What we have in the end is NVIDIA winning both performance and IQ in a Gaming Evolved title. Who would have guessed with all the performance and IQ gimping


BURN! LOL


----------



## Kana-Maru

Quote:


> Originally Posted by *ValSidalv21*
> 
> You like to talk about IQ comparison but you call PCGH "the worst site" when they are one of the few that actually does that
> 
> 
> 
> 
> 
> 
> 
> 
> Check out their Dirt Rally review here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ah we have IQ comparisons:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> AMD  NVIDIA


Now that's the comparison everyone has been looking for. Those screens show why IQ really matters.

Win for Nvidia.

Quote:


> So AMD are apparently aware of the issue and are gonna fix it in drivers. Well I don't know if they did but it sure took a big performance hit going from 15.12 to 16.1:
> http://www.overclock.net/content/type/61/id/2716585/


More DX11 1080p reviews with $650+ graphics cards, smh. At 1440p and 4K I've seen a nice boost with the AMD Crimson 16.1 Beta drivers at the STOCK frequency. Even Fire Strike showed improvements.

From my own Apples to Apples benchmarks:

*AMD Fury X STOCK Settings*
*3DMark FireStrike Performance % Increase:*
From Catalyst Driver 15.7.1 -[7/29/2015]- to Crimson Driver 16.1 -[1/17/2016]-
-Overall Score: +3%
-Graphics Score: +2.19%
-Combined Score: +5.89%

*3DMark FireStrike Extreme % Increase:*
From Catalyst Driver 15.7.1 -[7/29/2015]- to Crimson Driver 16.1 -[1/17/2016]-
-Overall Score: +2.24%
-Graphics Score: +2.20%
-Combined Score: +2.84%

*Tomb Raider 2013 [Ultimate Setting] - 3840×2160 [4K]*
From Catalyst Driver 15.7.1 -[7/29/2015]- to Crimson Driver 16.1 -[1/17/2016]-
FPS: +6.00%
Frame times: +3.80%

*Crysis 3 4K 100% Maxed - No MSAA- Performance % Increase:*
From Catalyst Driver 15.7.1 -[7/29/2015]- to Crimson Driver 16.1 -[1/17/2016]-
FPS: +13.33%
Frame times: +4.48%

*3DMark Feature Test [DX12] Performance % Increase:*
From Catalyst Driver 15.7.1 -[7/29/2015]- to Crimson Driver 16.1 -[1/17/2016]-
DirectX 12 Increase: +5.85%

Mantle performed better than DX12, though; it was 1.57% better. Mantle hit 16,374,768 draw calls at stock settings, while DX12 was at 16.1M.

Minor increases, but I'm not complaining. I haven't thought about overclocking the Fury X yet, since the 1440p/4K experience has been enjoyable. The Crimson 16.1 Beta drivers have definitely improved some of AMD's DX11 overhead problems. I don't benchmark my games at 1080p, since I never play at 1080p anymore and haven't in more than 3 years. I'm planning on getting some newer titles like Rise of the Tomb Raider and Hitman for more up-to-date benchmarks. I only benchmark games when I have time to play them nowadays. I need to get around to playing Shadow of Mordor again.

I don't know what they found in Dirt Rally, but so far all of the games I like to play are performing better.


----------



## Themisseble

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *ValSidalv21*
> 
> You like to talk about IQ comparison but you call PCGH "the worst site" when they are one of the few that actually does that
> 
> 
> 
> 
> 
> 
> 
> 
> Check out their Dirt Rally review here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ah we have IQ comparisons:
> AMD  NVIDIA
> 
> What's that? NVIDIA having better IQ?
> 
> 
> 
> 
> 
> 
> 
> No way!!! Blasphemy!!! Must be GimpWorks fault!!!
> 
> Oh wait... it's a Gaming Evolved title
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So AMD are apparently aware of the issue and are gonna fix it in drivers. Well I don't know if they did but it sure took a big performance hit going from 15.12 to 16.1:
> 
> 
> What we have in the end is NVIDIA winning both performance and IQ in a Gaming Evolved title. Who would have guessed with all the performance and IQ gimping






IQ win for NVIDIA? That's the problem: PCGH shows NVIDIA winning in Dirt Rally, but in my experience, and many others', the results are just different.

http://forums.anandtech.com/showthread.php?t=2457260
http://www.legitreviews.com/dirt-rally-performance-review-geforce-gtx-970-versus-radeon-r9-390_167054/4
Sure, AMD is having some problems with CPU driver overhead... but the R9 290 matching the GTX 980 at 1440p is just nice.

IQ comparison - well, here is a good IQ comparison. Not that crap from PCGH.
http://www.bsimracing.com/nvidia-image-quality-lower-on-newer-cards/

http://www.pcgameshardware.de/Dirt-Rally-Spiel-55539/Specials/Benchmark-Test-1182995/
Yes, the game can take advantage of Intel's six-core CPUs, but it cannot do the same for AMD. Sure.


----------



## BradleyW

Quote:


> Originally Posted by *ValSidalv21*
> 
> You like to talk about IQ comparison but you call PCGH "the worst site" when they are one of the few that actually does that
> 
> 
> 
> 
> 
> 
> 
> 
> Check out their Dirt Rally review here
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ah we have IQ comparisons:
> AMD  NVIDIA
> 
> What's that? NVIDIA having better IQ?
> 
> 
> 
> 
> 
> 
> 
> No way!!! Blasphemy!!! Must be GimpWorks fault!!!
> 
> Oh wait... it's a Gaming Evolved title
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So AMD are apparently aware of the issue and are gonna fix it in drivers. Well I don't know if they did but it sure took a big performance hit going from 15.12 to 16.1:
> 
> 
> What we have in the end is NVIDIA winning both performance and IQ in a Gaming Evolved title. Who would have guessed with all the performance and IQ gimping


IQ differences were also seen in Crysis 2 and BF Hardline. AMD looked better.


----------



## Kuivamaa

Quote:


> Originally Posted by *magnek*
> 
> I have to
> 
> 
> 
> 
> 
> 
> 
> at the people saying PCGH isn't trustworthy. Yeah sure a site that actually labels the 970 as "3.5+0.5G" and was one of the first to actually do any frametime testing on the 970 is somehow nVidia biased. Goddamn.


Agreed. PCGH is transparent. You may disagree with their approach or the cards they use, but they are competent and have no obvious agenda.


----------



## ValSidalv21

Quote:


> Originally Posted by *Themisseble*
> 
> 
> IQ win for NVIDIA? Thats the problem PCGH is showing NVIDIA winning in Rally, but in my experience and many others its just different result.
> 
> http://forums.anandtech.com/showthread.php?t=2457260
> http://www.legitreviews.com/dirt-rally-performance-review-geforce-gtx-970-versus-radeon-r9-390_167054/4
> Sure AMD is having some problems with CPU driver overhead... but R9 290 matching GTX 980 at 1440P is just nice.
> 
> IQ comparison - Well here is good IQ comparison. Not that crap from PCGH.
> http://www.bsimracing.com/nvidia-image-quality-lower-on-newer-cards/
> 
> http://www.pcgameshardware.de/Dirt-Rally-Spiel-55539/Specials/Benchmark-Test-1182995/
> Yes the game can take advantages of intels six core CPU, but it cannot do the same for AMD. Sure.


And there it is again...
From the very review you linked:
[AMD vs NVIDIA screenshots]

Look at that vegetation.

Also, did you check the date of that other review? It's from 2014. The game was in very, very early access back then.


----------



## Themisseble

Quote:


> Originally Posted by *ValSidalv21*
> 
> And there it is again...
> From the very review you linked:
> AMD
> 
> NVIDIA
> 
> 
> Look at that vegetation
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also, did you checked the date of that other review? It's from 2014. The game was in very, very early access back then.


Whaaat?! You're looking at vegetation, and please don't show any PCGH reviews.

Did you even read, a few posts back, what I was actually saying? Man, you are such a big NVIDIA fanboy.


----------



## ValSidalv21

Quote:


> Originally Posted by *Themisseble*
> 
> Whaaat?! You look at vegetation and please dont show any of PCGH reviews.


Those are from the ComputerBase review, and there is a clear difference. It's the same issue that PCGH found.


----------



## Themisseble

Quote:


> Originally Posted by *ValSidalv21*
> 
> Those are from the ComputerBase review, and there is a clear difference. It's the same issue that PCGH found.


Clear difference?!!


----------



## ValSidalv21

Quote:


> Originally Posted by *Themisseble*
> 
> Clear difference?!!


Yep. You just need to right-click and open them in a new tab.


----------



## Themisseble

Quote:


> Originally Posted by *ValSidalv21*
> 
> Yep. You just need to right click and open them in new tab


Man, do you know that that has nothing to do with the vegetation settings...
And there are some objects missing on NVIDIA, if you haven't noticed.


----------



## Mahigan

New Ashes of the Singularity build benchmark results:


Spoiler: Warning: Spoiler!






Source: http://www.nordichardware.se/Grafikkort-Recensioner/ashes-of-the-singularity-beta-2-vi-testar-directx-12-och-multi-gpu/Prestandatester.html#content


----------



## BradleyW

As expected for 1080p. Fury X DX11 vs DX12: DX12 is twice as quick. The Fury X is also 10 FPS faster than the 980 Ti when compared in DX12 mode. The Fury X is also running far more effects, such as smoke and lighting, that run on async compute only. Better image quality + better performance = it does not get much better than that for AMD.

I'm not surprised. The 980 Ti is not designed for a low-level API; it's designed for single-thread-dependent applications. AMD's GCN is the complete opposite. DX11 only uses about 60% of a Fury X's shader cores and other dedicated processors, due to GCN's deep and vastly complex pipeline. I suspect most if not all Nvidia-sponsored titles will be DX11 in the future for such reasons, while AMD and neutral titles will be DX12.


----------



## magnek

Let's just hope nVidia doesn't use their dominance in the market to stunt the development of DX12 titles. Wait who am I kidding here...


----------



## Kana-Maru

Quote:


> Originally Posted by *Mahigan*
> 
> New Ashes of the Singularity build benchmark results:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> Source: http://www.nordichardware.se/Grafikkort-Recensioner/ashes-of-the-singularity-beta-2-vi-testar-directx-12-och-multi-gpu/Prestandatester.html#content


The link is broken, by the way.

My goodness, that DX11 serial data bottleneck is serious on AMD cards.

@1080p, DX11 to DX12, the Fury X gets a crazy 113.60% frame-rate increase!
@4K, DX11 to DX12, the Fury X gets a crazy 59.80% frame-rate increase.


----------



## sugarhell

Quote:


> Originally Posted by *BradleyW*
> 
> As expected. Fury X DX11 vs DX12. DX12 twice as quick. Fury X also 10 FPS faster than the 980 Ti when compared in DX12 mode. Fury X is also running far more effects such as smoke and lighting that's ran on Async only. Better image quality + better performance = It does not get much better than that for AMD.
> 
> I'm not suprised. 980 Ti is not designed for a low level API. Instead it's designed for single threaded dependant applications. AMD's GCN is the complete opposite. DX11 only uses about 60% of a Fury X's shader cores and other various dedicated processors due to GCN's deep and vastly complex pipeline. I suspect most if not all Nvidia game titles will be DX11 in future, due to such reasons, whilst AMD and neutral titles will be DX12.


This claim that DX11 only uses 60% of the Fury X's shaders: is there a fact behind it? Research? Proof? Because I call BS.

I am pretty sure that DX11 can use 100% of the Fury X's shaders.

People who don't have proper technical knowledge (i.e., following tech news is not enough) SHOULDN'T post things like that.

We know that AMD uses more CPU cycles for the shaders. We don't know why; we can only speculate.


----------



## BradleyW

Quote:


> Originally Posted by *sugarhell*
> 
> This number that dx11 use only 60% of fury x's shaders . There is a fact for this? A research? Or a proof. Because i call it a BS
> 
> I am pretty sure that dx11 can use 100% of the fury's x shaders.
> 
> People that doesn't have a proper technical knowledge (ie being knowledge about tech news is not enough) SHOULDNT post things like that.
> 
> We know that amd use more cpu cycles for the shaders. We dont know why we can only speculate.


No, I don't have any technical data to prove my comment, so we may categorise it as an opinion. I have completed 2 years of Computer Science at university. My Computer Science BSc covers hardware architecture in depth, but not GCN specifically. It is my opinion that AMD's GCN is not fully utilized by single-thread-dependent applications, as seen in various DX11 titles, resulting in reduced performance: not all cylinders are firing, or at least not as many at the same time.


----------



## Mahigan

Quote:


> Originally Posted by *sugarhell*
> 
> This number that dx11 use only 60% of fury x's shaders . There is a fact for this? A research? Or a proof. Because i call it a BS
> 
> I am pretty sure that dx11 can use 100% of the fury's x shaders.
> 
> People that doesn't have a proper technical knowledge (ie being knowledge about tech news is not enough) SHOULDNT post things like that.
> 
> We know that amd use more cpu cycles for the shaders. We dont know why we can only speculate.











*Multi-threaded command listing & deferred rendering (DirectX runtime MT):*
DirectX works by creating bundles (batches) of commands (command lists). These bundles of commands are sent from the API to the graphics driver. The driver can make some changes to these commands (shader replacements, reordering of commands, etc.) and then translates them into ISA (Instruction Set Architecture, the GPU's language) command lists (grids/threads) before sending them to the GPU for processing.

Multi-threaded command listing allows the DirectX driver to pre-record lists of commands on idling CPU cores. These lists of commands are then played back to the graphics driver using the CPU's primary core (thread 0). Why? The DirectX driver can only run on the primary CPU thread.

*Multi-threaded rendering (DirectX runtime MT + DirectX driver MT):*
This is more or less the same as above (the DirectX runtime can also scale past 4 cores), except for the last part: the DirectX driver doesn't need to play back the commands over the primary CPU thread. Any CPU core/thread can talk directly to the GPU driver and thus send its command lists to the graphics driver. How? The DirectX driver is split amongst every CPU thread.

*NVIDIA*:
NVIDIA's driver uses more than one thread (hidden driver threads) to perform the DirectX driver translations into ISA. These commands are kept in system memory and fetched in bulk by the GigaThread engine. This saves on CPU time. Commands can be sent in bulk, and the CPU can then handle other complex tasks without creating a stall. Lower DX11 API overhead. Result: higher draw call rate.

*AMD*:
The AMD driver wouldn't benefit from being multi-threaded because there are only 64 thread slots in the command processor. So even if multi-threaded command listing and deferred rendering were used, the command processor could only fetch 64 threads at a time. That means constant fetching or streaming of work. If the CPU is busy with other work, a stall occurs, and the GPU waits for the CPU to feed it. Hence GCN's higher DX11 API overhead. Result: lower draw call rate.
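The difference between those submission paths can be caricatured with a toy cost model. To be clear, all the numbers and the function below are invented purely for illustration; real drivers are vastly more complicated than this. The point is just that offloading command-list translation to idle cores helps, but whatever stays pinned to thread 0 remains a serial cost:

```python
# Toy cost model of the three submission paths described above.
# Invented numbers; this is a caricature, not a measurement of any real driver.

def frame_cpu_time(num_lists, t_translate, t_submit, cores,
                   parallel_translate, parallel_submit):
    """CPU time (arbitrary units) to translate and submit one frame's
    command lists, given which stages may run across all cores."""
    translate = num_lists * t_translate / (cores if parallel_translate else 1)
    submit = num_lists * t_submit / (cores if parallel_submit else 1)
    return translate + submit

N, T_TR, T_SUB, CORES = 1000, 4.0, 1.0, 4  # hypothetical workload

# Everything on thread 0:
single = frame_cpu_time(N, T_TR, T_SUB, CORES, False, False)   # 5000.0
# MT command listing: translation spread out, playback pinned to thread 0:
listing = frame_cpu_time(N, T_TR, T_SUB, CORES, True, False)   # 2000.0
# Fully multi-threaded driver: both stages spread across cores:
full_mt = frame_cpu_time(N, T_TR, T_SUB, CORES, True, True)    # 1250.0

print(single, listing, full_mt)
```

In this toy model the serialized playback step caps the gain from multi-threaded command listing, which is the same shape of argument Mahigan makes about the DX11 paths above.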


----------



## criminal

Quote:


> Originally Posted by *magnek*
> 
> Let's just hope nVidia doesn't use their dominance in the market to stunt the development of DX12 titles. Wait who am I kidding here...


----------



## sugarhell

Quote:


> Originally Posted by *Mahigan*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Multi threaded command listing & deferred rendering (DirectX runtime MT):*
> DirectX works by creating bundles (batches) of commands (command lists). These bundles or batches of commands are sent from the API to the Graphics driver. The driver can perform some changes to these commands (shader replacements, reordering of commands etc) and then translates them into ISA (Instruction Set Architecture, the GPUs language) command lists (Grids/threads) before sending them to the GPU for processing.
> 
> Multi-threaded command listing allows the DirectX driver to pre-record lists of commands on idling CPU cores. These lists of commands are then played back to the Graphics driver using the CPUs primary Core (thread 0). Why? The DirectX driver can only run on the primary CPU thread.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> *Multi-threaded rendering (DirectX runtime MT + DirectX driver MT):*
> Is more or less same as above (DirectX runtime can also scale past 4 cores) except the last part, the DirectX driver doesn't need to play back the commands over the primary CPU thread, any CPU core/thread can talk directly to the GPU driver and thus send its command lists to the Graphics driver. How? The DirectX driver is split amongst every CPU thread.
> 
> *NVIDIA*:
> NVIDIA's driver uses more than one thread (hidden driver threads) to perform the DirectX driver translations into ISA. These commands are kept in system memory and fetched in bulk by the Gigathread engine. This saves on CPU time. Commands can be sent in bulk and then the CPU can handle other complex tasks without creating a stall. Lower DX11 API over head. Result: Higher draw call rate.
> 
> *AMD*:
> The AMD driver wouldn't benefit from being multi-threaded because there is only a 64 thread slot in the commmand processor. So even if multi-threaded command listing and deferred rendering were used, the Command Processor could only fetch 64 threads at a time. That means constant fetching or streaming of work. If the CPU is busy with some other work, a stall occurs, and the GPU waits for the CPU to feed it. Hence GCNs higher DX11 API over head. Result: Lower draw call rate
> 
> 
> .


I am a graphics engine programmer with about average knowledge of graphics APIs. I know why this is happening, but even I don't have data or facts on exactly why, because the bottlenecks and the examples are different every time. And what you posted isn't the only thing that hurts DX11 performance; AMD's support for command lists is bad, for example, among many other things. My point is that someone with that much rep should post this as speculation, not as fact.


----------



## looniam

Quote:


> Originally Posted by *sugarhell*
> 
> I am a graphics engine programmer with around average knowledge about graphics API. I know why this is happening. But even i dont have any data or facts why is this happening exactly because the bottlenecks and the examples are different everytime. And it's not only that you posted that hurts dx11 performance. For example the support for command lists with amd is bad. And many other things. *My point is that someone with such an amount of reps should post his post as a speculation not as a fact*


don't worry, the PowerPoint presentation slides always give it away.


----------



## NightAntilli

Whether it's shaders or something else, it's quite clear that under DX11 AMD cards are nowhere near their full potential. Even mid-range cards such as the R9 380 show this. The percentage is speculation, but it's nowhere near 90%+. And the higher-end the card, the larger the gap.

The Fury X has been underperforming since the beginning.


----------



## Ha-Nocri

Quote:


> Originally Posted by *BradleyW*
> 
> No I don't have any technical data which proves my comment, so we may categorise it as an opinion. I have completed 2 years of Computer Science at University. My Computer Science BSc degree covers hardware architecture in depth, but not on GCN specifically. It is in my opinion that AMD's GCN is not being fully utilized by single threaded dependant applications as seen on various DX11 titles, resulting in reduced performance as all cylinders might not be firing, or at least "not as many at the same time".


Polaris will have a bigger instruction buffer for better single-threaded performance. So, besides being great for DX12, it should reduce CPU overhead in DX11.

Sadly, I don't think we will see a big Polaris chip in the 4xx series; in the 5xx series we probably will.


----------



## Mahigan

Quote:


> Originally Posted by *sugarhell*
> 
> I am a graphics engine programmer with around average knowledge about graphics API. I know why this is happening. But even i dont have any data or facts why is this happening exactly because the bottlenecks and the examples are different everytime. And it's not only that you posted that hurts dx11 performance. For example the support for command lists with amd is bad. And many other things. My point is that someone with such an amount of reps should post his post as a speculation not as a fact


Well, let's take it a step further, shall we?

*GM200 (GTX 980 Ti)*:
22 SMMs
Each SMM contains 128 SIMD cores.
Each SMM can execute 64 warps concurrently.
Each warp is comprised of 32 threads.
So that's 2,048 threads per SMM (per 128 SIMD cores).
2,048 x 22 = 45,056 threads in flight (executing concurrently).

*GCN3*:
64 CUs
Each CU contains 64 SIMD cores.
Each CU can execute 40 wavefronts concurrently.
Each wavefront is comprised of 64 threads.
So that's 2,560 threads per CU (per 64 SIMD cores).
2,560 x 64 = 163,840 threads in flight (executing concurrently).

Now factor this:

GCN3 SIMDs are more powerful than GM200 SIMDs core for core. It takes a GCN3 SIMD less time to process a MADD.

So what is the conclusion?

1. GCN3 is far more parallel.

2. GCN3 has fewer SIMD cores, each dedicated to doing more work. GM200 has more SIMD cores, each dedicated to doing less work.

3. If you're feeding your GPU small amounts of compute work, GM200 will come out on top. If you're feeding your GPU large amounts of compute work, GCN3 will come out on top.
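The thread-count arithmetic above is easy to sanity-check. A minimal sketch (the helper name is mine); note that "threads in flight" here means resident, schedulable threads, an occupancy ceiling rather than work completed per clock:

```python
# Redoing the occupancy arithmetic from the post: peak resident thread
# counts per GPU, i.e. upper bounds on concurrency, not throughput.

def threads_in_flight(units, waves_per_unit, threads_per_wave):
    return units * waves_per_unit * threads_per_wave

gm200 = threads_in_flight(22, 64, 32)  # 22 SMMs x 64 warps x 32 threads
fiji = threads_in_flight(64, 40, 64)   # 64 CUs x 40 wavefronts x 64 threads

print(gm200)                   # 45056
print(fiji)                    # 163840
print(round(fiji / gm200, 1))  # 3.6 -> roughly 3.6x more resident threads
```

So the Fiji-class GCN3 part can keep roughly 3.6x as many threads resident, which is the basis for the "far more parallel" conclusion.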

Now this is just the tip of the iceberg; there's also Multi-Engine support to consider:


Spoiler: Warning: Spoiler!







GCN3 stands to benefit more from asynchronous compute + graphics than GM200 does, because GCN3 has more threads idling than GM200. So long as you feed both architectures optimized code, they perform as expected.

What is as expected?


Spoiler: Warning: Spoiler!








Hitman, under DX12, will showcase GCN quite nicely, I believe.


----------



## BradleyW

Quote:


> Originally Posted by *Mahigan*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Well lets take it a step further shall we?
> 
> *GM200 (GTX 980 Ti)*:
> 22 SMMs
> Each SMM contains 128 SIMD cores.
> Each SMM can execute 64 warps concurrently.
> Each warp is comprised of 32 threads.
> So that's 2,048 threads per SMM across its 128 SIMD cores.
> 2,048 x 22 = 45,056 threads in flight (executing concurrently).
> 
> *GCN3*:
> 64 CUs
> Each CU contains 64 SIMD cores.
> Each CU can execute 40 wavefronts concurrently.
> Each wavefront is comprised of 64 threads.
> So that's 2,560 threads per CU across its 64 SIMD cores.
> 2,560 x 64 = 163,840 threads in flight (executing concurrently).
> 
> Now factor this:
> 
> 
> GCN3 SIMDs are more powerful than GM200 SIMDs core for core. It takes a GCN3 SIMD less time to process a MADD.
> 
> So what is the conclusion?
> 
> 1. GCN3 is far more parallel.
> 
> 2. Each GCN3 CU has fewer SIMD cores (64) handling more threads in flight (2,560); each GM200 SMM has more SIMD cores (128) handling fewer threads (2,048).
> 
> 3. If you're feeding your GPU small amounts of compute work, GM200 will come out on top. If you're feeding your GPU large amounts of compute work, GCN3 will come out on top.
> 
> Now this is just the tip of the iceberg; there's also Multi-Engine support to consider:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> GCN3 stands to benefit more from asynchronous compute + graphics than GM200 does, because GCN3 has more threads idling than GM200. So long as you feed both architectures optimized code, they perform as expected.
> 
> What is as expected?
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 
> Hitman, under DX12, will showcase GCN quite nicely, I believe.


This is an excellent post and I very much agree with your conclusions, especially: "GCN3 stands to benefit more from asynchronous compute + graphics than GM200 does, because GCN3 has more threads idling than GM200."
+1


----------



## Hattifnatten

From my own testing in Dirt Rally
3840x2160 Ultra settings w/o Advanced blending unless specified. This is on a single 290X.


Spoiler: Warning: Spoiler!



No AA


No AA + Advanced blending


CMAA


8xMSAA


8xMSAA + Advanced blending



Click on Original.

I'd like to point out something I noticed when the game was still in early access. Back then, when I turned on Advanced Blending, the entire forest would go white, with specks of green in between. I believe it had something to do with the vegetation using transparent textures; the road, the car, everything else was completely fine. I believe transparent textures are the culprit for the IQ issues on AMD cards in this game.


----------



## Potatolisk

I'm surprised by how little difference there is between DX11 and DX12 for the 980 Ti in the Ashes benchmark. But the game itself is much heavier on the CPU than the benchmark, and even nVidia cards should see a significant increase from reduced CPU overhead.

It's good to see the Fury X doing so well. A significant gain (over 15%) from async compute was expected. But nVidia might get some boost too when they release their software emulation.


----------



## Forceman

Quote:


> Originally Posted by *Potatolisk*
> 
> I'm surprised by how little difference there is between DX11 and DX12 for the 980 Ti in the Ashes benchmark.


Might have something to do with this:
Quote:


> Originally Posted by *Kollock*
> 
> Async compute is currently forcibly disabled on public builds of Ashes for NV hardware.


----------



## caswow

Quote:


> Originally Posted by *Forceman*
> 
> Might have something to do with this:


In the beta 2 build that was tested today, you can turn async on and off.

Kepler and Maxwell just can't handle the context switching, so no async for Kepler/Maxwell.


----------



## Forceman

Quote:


> Originally Posted by *caswow*
> 
> In the beta 2 build that was tested today, you can turn async on and off.
> 
> Kepler and Maxwell just can't handle the context switching, so no async for Kepler/Maxwell.


If that's the case, why isn't there a performance penalty for Nvidia when it is on? The AnandTech test showed negative scaling, but this one shows nothing.


----------



## Mahigan

Quote:


> Originally Posted by *Potatolisk*
> 
> I'm surprised by how little difference there is between DX11 and DX12 for the 980 Ti in the Ashes benchmark. But the game itself is much heavier on the CPU than the benchmark, and even nVidia cards should see a significant increase from reduced CPU overhead.
> 
> It's good to see the Fury X doing so well. A significant gain (over 15%) from async compute was expected. But nVidia might get some boost too when they release their software emulation.


Asynchronous compute was turned off for NVIDIA hardware because NVIDIA hardware cannot run Asynchronous Compute + Graphics concurrently and the Ashes of the Singularity Async path makes use of that feature. This can result in negative scaling.

NVIDIA have implemented their version of async compute in their latest drivers. The problem is that it is an NVIDIA-specific implementation which supports concurrent execution of compute work only (no compute + graphics support).

Kollock mentioned that we'd have to talk to NVIDIA about their specific implementation.


----------



## looniam

*IF* this translates over to hitman:
Quote:


> To take advantage of DirectX 12 native multi-GPU, you don't need any special drivers. Simply enable multi-GPU in the settings and the game will use the card with the monitor connected as primary and the next GPU to improve rendering performance.




Ashes of the Singularity DirectX 12 Mixed GPU Performance

looks like I'm taking my 780 Ti back out of the closet.


----------



## semitope

Quote:


> Originally Posted by *Forceman*
> 
> If that's the case, why isn't there a performance penalty for Nvidia when it is on? The Anandtech test showed negative scaling but this one shows nothing.


nvidia is probably sending it all through the graphics queue.


----------



## zealord

Isn't Ashes of the Singularity just a showcase for DX12 and its features? I actually haven't looked into it, so forgive me if I am wrong, but it isn't a real game application, is it?


----------



## pilihp2

Quote:


> Originally Posted by *zealord*
> 
> Isn't Ashes of the Singularity just a showcase for DX12 and its features? I actually haven't looked into it, so forgive me if I am wrong, but it isn't a real game application, is it?


It's a real game.
It's actually pretty fun, too.


----------



## Mahigan

Quote:


> Originally Posted by *semitope*
> 
> nvidia is probably sending it all through the graphics queue.


They are, but within the graphics queue the order of execution is not defined, hence "asynchronous". Compute commands can be executed concurrently here on NVIDIA hardware. This is because NVIDIA doesn't have dedicated engines for compute and for graphics; either kind of work is processed within an SMM. In the SMM, both compute and graphics work share an L1 cache. Since they share a cache, they're processed in sequence. It's the same reason a context switch hurts NVIDIA performance: a cache flush is required to switch between compute and graphics within an SMM.

AMD, on the other hand, has a higher level of hardware redundancy (and higher power usage as a result). AMD can split the work across 3 separate queues which can execute concurrently and in parallel. Fences can be used to enforce sync points between queues. The CUs are also separate from the render units, so not only is work submitted concurrently and in parallel but it is also processed that way. A context switch therefore only takes one cycle on GCN.

As for Ashes, they're not using poor man's async compute, so context switching is not at fault here. What is at fault is that graphics and compute cannot be executed concurrently, so the NVIDIA driver has to re-bundle work items, keeping compute work in one batch and graphics work in another.
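The scheduling argument above can be illustrated with a toy model (purely illustrative: the millisecond figures are invented, and `idle_fraction` is a made-up knob, not anything a real driver exposes):

```python
def serialized_frame(graphics_ms, compute_ms):
    """Single engine: compute work queues up behind graphics work."""
    return graphics_ms + compute_ms

def concurrent_frame(graphics_ms, compute_ms, idle_fraction):
    """Separate queues: compute can hide inside the graphics engine's
    idle gaps; only the part that doesn't fit adds to frame time."""
    hidden = min(compute_ms, graphics_ms * idle_fraction)
    return graphics_ms + compute_ms - hidden

g, c = 10.0, 4.0  # ms of graphics and compute work per frame (invented)

print(serialized_frame(g, c))       # 14.0 -> everything runs back-to-back
print(concurrent_frame(g, c, 0.5))  # 10.0 -> all 4 ms of compute hidden
print(concurrent_frame(g, c, 0.1))  # 13.0 -> only 1 ms of compute hidden
```

The more idle gaps an architecture leaves in its graphics work, the more async compute has to hide in, which is the gist of the "GCN has more threads idling" argument.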


----------



## Kollock

Quote:


> Originally Posted by *zealord*
> 
> Isn't Ashes of the Singularity just a showcase for DX12 and its features? I actually haven't looked into it, so forgive me if I am wrong, but it isn't a real game application, is it?


The reverse is true. It's a game first, which uses DX12. Think Total Annihilation or Supreme Commander.


----------



## zealord

Quote:


> Originally Posted by *Kollock*
> 
> The reverse is true. It's a game first, which uses DX12. Think Total Annihilation or Supreme Commander.


Oh, I thought it was a game specifically designed to show how certain DX12 features perform.

But it's great that it is actually a real game. Makes me look forward to DX12 in games like Quantum Break, Hitman, Gears of War and the like.


----------



## Mahigan

Quote:


> Originally Posted by *Kollock*
> 
> The reverse is true. It's a game first, which uses DX12. Think Total Annihilation or Supreme Commander.


Wasn't Dan Baker the first to use multi-threaded command lists and deferred rendering in Civ5 as well?


----------



## Klocek001

Quote:


> Originally Posted by *Kollock*
> 
> The reverse is true. It's a game first, which uses DX12. Think Total Annihilation or Supreme Commander.


IDK if this has been asked before, but are the results of 980 Ti + 390X an example of the AMD card handling the async? This combination seems to have ridiculous scaling for a mixed-GPU setup: 42 fps on the 980 Ti, 70 fps on 980 Ti + 390X. I wonder if the cards tested were overclocked; it's possible that the 980 Ti would be behind the Fury X in single-card mode but actually faster paired with a 390X than Fury X + 390X.
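For reference, the figures quoted work out to roughly 67% scaling from adding the second card (simple arithmetic on the numbers in this post):

```python
# Mixed-GPU scaling from the figures in this thread:
# 42 fps (980 Ti alone) vs 70 fps (980 Ti + 390X).
single_fps = 42
paired_fps = 70

gain = paired_fps / single_fps - 1  # fractional speed-up from adding the 390X
print(f"{gain:.0%}")  # 67%
```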


----------



## ZealotKi11er

Quote:


> Originally Posted by *Klocek001*
> 
> IDK if this has been asked before, but are the results of 980 Ti + 390X an example of the AMD card handling the async? This combination seems to have ridiculous scaling for a mixed-GPU setup: 42 fps on the 980 Ti, 70 fps on 980 Ti + 390X.


Considering it's so hard to get most games to use SLI and CFX to their full potential, and sometimes even to get them to work at all, I would say there is no hope or point for mixed GPUs. Trust me, you want a single GPU unless you get the second one for free. Right now dual GPUs are not ideal; the fact that you have to wait for drivers is already a pain.


----------



## Forceman

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Considering it's so hard to get most games to use SLI and CFX to their full potential, and sometimes even to get them to work at all, I would say there is no hope or point for mixed GPUs. Trust me, you want a single GPU unless you get the second one for free. Right now dual GPUs are not ideal; the fact that you have to wait for drivers is already a pain.


It also has to be specifically coded for by the developer. So some smaller studios may put in the effort, but I really can't see Ubisoft or EA making much push for it (especially Ubisoft).


----------



## Klocek001

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Considering it's so hard to get most games to use SLI and CFX to their full potential, and sometimes even to get them to work at all, I would say there is no hope or point for mixed GPUs. Trust me, you want a single GPU unless you get the second one for free. Right now dual GPUs are not ideal; the fact that you have to wait for drivers is already a pain.


I don't think you need a driver for mixed-GPU mode.
We'll see; it's interesting enough to see those numbers on the graph, not to mention whether this could work in your rig.


----------



## mtcn77

Quote:


> Originally Posted by *Klocek001*
> 
> IDK if this has been asked before, but are the results of 980 Ti + 390X an example of the AMD card handling the async? This combination seems to have ridiculous scaling for a mixed-GPU setup: 42 fps on the 980 Ti, 70 fps on 980 Ti + 390X. I wonder if the cards tested were overclocked; it's possible that the 980 Ti would be behind the Fury X in single-card mode but actually faster paired with a 390X than Fury X + 390X.


Interestingly, I'd like to see 390X + 980 Ti too. The Fury X performs better when leading the 980 Ti; why shouldn't the 390X?


----------



## Klocek001

Quote:


> Originally Posted by *mtcn77*
> 
> Interestingly, I'd like to see 390X + 980Ti, too. Fury X performs better when leading 980Ti, why shouldn't 390X?


that's what I was also wondering about.


----------



## Kollock

Quote:


> Originally Posted by *Klocek001*
> 
> IDK if this has been asked before, but are the results of 980 Ti + 390X an example of the AMD card handling the async? This combination seems to have ridiculous scaling for a mixed-GPU setup: 42 fps on the 980 Ti, 70 fps on 980 Ti + 390X. I wonder if the cards tested were overclocked; it's possible that the 980 Ti would be behind the Fury X in single-card mode but actually faster paired with a 390X than Fury X + 390X.


I don't know; when you have 2 GPUs and multiple queues, you end up with a pretty complex web of signals and fences.

We've been debating why mixing GPUs from different vendors gets the best results. We don't really know why. One theory is that each GPU has a different hashing function for memory access, so by mixing them you end up with fewer overall PCIe collisions. But, alas, this is beyond our knowledge base.


----------



## Themisseble

So, again, am I against PCGH or what?


Spoiler: Warning: Spoiler!



Why are they contradicting every other site?
http://www.pcgameshardware.de/Ashes-of-the-Singularity-Spiel-55338/Specials/Benchmark-Test-DirectX-12-1187073/

At 1080p with async ON, the GTX 980 Ti is actually gaining 3 fps. Every other test shows an fps loss with async on, but theirs doesn't...

Okay, that could be a bug, an error or something... but it seems like they want to show the best from the GTX 980 Ti. Oh, I get it: otherwise the R9 390 would be really close to it.
Yeah... searching for every mistake... man. Well, who am I to judge...


@Kollock
Didn't NVIDIA implement async shaders?


----------



## mtcn77

The performance delta from 1080p to 1440p is >94%. Fury X is pretty limited at these resolutions, lol.


----------



## Themisseble

Or maybe AnandTech could use the Extreme or Crazy preset, or at least an overclocked i7 6700K with really fast RAM.


----------



## airfathaaaaa

Just a troll physics question here:

now that you can put NVIDIA and AMD together,
if I use DSR AND VSR, will it bring it to 8K?


----------



## NightAntilli

Quote:


> Originally Posted by *mtcn77*
> 
> The performance delta from 1080p to 1440p is >94%. Fury X is pretty limited at these resolutions, lol.


The GTX 680 gains performance from DX11 to DX12. Can it do async compute? xD lol.


----------



## ZealotKi11er

Quote:


> Originally Posted by *NightAntilli*
> 
> The GTX 680 gains performance from DX11 to DX12. Can it do async compute? xD lol.


DX12 is not just async, lol.


----------



## sugarhell

Quote:


> Originally Posted by *NightAntilli*
> 
> The GTX 680 gains performance from DX11 to DX12. Can it do async compute? xD lol.


No, it can't do async compute in AMD's sense of the term. If you check the Vulkan hardware database, Kepler and Maxwell have only 1 queue.


----------



## Disturbed117

Okay, Let's keep it on topic here. No more flaming, trolling, etc.

If this continues I will be forced to start handing out Warnings/Infractions.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *looniam*
> 
> don't worry, power point presentation slides always give it away.


Pretty sure he was referring to BradleyW and not Mahigan.


----------



## ZealotKi11er

I will care once a game I like comes with DX12. Until then they can bench AotS all they want.


----------



## BradleyW

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Pretty sure he was referring to BradleyW and not Mahigan.


I was wondering also, given that I've not posted a single PowerPoint slide in this thread.


----------



## Klocek001




----------



## mtcn77

Quote:


> Originally Posted by *Klocek001*


With all due respect, I think testing at low settings will show the hardware in a bad light.
You leave me no choice but to post this.


----------



## Klocek001

Quote:


> Originally Posted by *mtcn77*
> 
> With all due respect, I think testing at low settings will show the hardware in a bad light.
> You leave me no choice but to post this.


Are you serious? Is THIS a trustworthy test for you...

That's the point: to show how much faster Intel is.
What you posted is a test on Crazy settings where the GPU bottlenecks.

If you test a single GPU at 4K with 8x MSAA and Ultra quality in DX11, you'll also see i3s equal to i7s.


----------



## PontiacGTX

Quote:


> Originally Posted by *Klocek001*
> 
> Are you serious?
> 
> That's the point: to show how much faster Intel is.
> What you posted is a test on Crazy settings where the GPU bottlenecks.
> 
> If you test a single GPU at 4K with 8x MSAA and Ultra quality in DX11, you'll also see i3s equal to i7s.


if a dual core performs as fast under DirectX 12 as an i7 under DirectX 11...


----------



## Klocek001

Quote:


> Originally Posted by *PontiacGTX*
> 
> if a dual core performs as fast under DirectX 12 as an i7 under DirectX 11...


where?


----------



## PontiacGTX

Quote:


> Originally Posted by *Klocek001*
> 
> where?


I mean the G3920 being as fast as a 6700K.


----------



## Themisseble

Check this out.
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=http%3A%2F%2Fwww.pcgameshardware.de%2FGrafikkarten-Grafikkarte-97980%2FSpecials%2FPC-4K-Demo-Radeon-Geforce-Future-GPU-1187323%2Fhttp%3A%2F%2F


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Klocek001*
> 
> Are you serious? Is THIS a trustworthy test for you...
> 
> That's the point: to show how much faster Intel is.
> What you posted is a test on Crazy settings where the GPU bottlenecks.
> 
> If you test a single GPU at 4K with 8x MSAA and Ultra quality in DX11, you'll also see i3s equal to i7s.


I think that was his point actually. If you switch the load back onto the GPU with higher settings and higher resolutions then you can alleviate your CPU bottleneck. Nobody is going to run a game at a lower resolution and quality setting than their GPU can handle.


----------



## Gedm5

Quote:


> Originally Posted by *Themisseble*
> 
> Check this out.
> https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=http%3A%2F%2Fwww.pcgameshardware.de%2FGrafikkarten-Grafikkarte-97980%2FSpecials%2FPC-4K-Demo-Radeon-Geforce-Future-GPU-1187323%2Fhttp%3A%2F%2F


Interesting find! But I didn't find anything about 4K. The only resolutions I saw were 720p and 1080p.


----------



## infranoia

Quote:


> Originally Posted by *Gedm5*
> 
> Interesting Find! But i didn't find anything about 4k? The only resolutions I saw was 720p and 1080p.


It's a 4K demo as in 4 kilobytes. It measured pure shader performance. So a 290 OC was faster than a stock Titan X.


----------



## Gedm5

Quote:


> Originally Posted by *infranoia*
> 
> It's a 4K demo as in, 4 kilobytes. It measured pure shader performance. So a 290 OC was faster than a Titan X stock.


Ah! Thanks, Idk why I missed that...


----------



## ZealotKi11er

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think that was his point actually. If you switch the load back onto the GPU with higher settings and higher resolutions then you can alleviate your CPU bottleneck. Nobody is going to run a game at a lower resolution and quality setting than their GPU can handle.


That has been the problem for years. That is how AMD has been hiding its CPU overhead. Who runs games at 30 fps? An AMD CPU can't even hit 60 fps even if you lower settings, but Intel can.


----------



## looniam

Quote:


> Originally Posted by *ZealotKi11er*
> 
> That has been the problem for years. That is how AMD has been hiding its CPU overhead. *Who runs games at 30 fps?* An AMD CPU can't even hit 60 fps even if you lower settings, but Intel can.


well, those that want that fancy "cinematic feel"









But yeah, it's one thing to show the limitation of a GPU, but when ALL the frame rates border on unplayable, I don't see the point.


----------



## magnek

^I'm sure there's a joke about console gamers in there somewhere. >_>


----------



## mtcn77

Stray no more. [Source]
Quote:


> Originally Posted by *Klocek001*
> 
> that's what I was also wondering about.


----------



## Assirra

I really hope the cross-vendor GPU thing works well, but I doubt it will get much support, if any at all, sadly.


----------



## mtcn77

Quote:


> Originally Posted by *Assirra*
> 
> I really hope the cross-vendor GPU thing works well, but I doubt it will get much support, if any at all, sadly.


I think the primary card has to have a lot of VRAM.


----------



## Ha-Nocri

I don't know if cross-vendor GPUs working in tandem is a DX12 thing or AotS-specific.


----------



## PontiacGTX

Quote:


> Originally Posted by *Ha-Nocri*
> 
> I don't know if cross-vendor GPUs working in tandem is a DX12 thing or AotS-specific.


It is a DX12 feature (explicit multi-adapter):

https://community.amd.com/community/gaming/blog/2015/08/10/directx-12-for-enthusiasts-explicit-multiadapter
https://software.intel.com/en-us/articles/multi-adapter-support-in-directx-12

but it is up to each developer to implement it.


----------



## Xoriam

Apart from the whole DX12 thing, the videos of the game that I've seen look like something from 2010. It looks visually like crap to me; the gameplay is probably good, though.


----------



## Mahigan

Looks like Hitman will release with DX12.


----------

