# [Various] Hitman 2016 PC: DirectX 11 vs. DirectX 12 - Performance



## ZealotKi11er

GTX 980 Ti 1380MHz lol. I hope this is not true.


----------



## TopicClocker

Finally, some DX12 performance results!

I saw a benchmark of the GTX 970 and R9 390 in the beta under DX11 (DX12 wasn't available in the beta), and the 390 was much faster than the 970 even there.

It's very interesting to see DX12 performance results, especially against the more powerful GTX 980.

From the previous performance results of games and benchmarks under the DX12 API, it's fair to say that AMD has a clear advantage on the DX12 front.
GCN is well suited to DX12, especially with its Asynchronous Compute Engines.

AMD had an advantage in Hitman even under DX11, with the R9 390 beating the GTX 970, and the 390 is even faster than the GTX 980 under DX12.
It would be great to see DX11 and DX12 performance comparisons against multiple GPUs in this game.

I do have to wonder why the NVIDIA GPUs appear to be having a hard time in this game against their AMD equivalents.
This will surely be something to look into and study closely as more benchmarks are done, and more DX12 games become available.


----------



## PontiacGTX

Quote:


> Originally Posted by *TopicClocker*
> 
> Finally, some DX12 performance results!
> 
> I saw a benchmark of the GTX 970 and R9 390 in the beta under DX11 (DX12 wasn't available in the beta), and the 390 was much faster than the 970 even there.
> 
> It's very interesting to see DX12 performance results, especially against the more powerful GTX 980.
> 
> From the previous performance results of games and benchmarks under the DX12 API, it's fair to say that AMD has a clear advantage on the DX12 front.
> GCN is well suited to DX12, especially with its Asynchronous Compute Engines.
> 
> AMD had an advantage in Hitman even under DX11, with the R9 390 beating the GTX 970, and the 390 is even faster than the GTX 980 under DX12.
> It would be great to see DX11 and DX12 performance comparisons against multiple GPUs in this game.
> 
> I do have to wonder why the NVIDIA GPUs appear to be having a hard time in this game against their AMD equivalents.
> This will surely be something to look into and study closely as more benchmarks are done, and more DX12 games become available.


I see what you mean. It would be interesting to know if the game has asynchronous compute enabled.


----------



## Xoriam

This game looks nowhere near good enough to warrant those FPS numbers...


----------



## TopicClocker

Quote:


> Originally Posted by *PontiacGTX*
> 
> what were these?
> http://www.overclock.net/t/1592431/anand-ashes-of-the-singularity-revisited-a-beta-look-at-directx-12-asynchronous-shading


I probably should have said:
Quote:


> "Finally, some DX12 performance results, for *Hitman!*"


I did state that I am aware of those, and I also included:
Quote:


> "*From the previous performance results of games and benchmarks under the DX12 API*, it's fair to say that AMD has a clear advantage on the DX12 front."


----------



## iLeakStuff

Damn, that's some huge lead for AMD.
Want to see some more reviews though. Also with Fury X and 980Ti.


----------



## daviejams

Quote:


> Originally Posted by *iLeakStuff*
> 
> Damn, that's some huge lead for AMD.
> Want to see some more reviews though. Also with Fury X and 980Ti.


980ti benchmarks are on that site


----------



## Randomdude

In that review, is the 390 OC'd, or is that 1010MHz clock stock?


----------



## iLeakStuff

Quote:


> Originally Posted by *daviejams*
> 
> 980ti benchmarks are on that site


Yes, but it's kinda useless without Fury X numbers.


----------



## daviejams

Quote:


> Originally Posted by *iLeakStuff*
> 
> Yes but kinda useless without Fury X numbers


Well, the 980 Ti is six frames faster than a mid-range 390, so I think the Fury X may well "win" this one.


----------



## GorillaSceptre

The 390 is flexing its value recently.

Pretty sad that 970s are still recommended above them, though.


----------



## killerhz

Quote:


> Originally Posted by *daviejams*
> 
> Well, the 980 Ti is six frames faster than a mid-range 390, so I think the Fury X may well "win" this one.


Win what? Release-day benchmarks?

Well, I have a Ti and hope to hell it plays this game smoothly.


----------



## daviejams

Quote:


> Originally Posted by *killerhz*
> 
> win what? release day benchmarks?
> 
> well i have a Ti and hope to hell it plays this game smoothly.


win in the sense that it runs the game better than the competition


----------



## killerhz

Quote:


> Originally Posted by *daviejams*
> 
> win in the sense that it runs the game better than the competition


Ah, gotcha. Well, it's a pretty nice card IMHO, so I'm sure it'll be fine for this game.


----------



## Faster_is_better

Quote:


> Originally Posted by *Randomdude*
> 
> In that review is the 390 OC'd or is that 1010 clock stock?


That should be stock, though maybe it's an "OC" version of the card. Most 390s can hit an 1100MHz core OC no problem; many can get into the 1150 range, and some up to 1200.


----------



## rainzor

Absolution all over again.
Seems like the 390 gains ~10% from DX12, while Nvidia cards (with the exception of the 980 Ti at 1080p) stay the same.
Too bad there are no Fiji DX12 benchies. Apparently they had some trouble with cards from both vendors under DX12.
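For anyone eyeballing those percentages, here's a minimal sketch of the arithmetic; the FPS numbers are made up for illustration, since the review's exact values aren't quoted in this thread:

```python
# Sketch: percent FPS gain when switching API, using made-up numbers
# (not actual figures from the review).

def pct_gain(fps_before: float, fps_after: float) -> float:
    """Percentage change going from one API's FPS to the other's."""
    return (fps_after - fps_before) / fps_before * 100

# e.g. a hypothetical 390 going from 60 FPS (DX11) to 66 FPS (DX12):
print(f"{pct_gain(60, 66):.1f}%")  # -> 10.0%
```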


----------



## SuperZan

Quote:


> Originally Posted by *GorillaSceptre*
> 
> The 390 is flexing its value recently.
> 
> Pretty sad that 970s are still recommended above them, though.


Of course it is. Don't you want to experience the way it's meant to be played?

In all seriousness though, Tahiti/Hawaii cards have really delivered on lifespan. Every time they seem to be at the point where they must fall off, they get another jolt of life. I love my Fijis, but I do sometimes miss those 7970s; they just felt like such an intelligent purchase, hanging on the way they did.

These DX12 results are a good show for AMD, but what's funny is that it's not telling us anything we didn't know even under DX11. Nvidia has a clear single-card winner in the 980 Ti. In every other price bracket AMD has a superior offering with better legs. This is only becoming more apparent as we start to see DX12 titles cropping up.


----------



## ozit77

Why does the 980 Ti gain 6fps under DX12?


----------



## SuperZan

Quote:


> Originally Posted by *killerhz*
> 
> win what? release day benchmarks?
> 
> well i have a Ti and hope to hell it plays this game smoothly.


Not for nothing, but those day-of benchmarks have been central to Nvidia's marketing strategy.

The Ti's a great card though; you shouldn't have any issues.


----------



## mcg75

Nvidia better get their butts in gear and decide if they can actually get anything at all out of Async compute or not.

The silent treatment doesn't look good.


----------



## sugarhell

Quote:


> Originally Posted by *mcg75*
> 
> Nvidia better get their butts in gear and decide if they can actually get anything at all out of Async compute or not.
> 
> The silent treatment doesn't look good.


I don't think it's only an async compute problem. Even from the really early Ashes bench we saw that their DX12 performance is bad compared to their DX11 performance. They gain almost nothing from the new API.


----------



## L36

AMD is getting a savage advantage over Nvidia. About time.


----------



## Devnant

Quote:


> Originally Posted by *sugarhell*
> 
> I don't think it's only an async compute problem. Even from the really early Ashes bench we saw that their DX12 performance is bad compared to their DX11 performance. They gain almost nothing from the new API.


In Ashes there is worse performance switching from DX11 to DX12; at least that is not happening with Hitman. There are some FPS gains for NVIDIA.

But DAMN! AMD still gets way more benefit from DX12! Looks like there will be a trend of GCN destroying Maxwell in DX12 games.


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *SuperZan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *GorillaSceptre*
> 
> The 390 is flexing its value recently.
> 
> Pretty sad that 970s are still recommended above them, though.
> 
> 
> 
> Of course it is. Don't you want to experience the way it's meant to be played?
> 
> In all seriousness though, Tahiti/Hawaii cards have really delivered on lifespan. Every time they seem to be at the point where they must fall off, they get another jolt of life. I love my Fijis, but I do sometimes miss those 7970s; they just felt like such an intelligent purchase, hanging on the way they did.
> 
> These DX12 results are a good show for AMD, but what's funny is that it's not telling us anything we didn't know even under DX11. Nvidia has a clear single-card winner in the 980 Ti. In every other price bracket AMD has a superior offering with better legs. This is only becoming more apparent as we start to see DX12 titles cropping up.

Tell me about it. I was thinking I needed to start looking at a new GPU (I have 2x 280X), then this stuff pops up and I'm left thinking they'll still do what they need to do; the only reason to buy a better card is if I go higher res.


----------



## kingduqc

Quote:


> Originally Posted by *sugarhell*
> 
> I don't think it's only an async compute problem. Even from the really early Ashes bench we saw that their DX12 performance is bad compared to their DX11 performance. They gain almost nothing from the new API.


Could it be that their CPU overhead is way less than AMD's? It's well documented that AMD's DX11 pipeline isn't up to what Nvidia has, CPU-overhead-wise. Seeing more improvement under DX12 for AMD compared to Nvidia could just mean their DX11 performance was crap.


----------



## NuclearPeace

Quote:


> Originally Posted by *mcg75*
> 
> Nvidia better get their butts in gear and decide if they can actually get anything at all out of Async compute or not.
> 
> The silent treatment doesn't look good.


What can they do? Pascal is already done being designed, and current Maxwell cards can't be made retroactively faster.


----------



## Kana-Maru

Things are looking good for AMD. That R9 390 is at least $320+ cheaper than the 980 Ti, so price/performance for the 390 is looking great so far [15% difference].

I can't wait to download this game and benchmark it with my Fury X at midnight. I usually only run stock clocks, but I'll overclock it as well. I'll be providing benchmarks on my blog that I never update. I recently ran actual gameplay benchmark tests on The Witcher 3 and Ryse: Son of Rome at 4K; both were actually playable. I normally benchmark ACTUAL gameplay instead of using the built-in benchmark tools. I've been messing around with DX12 for some time now, so hopefully my benchmarking program performs fine.

*UPDATE:*

Fury X Benchmarks are here:
http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks

There will be more updates coming.


----------



## criminal

Quote:


> Originally Posted by *NuclearPeace*
> 
> *What can they do?* Pascal is already done being designed, and current Maxwell cards can't be made retroactively faster.


They have a secret driver that fixes everything.


----------



## orlfman

Quote:


> Originally Posted by *kingduqc*
> 
> Could it be that their CPU overhead is way less than AMD's? It's well documented that AMD's DX11 pipeline isn't up to what Nvidia has, CPU-overhead-wise. Seeing more improvement under DX12 for AMD compared to Nvidia could just mean their DX11 performance was crap.


This is what I always thought too.

AMD has admitted there is a TON of overhead in their drivers compared to Nvidia. DX12 eliminates a lot of that driver overhead, and with Nvidia already having little of it in DX11, DX12 doesn't grant that big of a boost, if any.

I take this more as a showcase of how powerful AMD GPUs actually are compared to Nvidia. We've seen similar gains in DX11 with AMD hardware vs. Nvidia after AMD really hammered out their drivers for some games; it's one of the reasons you hear "AMD cards age well compared to Nvidia."


----------



## OneB1t

My 2-year-old 290X got a second life.


----------



## airfathaaaaa

Quote:


> Originally Posted by *mcg75*
> 
> Nvidia better get their butts in gear and decide if they can actually get anything at all out of Async compute or not.
> 
> The silent treatment doesn't look good.


And say that their cards can't do async? This isn't like the 970 issue, where they blamed the PR team; we're talking about years and years of lying.


----------



## iLeakStuff

Quote:


> Originally Posted by *mcg75*
> 
> Nvidia better get their butts in gear and decide if they can actually get anything at all out of Async compute or not.
> 
> The silent treatment doesn't look good.


Oh, they've got something cooking for async for sure. They won't go down like that. There are too many software engineers on their team to admit defeat like this.


----------



## BradleyW

I'm waiting for more reviews here.

Do remember, AMD's long-term plan was all oriented around low-level APIs. GCN was built from the ground up, with years of work, exactly for this environment.


----------



## huzzug

Quote:


> Originally Posted by *iLeakStuff*
> 
> Quote:
> 
> 
> 
> Originally Posted by *mcg75*
> 
> Nvidia better get their butts in gear and decide if they can actually get anything at all out of Async compute or not.
> 
> The silent treatment doesn't look good.
> 
> 
> 
> Oh, they've got something cooking for async for sure. They won't go down like that. There are too many software engineers on their team to admit defeat like this.

This. All I'd want to see is how they do rather than if they do.


----------



## sugarhell

They added a 390X... it's so close to a 980 Ti.


----------



## MonarchX

The 390 is the same thing as the 290 but with faster RAM, isn't it?


----------



## orlfman

Quote:


> Originally Posted by *MonarchX*
> 
> The 390 is the same thing as the 290 but with faster RAM, isn't it?


Yeah, it's a 290 with higher stock clocks (think a 290 with a beefy overclock) and 8GB of RAM instead of 4GB.


----------



## Semel

Quote:


> Originally Posted by *Devnant*
> 
> But DAMN! AMD still gets way more benefits from DX12!


That's because AMD's DX11 performance is kinda bad, so it seems like they get more benefit from DX12.


----------



## rainzor

They also added the 280X, which gets no gain from DX12 and can't run the game above 1080p because of its 3GB frame buffer. Apparently Fiji/Tonga cards suffer from some sort of frame lock under DX12, so there won't be any results from those until the problem is resolved.


----------



## iLeakStuff

Quote:


> Originally Posted by *Semel*
> 
> That's because AMD's DX11 performance is kinda bad.


And why would Nvidia's DX12 performance be worse than their DX11 performance when they have such a huge driver team...?
Like I said, I really doubt this will be typical of DX12.


----------



## Devnant

Quote:


> Originally Posted by *Semel*
> 
> That's because AMD's DX11 performance is kinda bad, so it seems like they get more benefit from DX12.


Wow! So if they were good at DX11, Nvidia would really be screwed. I mean, that 390X is really close to the 980 Ti.


----------



## Kana-Maru

Quote:


> Originally Posted by *iLeakStuff*
> 
> And why would Nvidia's DX12 performance be worse than their DX11 performance when they have such a huge driver team...?
> Like I said, I really doubt this will be typical of DX12.


Well, so far Ashes of the Singularity and now Hitman are proving something about DX12. There are more DX12 games coming, and existing games are getting DX12 updates.


----------



## NuclearPeace

Even though Maxwell is getting spanked here, I think it's NVIDIA who's really gonna get the last laugh.

A 390 might be faster than a 980 now, but neither card has much life left on the market with both Polaris and Pascal coming up. With AMD racking up victories only this late in a product's life cycle, I can't imagine AMD gaining significant market share from this.

NVIDIA won this generation by being faster when it mattered. The 980 and the 970 were selling like mad and did so virtually unopposed by AMD. It wasn't until June of last year that AMD came up with the R9 300 series. That's almost nine months of grabbing market share and money with very little competition.

For AMD to really start putting the hurt on NVIDIA, they have to make sure they are getting all the performance they possibly can out of their cards at launch. It doesn't really matter if a 7970 starts beating a 780 in games if, by the time that happens, neither product is being sold anymore.


----------



## Semel

Quote:


> Originally Posted by *iLeakStuff*
> 
> Oh, they've got something cooking for async for sure. They won't go down like that. There are too many software engineers on their team to admit defeat like this.


They've already got it.
It's called Gimpworks + tons of money. After all, who cares if AMD delivers very good performance using async compute if next to no one utilizes it in games?


----------



## airfathaaaaa

Quote:


> Originally Posted by *iLeakStuff*
> 
> And why would Nvidia's DX12 performance be worse than their DX11 performance when they have such a huge driver team...?
> Like I said, I really doubt this will be typical of DX12.


Do you have any insight as to why they would do something now?
Also, why didn't they do anything until now?


----------



## EightDee8D

Quote:


> Originally Posted by *iLeakStuff*
> 
> And why would Nvidia's DX12 performance be worse than their DX11 performance when they have such a huge driver team...?
> Like I said, I really doubt this will be typical of DX12.


Because Nvidia doesn't have a full hardware scheduler, so it kinda tanks the CPU in DX12, resulting in more CPU overhead. Something like that.


----------



## KeepWalkinG

AMD has very good hardware, better than Nvidia's. But when we talk about drivers and game optimization, they are bad.


----------



## Master__Shake

Quote:


> Originally Posted by *GorillaSceptre*
> 
> The 390 is flexing its value recently.
> 
> Pretty sad that 970s are still recommended above them, though.


gotta save all dem watts you know?


----------



## Semel

Quote:


> Originally Posted by *KeepWalkinG*
> 
> But when we talk about drivers and game optimization, they are bad.


Orly? *checks out Nvidia's driver issues of the last few months, and most recently* I don't think so.


----------



## infranoia

So many opinions flying around.

Numbers speak, and DX12 is AMD's late-28nm savior. AMD was spanked, is spanking, and may or may not continue to spank, all based on whether Nvidia whiffed it with their parallel support in Pascal.

But since AMD and Nvidia are largely equal in non-Gameworks DX11, maybe you can eventually driver your way out of hardware, given the willpower. We'll have to see.


----------



## NuclearPeace

Quote:


> Originally Posted by *Semel*
> 
> Orly? *checks out Nvidia's driver issues of the last few months, and most recently* I don't think so.


So? Just because NVIDIA hasn't been great with drivers doesn't automatically mean that AMD has been great.

AMD's DX11 overhead is a major issue, and nothing is being done about it because AMD hopes you only care about DX12 now. Sometimes the overhead is so bad that you're better off with an OC'd 960 than a 380 if you have a Core i5.


----------



## dir_d

I think people are jumping to conclusions just a little bit. I think Nvidia will have a nice bump in performance as well once they get their act together.


----------



## Devnant

Quote:


> Originally Posted by *dir_d*
> 
> I think people are jumping to conclusions just a little bit. I think Nvidia will have a nice bump in performance as well once they get their act together.


I hope so, but I'm not really confident. Nvidia already has 80% of the market share, and we are getting really close to the Pascal launch in June. Then NVIDIA will get at least three months of GPU supremacy until AMD launches Polaris around September.


----------



## infranoia

Quote:


> Originally Posted by *dir_d*
> 
> I think people are jumping to conclusions just a little bit. I think Nvidia will have a nice bump in performance as well once they get their act together.


Not too much of a jump, though; AMD is two-for-two with native DX12 titles currently in the wild. And no, you can't count Gears of War, which was just a wrapper and a cluster.


----------



## Ha-Nocri

I really don't think Nvidia can do much with drivers. I think they are hardware-limited, just like AMD is with DX11 CPU overhead.


----------



## infranoia

Quote:


> Originally Posted by *Devnant*
> 
> I hope so, but I'm not really confident. Nvidia already has 80% of the market share, and we are getting really close to the Pascal launch in June. Then NVIDIA will get at least three months of GPU supremacy until AMD launches Polaris around September.


Whaa? You've got that backwards; all signs point to Polaris having the lead launch, based on Zauba activity. We'll see for sure next week. So far we've heard that Pascal will be a mobile-part release through the summer.

Let's not forget the TSMC earthquake that impacted Nvidia's 16nm fab but did not impact AMD.


----------



## Sleazybigfoot

Quote:


> Originally Posted by *dir_d*
> 
> I think people are jumping to conclusions just a little bit. I think Nvidia will have a nice bump in performance as well once they get their act together.


Considering Ashes and now Hitman are the first proper DX12 titles with benchmark results (as far as I know), wouldn't you think they would try to get the best out of their hardware, to show how good it is when properly utilized?

I doubt a lot is going to change; if I'm wrong about this, I think they've made a stupid mistake by waiting this long.


----------



## ZealotKi11er

Nvidia releases a driver and things are back to normal. These numbers only show one thing: Maxwell once Nvidia gives it the Kepler treatment. In about a year this will be commonplace.


----------



## zealord

Quote:


> Originally Posted by *sugarhell*
> 
> They added a 390X... it's so close to a 980 Ti.


Yeah, it's unreal considering the 980 Ti is roughly ~65% more expensive but only 7-8% faster than the 390X across all resolutions.

It is just one game though.

Looking forward to more DX12 games
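As a rough sanity check on that price/performance gap, here's a sketch of the arithmetic; the ~$650 vs. ~$400 street prices and the FPS pair below are assumed round numbers for illustration, not figures from the review.

```python
# Sketch: price premium vs. performance premium, with assumed numbers
# (prices and FPS are illustrative guesses, not review data).

def premium(ours: float, theirs: float) -> float:
    """How much more (in %) 'ours' is than 'theirs'."""
    return (ours / theirs - 1) * 100

price_980ti, price_390x = 650.0, 400.0  # assumed street prices
fps_980ti, fps_390x = 70.0, 65.0        # assumed average FPS

print(f"price premium: {premium(price_980ti, price_390x):.1f}%")  # -> 62.5%
print(f"perf premium:  {premium(fps_980ti, fps_390x):.1f}%")      # -> 7.7%
print(f"$/frame: {price_980ti / fps_980ti:.2f} vs {price_390x / fps_390x:.2f}")
```

With those assumptions, the dollars-per-frame gap is what makes the 390X look like the value pick.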


----------



## PostalTwinkie

OCN, cracking me up.

People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days.

To those people, let me use your own complaints and criticism; if this was an Nvidia backed title, with Nvidia leading, you would all be crying about AMD not getting time to optimize drivers.

The double standards are just hard baked into some. Makes OCN look really bad.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Devnant*
> 
> I hope so, but I'm not really confident. Nvidia already has 80% of the market share, and we are getting really close to the Pascal launch in June. Then NVIDIA will get at least three months of GPU supremacy until AMD launches Polaris around September.


AMD actually has a two-month lead on HBM because of Samsung, so it's safe to say they will launch first, in June, with a discrete card (mobile sooner); Nvidia will most likely go for Q3.


----------



## Semel

Quote:


> Originally Posted by *NuclearPeace*
> 
> So? Just because NVIDIA hasn't been great with drivers doesn't automatically mean that AMD has been great.


Show me where I said AMD drivers were great.

Quote:


> which nothing is being done about it because AMD hopes you only care about DX12 now


More likely it CAN'T be done, due to how their hardware works. And BECAUSE of how their hardware works, they win in DX12.


----------



## Kana-Maru

Quote:


> Originally Posted by *NuclearPeace*
> 
> So? Just because NVIDIA hasn't been great with drivers doesn't automatically mean that AMD has been great.
> 
> AMD's DX11 overhead is a major issue, and nothing is being done about it because AMD hopes you only care about DX12 now. Sometimes the overhead is so bad that you're better off with an OC'd 960 than a 380 if you have a Core i5.


AMD has been reducing their driver overhead with nearly every driver release since I retired my GTX SLI setup and switched to the Fury X. Without NV Gameworks involved, AMD usually performs very well; otherwise we have to wait for game patches and driver updates. From my testing, AMD has been improving overall performance through drivers and addressing the DX11 overhead issues. DX12 and Vulkan are the future, so Nvidia had better start worrying about the newest tech.


----------



## EightDee8D

Quote:


> Originally Posted by *PostalTwinkie*
> 
> OCN, cracking me up.
> 
> People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days.
> 
> To those people, let me use your own complaints and criticism; if this was an Nvidia backed title, with Nvidia leading, you would all be crying about AMD not getting time to optimize drivers.
> 
> The double standards are just hard baked into some. Makes OCN look really bad.


Open vs. closed. Stop the hypocrisy and complain to Nvidia about an async compute driver, if the hardware actually supports it.


----------



## ZealotKi11er

Quote:


> Originally Posted by *PostalTwinkie*
> 
> OCN, cracking me up.
> 
> People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days.
> 
> To those people, let me use your own complaints and criticism; if this was an Nvidia backed title, with Nvidia leading, you would all be crying about AMD not getting time to optimize drivers.
> 
> The double standards are just hard baked into some. Makes OCN look really bad.


Meh. The only time that happens is when GW is in effect. This game has no AMD tech.


----------



## infranoia

Quote:


> Originally Posted by *PostalTwinkie*
> 
> OCN, cracking me up.
> 
> People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days.
> 
> To those people, let me use your own complaints and criticism; if this was an Nvidia backed title, with Nvidia leading, you would all be crying about AMD not getting time to optimize drivers.
> 
> The double standards are just hard baked into some. Makes OCN look really bad.


Since my post was the only mention of Gameworks in this thread, I'll let you show me where I'm complaining.
Quote:


> Originally Posted by *infranoia*
> 
> But since AMD and Nvidia are largely equal in non-Gameworks DX11, maybe you can eventually driver your way out of hardware, given the willpower. We'll have to see.


It's a statement of fact, not of judgment, and a comment on how AMD has been steadily improving their DX11 showing. In what way is that statement incorrect?


----------



## Ha-Nocri

Yep, they are using AMD's TressFX in this game


----------



## Yvese

Quote:


> Originally Posted by *PostalTwinkie*
> 
> OCN, cracking me up.
> 
> People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days.
> 
> To those people, let me use your own complaints and criticism; if this was an Nvidia backed title, with Nvidia leading, you would all be crying about AMD not getting time to optimize drivers.
> 
> The double standards are just hard baked into some. Makes OCN look really bad.


Gameworks is terrible.

Yours truly,

980ti owner


----------



## NuclearPeace

Quote:


> Originally Posted by *Semel*
> 
> More likely it CAN'T be done, due to how their hardware works. And BECAUSE of how their hardware works, they win in DX12.


That's some incredibly dumb design. Why design a GPU microarchitecture that struggles with contemporary software and then suddenly turns golden by the time it gets replaced?


----------



## Devnant

Quote:


> Originally Posted by *infranoia*
> 
> Whaa? You've got that backwards; all signs point to Polaris having the lead launch, based on Zauba activity. We'll see for sure next week. So far we've heard that Pascal will be a mobile-part release through the summer.
> 
> Let's not forget the TSMC earthquake that impacted Nvidia's 16nm fab but did not impact AMD.


Seriously? That's great news for me then, because I'm thinking of going Polaris without second thoughts next gen.


----------



## zealord

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Yep, they are using AMD's TressFX in this game


Agent 47 is looking really great with the added TressFX effects


----------



## infranoia

Quote:


> Originally Posted by *NuclearPeace*
> 
> That's some incredibly dumb design. Why design a GPU microarchitecture that struggles with contemporary software and then suddenly turns golden by the time it gets replaced?


Because consoles.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Yep, they are using AMD's TressFX in this game


I thought they always did, for all the Hitman games. It's a different type of hair, though.


----------



## xzamples

Quote:


> Originally Posted by *PostalTwinkie*
> 
> OCN, cracking me up.
> 
> People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days.
> 
> To those people, let me use your own complaints and criticism; if this was an Nvidia backed title, with Nvidia leading, you would all be crying about AMD not getting time to optimize drivers.
> 
> The double standards are just hard baked into some. Makes OCN look really bad.


Is this a serious post?

http://gpuopen.com/


----------



## MonarchX

I am not sure why everyone is so surprised. Let's stop thinking of our personal wants and expectations and think of the companies themselves, most importantly as businesses. I based my analysis on both companies' historical data, past and recent announcements, predictions, rumors, and whatever information was provided and available. I do not work for either company and do not know what is going on internally at either one, but much of the available information fits into a nicely painted picture of what happened and what is to come, at least in NVidia's case.

*Part I*

*AMD*

AMD has almost always released better videocards than NVidia. AMD was more of a hardware-oriented company than NVidia, which is another reason why AMD cards were the ones to go into the PS4 and Xbox One, but their software department (drivers), plus losing out to NVidia exclusives (GameWorks), is why they suffered financially. AMD planned too far ahead, relying on Mantle's expected success at overtaking DirectX 11, which never happened. At the time of the 290 and 290X release, only Mantle could truly utilize those cards' potential. AMD's lack of proper planning and placing so much hope in a software-based solution is why, profit-wise, they did not succeed with the 290 and 290X years ago the way NVidia did. Again, AMD is a hardware-based company, and software (which includes drivers) was not their strength, and yet they put most of their eggs in one basket. We can now look at DirectX 12 performance and tell that the 290/290X's potential was never really approached, given the amount of overhead AMD's DirectX 11 drivers suffered from. I don't know for sure, but AFAIK DirectX 12 and Vulkan were already in development when Mantle was announced. It literally made no tactical/strategic sense for a hardware-oriented company with a lower market share to push its own API while a similar and more popular API was already in development. Even if they were not aware of DirectX 12 and Vulkan, it was still a bad decision to put so much effort and hope into Mantle, because game developers had to appeal to more gamers, 7/10 of whom (or whatever the share is) had an NVidia card in their rigs. Why would the majority of game developers optimize games for the minority market, for a company that historically sucked at making software? In the end Mantle died due to: 1 - severe lack of adoption, with only a few games supporting Mantle; 2 - its early stage of development (buggy and unstable in Mantle-supporting games like BF4); and 3 - DirectX 12 and Vulkan.

Although Mantle was not successful, similar low-level APIs (DirectX 12 and Vulkan) have broken the barrier between drivers, API, and hardware, allowing AMD videocards to finally reach their maximum potential. Highly optimized drivers are no longer required under DirectX 12 because the programs/games themselves perform most of the optimization through the low-level API, which eliminates AMD's biggest weakness - software. AMD's drivers have improved as of late, but not to the level of NVidia's quality. IMO, spending on hardware that fully supports low-level API features, such as asynchronous shaders, was not a good idea business-wise when the 290 and 290X were released; those features should have debuted at the time the 390 and 390X cards were released...
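The driver-overhead argument above can be put in rough numbers with a toy model. All figures below are hypothetical (per-call overheads, draw-call counts); the point is only the shape of the effect - a thinner API layer matters most when a game issues many draw calls:

```python
# Toy model (hypothetical numbers): frame time as a function of per-draw-call
# API/driver overhead. Low-level APIs cut the per-call cost, which matters
# most when a game issues many draw calls.

def frame_time_ms(draw_calls, gpu_work_ms, overhead_us_per_call):
    """CPU submission cost plus GPU work; assumes submission cost is purely
    additive (a deliberate simplification)."""
    cpu_ms = draw_calls * overhead_us_per_call / 1000.0
    return gpu_work_ms + cpu_ms

# Hypothetical per-call overheads: a "thick" DX11-style driver vs. a thin DX12 path.
dx11 = frame_time_ms(draw_calls=10_000, gpu_work_ms=10.0, overhead_us_per_call=1.5)
dx12 = frame_time_ms(draw_calls=10_000, gpu_work_ms=10.0, overhead_us_per_call=0.2)

print(f"DX11-style: {dx11:.1f} ms/frame (~{1000 / dx11:.0f} fps)")
print(f"DX12-style: {dx12:.1f} ms/frame (~{1000 / dx12:.0f} fps)")
```

With these made-up numbers the same GPU workload goes from 25 ms to 12 ms per frame purely from cheaper submission, which is roughly the kind of uplift the DX11-vs-DX12 benchmarks in this thread show for driver-overhead-bound cards.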

Lower prices were another of AMD's strengths, and this one served them rather well, but at the same time it created an implication of lower quality. Although it isn't always true, people often associate cheaper prices with lower quality. This paints AMD as the second-rate company, the B-company (not the A), the cheaper and lower-quality one, a company not as good as NVidia... That isn't a good strategy for achieving a higher market share, but I don't know AMD's target goals. In addition, AMD as a whole struggles and receives bad ratings from financial analysts, who generally advise against investing in AMD. Again, bad for AMD's image, further reinforcing the perception of the cheaper, lower-quality, second-rate, below-NVidia company.

*NVidia*

NVidia was always a more software-oriented company than AMD, although lately they have had more driver-related problems than before. NVidia knew how to make a profit using slower graphics cards: well-planned and timely graphics card releases, excellent driver optimizations, game and game-feature exclusivity (GameWorks), and of course marketing. NVidia's most recent competitors to the 390 and 390X were the 970 and 980, neither of which was planned to fully support low-level APIs, forcing people to upgrade to Pascal cards. It was a good plan with good timing. AMD could not pull that off, but NVidia could. Why? NVidia already owned the larger market share, more gamers owned NVidia cards, and their trust had been won by NVidia's excellent software/driver support. When such a big company releases a new product, forcing people to upgrade to enjoy the latest and greatest, people WILL upgrade because they trust NVidia. It's not that AMD's drivers are terrible, but they are more likely to cause issues in old games, current games, upcoming games, etc. Gamers want a simple plug-and-play experience without such bugs, and NVidia delivered just that. The bigger market share also feeds another HUGE factor I already mentioned - exclusivity of graphics features. For game developers it was more important to make MORE gamers happy than to take a "neutral" stance and boycott NVidia's exclusive features, because more people owned NVidia hardware than AMD hardware.

It seems like the situation has changed and AMD holds the higher ground at the moment, but I am certain NVidia's marketing and development intelligence was fully aware of what would happen when DirectX 12 came out. The GTX 970 and 980 were never meant to be DirectX 12 cards, and the GTX 980 Ti / Titan X were simply enthusiast-level cards with a lot of raw hardware power, which made up for the lack of asynchronous-shader capability and allowed at least some competition against AMD's newer cards (Nano, Fury, and Fury X).


----------



## PostalTwinkie

Quote:


> Originally Posted by *EightDee8D*
> 
> open vs close. stop this hypocrisy and complain to nvidia about asc driver if the hardware actually supports it.


Nvidia has stated several times, and third parties have even confirmed, that they are working on it. On what, exactly? We don't know; they lack the hardware in the sense that AMD has it, so we can only guess it will be some type of software solution, to some end.

What we do know is that ASC support of all kinds is disabled for Nvidia at the driver level.

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Meh, Only time that happens is when GW is on effect. This game has no AMD tech.


This is flat-out wrong. Hell, the people I am talking about try to blame GameWorks and Nvidia in situations where they aren't even present. That is how bad it has gotten.
Quote:


> Originally Posted by *xzamples*
> 
> is this a serious post?
> 
> http://gpuopen.com/


Is your post serious? What does it have to do with the price of tea in China?

It doesn't change that people grab torches and pitchforks over Nvidia-backed games but won't even bat an eye at an AMD-backed game, when both situations have issues.

It is a terrible mentality for a consumer.


----------



## HeadlessKnight

It is very well known how good old Nvidia will act. Pascal is already around the corner. Don't expect to hear anything from Nvidia until Pascal releases, and then they will market how much faster it is than Maxwell in async compute.
They have no other choice, lol. AMD has already bitten them in the ass.


----------



## zealord

Quote:


> Originally Posted by *xzamples*
> 
> is this a serious post?
> 
> http://gpuopen.com/


he is serious, but he is known for being on the green side.


----------



## NuclearPeace

The real big question is: will Pascal support asynchronous shaders? By the time DX12 games become commonplace, Maxwell will be severely out of date, so I'm not worried about that.


----------



## Sleazybigfoot

Quote:


> Originally Posted by *NuclearPeace*
> 
> The real big question is: will Pascal support asynchronous shaders? By the time DX12 games become commonplace, Maxwell will be severely out of date, so I'm not worried about that.


I might be wrong, but from what I've read, Pascal is full Maxwell. Apparently the current Maxwell cards were cut down because of 28nm.


----------



## Ha-Nocri

Quote:


> Originally Posted by *MonarchX*
> 
> ......


I totally agree. Also, NVIDIA has more software than hardware engineers employed.


----------



## GorillaSceptre

Arguing with Postal is like arguing with a wall. Others and I have tried time and time again to explain the difference between GameWorks and asynchronous compute; he thinks they are one and the same, ignoring the fact that async is open to anyone to use, as long as you have the hardware to support it.







Arguing with those who refuse to see logic will get you nowhere.


----------



## PostalTwinkie

Quote:


> Originally Posted by *zealord*
> 
> he is serious, but he is known for being on the green side.


Yea, I am totally "known" to be on the "green side".

Sorry, when was the last time any of you purchased AMD hardware?





If you want to make stupid and completely wrong personal attacks, then see yourself to the door. You are only proving my point. Instead of taking a smart position on where they should spend their money, people have blind loyalty to a company - a company whose legal goal is to maximize profits and take as much money from you as possible.

Maybe, instead of just throwing insults, you could think critically about things.

Quote:


> Originally Posted by *NuclearPeace*
> 
> The real big question is: will Pascal support asynchronous shaders? By the time DX12 games become commonplace, Maxwell will be severely out of date, so I'm not worried about that.


We don't know yet if Pascal will have the full hardware set to support it.


----------



## EightDee8D

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Yea, I am totally "known" to be on the "green side".
> 
> Sorry, when was the last time any of you purchased AMD hardware?
> 
> 
> 
> 
> 
> If you want to make stupid and completely wrong personal attacks, then see yourself to the door. You are only proving my point. Instead of taking a smart position on where they should spend their money, people have blind loyalty to a company - a company whose legal goal is to maximize profits and take as much money from you as possible.
> 
> Maybe, instead of just throwing insults, you could think critically about things.
> We don't know yet if Pascal will have the full hardware set to support it.


I'm also using nvidia and have spent more on them, but I won't deny I'm an AMD fanboy. So yeah, using both doesn't really matter in that regard.


----------



## airfathaaaaa

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Arguing with Postal is like arguing with a wall. Others and I have tried time and time again to explain the difference between GameWorks and asynchronous compute; he thinks they are one and the same, ignoring the fact that async is open to anyone to use, as long as you have the hardware to support it.
> 
> 
> 
> 
> 
> 
> 
> Arguing with those who refuse to see logic will get you nowhere.


You know what is even funnier? All of those poor souls think async compute is something AMD invented, when it has existed since the '50s....
http://archive.computerhistory.org/resources/text/IBM/Stretch/pdfs/03-07/102633990.pdf


----------



## PostalTwinkie

Quote:


> Originally Posted by *EightDee8D*
> 
> I'm also using nvidia and have spent more on them, but I won't deny I'm an AMD fanboy. So yeah, using both doesn't really matter in that regard.


You shouldn't be a fanboy of either side. You look at the situation you need a product for and see who offers the best solution for that situation. It is that simple.

I went from three 7970s to a single 780 Ti because AMD's frame pacing issues literally made a multitude of games unplayable for me, as I get motion sick extremely easily. Their CrossFire support was also lacking, and the majority of the time I was running just one of the 7970s.

The 780 Ti was on a pretty big sale at the time, so it is what I purchased. The 290X linked above? That was $250 new; yeah, I should have picked up a few at that price. Either way, it was the best card available for the situation. The 380 above? Same deal: for the money, an awesome card.

People buying hardware because they think they are a "fan" of a damn corporation is bad consumerism. Don't buy based on your little fuzzy gut feelings; buy based on the requirements of the situation and the ability of either side to fill that need.

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Arguing with Postal is like arguing with a wall. Others and I have tried time and time again to explain the difference between GameWorks and asynchronous compute; he thinks they are one and the same, ignoring the fact that async is open to anyone to use, as long as you have the hardware to support it.
> 
> 
> 
> 
> 
> 
> 
> Arguing with those who refuse to see logic will get you nowhere.


Please quote me where I said Gameworks and ASC are the same. Really, please.

Again, if you want to have a conversation, keep the lies out of it.

You don't help yourself.


----------



## zealord

Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *PostalTwinkie*
> 
> Yea, I am totally "known" to be on the "green side".
> 
> Sorry, when was the last time any of you purchased AMD hardware?
> 
> 
> 
> 
> 
> If you want to make stupid and completely wrong personal attacks, then see yourself to the door. You are only proving my point. Instead of taking a smart position on where they should spend their money, people have blind loyalty to a company - a company whose legal goal is to maximize profits and take as much money from you as possible.
> 
> Maybe, instead of just throwing insults, you could think critically about things.
> We don't know yet if Pascal will have the full hardware set to support it.






Damn, someone is having a bad day. Calm down, mate.









Maybe I mistook you for a person with a similar avatar.


----------



## Mahigan

Not surprised.


----------



## aweir

This is not that big of a surprise, really. It's well known that AMD GPUs perform better than their equally or slightly higher-priced Nvidia equivalents; with Nvidia you are basically paying for the power efficiency. The real game changer might be AMD's 400 series, which is claimed to provide 2x the performance per watt.

The only surprise is that the GTX costs $150 more than the R9 390.


----------



## Glottis

Quote:


> Originally Posted by *Mahigan*
> 
> Not surprised.


Indeed. It would be awkward if Nvidia did better in an AMD-sponsored game.


----------



## Cyclops

Yep yep yep keep it going fellas. Thread lock coming soon. This is your captain speaking.


----------



## Serandur

Quote:


> Originally Posted by *NuclearPeace*
> 
> That's some incredibly dumb design. Why design a GPU microarchitecture that struggles with contemporary software and then suddenly turns golden by the time it gets replaced?


AMD in general, but especially their GPU division, have had relatively pitiful R&D budgets for several years now and likely saw the cuts coming even much earlier. They realized years ago that they couldn't afford to design multiple microarchitectures in quick succession like Nvidia and made the decision to go with a single microarchitecture that would last them from 2012 all the way into the indefinite future (I expect, at minimum, until the end or near the end of this console generation).

To ensure things developed according to plan, they influenced the industry's development by securing the console designs and using them + Mantle as leverage to push for APIs GCN is better suited for.

That's my take on it. It wouldn't be a smart decision at all if they had the budget to do otherwise, because Nvidia isn't that shortsighted or underfunded; AMD doesn't stand a chance of catching them off guard for more than a brief generation or two, and Nvidia would catch up to any significant AMD advantage too soon to make AMD's current market-share and profit-margin sacrifices worth it.

So, their real goal has been to maintain a competitive front on a shoestring budget, which meant designing a microarchitecture that is not very optimal for existing software but is built for a future in which AMD is still using it. That makes the strategy understandable.


----------



## EightDee8D

Quote:


> Originally Posted by *PostalTwinkie*
> 
> You shouldn't be a fanboy of either side. You look at the situation you need a product for and see who offers the best solution for that situation. It is that simple.
> 
> I went from three 7970s to a single 780 Ti because AMD's frame pacing issues literally made a multitude of games unplayable for me, as I get motion sick extremely easily. Their CrossFire support was also lacking, and the majority of the time I was running just one of the 7970s.
> 
> The 780 Ti was on a pretty big sale at the time, so it is what I purchased. The 290X linked above? That was $250 new; yeah, I should have picked up a few at that price. Either way, it was the best card available for the situation. The 380 above? Same deal: for the money, an awesome card.
> 
> People buying hardware because they think they are a "fan" of a damn corporation is bad consumerism. Don't buy based on your little fuzzy gut feelings; buy based on the requirements of the situation and the ability of either side to fill that need.


Just so you know, being a fanboy doesn't mean I only buy AMD or Nvidia, just that I don't blindly follow one and try to piss on the other company because I don't like it. I should have used a different word instead of fanboy. I know what I'm going to buy based on price/performance and features, just like I always have. But that doesn't change the fact that I like the way AMD does business - open versus closed and proprietary (Nvidia).

You can be a fanboy without being biased, but when you are biased you are always a fanboy, no matter how much you deny it.


----------



## GorillaSceptre

Quote:


> Originally Posted by *PostalTwinkie*
> 
> OCN, cracking me up.
> 
> Please quote me where I said Gameworks and ASC are the same. Really, please.
> 
> Again, if you want to have a conversation, keep the lies out of it.
> 
> You don't help yourself.


Really? So you have never equated what AMD is doing with async to what Nvidia does with GameWorks? Come on, you just did it in this thread.

This is your M.O. You come into every thread related to DX12 and usually start with something similar to your first contribution in this one: "People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days." Always playing the hypocrisy card, even though no one compared this situation to GameWorks.

If you didn't flame-bait every thread with negativity we wouldn't even be discussing this. We have had this conversation many times now. I'll leave it to others to call you out from now on; I'm tired of repeating the same things over and over.


----------



## dragneel

Quote:


> Originally Posted by *NuclearPeace*
> 
> That's some incredibly dumb design. Why design a GPU microarchitecture that struggles with contemporary software and then suddenly turns golden by the time it gets replaced?


I might just be an idiot, but to me it sounds like they've had years in advance to design an architecture that works well with low-level APIs where nVidia has not, which could potentially mean Polaris will be everything we've hoped for.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Serandur*
> 
> AMD in general, but especially their GPU division, have had relatively pitiful R&D budgets for several years now and likely saw the cuts coming even much earlier. They realized years ago that they couldn't afford to design multiple microarchitectures in quick succession like Nvidia and made the decision to go with a single microarchitecture that would last them from 2012 all the way into the indefinite future (I expect, at minimum, until the end or near the end of this console generation).
> 
> To ensure things developed according to plan, they influenced the industry's development by securing the console designs and using them + Mantle as leverage to push for APIs GCN is better suited for.
> 
> That's my take on it. It wouldn't be a smart decision at all if they had the budget to do otherwise because Nvidia aren't that shortsighted or underfunded themselves; AMD don't stand a chance of catching them off guard for more than a single brief generation or two, so Nvidia will have caught up to any significant AMD advantages too soon to make AMD's current marketshare and profit margin sacrifices worth it otherwise.
> 
> So, their real goal has been to maintain a competitive front on a shoestring budget which meant designing a microarchitecture not very optimal for existing software, but for the future where AMD is still using the architecture. That makes the strategy understandable.


On top of the smaller budget...

AMD also bet the farm on highly efficient computing happening a lot sooner than it did. Realistically, AMD was expecting a lot more efficiency in software to go with their hardware than what we ended up getting, which sucks for both us and AMD, because it didn't happen.

The first dual-core processor came out something like 10 years ago, not counting hyper-threading, and multi-core software support is still trash today! On the GPU front? We have been on a fat, bloated pig of a DX for some time now, and it has hurt. What really pisses me off about DX on the PC being so obese is that it went "low level" on the Xbox back in 2001!

AMD expected the market to shift in a particular way, to favor a lot of their design decisions, and it just didn't happen.

Quote:


> Originally Posted by *GorillaSceptre*
> 
> *Really? So you have never equated what AMD is doing with Async to what Nvidia does with GameWorks? Come on.. You just did it in this thread..*


Didn't even read the rest of your lie, because you couldn't go one line of text without lying. My original post was explicitly pointing at AMD Gaming Evolved and Nvidia's support.


----------



## Mahigan

Quote:


> Originally Posted by *Glottis*
> 
> Indeed. It would be awkward if Nvidia did better in an AMD-sponsored game.


It has nothing to do with that. You've got DX12 multi-threaded rendering alleviating the API overhead AMD suffered under DX11, as well as asynchronous compute + graphics.

So basically, this is a true DX12 implementation. You could add conservative rasterization and VXGI and the differences wouldn't be that pronounced.

This is what I've been saying for quite some time, so I'm not surprised.
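The multi-threaded rendering point can be sketched in plain Python: under a DX12-style model, command lists are recorded on several CPU threads and submitted together, instead of funneling every call through one driver thread. The function and object names below are hypothetical stand-ins, not real D3D12 calls:

```python
# Sketch of DX12-style multi-threaded command recording. Pure-Python stand-in;
# record_command_list is a hypothetical placeholder, not a real D3D12 API.

from concurrent.futures import ThreadPoolExecutor

def record_command_list(scene_chunk):
    """Stand-in for recording one command list: encode each draw call."""
    return [f"draw({obj})" for obj in scene_chunk]

scene = [f"object_{i}" for i in range(16)]
chunks = [scene[i::4] for i in range(4)]  # split the scene across 4 recorders

# DX11 mental model: one thread records everything, in order.
single_threaded = record_command_list(scene)

# DX12 mental model: each thread records its own command list independently...
with ThreadPoolExecutor(max_workers=4) as pool:
    lists = list(pool.map(record_command_list, chunks))

# ...and the finished lists are submitted to the GPU queue together.
submitted = [cmd for command_list in lists for cmd in command_list]

# Same total work either way; the DX12 path just spends less serial CPU time.
assert sorted(submitted) == sorted(single_threaded)
print(f"{len(lists)} command lists recorded, {len(submitted)} draws submitted")
```

The GIL means this toy version doesn't actually run faster, of course; it only illustrates the structure that lets a real engine spread submission cost across cores.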


----------



## Glottis

Quote:


> Originally Posted by *Mahigan*
> 
> It has nothing to do with that. You've got DX12 multi-threaded rendering alleviating the API overhead AMD suffered under DX11, as well as asynchronous compute + graphics.
> 
> So basically, this is a true DX12 implementation. You could add Conservative Rasterization and VXGI and the differences wouldn't be that pronounced.
> 
> This is what I've been saying for quite some time. So I'm not surprised.


This has everything to do with the game being an AMD title. You aren't fooling anyone here.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Glottis*
> 
> This has everything to do with the game being an AMD title. You aren't fooling anyone here.


But it doesn't. Yes, "sponsored" titles are normally going to have some bias in one direction or another - thus my comment on OCN hilarity in my first post.

However;

AMD is finally getting to a place where the API isn't a big issue, or at least not as big of an issue. It doesn't matter how much money AMD threw at this title; that would only optimize it to a certain extent. The game is just taking advantage of new capabilities we have long needed.

See my last post above for more on that.


----------



## Mahigan

Quote:


> Originally Posted by *Glottis*
> 
> This has everything to do with the game being an AMD title. You aren't fooling anyone here.


You're entitled to your opinion. It seems most people don't share it, probably because folks are coming to realize just what DX12 offers: by alleviating software bottlenecks (driver-wise), the hardware gets to shine.


----------



## Devnant

Quote:


> Originally Posted by *Mahigan*
> 
> It has nothing to do with that. You've got DX12 multi-threaded rendering alleviating the API overhead AMD suffered under DX11, as well as asynchronous compute + graphics.
> 
> So basically, this is a true DX12 implementation. You could add Conservative Rasterization and VXGI and the differences wouldn't be that pronounced.
> 
> This is what I've been saying for quite some time. So I'm not surprised.


Never mind the kids, Mahigan. You've been right all along, ever since you made your first analysis about the lack of async compute on Maxwell architectures.


----------



## Glottis

Quote:


> Originally Posted by *Mahigan*
> 
> You're entitled to your opinion. Seems most people don't share it. Probably because folks are coming to realize just what DX12 offers. By alleviating software bottlenecks (driver wise) the hardware gets to shine. AMD simply has the more powerful GPU architecture.


This thread is glorifying an AMD-sponsored title's performance running on AMD hardware, and the big majority of posters here are AMD owners with a deep hatred of Nvidia. This was never an objective or reasonable discussion to begin with.


----------



## caswow

Why are people equating AMD-"sponsored" games with GW-plagued games? Typically, AMD-sponsored games run well on all hardware.
Quote:


> Originally Posted by *Glottis*
> 
> This thread is glorifying an AMD-sponsored title's performance running on AMD hardware, and the big majority of posters here are AMD owners with a deep hatred of Nvidia. This was never an objective or reasonable discussion to begin with.


According to you, what does an AMD-sponsored game mean? A secret nvidia path that runs badly?


----------



## ZealotKi11er

Quote:


> Originally Posted by *Glottis*
> 
> This has everything to do with the game being an AMD title. You aren't fooling anyone here.


That is very, very true, because if it were GW it would not have async or DX12... get it?


----------



## Assirra

Quote:


> Originally Posted by *zealord*
> 
> he is serious, but he is known for being on the green side.


Uhm, I think you are not one to talk, given your thread title, which is actually against the rules.
You made up your own title and used "Various" in it to camouflage that it is actually about only one site.


----------



## zealord

Quote:


> Originally Posted by *Assirra*
> 
> Uhm, I think you are not one to talk, given your thread title, which is actually against the rules.
> You made up your own title and used "Various" in it to camouflage that it is actually about only one site.


I wanted to add more benchmarks later on, but couldn't find any other than PCGH.

Do you forgive me?


----------



## raghu78

Quote:


> Originally Posted by *Mahigan*
> 
> You're entitled to your opinion. Seems most people don't share it. Probably because folks are coming to realize just what DX12 offers. By alleviating software bottlenecks (driver wise) the hardware gets to shine.


People were saying AOTS was an exception. Now Hitman joins that list, and soon any game that uses async compute will join it. The funny part is that some next-gen console games (and thus game engines) are using async compute much more aggressively. We are already seeing gains of 10% on the R9 390 and R9 390X with async compute and DX12, and gains of up to 20% are possible with async compute according to AMD. I think we will see such gains in the near future, too. The key question here is whether Pascal has a proper hardware implementation of asynchronous compute. We will know in a few months.
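The quoted 10-20% figures translate directly into frame time. A quick back-of-the-envelope, applying those percentages to a hypothetical 60 fps baseline (the baseline is an assumption for illustration, not from the benchmarks in this thread):

```python
# What a 10% or 20% async-compute fps uplift means in frame time,
# starting from a hypothetical 60 fps baseline.

def frame_time_ms(fps):
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

base_fps = 60.0
for gain in (0.10, 0.20):
    new_fps = base_fps * (1 + gain)
    saved = frame_time_ms(base_fps) - frame_time_ms(new_fps)
    print(f"+{gain:.0%}: {base_fps:.0f} -> {new_fps:.0f} fps, "
          f"{saved:.2f} ms shaved off each frame")
```

The saving comes from compute work that overlaps with graphics instead of waiting behind it, which is why the gains only show up on hardware that can actually schedule the two queues concurrently.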


----------



## headd

Quote:


> Originally Posted by *zealord*
> 
> Yeah, it's unreal, considering the 980 Ti is roughly 65% more expensive but only 7-8% faster than the 390X across all resolutions.
> 
> It is just one game though.
> 
> Looking forward to more DX12 games


And it's a 1400MHz GTX 980 Ti. A stock GTX 980 Ti will be slower than the 390X.
A stock 980 Ti will be at non-X 390 level, so a $650 GPU gets beaten by a $300 GPU.


----------



## zealord

Quote:


> Originally Posted by *headd*
> 
> And it's a 1400MHz GTX 980 Ti. A stock GTX 980 Ti will be slower than the 390X.


Yeah, but Nvidia cards do overclock really well. Testing a 980 Ti at stock would be a waste, imho.









I wish reviewers would test cards at:

1) stock versus stock

and

2) max OC versus max OC


----------



## raghu78

Quote:


> Originally Posted by *Glottis*
> 
> This thread is glorifying an AMD-sponsored title's performance running on AMD hardware, and the big majority of posters here are AMD owners with a deep hatred of Nvidia. This was never an objective or reasonable discussion to begin with.


Nobody is hating on Nvidia. Do not come in with your bias and spoil the thread. The issue of performance gains from DX12 and async compute on AMD cards is not new; AOTS already showed it's possible.


----------



## airfathaaaaa

Quote:


> Originally Posted by *caswow*
> 
> Why are people equating AMD-"sponsored" games with GW-plagued games? Typically, AMD-sponsored games run well on all hardware.
> According to you, what does an AMD-sponsored game mean? A secret nvidia path that runs badly?


Probably AMD found a way to turn GameWorks against nvidia,
or it's the new boldFX that really cripples the nvidia cards,
or that insane use of water that nvidia cards can't handle.


----------



## MerkageTurk

Because AMD's DX12 support is hardware-based, while nVidia believes they can get away with implementing it via software and releasing new hardware for consumer bots to purchase.


----------



## zealord

Quote:


> Originally Posted by *Glottis*
> 
> This thread is glorifying an AMD-sponsored title's performance running on AMD hardware, and the big majority of posters here are AMD owners with a deep hatred of Nvidia. This was never an objective or reasonable discussion to begin with.


Sorry that we troubled you with facts.

Hitman has no special AMD effects that would somehow impact Nvidia cards negatively. It is just a game making use of DX12.

The benchmark numbers speak for themselves, whether you like them or not.


----------



## Glottis

Did any of you even analyze the benchmarks this thread is all about?

So according to some know-it-alls here, Nvidia has a big advantage over AMD in DX11, right? Then how come the lower-end Nvidia cards perform so poorly in Hitman's DX11 mode? Did the developers somehow magically enable async in DX11 mode? No, they didn't. What the DX11 performance proves here is that the game is very, very biased towards AMD GPUs, and that's all.

We can't draw conclusions about overall DX12 performance until about 1-2 years from now, when we will have had tens of DX12 games tested. What we have so far is this: one neutral, now-cancelled title (Fable) ran about the same on Nvidia and AMD; one slightly Nvidia-leaning title (only one GameWorks effect, AFAIK), Gears, ran much better on Nvidia hardware; and two AMD titles, Hitman and Ashes, favor AMD hardware.


----------



## rainzor

Once again, Hitman: Absolution:

No DX12 magic here, yet the 7870 almost matches the GTX 680. Not hard to see the pattern here, is it?


----------



## xenophobe

Quote:


> Originally Posted by *zealord*
> 
> In 1080p the R9 390 (1010 mhz) is 15% faster than the GTX 980 (1316 mhz)
> The GTX 980 Ti (1380 mhz) is 16% faster than the R9 390 and 33% faster than the GTX 980.
> 
> *In 4K the R9 390 (1010 mhz) is 33% faster than the GTX 980 (1316 mhz), with a 70% higher min framerate!*
> The 980 Ti at 4K is still 15% faster than the R9 390.


LOL... So how would that R9 390 compare to my run-of-the-mill MSI 980 at 1528MHz? Can't EVGA SCs hit 1700-1800MHz?

How does the 390 compare against something like that?


----------



## JunkoXan

Not sure if Glottis is serious about this; it makes no sense to me...


----------



## sugarhell

Quote:


> Originally Posted by *Glottis*
> 
> Did any of you even analyze the benchmarks this thread is all about?
> 
> So according to some know-it-alls here, Nvidia has a big advantage over AMD in DX11, right? Then how come the lower-end Nvidia cards perform so poorly in Hitman's DX11 mode? Did the developers somehow magically enable async in DX11 mode? No, they didn't. What the DX11 performance proves here is that the game is very, very biased towards AMD GPUs, and that's all.
> 
> We can't draw conclusions about overall DX12 performance until about 1-2 years from now, when we will have had tens of DX12 games tested. What we have so far is this: one neutral, now-cancelled title (Fable) ran about the same on Nvidia and AMD; one slightly Nvidia-leaning title (only one GameWorks effect, AFAIK), Gears, ran much better on Nvidia hardware; and two AMD titles, Hitman and Ashes, favor AMD hardware.


You're not gonna sleep tonight with these results, right?

AMD's problems with GameWorks games: many.
Nvidia's problems with Gaming Evolved games: the Tomb Raider reboot, for 3 days, and it wasn't for performance reasons.

The Hitman games have always favored compute over pure rendering methods.

And please don't put Gears in the comparison. It's a marketing game made just to market DX12, and it's a mess.


----------



## ku4eto

Quote:


> Originally Posted by *JunkoXan*
> 
> Not sure if Glottis is serious about this; it makes no sense to me...


Probably doesn't make much sense to him either.
Question: how did these Germans get their hands on Hitman before release?


----------



## JunkoXan

Quote:


> Originally Posted by *ku4eto*
> 
> Quote:
> 
> > Originally Posted by *JunkoXan*
> > 
> > Not sure if Glottis is serious about this; it makes no sense to me...
> 
> Probably doesn't make much sense to him either.
> Question: how did these Germans get their hands on Hitman before release?

Best not to ask such questions...


----------



## infranoia

Quote:


> Originally Posted by *Glottis*
> 
> Did any of you even analyze benchmarks that this thread is all about?
> 
> *So according to some know-it-alls here Nvidia has big advantage over AMD in DX11, right?* Then how come in Hitman DX11 mode lower end Nvidia cards perform so poorly? Did developers somehow magically enable Async in DX11 mode? No, they didn't. What DX11 performance proves here is that game is very very biased towards AMD GPUs, and that's all.
> 
> We can't draw conclusions about overall DX12 performance until about 1-2 years later when we had tens of DX12 games tested. What we have so far is this: 1 neutral now cancelled title Fable ran about the same on Nvidia and AMD, 1 slightly Nvidia title (only 1 gameworks effect afaik) Gears ran much better on Nvidia hardware, and 2 AMD titles Hitman and Ashes favor AMD hardware.


Not anymore. You're starting from a wrong assumption. AMD's DX11 performance has been steadily improving since the Hawaii launch. Look around; at every price point AMD matches or beats the equivalent card, with the exception of the 980 Ti.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Glottis*
> 
> this has everything to do with game being a AMD title. you aren't fooling anyone here.


It's the first game that fully uses DX12.
Quote:


> Originally Posted by *Glottis*
> 
> Did any of you even analyze benchmarks that this thread is all about?
> 
> So according to some know-it-alls here Nvidia has big advantage over AMD in DX11, right? Then how come in Hitman DX11 mode lower end Nvidia cards perform so poorly? Did developers somehow magically enable Async in DX11 mode? No, they didn't. What DX11 performance proves here is that game is very very biased towards AMD GPUs, and that's all.
> 
> We can't draw conclusions about overall DX12 performance until about 1-2 years later when we had tens of DX12 games tested. What we have so far is this: 1 neutral now cancelled title Fable ran about the same on Nvidia and AMD, 1 slightly Nvidia title (only 1 gameworks effect afaik) Gears ran much better on Nvidia hardware, and 2 AMD titles Hitman and Ashes favor AMD hardware.


It's very simple: as games become more demanding, devs are turning to compute for more performance. Maxwell/Maxwell 2.0, compared to AMD's GCN, is lacking both in DX12 and in raw compute power.


----------



## zealord

Quote:


> Originally Posted by *xenophobe*
> 
> LOL.... So how would that R9 390 compare to my run of the mill MSI 980 @1528mhz? Can't EVGA SC's hit 1700-1800?
> 
> How does the 390 compare against something like that?


Yeah, you are making a lot of sense.

Maybe we should compare a reference-design R9 290X in silent mode that throttles to 700 MHz to a GTX 980 Ti Classified zombie-modded card that reaches 2100 MHz?

Would people like you be happy then?

Jesus Christ...

Edit: Someone is constantly +repping me. Please stop!


----------



## GorillaSceptre

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Please quote me where I said Gameworks and ASC are the same. Really, please.
> 
> Again, if you want to have a conversation, keep the lies out of it.
> 
> You don't help yourself.


Quote:


> Originally Posted by *PostalTwinkie*
> 
> Didn't even read the rest of your lie, because you couldn't go one line of text without lying. My original post was explicitly pointing at AMD Gaming Evolved and Nvidia's support.


So I'm lying, am I?

Let's see whether you've tried to compare async compute and GameWorks.
Quote:


> Originally Posted by *PostalTwinkie*
> 
> AMD is pushing ASC as their exclusive feature, just like Nvidia pushes Phsyx. Right now, regardless of why, ASC and Phsyx are both features that are "exclusive" to each side. Except on OCN one is OK, the other isn't.


You've done it on more than one occasion.

Let's also see how you started off in other threads about DX12. From the other Hitman DX12 thread you and I argued in, about the same damn thing, mind you...
Quote:


> Originally Posted by *PostalTwinkie*
> 
> I find it interesting that people can spend hours decrying Nvidia and their GameWorks and TWIMTBP campaign, but AMD does their version, and everyone gets excited. They are both closed systems....if one is fine, so is the other.


Why come into threads screaming hypocrisy and being so negative? It's yet another occasion where you've come in screaming hypocrisy where there was none, once again derailing a thread. I'm not going to bother replying anymore; carry on.


----------



## PostalTwinkie

Quote:


> Originally Posted by *raghu78*
> 
> People were telling AOTS is an exception. Now hitman joins that list. Soon any game which uses async compute will join that list. The funny part is some of the next gen console games (and thus game engines) are using async compute in a much more aggressive manner. We are already seeing gains of 10% on R9 390 and R9 390X with async compute and DX12. Gains of upto 20% are possible with async compute according to AMD. I think we will see such gains too in the near future. The key question here is does Pascal have a proper implementation of asynchronous compute in hardware. We will know in a few months.


Previous developer comments said that what we are calling ASC is being used in consoles to an "extreme" degree compared to AoTS. It will be super interesting to see how it plays out for Nvidia if, for some reason, Pascal can't do it.


----------



## Kana-Maru

Quote:


> Originally Posted by *rainzor*
> 
> Once again, Hitman Absolution
> 
> 
> Spoiler: Warning: Spoiler!
> 
> No DX12 magic here, yet 7870 almost matches gtx680. Not hard to see the pattern here is it.


You know dang well those were from "early" benchmarks when the game first released. Shortly afterwards, Nvidia optimized their drivers. Don't even try to pull the "Gaming Evolved" sponsor-title card. An Nvidia TWIMTBP title with GameWorks is much harder for AMD to optimize for than a Gaming Evolved title [or any title backed by AMD or using AMD tech] is for Nvidia.

I wish you guys would stop bringing up Hitman: Absolution and/or Tomb Raider 2013/TressFX. Nvidia fixed those issues very quickly after release, and Nvidia cards performed well. There was nothing preventing Nvidia from optimizing their drivers. AMD didn't have any black boxes in place for Nvidia. Patches and NV drivers had no issues getting out the door.


----------



## raghu78

Quote:


> Originally Posted by *ku4eto*
> 
> Probably doesn't make much sense to him as well.


That was a good one.


----------



## k3rast4se

I can't believe I'm actually writing this, since I was a huge AMD fan for the last 3 years (7950, R9 290, recently switched to a GTX 970). On paper, AMD cards always looked better than the competition (512-bit memory bus, 3 GB instead of 2 GB, better DirectX 12 performance, async...), but in the end it doesn't matter. Nvidia cards have less stutter, run better, and are more optimized in the games that MATTER. Most gamers don't give a crap about Ashes of the Singularity or Hitman (not a huge AAA title). When it comes to top AAA games, Nvidia is always more stable, with drivers ready for launch and more features (even if I think GameWorks is bad for the industry). In the end, it's a question of market share. Why would WB, Ubi, etc. optimize for and work with AMD when 80% of their CUSTOMERS have an Nvidia card? AMD has a huge lead in DirectX 12 currently, but unfortunately it doesn't matter right now, because the games we play are DirectX 11. And by the time AAA titles are DX12, Nvidia will most likely have cards optimized for those games.

P.S.: I don't want/like Nvidia's dominance and market share of the last 2 years, since it's bad for us gamers/customers.


----------



## PostalTwinkie

Quote:


> Originally Posted by *GorillaSceptre*
> 
> So I'm lying am i?


Or lack basic comprehension, yes.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> Lets see if you've tried to compare Async and GameWorks.


Context, son, context. That statement was in response to the discussion that ASC was exclusive to AMD because "Nvidia lacked the hardware". It was also a follow-up to an AMD-presented slide that showed ASC as their own exclusive feature.

As I was very clear in that statement, I wasn't saying that they were the same thing, just that each (separate and different) technology was being pushed as exclusive by its side. Because it was/is; it is that simple.

There is a difference between claiming exclusivity for something and those somethings being the same thing (which is what you think I said). But keep trying. Again, my comments were speaking directly to the claims of exclusivity, not to whether those exclusive pieces were the same.

Quote:


> Originally Posted by *GorillaSceptre*
> 
> You've done it on more than one occasion.
> 
> Lets also see how you started off in other threads about DX12; From the other Hitman-DX12 thread me and you argued in, about the same damn thing mind you..
> Why come into threads screaming hypocrisy and being so negative? It's yet another occasion where you've come in screaming hypocrisy where there was none.. Once again derailing a thread, i'm not going to bother replying anymore carry on.


Not sure where you were going with that last quote of mine, as it has nothing to do with ASC. In fact, I am being very explicit in that statement about comparing Nvidia's and AMD's respective game sponsorship programs.


----------



## Devnant

Well, I'm actually curious about what NVIDIA's PR will say this time. When the Ashes alpha benchmark came out, the first DX12 benchmark, NVIDIA said Ashes was not representative of DX12. So will they say Hitman isn't either? Maybe every single game that comes out now won't be representative of DX12 for NVIDIA?


----------



## Mahigan

Quote:


> Originally Posted by *headd*
> 
> And it's a 1400 MHz GTX 980 Ti. A stock GTX 980 Ti will be slower than a 390X.
> A stock 980 Ti will be at non-X 390 level. So a $650 GPU gets beaten by a $300 GPU.


And that's with NVIDIA game-ready drivers, because GPUOpen allows them full access to the source code. Hence the gains of 6 fps on the GTX 980 Ti.

Polaris will refine this further. NVIDIA needs to revamp their entire architecture and designate separate L1 caches for rendering and compute in each SMM. NVIDIA will also need a larger L2 cache, as their Maxwell architecture keeps spilling more and more work into the L2 as they add more SMMs (hence the added arithmetic latency going from GTX 970 --> GTX 980 --> GTX 980 Ti --> Titan X). Finally, NVIDIA will need ACE-like units. I'm thinking Volta will rectify these issues, not Pascal.

Since game developers are coding in more compute jobs than before, due to the Xbox One and PS4 APU design constraints on the rendering pipeline, we've been seeing more and more traditional rendering work processed in the compute pipeline. It's been part of all the SIGGRAPH and GDC talks for over two years now.

We've witnessed an increase in DX11 performance for AMD's GCN architecture, relative to NVIDIA's Kepler and Maxwell architectures, in newer DX11 titles derived from shared console/PC releases. This is why.

Now couple the higher ratio of compute to rendering jobs with the alleviation of DX11 API overhead by DX12/Vulkan and asynchronous compute + graphics, and you have the situation we see unravelling before our eyes.

I realize this upsets people, but it was only a matter of time before GCN's architectural strengths became apparent.

Now for Deus Ex: Mankind Divided.
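To make the asynchronous compute + graphics point concrete, here's a toy scheduling model (all numbers are invented for illustration; a real GPU scheduler is far more complex). Graphics passes often leave shader cores partly idle (a shadow-map pass can be rasterizer-bound, for example), and an async-capable GPU can fill those gaps with independent compute jobs instead of running them serially after the frame:

```python
def frame_time_serial(gfx_passes, compute_jobs):
    """Run every compute job after all the graphics work finishes."""
    return sum(t for t, _ in gfx_passes) + sum(compute_jobs)

def frame_time_async(gfx_passes, compute_jobs):
    """Overlap compute with graphics passes that have idle shader capacity.

    Each graphics pass is (duration_ms, idle_frac), where idle_frac is the
    fraction of the pass during which compute can run concurrently 'for free'.
    """
    free = sum(t * idle for t, idle in gfx_passes)          # overlap budget
    leftover = max(0.0, sum(compute_jobs) - free)           # compute that didn't fit
    return sum(t for t, _ in gfx_passes) + leftover

# Invented frame: (duration_ms, idle_fraction) per graphics pass
gfx = [(4.0, 0.5), (6.0, 0.1), (3.0, 0.6)]
# Invented compute jobs (ms): e.g. light culling, SSAO, particle sim
compute = [1.5, 1.0, 0.8]

serial = frame_time_serial(gfx, compute)     # 16.3 ms
overlapped = frame_time_async(gfx, compute)  # 13.0 ms: all compute fit the gaps
```

The more compute work a frame carries relative to its graphics passes (the trend Mahigan describes in console-derived titles), the bigger the gap between the two numbers.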


----------



## sugarhell

Quote:


> Originally Posted by *k3rast4se*
> 
> I can't believe i'm actually writing this since I was a huge amd fan for the last 3 years (7950,r9 290, recently switch to gtx 970). On paper, amd video always looked better than competition (512-bit memory bus, 3gb instead of 2gb, better directx12 performance, async ....) but at the end it doesn't matter. Nvidia video card have less stutter, run better and are more optimized in games that MATTER. Most gamers don't give a crap about ashes of sing., hitman (not a huge AAA title). When it come to top AAA games, nvidia is always more stable, driver ready for the launch and more feature (even if I think gameworks is bad for the industry). Because at the end, it's a question of market share. Why would WB,Ubi,... optimized and work with AMD when 80% of theirs CUSTOMER have a Nvidia card. AMD have a huge lead in Directx12 currently, but unfortunately it doesn't matter right now because the games we play are Directx11. And by the time AAA title are DX12, most likely nvidia will have card optimized for these games.
> 
> p.s.: I don't want/like nvidia dominance and market share of the last 2 years since it's bad for us, gamer/customer.


----------



## criminal

Quote:


> Originally Posted by *PostalTwinkie*
> 
> OCN, cracking me up.
> 
> People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days.
> 
> To those people, let me use your own complaints and criticism; if this was an Nvidia backed title, with Nvidia leading, you would all be crying about AMD not getting time to optimize drivers.
> 
> The double standards are just hard baked into some. Makes OCN look really bad.


Hardly the same thing. GameWorks is horrible and closed; DX12/async is open. I see no double standards in this thread, only truth.

Quote:


> Originally Posted by *PostalTwinkie*
> 
> You shouldn't be a Fanboy of either side. You look at the situation you need a product for, and see who offers the best solution for that situation, it is that simple.
> 
> I went from three 7970s to a single 780 Ti, because AMD's frame pacing issues literally made a multitude of games unplayable for me. As I become motion sick extremely easy. Their CrossFire support was also lacking, and the majority of the time I was running just one of the 7970s.
> 
> The 780 Ti was on a pretty big sale at the time, so it is what I purchased. The 290X linked above? That was $250 new, yea I should have picked up a few at that price. Either way, it was the best card available for the situation. The 380 above? Same deal, for the money, awesome card.
> 
> People buying hardware because they think they are a "fan", to a damn Corporation, is bad consumerism. Don't buy based on your little fuzzy gut feelings, buy based on the requirements of the situation and the ability of either side to fill that need.
> Please quote me where I said Gameworks and ASC are the same. Really, please.
> 
> Again, if you want to have a conversation, keep the lies out of it.
> 
> You don't help yourself.


Says the guy who buys a G-Sync monitor and locks himself into the Nvidia ecosystem.

Also, I remember the thread where you went on and on comparing GameWorks and ASC as if they were the same type of thing.


----------



## narmour

Any on Fury X vs 980ti yet??


----------



## zealord

Quote:


> Originally Posted by *criminal*
> 
> Hardly the same thing. Gameworks is horrible and closed and DX12/async is open. I see no double standards in this thread, only truth..


That is true.

I have to be honest, though. I picked a somewhat provocative thread title. It is the truth, because the 390 does beat an overclocked GTX 980, but it still makes people a little bit angry.

Edit: Oh, some mod even changed my thread title. Maybe it's for the better. Less entertaining, but better for peace, I guess!


----------



## PostalTwinkie

Quote:


> Originally Posted by *criminal*
> 
> Hardly the same thing. Gameworks is horrible and closed and DX12/async is open. I see no double standards in this thread, only truth.


Why do you refuse to accept that I have compared Nvidia's sponsorship program to AMD's, and not to DX12 and the ability to handle an ASC situation? ASC isn't a program or a gimmick, but a way of doing things.

Once again, for those who can't read: I have directly compared Nvidia's sponsored titles, under their banner, to AMD's sponsored titles.

Quote:


> Originally Posted by *criminal*
> 
> Says the guy who buys a gsync monitor and locks himself into Nvidia ecosystem.
> 
> Also I remember the thread where you went on and on comparing Gameworks and ASC as if they were the same type of thing.


Yup, I am totally locked into Nvidia with my G-Sync display; never mind that my last several GPU purchases (links on page 4) have been AMD.

Keep up the ad homs, because they really prove your point.


----------



## raghu78

Quote:


> Originally Posted by *Devnant*
> 
> Well, I'm actually curious about what NVIDIA's PR will say this time. When Ashes's Alpha benchmark came out, the first DX12 benchmark, NVIDIA said Ashes was not representative of DX12. So will they say Hitman now isn't either? Maybe every single game that comes out now won't be representative of DX12 for NVIDIA?


Until Nvidia has async compute in hardware (which I assume will be Pascal), they will keep coming up with some excuse. Then, once they have the feature, their marketing team will spin it to Maxwell owners as the most important, must-have feature on the next-gen cards.


----------



## GorillaSceptre

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Or lack basic comprehension, yes.


You can move the goalposts and argue semantics all you want. We're on a forum; any one of these members can look through your post history and see you damage-controlling anything to do with Nvidia and DX12.

My fault for taking the bait as usual... I need to learn how to internetz.


----------



## PostalTwinkie

Quote:


> Originally Posted by *GorillaSceptre*
> 
> You can move the goal posts and argue semantics all you want. We're on a forum, any one of these members can look through your post history and see you damage control anything to do with Nvidia and DX12..
> 
> My fault for taking the bait as usual.. I need to learn how to internetz.


Sorry, no, you made a claim, and couldn't back it. That isn't my fault. You attempted to pull comments completely out of context, to back your position, and you failed.

Quote:


> Originally Posted by *criminal*
> 
> I hope you pass on the gsync monitor this time, if/when you go 144/4k!


I will purchase whichever display provides me with the best and most comfortable experience. I went G-Sync because it was the only option that accomplished what I needed, with FreeSync falling behind by at least a year.

When 144 Hz/4K comes out, I will make my decision based on the same logic: which product is best at filling the need?


----------



## Semel

Quote:


> Originally Posted by *PostalTwinkie*
> 
> OCN, cracking me up.
> 
> People in a thread about an AMD Gaming Evolved title, complaining about Nvidia GameWorks. Jesus, people really are nuts these days.
> 
> To those people, let me use your own complaints and criticism; if this was an Nvidia backed title, with Nvidia leading, you would all be crying about AMD not getting time to optimize drivers.
> 
> The double standards are just hard baked into some. Makes OCN look really bad.


It's completely nuts to compare AMD GE titles and Gimpworks titles in terms of performance on Nvidia/AMD, for reasons and facts obvious to any unbiased person...

And this is coming from someone who has played AMD GE titles on Nvidia and Gimpworks titles on AMD, who experienced first-hand the glorious performance of open-source TressFX on both cards and the disastrous Gimpworks hair on the same cards... I can go on and on. Comparing AMD GE titles to Gimpworks titles? Please...


----------



## k3rast4se

Quote:


> Originally Posted by *Semel*
> 
> It's completely nuts comparing AMDGE titles and GImpworks titles in terms of performance on nvidia\amd for obvious to any unbiased men reasons\facts..
> 
> And this is me saying,who played AMGE titles on nvidia and Gimpworks titles on AMD. Who experienced first hand open source tressfx glorious performance on both cards card and disastrous gimpworks hair on on the same cards... I can go on and on.. comparing AMGE titles to Gimpworks titles?Please.....


As I said earlier, TressFX is more useless crap that didn't matter, while HBAO+ and PhysX DO matter and improve the experience.


----------



## chuy409

The Hitman series has historically favored AMD. Is this news?


----------



## zealord

I am delighted to see so many reasonable people on OCN disagreeing with PostalTwinkie. I am glad people realize the difference between AMD-sponsored games, which are open for all GPU vendors to optimize, and GameWorks titles that impact performance negatively.


----------



## sugarhell

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Why do you refuse to accept that I have compared Nvidia's sponsorship program to that of AMD's, and not to that of DX12 and the ability to handle an ASC situation? ASC isn't a program, or a gimiick, but a way of doing things.
> 
> Once again, for those that can't read; I have directly compared Nvidia's sponsored titles, under their banner, to those of AMD's sponsored titles.
> Yup, I am totally locked in to Nvidia with my G-Sync display, nevermind my last several GPU purchases (links on page 4) have been AMD.
> 
> Keep up the adhoms, because they really prove your point.


Why do you use this excuse all the time?

For my work I use a Quadro, because I need to. For my HTPC I use a 750 Ti.

But I can't claim that I don't have a bias. Everyone has a bias by nature, and it's not a bad thing if it's the product of correct reasoning.

Buying AMD doesn't mean you are neutral. You have a bias, and it's obvious when you compare async with GameWorks (for real?).

Don't fight to be so neutral.


----------



## criminal

Quote:


> Originally Posted by *sugarhell*
> 
> Why do you use this excuse all the time?
> 
> For my work i use quadro. Because i need to. Or for my htpc i use a 750ti.
> 
> But i cant claim that i dont have a bias. All the people have a bias from their nature. And it is not a bad thing if it is a product of correct reasoning.
> 
> Because you buy amd doesnt mean you are neutral. Because you have a bias and its obvious when you compare async with gameworks. (for real?)
> 
> Dont fight to be so neutral


Agree!


----------



## PostalTwinkie

Quote:


> Originally Posted by *chuy409*
> 
> The hitman series has been historically favored AMD. Is this news?


The news isn't AMD outperforming Nvidia. It is the performance gains seen with DX12, especially on AMD hardware. Massive gains!


----------



## GorillaSceptre

Quote:


> Originally Posted by *sugarhell*
> 
> Why do you use this excuse all the time?
> 
> For my work i use quadro. Because i need to. Or for my htpc i use a 750ti.
> 
> But i cant claim that i dont have a bias. All the people have a bias from their nature. And it is not a bad thing if it is a product of correct reasoning.
> 
> Because you buy amd doesnt mean you are neutral. Because you have a bias.
> 
> For real dont fight to be so neutral, because you are not


Quote:


> Originally Posted by *zealord*
> 
> I am delighted to see so many reasonable people on OCN disagreeing with PortalTwinkie. I am glad people realize the difference between AMD sponsored games that are open for all GPU vendors to be optimized and Gamework titles that impact performance negatively


Quote:


> Originally Posted by *Semel*
> 
> It's completely nuts comparing AMDGE titles and GImpworks titles in terms of performance on nvidia\amd for obvious to any unbiased men reasons\facts..
> 
> And this is me saying,who played AMGE titles on nvidia and Gimpworks titles on AMD. Who experienced first hand open source tressfx glorious performance on both cards card and disastrous gimpworks hair on on the same cards... I can go on and on.. comparing AMGE titles to Gimpworks titles?Please.....


Quote:


> Originally Posted by *criminal*
> 
> Hardly the same thing. Gameworks is horrible and closed and DX12/async is open. I see no double standards in this thread, only truth.
> Says the guy who buys a gsync monitor and locks himself into Nvidia ecosystem.
> 
> Also I remember the thread where you went on and on comparing Gameworks and ASC as if they were the same type of thing.


Quote:


> Originally Posted by *PostalTwinkie*
> 
> Sorry, no, you made a claim, and couldn't back it. That isn't my fault. You attempted to pull comments completely out of context, to back your position, and you failed.


If you've ever felt like everyone in the world is crazy but you, maybe it's the other way around.


----------



## Kana-Maru

Quote:


> Originally Posted by *k3rast4se*
> 
> As I said earlier, TressFX, another useless crap that didn't matter. While HBAO+ and Physx DOES matter and improve the experience.


Lol, you can't be serious. TressFX still looks great in 2015. I was playing some TR 2013 last night at very high resolutions; much better than the standard hair. PhysX wasn't even created by Nvidia. I can live without HBAO/HBAO+, since there are many alternatives out there, and there will continue to be newer open-source alternatives.


----------



## k3rast4se

https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support
Quote:


> Originally Posted by *Kana-Maru*
> 
> Lol can't be serious. TressFX still looks great in 2015. I was playing some TR 2013 last night at very high resolutions. Much better than the standard hair. PhysX wasn't even created by Nvidia. I can live without HBAO\HBAO+ since there are many alternatives out there and there will continue to be newer open source alternatives.


You would choose TressFX over PhysX?!?


----------



## PostalTwinkie

Quote:


> Originally Posted by *sugarhell*
> 
> Why do you use this excuse all the time?
> 
> For my work i use quadro. Because i need to. Or for my htpc i use a 750ti.
> 
> But i cant claim that i dont have a bias. All the people have a bias from their nature. And it is not a bad thing if it is a product of correct reasoning.
> 
> Because you buy amd doesnt mean you are neutral. Because you have a bias and its obvious when you compare async with gameworks. (for real?)
> 
> Dont fight to be so neutral


Again, I offer the same challenge to you as Gorilla.

Quote me where I said ASC and GameWorks are the same thing.


----------



## GoLDii3

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Context, son, context. That statement was in response to the discussion that ASC was exclusive to AMD, because "Nvidia lacked the hardware". It was also a followup statement to an AMD presented slide, that showed ASC as their own exclusive support.
> .


But it is. As of now, no nVidia GPU supports hardware-based async compute, unlike AMD's GCN.

So even if tomorrow nVidia develops software-based async compute, AMD will still be free to claim

"Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs"

because nVidia's implementation will be nothing but emulation, unless they decide to build a new architecture.


----------



## MonarchX

This is a continuation of my previous post here - http://www.overclock.net/t/1594186/various-hitman-dx12-benchmarks-r9-390-much-faster-than-ocd-gtx-980/70#post_24977140

*Part II*

*NVidia*

This is a short recap of the previous post. As I mentioned in Part I:
- AMD was more hardware-oriented and NVidia more software-oriented; those were the companies' respective strengths.
- Low-level APIs like DirectX 12 and Vulkan created a threat for NVidia, because these APIs remove the driver barrier, which was one of AMD's greatest weaknesses.
- Low-level APIs also threatened NVidia's game-feature exclusivity.

*How will NVidia respond?*
The hints are already on the internet, and if we put them together, we can conclude that NVidia is fully aware of these threats and is prepared to face them by changing its own strategy and tactics. How? Here's how:

1. NVidia realized the low-level API threat, and that being software-oriented alone will not win the market going forward. They adapted to the new situation and are now prepared to battle with hardware + software, not software alone. Notice the approximate 20-30% difference in performance between Kepler and Maxwell II. Then notice the INSANE projected difference between Maxwell II and Pascal, which is approximately 50%. The graph of projected performance increase is somewhere on the internet, but the increase is huge. We haven't seen such an increase in performance in a very long time.

2. NVidia did not adopt Mantle because:
A - AMD controlled the development.
B - Other low-level API's, such as the more popular DirectX 12, were already in development.
C - Mantle was in a very early stage of development.
D - NVidia realized if they waited, a better API would eventually come up that would not be controlled by AMD.

This served them by:
A - Looking strong and not budging when Mantle was released, which also undermined AMD.
B - Supporting the more evolved API (from Mantle to Vulkan).
C - Adopting Vulkan as API from a 3rd party group - Khronos Group, not from AMD.
D - Showing gamers they are willing to support low-level API's and try to play "fair".

Marvelous preparation and planning on NVidia's part. Now we just have to see how it's going to be executed.

*NVidia's Weaknesses:*
NVidia made a ton of mistakes too, but almost all of them were good for business and bad for customers:
A - NVidia messed up and misinformed the public about the GTX 970 3.5/4 GB VRAM issue, which made them look less trustworthy and negatively "sly".
B - This negative "sly" image was further reinforced when they claimed Maxwell II had a true DirectX 12 architecture, knowing ahead of time about the lack of full hardware Async Shader support. Then again, NVidia is a cut-throat company, and they might have lost sales to AMD if they had truthfully claimed Maxwell II had only partial support and was not designed for the DirectX 12 target market.
C - Although there is a chance NVidia will somehow compensate for the lack of proper Async Shader support, for now their statements blaming Ashes of the Singularity's beta stage and its lack of optimizations for NVidia cards have been proven false by other recently released DirectX 12 games that performed poorly on NVidia cards. Once again, that made NVidia seem less trustworthy, lacking in proper public relations, and willing to misinform customers.
D - Aside from misinforming the public about Maxwell II's poor DirectX 12 support, NVidia deliberately planned it that way to milk people's money on upgrades.
E - Prices were OUTRAGEOUS and still are.
F - They milk customers not only by providing non-future-proof hardware at heavy prices, but also by releasing the super-expensive and usually half-full versions first. Pascal in June will be a 980-equivalent 1080 card for $1100, yet lack proper HBM2 VRAM; that will come later, most likely in fall 2016, and outperform it.
G - They began releasing buggier drivers, which was an edge they could not afford to lose. They also abandoned proper optimizations for previous-generation architecture cards.

*AMD*

*How will AMD fight back?*
Considering AMD's history of bad decisions, it's difficult to figure out. With drivers no longer being a problem under the new low-level APIs:
1. AMD will hold more power than it used to in the world of discrete GPU competition, due to the major change in API architecture. They were, are, and will be well-prepared.
2. AMD, hopefully, will provide faster or equally fast hardware than NVidia at lower prices.

*AMD's Weaknesses:*
AMD historically made poor decisions, which makes it difficult to predict their strategy. Some of the things they stood for and did backfired in the DirectX 11 era:
1. Excellent and fast hardware is not going to win, especially if it gets released prematurely and doesn't come with excellent software support. With low-level APIs, this situation changes for the better.
2. Innovation, like Mantle. Innovation is only worth it if adoption is great. Adoption from a company that owned less market share than its single competitor was unlikely; more people owned NVidia cards, and it behooved developers to please more people for their own profits. Mantle was an actual innovation that failed, while other technologies that succeeded, like FreeSync, actually started with NVidia's G-Sync, so you can't really put them in the innovation category.
3. Fairness and neutrality in the gaming world. IMHO it is an honorable ideal, but it's also the funniest one of all! It's like they live on a different planet, one where business is about some kind of "proper" competition, not the actual cut-throat rivalry with constant undermining of each other and a race to the top. Instead of creating their own libraries that would work best on their own hardware, they either provided none or made sure they were fair to NVidia (like with the TressFX and PureHair technologies).
4. Seriously bad marketing campaigns. IMO their video ads were horrible. I mean they were stupid, Stupid, STUPID, not funny, just weak...
5. Always being late releasing game drivers. Their attempts to work with developers mostly failed.


----------



## PostalTwinkie

Quote:


> Originally Posted by *GoLDii3*
> 
> But it is,as of now,no nVidia GPU supports hardware based Async Compute unlike AMD's GCN.


Yes, this is true. Thus the quote of mine I was responding to.


----------



## sugarhell

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Again, I offer the same challenge to you as Gorilla.
> 
> Quote me where I said ASC and GameWorks are the same thing.


This is next-level juking to avoid answering my post.


----------



## chuy409

Quote:


> Originally Posted by *PostalTwinkie*
> 
> The news isn't AMD outperforming Nvidia. It is the performance gains seen with DX 12, especially on AMD hardware. Massive gains!


Well, TP and this thread are full of AMD vs. NVIDIA conversation.


----------



## Kana-Maru

Quote:


> Originally Posted by *k3rast4se*
> 
> https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support
> You would choose TressFX over Physx?!?


Where did you get that out of my post? I never said that I would choose TressFX over anything. Goodness.
http://www.overclock.net/t/1594186/various-hitman-2016-pc-directx-11-vs-directx-12-performance/140#post_24977452

Anyways, TressFX did and does matter. It was night and day when you compared the standard hair to TressFX in TR 2013. TressFX evolved into PureHair. Next up... Deus Ex: Mankind Divided will be using PureHair, and I will be enjoying it.


----------



## PostalTwinkie

Quote:


> Originally Posted by *sugarhell*
> 
> This is next level juke to avoid answering to my post.


Oh, I didn't know you actually wanted an answer to that question - it came off as a bit rhetorical.

Why do I consider myself unbiased? I suppose after 20+ years of buying and handling hardware, you just stop caring about brand loyalties. You have been around long enough to be screwed by all parties involved. Experience in business, etc, etc.

Eventually you get to the point that you literally don't care who comes out with "it", just that "it" works great.

EDIT:

Examples,

People in this thread keep claiming AMD-sponsored titles launch without issue. Really? So we should forget games like Battlefield 4? Or with Nvidia, the last few Batman games.

Or, for neither side, and speaking to the terrible developer choices...

*insert just about any game released in the last two years*.


----------



## sugarhell

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Oh, I didn't know you actually wanted an answer to that question - it came off as a bit rhetorical.
> 
> Why do I consider myself unbiased? I suppose after 20+ years of buying and handling hardware, you just stop caring about brand loyalties. You have been around long enough to be screwed by all parties involved. Experience in business, etc, etc.
> 
> Eventually you get to the point that you literally don't care who comes out with "it", just that "it" works great.


You don't act neutral.


----------



## PostalTwinkie

Quote:


> Originally Posted by *sugarhell*
> 
> You don't act neutral.


Because I don't get behind the bandwagon that if a game is Nvidia sponsored it is crap, and an AMD sponsored game launches just fine?

See my last reply edit for that one.


----------



## k3rast4se

By the way, the benchmark compared an optimized driver (AMD 16.3) vs. a non-optimized driver (NVIDIA). Let's wait a couple of days before judging AMD vs. NVIDIA performance.


----------



## sugarhell

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Because I don't get behind the bandwagon that if a game is Nvidia sponsored it is crap, and an AMD sponsored game launches just fine?
> 
> See my last reply edit for that one.


OK, tell me the last game that didn't work on NVIDIA because of AMD features.

Because you are telling me the BF4 mess was caused by Gaming Evolved, when BF4 used only Mantle and nothing else from AMD's feature set.

There is no point discussing this with you because you will never change. Have a good day.


----------



## xenophobe

Quote:


> Originally Posted by *zealord*
> 
> Quote:
> 
> 
> 
> Originally Posted by *xenophobe*
> 
> LOL.... So how would that R9 390 compare to my run of the mill MSI 980 @1528mhz? Can't EVGA SC's hit 1700-1800?
> 
> How does the 390 compare against something like that?
> 
> 
> 
> Yeah you are making a lot of sense.
> 
> Maybe we should compare a reference design R9 290X in silent mode that throttles to 700 mhz to a GTX 980 Ti Classified Zombie modded card that reaches 2100 MHZ ?
> 
> Would people like you be happy then?

What would make me happy as both a gamer and consumer is to see products compared at their reasonable maximum overclocks, not at stock speeds especially when one product overclocks much better than another.

FWIW, I'm not a brand loyalist. I just happen to have Intel and nVidia at the moment. I'll own whichever product is best for me at the time I make a purchase, and brand does not play into the decision.


----------



## Stuuut

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Because I don't get behind the bandwagon that if a game is Nvidia sponsored it is crap, and an AMD sponsored game launches just fine?
> 
> See my last reply edit for that one.


Exactly what kind of special AMD features does this game have that would gimp Nvidia cards? Haven't looked into it; actually, I didn't even know this game was releasing.


----------



## maltamonk

Quote:


> Originally Posted by *k3rast4se*
> 
> By the way, the benchmark compared optimized driver (amd 16.3) vs non-optimized driver(nvidia). Let's wait a couple days before judging performance of AMD vs Nvidia.


Isn't that the case for almost every game's benchmarks?


----------



## ku4eto

Quote:


> Originally Posted by *k3rast4se*
> 
> By the way, the benchmark compared optimized driver (amd 16.3) vs non-optimized driver(nvidia). Let's wait a couple days before judging performance of AMD vs Nvidia.


I am pretty sure nVidia already released the Hitman Game Ready drivers two days ago:

http://www.geforce.com/drivers/results/99512
Quote:


> Game Ready
> Learn more about how to get the optimal experience for Tom Clancy's The Division, Hitman, Need for Speed, Ashes of the Singularity, and Rise of the Tomb Raider


----------



## magnek

Quote:


> Originally Posted by *xenophobe*
> 
> What would make me happy as both a gamer and consumer is to see products compared at their reasonable maximum overclocks, not at stock speeds especially when one product overclocks much better than another.
> 
> FWIW, I'm not a brand loyalist. I just happen to have Intel and nVidia at the moment. I'll own whichever product is the best for me at the time I make a purchase and brand does not play a consideration.


A 980 Ti going from 1380/7000 to 1530/7800 gains about 10-15% performance depending on the game. So yes there's a bit left in the tank for that 980 Ti Strix, but not all that much.

(source: I own a 980 Ti and have done actual testing)
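
For what it's worth, the quoted 10-15% lines up with simple clock math. A minimal sketch, using only the clocks mentioned above:

```python
# Quick sanity check on the quoted overclock: 1380/7000 MHz (core/memory)
# up to 1530/7800 MHz. Pure clock ratios -- real games land somewhere in
# between depending on what they are bound by. The clocks are the ones
# quoted in the post, not my own measurements.

def pct_gain(new, old):
    """Percentage increase going from old to new."""
    return (new / old - 1) * 100

core_gain = pct_gain(1530, 1380)   # ~10.9%
mem_gain = pct_gain(7800, 7000)    # ~11.4%

print(f"core: +{core_gain:.1f}%, memory: +{mem_gain:.1f}%")
```

With both domains gaining roughly 11%, a 10-15% real-world gain is about what simple scaling would predict.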


----------



## inedenimadam

This is good for everybody going forward. DX12 looks like it could push AMD out of the market-share rut they have been in, and it could force NVidia to behave like they have competition. I loved my 7970s to the point that I didn't resell them when I bought 980s; my 980s and I have not had the same quality of love affair.


----------



## zealord

Quote:


> Originally Posted by *k3rast4se*
> 
> By the way, the benchmark compared optimized driver (amd 16.3) vs non-optimized driver(nvidia). Let's wait a couple days before judging performance of AMD vs Nvidia.


And when the drivers are out, someone is going to say:

"Lets wait for Pascal to come out. Nvidia cards are not yet optimized for DX12"

?
Quote:


> Originally Posted by *xenophobe*
> 
> What would make me happy as both a gamer and consumer is to see products compared at their reasonable maximum overclocks, not at stock speeds especially when one product overclocks much better than another.
> 
> FWIW, I'm not a brand loyalist. I just happen to have Intel and nVidia at the moment. I'll own whichever product is the best for me at the time I make a purchase and brand does not play a consideration.


If you go back a few pages you see me saying (I quote myself here) :
Quote:


> Originally Posted by *zealord*
> 
> Yeah but Nvidia cards do overclock really well. Testing a 980 Ti at stock would be a waste imho
> 
> I wish reviewers would test cards at :
> 
> 1) stock versus stock
> 
> and
> 
> 2) max OC versus max OC


You need to understand that :

- not everyone overclocks their cards to maximum
- not everyone has water cooling or good custom cards
- not everyone has great air flow in their case
- some people are afraid to overclock. Yeah maybe not on overclock.net, but you get my point.

I bet you that 95% of people that play games like Hitman do *NOT* overclock their cards to the max.

I don't even overclock my cards to the max, because I don't want to increase the power draw, heat and voltage so much.


----------



## Semel

*MonarchX*
Quote:


> Then notice the INSANE difference between Maxwell II performance and Pascal.


Pascal? What's this? Where is it? Do you mean the fake one they showed not so long ago, pretending it was real?


----------



## PostalTwinkie

Quote:


> Originally Posted by *zealord*
> 
> and when the drivers are out someone is going to say :
> 
> "Lets wait for Pascal to come out. Nvidia cards are not yet optimized for DX12"
> 
> ?
> If you go back a few pages you see me saying (I quote myself here) :
> You need to understand that :
> 
> - not everyone overclocks their cards to maximum
> - not everyone has water cooling or good custom cards
> - not everyone has great air flow in their case
> - some people are afraid to overclock. Yeah maybe not on overclock.net, but you get my point.
> 
> I bet you that 95% of people that play games like Hitman do *NOT* overclock their cards to the max.
> 
> I don't even overclock my cards to the max, because I don't want to increase the power draw, heat and voltage so much.


This.

Few really push an overclock, if they overclock at all.


----------



## Mahigan

Quote:


> Originally Posted by *k3rast4se*
> 
> By the way, the benchmark compared optimized driver (amd 16.3) vs non-optimized driver(nvidia). Let's wait a couple days before judging performance of AMD vs Nvidia.


Those benchmarks are using NVIDIA's Game Ready drivers:
Quote:


> The new GeForce Game Ready 364.51 WHQL drivers for Tom Clancy's The Division, Need For Speed, *Hitman*, and Ashes of the Singularity are now available to download via GeForce Experience.


Source: http://www.geforce.com/whats-new/articles/tom-clancys-the-division-game-ready-drivers-released


----------



## magnek

Quote:


> Originally Posted by *k3rast4se*
> 
> By the way, the benchmark compared optimized driver (amd 16.3) vs non-optimized driver(nvidia). Let's wait a couple days before judging performance of AMD vs Nvidia.


Isn't this what nVidia users made fun of AMD users for when AMD was losing the benchmarks?

_"inb4 wait for AMD to optimize drivers"_

This is beside the point that these benchmarks did use nVidia-optimized drivers, as pointed out above.


----------



## airfathaaaaa

Quote:


> Originally Posted by *k3rast4se*
> 
> By the way, the benchmark compared optimized driver (amd 16.3) vs non-optimized driver(nvidia). Let's wait a couple days before judging performance of AMD vs Nvidia.


http://us.download.nvidia.com/Windows/364.51/364.51-win10-win8-win7-winvista-desktop-release-notes.pdf
Pretty sure it is.


----------



## EightDee8D

Quote:


> Originally Posted by *Semel*
> 
> *MonarchX*
> Pascal? What's this? Where is it? Do you mean the fake one they showed not so long ago pretending it was real ?


10x performance over Maxwell.

and lots of deep learning on how to use dx12.


----------



## k3rast4se

Quote:


> Originally Posted by *magnek*
> 
> Isn't this what nVidia users made fun of AMD users for when AMD was losing the benchmarks?
> 
> _"inb4 wait for AMD to optimize drivers"_


The game is not even released. Where are the AMD drivers optimized for The Division?


----------



## magnek

Quote:


> Originally Posted by *EightDee8D*
> 
> 10x performance over maxwell .
> 
> and lots of deep learning on how to use dx12.


also need nVLink for 8-way GPUgasm

Quote:


> Originally Posted by *k3rast4se*
> 
> The game is not even released. Where are AMD driver optimized for The Division?


lol @ the goalpost shifting


----------



## ku4eto

Quote:


> Originally Posted by *Stuuut*
> 
> You know what will get this thread locked? Posts like this...
> I mean we can all like or not like Postal to each their own but no personal attacks needed.....


Pretty sure I am not attacking or insulting anyone here... Calling someone "not smart" doesn't mean "stupid" or "dumb" or anything along those lines. If you check their posts, you will see almost zero information regarding the thread details.


----------



## Mahigan

Quote:


> Originally Posted by *k3rast4se*
> 
> The game is not even released. Where are AMD driver optimized for The Division?


Doesn't look like AMD really needs them for The Division, though.


----------



## zealord

Quote:


> Originally Posted by *k3rast4se*
> 
> The game is not even released. Where are AMD driver optimized for The Division?


AMD cards are actually performing really well in The Division.

http://www.guru3d.com/articles_pages/the_division_pc_graphics_performance_benchmark_review,6.html

You can clearly see that old AMD cards are still performing well, while Kepler cards are basically left 4 dead.


----------



## ku4eto

Quote:


> Originally Posted by *PostalTwinkie*
> 
> This.
> 
> Few really push an overclock, if they overclock at all.


I only push per-profile overclocks; a global one gives too many problems: increased heat and noise in games that do not really need it. A profile for a game with a 10% OC without touching the voltage controls = all I need.


----------



## p4inkill3r

Quote:


> Originally Posted by *Mahigan*
> 
> Doesn't look like AMD really need them for the Division though.


In this community, one's horse not being #1 automatically means the driver isn't optimized.


----------



## sugarhell

Quote:


> Originally Posted by *zealord*
> 
> AMD cards are actually performing really well in The Division.
> 
> http://www.guru3d.com/articles_pages/the_division_pc_graphics_performance_benchmark_review,6.html
> 
> You can clearly see that old AMD cards are still performing well, while Kepler cards are basically left 4 dead.


I see what you did there.

Left 4 dead 2 is for maxwell after pascal?


----------



## k3rast4se

Quote:


> Originally Posted by *zealord*
> 
> AMD cards are actually performing really well in The Division.
> 
> http://www.guru3d.com/articles_pages/the_division_pc_graphics_performance_benchmark_review,6.html
> 
> You can clearly see that old AMD cards are still performing well, while Kepler cards are basically left 4 dead.


What if we use the same website (the German one)?

http://www.pcgameshardware.de/The-Division-Spiel-37399/Specials/PC-Benchmarks-1188384/


----------



## zealord

Quote:


> Originally Posted by *sugarhell*
> 
> I see what you did there.
> 
> Left 4 dead 2 is for maxwell after pascal?


Actually, I think AMD just did really badly with cards when they released, but over time the gains and support are incredible. I wish they would get their act together and release a new GPU with good drivers from day one.

I got my 290X dirt cheap from a friend when he bought a GTX 980. I considered selling the 290X and buying a 970/780 Ti, but looking at it now, the 290X is consistently beating the 970 and 780 Ti.


----------



## sugarhell

Quote:


> Originally Posted by *k3rast4se*
> 
> what if we use the same website (the german one).
> 
> http://www.pcgameshardware.de/The-Division-Spiel-37399/Specials/PC-Benchmarks-1188384/


What is wrong?


----------



## GoLDii3

Quote:


> Originally Posted by *p4inkill3r*
> 
> In this community, one's horse not in #1 automatically means that the driver isn't optimized.


And causes people butthurt, don't forget that.


----------



## zealord

Quote:


> Originally Posted by *k3rast4se*
> 
> what if we use the same website (the german one).
> 
> http://www.pcgameshardware.de/The-Division-Spiel-37399/Specials/PC-Benchmarks-1188384/


Don't know what you are getting at. AMD cards are performing well in this benchmark (maybe look beyond 1080p as well).

It doesn't look like AMD needs to push out any improved drivers for The Division (like you suggested they should in a previous post).


----------



## magnek

Quote:


> Originally Posted by *k3rast4se*
> 
> what if we use the same website (the german one).
> 
> http://www.pcgameshardware.de/The-Division-Spiel-37399/Specials/PC-Benchmarks-1188384/


Ok and your point being?


----------



## p4inkill3r

Quote:


> Originally Posted by *sugarhell*
> 
> What is wrong?


Look at nvidia's *dominance*.


----------



## k3rast4se

Quote:


> Originally Posted by *sugarhell*
> 
> What is wrong?


THIS

http://www.newegg.com/Product/Product.aspx?Item=N82E16814127889&cm_re=980_ti-_-14-127-889-_-Product

VS

http://www.newegg.com/Product/Product.aspx?Item=N82E16814127890&cm_re=r9_fury_x-_-14-127-890-_-Product


----------



## magnek

Quote:


> Originally Posted by *GoLDii3*
> 
> And cause butthurt people,don't forget that.


Yeah it's sad how salty some nVidia owners get when their company loses in a select few titles.


----------



## ku4eto

Quote:


> Originally Posted by *k3rast4se*
> 
> what if we use the same website (the german one).
> 
> http://www.pcgameshardware.de/The-Division-Spiel-37399/Specials/PC-Benchmarks-1188384/


You should actually look at the numbers in the graph at the higher resolutions rather than the GPU position. Don't know why they left the GPUs in the same spot while they are scoring 10 FPS less than the GPU below them. The "worst" results are only at 1080p. Also note they are using a reference Fury X, not overclocked, same with the Fury, and putting them up against an AIB factory-OC'ed 980 Ti (the best on the market, on top of that), which is not really apples to apples.


----------



## k3rast4se

Quote:


> Originally Posted by *ku4eto*
> 
> The "worst" results are only under 1080p.


http://store.steampowered.com/hwsurvey


----------



## ku4eto

Quote:


> Originally Posted by *k3rast4se*
> 
> THIS
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16814127889&cm_re=980_ti-_-14-127-889-_-Product
> 
> VS
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16814127890&cm_re=r9_fury_x-_-14-127-890-_-Product


Well, AMD did what they could with the Fury X; it's obviously overwhelmed by driver overhead in DX11, and it is not really designed for 1080p. The % FPS decrease when increasing the resolution is 50% smaller than on the 980 Ti.

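The "% FPS decrease" comparison above is easy to pin down. A minimal sketch, with made-up FPS figures purely for illustration (not the review's data):

```python
# How the "% FPS decrease when increasing resolution" comparison works.
# The FPS figures below are made-up placeholders for illustration only --
# they are NOT the review's numbers.

def drop_pct(fps_low_res, fps_high_res):
    """Percent of frame rate lost moving to the higher resolution."""
    return (1 - fps_high_res / fps_low_res) * 100

# hypothetical 1080p -> 1440p results for two cards
card_a_drop = drop_pct(60.0, 45.0)   # loses 25.0%
card_b_drop = drop_pct(70.0, 47.0)   # loses ~32.9%

print(f"card A: -{card_a_drop:.1f}%, card B: -{card_b_drop:.1f}%")
```

A card whose drop is smaller scales better to the higher resolution, which is the claim being made for the Fury X here.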

----------



## airfathaaaaa

Quote:


> Originally Posted by *k3rast4se*
> 
> http://store.steampowered.com/hwsurvey


How long will you keep ignoring the elephant in the room?


----------



## DjXbeat

NV where is Async? Where?????


----------



## EightDee8D

too much off-topic


the irony


----------



## magnek

Quote:


> Originally Posted by *k3rast4se*
> 
> THIS
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16814127889&cm_re=980_ti-_-14-127-889-_-Product
> 
> VS
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16814127890&cm_re=r9_fury_x-_-14-127-890-_-Product


Quote:


> Originally Posted by *k3rast4se*
> 
> http://store.steampowered.com/hwsurvey


Tell us something we don't know already.

And seriously, this went from "AMD had optimized drivers but nVidia didn't" (which was a LIE), to "who cares, this game isn't even released", to "can we use the same German site", to "LOOK AT THE FURY X'S PRICE", to "STEAM HARDWARE SURVEY".

Jesus freakin' Christ, I've never seen someone shift the goalposts so many times so rapidly.


----------



## ku4eto

Quote:


> Originally Posted by *k3rast4se*
> 
> http://store.steampowered.com/hwsurvey


You do know the reason nVidia is so far ahead in market share, besides their marketing strategy, was actually having their GPUs "top" the AMD/ATi ones by a whole amazing 1 to 5% for the same or a higher price. And I am pretty sure that if you are trying to say the 980 Ti is used for 1080p, you are wrong. The most-used cards for 1080p are the midrange ones, like the 950, 960, 970 and AMD 270, 280, 285, 290, 370, 380. Comparing enthusiast-level cards meant for super-high resolutions on a mainstream one is... not the right way.


----------



## Mahigan

Quote:


> Originally Posted by *k3rast4se*
> 
> what if we use the same website (the german one).
> 
> http://www.pcgameshardware.de/The-Division-Spiel-37399/Specials/PC-Benchmarks-1188384/


I like the results comparing HBAO+/PCSS on and off like these here:


Spoiler: Warning: Spoiler!

HBAO+/PCSS off 1080p

HBAO+/PCSS on 1080p

HBAO+/PCSS on 1440p

Source: http://www.purepc.pl/karty_graficzne/test_wydajnosci_tom_clancys_the_division_pc_bedzie_sie_dzialo?page=0,9



I like these results as they show the performance effect that black-box GameWorks features have on AMD and NV hardware.


----------



## PostalTwinkie

Quote:


> Originally Posted by *ku4eto*
> 
> Pretty sure i am not attacking or insulting anyone here... Calling some one "not smart" doesn't mean "stupid, dumb" or along those lines. If you check their posts, you will see... almost 0 information regarding the thread details.


Several of my posts were directly discussing the thread content, I was really having two conversations at once. I simply ignored your original post, because frankly, it didn't make any sense.
Quote:


> Originally Posted by *ku4eto*
> 
> I only push per profile overclock, global is giving too much problems, increased heat and noise for games that do not really need it. Profile for a game with 10% OC without touching the voltage controls = all i need.


That is good and well, but you are even further in the minority. I would wager more people run a general OC on their GPU than run OCs specific to games. Of course, that is speaking of the larger user base, and not our very small community.


----------



## ku4eto

Quote:


> Originally Posted by *PostalTwinkie*
> 
> Several of my posts were directly discussing the thread content, I was really having two conversations at once. I simply ignored your original post, because frankly, it didn't make any sense.
> That is good and well, but you are even in the minority further. I would wager more people run a general OC on their GPU, than run OCs specific to games. Of course, that is speaking to the larger user base, and not our very small community.


I am sorry (or not really??), tired from work, and I just re-read my comment. Doesn't make much sense to me either.


People being non-knowledgeable is the reason for lots of stuff going on (or actually not going on). I like to fiddle with things, even if it's just for fun.


----------



## iLeakStuff

It's ONE game lol....


----------



## ZealotKi11er

I am not sure why people with Kepler GPUs defend Nvidia. You are in the same class as AMD users: you get no special treatment. To Nvidia, Kepler and AMD users are the most likely buyers of Pascal.


----------



## Master__Shake

Quote:


> Originally Posted by *iLeakStuff*
> 
> Its ONE game lol....


with a new API.

it may just set a trend.


----------



## magnek

Quote:


> Originally Posted by *iLeakStuff*
> 
> Its ONE game lol....


and it spawned a thread like this. It's just embarrassing and frankly disgraceful.

(no offense to OP and definitely not directed towards him of course)


----------



## clao

Quote:


> Originally Posted by *xenophobe*
> 
> What would make me happy as both a gamer and consumer is to see products compared at their reasonable maximum overclocks, not at stock speeds especially when one product overclocks much better than another.
> 
> FWIW, I'm not a brand loyalist. I just happen to have Intel and nVidia at the moment. I'll own whichever product is the best for me at the time I make a purchase and brand does not play a consideration.


Because doing so would be too difficult, mostly because people have different experiences with overclocking.


----------



## Master__Shake

Quote:


> Originally Posted by *xenophobe*
> 
> What would make me happy as both a gamer and consumer is to see products compared at their reasonable maximum overclocks, not at stock speeds especially when one product overclocks much better than another.
> 
> FWIW, I'm not a brand loyalist. I just happen to have Intel and nVidia at the moment. I'll own whichever product is the best for me at the time I make a purchase and brand does not play a consideration.


maybe you should stick to linus tech reviews then.


----------



## iLeakStuff

Quote:


> Originally Posted by *magnek*
> 
> and it spawned a thread like this. It's just embarrassing and frankly disgraceful.


It's like the Nvidia people can't take it and desperately attack the results, while the AMD people need to rub it in.
I'd say good for AMD for showing a big lead, but I doubt this will be typical for DX12, which I know many AMDers think.


----------



## raghu78

Quote:


> Originally Posted by *iLeakStuff*
> 
> Its ONE game lol....


Actually the second game after AotS to use async compute. AotS is scheduled for launch on Mar 31st. The results are less flattering if you are a Maxwell owner. Anyway, we will have Quantum Break next month. We will see how the performance is in future DX12 games, but for now it's not looking great.


----------



## Kpjoslee

Quote:


> Originally Posted by *magnek*
> 
> and it spawned a thread like this. It's just embarrassing and frankly disgraceful.


Well, this is the year of new GPU releases and that is when fanboy war becomes more rampant lol. It will only get crazier later this year.


----------



## zealord

Quote:


> Originally Posted by *clao*
> 
> because doing so would be too difficult. Mostly due to people have different experience with overclocking.


That as well, yes.

If a reviewer says "I have a GTX 980 and could overclock it to a maximum of 1577 MHz; here you can see the results", then someone is going to say in the comments "My GTX 980 overclocks to 1585 MHz, you should redo your tests!!"


----------



## magnek

Quote:


> Originally Posted by *iLeakStuff*
> 
> Its like Nvidia people can`t take it and desperately attack the results while AMD people need to rub it in.
> I`d say good for AMD for showing a big lead but I doubt this will be typical for DX12 which I know many AMDers think.


If you're so vested in your hardware that one single game in which AMD pulls ahead is enough to make you salty, well I very respectfully suggest you reexamine your priorities.


----------



## Charcharo

Market share depends mostly on marketing and brand. Your average PC Gamer is ... a lowly creature to be fair, so those things easily work on him/her.

AMD needs to market better, whine less. The hardware and price/performance is generally excellent. Which is... what counts really.

Now it does great in DX12. Which is good, but AMD can turn even a victory into a loss...


----------



## ZealotKi11er

Quote:


> Originally Posted by *zealord*
> 
> That aswell yes.
> 
> If a reviewers says "I have a GTX 980 and could overclock it to a maximum of 1577 mhz. here you can see the results" then someone is going to say in the comments "My GTX 980 overclocks to 1585 mhz you should redo your tests!!"


Not many people can run overclocks that high unless they are under water; it's not practical for 24/7. Nvidia's factory-OC models already push the card very far: you can add another 10%, but there is not much left to extract. A GTX 980 Ti with an OC is ~20% faster than stock; the Fury X gets ~10% from an OC from what I have seen. So if the Fury X wins at stock, an OC'ed 980 Ti will match it or slightly beat it. I know this is OCN, but from years of overclocking GPUs I think out-of-box clocks are the relevant ones that most people will use.
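
The headroom math above can be sketched quickly (the OC gains are the ones quoted; the Fury X's stock lead is a hypothetical placeholder, not a measured number):

```python
# Sketch of the headroom argument: the 980 Ti gains ~20% from an OC and the
# Fury X ~10% (figures quoted above); the Fury X's stock lead below is a
# hypothetical placeholder, not a measurement.

TI_OC_GAIN = 0.20     # ~20% OC gain for a 980 Ti
FURY_OC_GAIN = 0.10   # ~10% OC gain for a Fury X

def oc_ratio(fury_stock_lead):
    """Speed of an OC'ed 980 Ti relative to an OC'ed Fury X, given the
    Fury X's fractional lead at stock clocks."""
    ti = 1.0 * (1 + TI_OC_GAIN)
    fury = (1.0 + fury_stock_lead) * (1 + FURY_OC_GAIN)
    return ti / fury

# with a hypothetical 8% stock lead, the OC'ed cards end up near parity
print(f"OC'ed 980 Ti vs OC'ed Fury X: {oc_ratio(0.08):.3f}x")
```

With those gains, any stock lead under ~9% for the Fury X evaporates once both cards are maxed out, which is the point being made.
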
Quote:


> Originally Posted by *Charcharo*
> 
> Market share depends mostly on marketing and brand. Your average PC Gamer is ... a lowly creature to be fair, so those things easily work on him/her.
> 
> AMD needs to market better, whine less. The hardware and price/performance is generally excellent. Which is... what counts really.
> 
> Now it does great in DX12. Which is good, but AMD can turn even a victory into a loss...


Most people do not see through Nvidia's marketing. Stealth marketing is the best marketing. Just go to Twitch and look at the PC specs of the top streamers: they all have a GTX. Where do you think young PC gamers learn about these things? You convince 2-3 people on OCN, and Twitch convinces hundreds.


----------



## Kpjoslee

Quote:


> Originally Posted by *Charcharo*
> 
> Market share depends mostly on marketing and brand. Your average PC Gamer is ... a lowly creature to be fair, so those things easily work on him/her.
> 
> AMD needs to market better, whine less. The hardware and price/performance is generally excellent. Which is... what counts really.
> 
> Now it does great in DX12. Which is good, but AMD can turn even a victory into a loss...


AMD has to make sure they have great performance from day 1. The 290X is great hardware, but they were like 1.5 years late to the party and the damage had already been done. They have to stop that trend with Polaris.


----------



## magnek

Quote:


> Originally Posted by *Charcharo*
> 
> Market share depends mostly on marketing and brand. Your average PC Gamer is ... a lowly creature to be fair, so those things easily work on him/her.
> 
> AMD needs to market better, whine less. The hardware and price/performance is generally excellent. Which is... what counts really.
> 
> Now it does great in DX12. Which is good, but AMD can turn even a victory into a loss...


I think the best marketing AMD could do is to stop hyping (or at least overly hyping), stick to the facts, and let the results speak for themselves. Nobody likes an overpromise that underdelivers, but overdelivering when expectations are low always works out great psychologically.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Kpjoslee*
> 
> AMD has to make sure they have great performance from day 1. 290x is great hardware, but they are like 1.5 years late to the party and damage has already been done. They have to stop that trend with Polaris.


I do not think that is the problem. The 290X did not get faster... Nvidia got slower... Read my post above. 5-10% means nothing to the average Joe.
Quote:


> Originally Posted by *magnek*
> 
> I think the best marketing AMD could do, is to stop hyping (or overly hyping at least), stick to the facts, and let the results speak for themselves. Nobody likes to be underdelievered on an overpromise, but overdelivering when the expectations are low always works out great psychologically.


Again, their marketing is only seen by people like me and you, who are few and far between.


----------



## Mahigan

Quote:


> Originally Posted by *magnek*
> 
> and it spawned a thread like this. It's just embarrassing and frankly disgraceful.
> 
> (no offense to OP and definitely not directed towards him of course)


Two games now, AotS and Hitman. Kollock stated that AotS engine work is done. Only the campaign needs to be tacked onto the game.

Any bets on Deus Ex: Mankind Divided?


----------



## iLeakStuff

Quote:


> Originally Posted by *magnek*
> 
> I think the best marketing AMD could do is to stop hyping (or at least stop over-hyping), stick to the facts, and let the results speak for themselves. Nobody likes an overpromise that underdelivers, but overdelivering when expectations are low always works out great psychologically.


That plus getting release cycles closer to Nvidia's. They should recapture a lot of market share if they do that.

I also think that proper Zen performance vs. Intel will grant them a key to more system builders and OEMs, because it doesn't hurt those businesses' reputations if the hardware is no worse than the competition's. With that, AMD can also sell a sort of package to these OEMs where they also get AMD GPUs in the machines.

From there it will be a snowball effect: word gets around about AMD, AMD becomes more "common" and known like Nvidia, which should ultimately turn AMD a profit again.


----------



## zealord

Quote:


> Originally Posted by *Mahigan*
> 
> Two games now, AotS and Hitman. Kollock stated that AotS engine work is done. Only the campaign needs to be tacked onto the game.
> 
> *Any bets on Deus Ex: Mankind Divided?*


Sadly, it was delayed to August.

Maybe Forza and Quantum Break will be true DX12 games, and not just Windows Store victims that had DX12 shoved down their throats.


----------



## HeadlessKnight

Why didn't they include the Fury X results when they have them?


----------



## Kpjoslee

Quote:


> Originally Posted by *Mahigan*
> 
> Two games now, AotS and Hitman. Kollock stated that AotS engine work is done. Only the campaign needs to be tacked onto the game.
> 
> Any bets on Deus Ex: Mankind Divided?


I think it would be more about Polaris vs Pascal when Deus Ex gets released in August lol.


----------



## Master__Shake

Quote:


> Originally Posted by *k3rast4se*
> 
> http://store.steampowered.com/hwsurvey


that graph is so indicative of hardware it's not even funny.



what year was this survey taken?

i haven't had a 1gb vram card since i bought a 4890.... at launch


----------



## clao

eh i got my 290x on the cheap wasn't expecting much of it but I guess I did choose correctly over the 970


----------



## Devnant

Quote:


> Originally Posted by *HeadlessKnight*
> 
> Why didn't they include the Fury X results when they have them?


My guess is that AMD drivers are bugged for Fiji, and they are locked at 60 FPS on DX12.
Quote:


> tomorrow guys
> Unfortunately, we celebrated too early. The frame lock under DirectX 12 remains on Fiji, and Tonga is affected as well. On Hawaii, however, the frame rate is unlimited, as under DirectX 11. Possibly this is due to a driver error; the measurements are therefore only of limited use, and a retest will be necessary. We have already informed AMD. We are supplementing our benchmarks at this point with the R9 280X, though the Tahiti GPU cannot really benefit from DirectX 12; it is probably already operating at full capacity under DirectX 11. Moreover, Hitman limits our settings as in the beta: with 3 GiB of memory, only high texture details are available, and we can select a maximum resolution of 1920 × 1080. We also measured DirectX 12 with the HIS R9 390X IceQ X² / 8G, a relatively strongly overclocked full Hawaii chip, which as expected closes in on the GTX 980 Ti. More graphics cards, including the GTX 970 and GTX 780 Ti, follow tomorrow. We also hope that by then we will be presented with a fix for the frame-lock problem on Fiji and Tonga. The performance of the Hawaii GPUs suggests at least strong potential for gains under DirectX 12; unfortunately, this cannot currently be validated with results from other Radeon GPUs. Regarding the preliminary performance comparison between Hawaii and GM200, it should at least be acknowledged that no custom GeForce driver for Hitman and DirectX 12 has been released, while AMD has made the customized Radeon Software 16.3 Non-WHQL available.


Source: https://www.reddit.com/r/Amd/comments/49u358/early_hitman_dx12_benchmarks_390_is_15_faster/d0v1v7o


----------



## Kpjoslee

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I do not think that is the problem. The 290X did not get faster... Nvidia got slower... Read my post above. 5-10% means nothing to the average Joe.


Unless you are talking about new game releases, Nvidia certainly didn't get slower in previously benched games in newer drivers.


----------



## Charcharo

Quote:


> Originally Posted by *Kpjoslee*
> 
> AMD has to make sure they have great performance from day 1. 290x is great hardware, but they are like 1.5 years late to the party and damage has already been done. They have to stop that trend with Polaris.


The 290X was ... an incredibly good card when it came out.

It still is. If you have a 290X, unless you have "Enthusiast's syndrome" you do not need to upgrade at all.

AMD needs better marketing. It is that simple. The hardware is great (except Fiji I guess).


----------



## Kpjoslee

Quote:


> Originally Posted by *Charcharo*
> 
> The 290X was ... an incredibly good card when it came out.
> 
> It still is. If you have a 290X, unless you have "Enthusiast's syndrome" you do not need to upgrade at all.
> 
> AMD needs better marketing. It is that simple. The hardware is great (except Fiji I guess).


I think you can put some blame on the mining craze that happened during the 290X launch. I am sure quite a few people wanted to purchase a 290X but couldn't, because the mining craze inflated 290X prices by a lot. I remember quite a few people bought 780 Tis instead.


----------



## Charcharo

Quote:


> Originally Posted by *Kpjoslee*
> 
> I think you can put some blame on the mining craze that happened during the 290X launch. I am sure quite a few people wanted to purchase a 290X but couldn't, because the mining craze inflated 290X prices by a lot. I remember quite a few people bought 780 Tis instead.


Poor 780 TI owners.

That craze passed by me I guess.


----------



## Kpjoslee

Quote:


> Originally Posted by *Charcharo*
> 
> Poor 780 TI owners.
> 
> That craze passed by me I guess.


The 780 Ti was pretty good until the Maxwell release, which halted its further driver optimization. I had a pretty good experience with 780 Ti SLI until I moved on to 980 Ti SLI.


----------



## looniam

Quote:


> Originally Posted by *Master__Shake*
> 
> Quote:
> 
> 
> 
> Originally Posted by *k3rast4se*
> 
> http://store.steampowered.com/hwsurvey
> 
> 
> 
> that graph is so indicative of hardware it's not even funny.
> 
> 
> what year was this survey taken?
> 
> i haven't had a 1gb vram card since i bought a 4890.... at launch

those are monthly, so no older than january 2016.

take a look at display resolution; about ~33% >1080, so 1GB of vram can handle older titles w/ decent settings (hello, my old 570). pretty much goes to show not everyone is maxing out the newest AAA games at 1440 or 4K.


----------



## JunkoXan

Kpjoslee's avatar sums up the thread's direction, just saying...


----------



## Charcharo

Quote:


> Originally Posted by *Kpjoslee*
> 
> The 780 Ti was pretty good until the Maxwell release, which halted its further driver optimization. I had a pretty good experience with 780 Ti SLI until I moved on to 980 Ti SLI.


Not everyone should, or does, upgrade like that. It is... technically throwing money away.

I get it, you are fine, but still... the 780 Ti should have lasted an entire generation of good performance... and it is already aging... somewhat not well.


----------



## rluker5

Quote:


> Originally Posted by *Kpjoslee*
> 
> I think you can put some blame on the mining craze that happened during the 290X launch. I am sure quite a few people wanted to purchase a 290X but couldn't, because the mining craze inflated 290X prices by a lot. I remember quite a few people bought 780 Tis instead.


I was one of those. I was on the fence: I thought the 780 Ti was just a little better, cost just a little more, and was a considerably safer bet for consistent performance. I would have bought a 290X if it had cost the same as a 780. No regrets. The 780 Tis have worked great the whole time, and the 290Xs will be better by the time I move on.


----------



## AstuteCobra

Another thing that hurt the 290/X launch was the lack of aftermarket AIB options.


----------



## GorillaSceptre

Quote:


> Originally Posted by *AstuteCobra*
> 
> Another thing that hurt the 290/X launch was the lack of aftermarket aib options.


Agreed, probably the biggest reason imo. The "burn your house down" reputation is still a thing even to this day..


----------



## Charcharo

Quote:


> Originally Posted by *AstuteCobra*
> 
> Another thing that hurt the 290/X launch was the lack of aftermarket aib options.


That is really bad. I agree.


----------



## Kpjoslee

Quote:


> Originally Posted by *Charcharo*
> 
> Not everyone should or does upgrade like that. It is... technically throwing money away .
> 
> I get it, you are fine, but still... the 780 TI should have lasted an entire gen of Good performance... It already is going... somewhat not good.


Yeah, well, Nvidia has a different strategy of releasing a new architecture each generation, while AMD invested in a long-term plan with GCN. I have no problem agreeing that AMD is the safer long-term investment if you are planning to keep your GPU for a few years.
Quote:


> Originally Posted by *rluker5*
> 
> I was one of those. Was on the fence. Thought the 780ti was just a little better, cost just a little more and was a considerably safer bet on consistent performance. I would have bought a 290x if it cost the same as a 780. No regrets. 780tis have worked great the whole time and the 290X's will be better when I move on.


780 Ti SLI is still a fine choice for up to 1440p. I don't think people who bought 780 Tis regret their choice.


----------



## pengs

Quote:


> Originally Posted by *k3rast4se*
> 
> As I said earlier, TressFX, another useless crap that didn't matter. While HBAO+ and Physx DOES matter and improve the experience.


Or we could have just done ambient occlusion correctly in the first place, using a global illumination method like voxel cone tracing, which is ray tracing on a much smaller scale, though I believe it's very compute oriented.

Main point: AO is a nasty hack and always has been, no matter the number of samples; it's inherently incorrect. And while PhysX is pretty, it has no real use other than visuals, which can also be done in a more useful way without such overhead.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Kpjoslee*
> 
> Yeah, well, Nvidia has different strategy of releasing new architecture each generation while AMD invested in long term plan with GCN. I have no problem agreeing with AMD being safer long term investment if you are planning to keep your GPU for few years.
> 780Ti SLI is still a fine choice for up to 1440p. I don't think people who bought 780tis regrets their choice.


There are not many left. Most upgraded.


----------



## Sleazybigfoot

Quote:


> Originally Posted by *iLeakStuff*
> 
> It's like Nvidia people can't take it and desperately attack the results, while AMD people need to rub it in.
> I'd say good for AMD for showing a big lead, but I doubt this will be typical of DX12, which I know many AMDers think it will be.


Quote:


> Originally Posted by *iLeakStuff*
> 
> It has never stopped the GTX 970 before, and it only struggled in some extremely rare incidents, so why should it struggle now......
> I will mention it every single time there is a DX12 game where Nvidia does better than AMD, just to rub it in after all this praise from people claiming async will be the savior of the AMD brand.
> There are other DX12 features that Nvidia does better than AMD, so this won't be the last one


http://www.overclock.net/t/1593588/gamegpu-gears-of-war-ultimate-edition-test-gpu/50#post_24961952


----------



## Asmodian

Quote:


> Originally Posted by *pengs*
> 
> And while PhysX is pretty it has no real use other than visuals which can also be done in a more useful way without such overhead.


Pretty but no real use? Doesn't that describe every single graphics effect ever?

These results are great, and it is excellent to see all the transistors working well on AMD cards. For the implications for general "DX12 performance", we do have to remember that there is a lot more to optimizing for an architecture than "using async" or "using DX12". In the same way that it is possible for Nvidia to code DX11 effects that run well on their hardware (compared to AMD's), it is possible for AMD to help the developer code DX12 that runs well on their hardware compared to Nvidia's. It isn't a case of "it is open so anyone can optimize it"; I am talking about the code the game is running, not driver optimizations.

I would be shocked if the 390X wasn't usually faster than a 980 in DX12 games, but so far we only have benches for Gaming Evolved titles. I do believe Oxide when they say they tried to make Nvidia cards run well too, but they ARE a Gaming Evolved title, so we can be sure they would not use a technique if it ran poorly on AMD hardware.


----------



## sblantipodi

Are there any benches using SLI?


----------



## PontiacGTX

Quote:


> Originally Posted by *Asmodian*
> 
> I would be shocked if the 390X wasn't usually faster than a 980 in DX12 games, but so far we only have benches for Gaming Evolved titles. I do believe Oxide when they say they tried to make Nvidia cards run well too, but they ARE a Gaming Evolved title, so we can be sure they would not use a technique if it ran poorly on AMD hardware.


There is a reason most of the games that use asynchronous compute aren't sponsored by Nvidia: if it were a strong point of Nvidia's, there would have been at least one game with PhysX or GameWorks using async compute.


----------



## pengs

Quote:


> Originally Posted by *Asmodian*
> 
> Pretty but no real use? Doesn't that describe every single graphic effect ever?


I guess it depends on how it's looked at. I mean, physics itself is a pretty intrinsic feature in all games nowadays; wouldn't it make sense for a feature called PhysX to do something physical, something that can react to or affect a player, the environment, or other objects?

I'm not dogging PhysX; I loved its use in PS2 and was quite mad when it was removed. It can definitely create a more vibrant environment, but it could also affect the environment, which is my main point. PhysX itself is just the infant version of a fully interactive environment, which requires massive amounts of processing, and at that point PhysX will be a dead feature because the interaction will be native at the engine level.


----------



## MerkageTurk

The 290X is an amazing card, but go for the Sapphire edition.

How old is this GPU again? Amazing. Plus, you can flash the 390X BIOS onto it.


----------



## rluker5

Quote:


> Originally Posted by *ZealotKi11er*
> 
> There are not many left. Most upgraded.


I didn't see a need to upgrade this gen since I can still hold 4K@60 with an acceptable amount of compromises. But the list keeps getting bigger; when 1440p max looks better than 4K with compromises, I'll be out of excuses.
Your "not many left" statement can be backed up by the ghost town that is the 780 Ti owners' club thread.
I like that my cards could play all the games I wanted well. I don't know if either side will be able to claim that over the next 3 years.


----------



## KyadCK

Quote:


> Originally Posted by *xenophobe*
> 
> Quote:
> 
> 
> 
> Originally Posted by *zealord*
> 
> In 1080p the R9 390 (1010 mhz) is 15% faster than the GTX 980 (1316 mhz)
> The GTX 980 Ti (1380 mhz) is 16% faster than the R9 390 and 33% faster than the GTX 980.
> 
> *In 4K the the R9 390 (1010 mhz) is 33% faster than the GTX 980 (1316 mhz) and 70% higher min framerate!*
> The 980 Ti at 4K is still 15% faster than the R9 390.
> 
> 
> 
> LOL.... So how would that R9 390 compare to my run of the mill MSI 980 @1528mhz? Can't EVGA SC's hit 1700-1800?
> 
> How does the 390 compare against something like that?

Well.. I mean... 1528/1316 ≈ +16%... but 390s can still go to 1100 themselves, and 1100/1010 ≈ +9%... And that's not even a 390X, just a 390. It won't make a significant difference.
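The ratio arithmetic here is easy to sanity-check. A quick sketch (clock numbers taken from the posts quoted above; clock speed is at best an upper bound on real-world gains, since games rarely scale perfectly with core clock):

```python
def uplift(oc_mhz: float, stock_mhz: float) -> float:
    """Percent throughput headroom from a core overclock, assuming
    performance scales at most linearly with clock speed."""
    return (oc_mhz / stock_mhz - 1.0) * 100.0

print(round(uplift(1528, 1316), 1))  # MSI GTX 980 OC vs. the tested 980: 16.1
print(round(uplift(1100, 1010), 1))  # typical R9 390 OC vs. the tested 390: 8.9
```

So even with both cards overclocked, the gap only moves by a handful of percent in either direction.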
Quote:


> Originally Posted by *k3rast4se*
> 
> https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support
> Quote:
> 
> 
> 
> Originally Posted by *Kana-Maru*
> 
> Lol can't be serious. TressFX still looks great in 2015. I was playing some TR 2013 last night at very high resolutions. Much better than the standard hair. PhysX wasn't even created by Nvidia. I can live without HBAO\HBAO+ since there are many alternatives out there and there will continue to be newer open source alternatives.
> 
> 
> 
> You would choose TressFX over Physx?!?

I would choose well-done and light-on-performance "realistic" hair anyone can use over what basically amounts to nothing but extra particles that I can't use even if I put an nVidia card in my computer, yes.

I wish I could use it. nVidia decided even if I spent money it wasn't going to happen. For that reason alone I will not support it.


----------



## looniam

is there something i am missing here???

i am seeing ~10% performance gains _across the board_ going from DX11 to DX12. maybe folks aren't looking, or don't know the article is "live"?

so if i am supposed to be "_unsatisfied_," someone kindly let me know - that would be great.

TIA.


----------



## Asmodian

Quote:


> Originally Posted by *PontiacGTX*
> 
> for a reason most of the games that use asynchronous compute arent sponsored by nvidia ,if it were the strong point of nvidia there should have been at least a game with physx or gameworks using async compute


Right, I doubt an Nvidia-sponsored game will make heavy use of async compute, but it is very likely there will be a DX12 title that runs better on Nvidia hardware. Optimization is a lot more complex than most posters seem to understand, and there is almost never one "correct" way to do things. Take the commonly asserted "there is another way to do it that is faster and better", offered without any understanding of what is being done or how it could be done better: they are probably right, but no one knows what that method is yet, so it would need to be invented.

Async compute is not inherently faster; it is sort of like hyperthreading. It allows you to run something else when you have idle shaders, but it doesn't help at all if you do not.
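The hyperthreading comparison can be made concrete with a toy model (entirely hypothetical numbers; "slots" stand in for per-cycle shader occupancy): async compute only wins when the graphics queue leaves slots idle.

```python
def cycles_needed(gfx_usage, compute_work, slots, use_async):
    """Toy GPU: each cycle the graphics queue occupies gfx_usage[i] of
    `slots` shader slots, and `compute_work` slot-cycles of compute must
    also finish. With async, compute fills the idle slots alongside
    graphics; without it, compute runs serially afterwards."""
    remaining = compute_work
    cycles = 0
    for used in gfx_usage:
        cycles += 1
        if use_async:
            remaining -= min(slots - used, remaining)
    while remaining > 0:  # leftover compute after graphics is done
        cycles += 1
        remaining -= slots
    return cycles

# Graphics leaves idle slots: async hides the compute work entirely.
print(cycles_needed([2, 1, 3, 2], 8, 4, use_async=True))   # 4
print(cycles_needed([2, 1, 3, 2], 8, 4, use_async=False))  # 6
# Shaders already saturated: async gains nothing.
print(cycles_needed([4, 4, 4, 4], 8, 4, use_async=True))   # 6
```

The saturated case is the hyperthreading analogy in a nutshell: if there is no idle hardware to fill, rearranging the queues cannot create speed from nothing.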


----------



## Ha-Nocri

It is very easy to implement async compute in games, and not doing so would most likely be on purpose.


----------



## xenophobe

Quote:


> Originally Posted by *zealord*
> 
> Yeah but Nvidia cards do overclock really well. Testing a 980 Ti at stock would be a waste imho
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I wish reviewers would test cards at :
> 
> 1) stock versus stock
> 
> and
> 
> 2) max OC versus max OC


Agreed, and sorry, I didn't read the whole thread. What I've said is basically the position I've held for years... subjective numbers don't really mean much to me as a consumer when you're not testing easily achieved overclocked speeds for both products, which should be done.

I didn't have to fight to get 1500/8000 on my GTX, and I'm on stock cooling. I'd expect any 980 to get 1500/8000 without trying too hard; mine isn't some golden chip either.


----------



## Asmodian

Quote:


> Originally Posted by *Ha-Nocri*
> 
> It is very easy to implement async compute in games and not doing so would most likely be on purpose.


Is it? What needs doing to implement it? Which tasks is async compute best used for, and are there no other ways the same performance and effects could be achieved? These blanket assertions don't strike me as realistic; even AotS doesn't use very much async compute. Why?


----------



## ZealotKi11er

Quote:


> Originally Posted by *Asmodian*
> 
> Is it? What needs doing to implement it? Which tasks is async compute best used for, and are there no other ways the same performance and effects could be achieved? These blanket assertions don't strike me as realistic; even AotS doesn't use very much async compute. Why?


It is all about paths. You tell the GPU which work the async queues should handle. The problem is that they have to make a special path for Nvidia.


----------



## infranoia

Quote:


> Originally Posted by *Asmodian*
> 
> Is it? What needs doing to implement it? Which tasks is async compute best used for, and are there no other ways the same performance and effects could be achieved? These blanket assertions don't strike me as realistic; even AotS doesn't use very much async compute. Why?


Not a blanket assertion; per @Kollock, arranging tasks to take advantage of async compute is not difficult. In fact, avoiding an asynchronous workload would look very suspicious. Emphasis mine.
Quote:


> Originally Posted by *Kollock*
> 
> I'm not sure you understand the nature of this feature. There are 2 main types of tasks for a GPU: graphics and compute. D3D12 exposes 2 main queue types, a universal queue (compute and graphics) and a compute queue. For Ashes, use of this feature involves taking compute jobs which are already part of the frame and marking them up in a way such that the hardware is free to coexecute them with other work. Hopefully, this is a relatively straightforward task. No additional compute tasks were created to exploit async compute. It is merely moving work that already exists so that it can run more optimally. That is, if async compute were not present, the work would be added to the universal queue rather than the compute queue. The work still has to be done, however.
> 
> The best way to think about it is that the scene that is rendered remains (virtually) unchanged. In D3D12 the work items are simply arranged and marked in a manner that allows parallel execution. Thus, *not using it when you could seems very close to intentionally sandbagging performance.*


----------



## Imouto

Quote:


> Originally Posted by *ZealotKi11er*
> 
> It all about path. You tell the GPU what paths the ASync should handle. Problem is that have to make special path for Nvidia.


No, just no. Asynchronous compute must be implemented at the hardware level. If you try to do something similar without it, you get a huge performance penalty. Pretty much the same thing happened with mining: AMD GPUs happened to have the right hardware for that task, and Nvidia was a billion years behind because it had to waste a ton of cycles on what took far fewer on AMD hardware.

I'm sorry if you believed the damage control about Nvidia fixing async compute on Maxwell GPUs by just updating their drivers.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Imouto*
> 
> No, just no. Asynchronous compute must be implemented on hardware level. If you want to do something similar you will get a huge performance penalty. Pretty much the same happened with mining. AMD GPUs happened to have the right hardware for that task, Nvidia was a billion years behind because it had to waste a ton of cycles for what it took way fewer for AMD hardware.
> 
> I'm sorry if you believed the damage control about Nvidia fixing ASync Compute on Maxwell GPUs by just updating their drivers.


No, I mean that if they use async for AMD, then they have to make a special path for Nvidia that does not use async, or Nvidia will face performance degradation.


----------



## magnek

Quote:


> Originally Posted by *xenophobe*
> 
> Agreeed and , sorry, didn't read the whole thread. And what I've said is basically the position I've had for years... subjective numbers don't really mean much to me as a consumer when you're not testing easily achieved overclocked speeds for both products, which should be done.
> 
> I didn't have to fight to get 1500/8000 on my GTX, and I'm on stock cooling. I'd expect any 980 to get 1500/8000 without trying too much, mine isn't some golden chip either.


Slightly OT, but I see a lot of people OC memory to 8000 and call it a day. The thing is, with GDDR5, because of its error-correcting ability, errors that start occurring due to instability, but are not serious enough to show up as artifacts or a crash, simply get buried, yet they still negatively impact performance. The reason I run my memory at 7800 is that after some benchmarking, I found that performance actually regressed slightly (about 3%) if I pushed memory to 8000 instead of 7800. So make sure you're not getting negative scaling with that memory OC.
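In practice that means benchmarking each memory step instead of assuming the highest stable clock wins. A minimal sketch of the bookkeeping (the fps figures here are made up purely for illustration):

```python
def best_mem_clock(results):
    """Given average fps measured at each memory clock (MHz effective),
    return the clock that actually performed best, which is not
    necessarily the highest one that was stable."""
    return max(results, key=results.get)

# Hypothetical sweep: past 7800, silent GDDR5 error retries eat the
# extra bandwidth, so average fps quietly regresses.
fps = {7000: 88.0, 7400: 90.5, 7800: 92.1, 8000: 89.3}
print(best_mem_clock(fps))  # 7800
```

Run the same repeatable benchmark at every step; if the curve peaks before your maximum stable clock, back off to the peak.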

Of course I could be preaching to the choir here, in which case


----------



## Imouto

Quote:


> Originally Posted by *ZealotKi11er*
> 
> No, I mean that if they use async for AMD, then they have to make a special path for Nvidia that does not use async, or Nvidia will face performance degradation.


For Nvidia the workload will be the same; it just won't have async compute to manage it, so it takes a performance hit.

That, or Nvidia will have fewer characters on screen, particles, physics and such, if that is what you mean by "path".


----------



## ZealotKi11er

Quote:


> Originally Posted by *Imouto*
> 
> For Nvidia the workload will be the same. It just won't have the ASync Compute to manage it getting a performance hit.
> 
> That or Nvidia will have less characters on screen, particles, physics and such if you mean that with "path".


It's more work for the dev. Most will just decide it's not worth it to implement 2 different paths.


----------



## boredgunner

Quote:


> Originally Posted by *pengs*
> 
> And while PhysX is pretty it has no real use other than visuals which can also be done in a more useful way without such overhead.


Advanced physics can greatly affect gameplay, depending on the game in question. PhysX is underrated and could seriously advance game development.


----------



## Imouto

Quote:


> Originally Posted by *ZealotKi11er*
> 
> It's more work for the dev. Most will just decide it's not worth it to implement 2 different paths.


Oh, we are not going to see games take advantage of this feature until Nvidia is ready. Be sure of that.

By the time async compute is in its prime, the current and older GCN cards will be obsolete, and all this fuss will have been for nothing. Nvidia is way smarter than AMD in this regard: implement something only when needed, or create that need.

The story just repeats again and again and again. By the time 64-bit software and OSs were common, the Athlon 64 generation was *ancient*.


----------



## Remij

Quote:


> Originally Posted by *ZealotKi11er*
> 
> It's more work for the dev. Most will just decide it's not worth it to implement 2 different paths.


Apparently it's easy though, so why aren't these initial DX12 games more asynchronously computed than they are?


----------



## Remij

Quote:


> Originally Posted by *Imouto*
> 
> Oh, we are not going to see taking any advantage of this feature until Nvidia is ready. Be sure of that.
> 
> By the time Async Compute is on prime the current and older GCN cards will be obsolete and all this fuss will be for nothing. Nvidia way smarter than AMD in this regard: implement something only when needed or create that need.
> 
> The story just repeats again and again and again. By the time 64 bit software and OSs were common the Athlon 64 generation was *ancient*.


Indeed the story does... everything is Nvidia's fault.


----------



## infranoia

Quote:


> Originally Posted by *ZealotKi11er*
> 
> It's more work for the dev. Most will just decide it's not worth it to implement 2 different paths.


Read @Kollock's quote above. It's not a special render path-- that's DX9, 10, 11 thinking. Either your tasks are asynchronous or they are not. If they are not asynchronous, then you're punishing all GPU vendors alike. AMD now, and Nvidia in the future.


----------



## Remij

Quote:


> Originally Posted by *infranoia*
> 
> Read @Kollock's quote above. It's not a special render path-- that's DX9, 10, 11 thinking. Either your tasks are asynchronous or they are not. If they are not asynchronous, then you're punishing all GPU vendors alike. AMD now, and Nvidia in the future.


Like AMD punished their users all throughout the DX11 era, I guess.


----------



## ZealotKi11er

Who is right though? Nvidia gives you reasons to upgrade more often, while AMD gives you fewer. I get very salty when AMD talks about the 300 series this and that without including the 200 series, given that they are identical. I would have a heart attack from all the salt if I had an Nvidia GPU.


----------



## Remij

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Who is right though? Nvidia gives you reasons to upgrade more often, while AMD gives you fewer. I get very salty when AMD talks about the 300 series this and that without including the 200 series, given that they are identical. I would have a heart attack from all the salt if I had an Nvidia GPU.


With AMD you always feel like the best is yet to come... or the support is yet to come... or the feature is yet to come... but it's coming.

With Nvidia, you feel like you have a high performing well supported product that has a lot of features that are important to gamers.

I'm in the minority probably, but as an enthusiast who upgrades often, I want what's best and most supported now. I respect that that's not the case for most people, and cards like the 390x offer incredible performance and have great longevity.


----------



## Kana-Maru

Quote:


> Originally Posted by *boredgunner*
> 
> Advanced physics can greatly affect gameplay, depending on the game in question. PhysX is underrated and could seriously advance game development.


Most AAA game engines have their own physics built in. So hopefully more and more engines continue to support it natively, so it can work across all platforms and GPUs evenly, without obstruction from any company.

Quote:


> Originally Posted by *Imouto*
> 
> Oh, we are not going to see taking any advantage of this feature until Nvidia is ready. Be sure of that.
> By the time Async Compute is on prime the current and older GCN cards will be obsolete and all this fuss will be for nothing. Nvidia way smarter than AMD in this regard: implement something only when needed or create that need.


Nah.... then it will be time to upgrade. GPUs run their course. If you make the right purchase you can easily get about 3-4 years out of a GPU. The 290X is in its 3rd year and has been great for users so far. The 295X2 is in its 2nd year and AMD still supports the CFX profiles. I still have friends running the old 2012 7970 GHz and getting good gameplay at 1080p. Once a card runs its course it's time to get rid of it. I ran my GTX 670s in 2-way SLI for 3 years, Nvidia forgot about me [Kepler], and I decided to upgrade. I had a choice of the 980 Ti or the Fury X. I went AMD this time around.

Quote:


> Originally Posted by *Imouto*
> 
> The story just repeats again and again and again. By the time 64 bit software and OSs were common the Athlon 64 generation was *ancient*.


That was just AMD pushing the market forward, as they normally do. Instead of focusing on serial and basic tech, they look down the line, and they have been right time after time.


----------



## infranoia

Quote:


> Originally Posted by *Remij*
> 
> Like AMD punished their users all throughout the DX11 era, I guess.


Cute but meaningless response, and not the current reality nor the topic of conversation. You're bleeding green.


----------



## Kana-Maru

Quote:


> Originally Posted by *Remij*
> 
> With AMD you always feel like the best is yet to come... or the support is yet to come... or the feature is yet to come... but it's coming.


OR you feel like the card is amazing. The temperature stays below 45°C. There's no need to overclock in order to enjoy 1440p/1600p and 4K gaming with no micro stutter or screen tearing.
I'm speaking from my actual Fury X experience. Speak for yourself. When I was running Nvidia GPUs all those years I got my money's worth. It's a new day and I'm enjoying 4K gaming with my latest GPU decision.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Master__Shake*
> 
> that graph is so indicative of hardware it's not even funny.
> 
> 
> 
> what year was this survey taken?
> 
> i haven't had a 1gb vram card since i bought a 4890.... at launch


Intel Graphics for sure.


----------



## 12Cores

Do we know if this game will see multiple GPUs as one? I'm really interested in that DirectX 12 feature.


----------



## Remij

Quote:


> Originally Posted by *Kana-Maru*
> 
> OR you feel like the card is amazing. The temperature stays below 45°C. There's no need to overclock in order to enjoy 1440p/1600p and 4K gaming with no micro stutter or screen tearing.
> I'm speaking from my actual Fury X experience. Speak for yourself. When I was running Nvidia GPUs all those years I got my money's worth. It's a new day and I'm enjoying 4K gaming with my latest GPU decision.


I thought it was clear that I was speaking from my own experience.

You actually thought the 'YOU' in my response was referring to you specifically and not generally? lol..


----------



## Kana-Maru

Quote:


> Originally Posted by *Remij*
> 
> It's the truth, so there's that. And secondly, who are you to say developers are punishing vendors if they don't code for async?
> 
> If they don't code for async, they are effectively not hindering performance more on one card than another. Unless of course you hate all devs that don't take advantage of every feature a GPU has and all games that don't make use of every feature... It's up to AMD to get developers to code for their architecture in the end. If they don't... blame AMD.


Did you not read what he posted earlier? The actual developer stated that. The entire reasoning behind Mantle, DX12 and Vulkan is low-level access and parallel execution. If you aren't going to use the tech properly then stick with a serial API like DX11.

http://www.overclock.net/t/1594186/various-hitman-2016-pc-directx-11-vs-directx-12-performance/260#post_24978151
Quote:


> Originally Posted by *infranoia*
> 
> Originally Posted by Kollock
> 
> I'm not sure you understand the nature of this feature. There are 2 main types of tasks for a GPU: graphics and compute. D3D12 exposes 2 main queue types, a universal queue (compute and graphics) and a compute queue. For Ashes, use of this feature involves taking compute jobs which are already part of the frame and marking them up in a way such that the hardware is free to co-execute them with other work. Hopefully, this is a relatively straightforward task. No additional compute tasks were created to exploit async compute. It is merely moving work that already exists so that it can run more optimally. That is, if async compute were not present, the work would be added to the universal queue rather than the compute queue. The work still has to be done, however.
> 
> The best way to think about it is that the scene that is rendered remains (virtually) unchanged. In D3D12 the work items are simply arranged and marked in a manner that allows parallel execution. Thus, not using it when you could seems very close to intentionally sandbagging performance.
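The queue split Kollock describes can be sketched with a toy timing model (plain Python, not real D3D12 code; the per-job millisecond timings below are invented purely for illustration):

```python
# Conceptual sketch of async compute: compute jobs that are already part
# of the frame either share the universal queue (serial) or co-execute on
# a separate compute queue (async). Timings are hypothetical.

def frame_time_serial(graphics_jobs, compute_jobs):
    # Without async compute, everything lands on the universal queue,
    # so the frame time is the sum of all job durations.
    return sum(graphics_jobs) + sum(compute_jobs)

def frame_time_async(graphics_jobs, compute_jobs):
    # With async compute, compute work runs alongside graphics; the frame
    # finishes when the longer of the two queues finishes. (Real hardware
    # overlap is messier than a simple max(), but the idea is the same.)
    return max(sum(graphics_jobs), sum(compute_jobs))

graphics = [4.0, 3.0, 5.0]   # ms, hypothetical draw passes
compute = [2.0, 1.5]         # ms, hypothetical lighting/AO dispatches

print(frame_time_serial(graphics, compute))  # 15.5 ms
print(frame_time_async(graphics, compute))   # 12.0 ms
```

Note the work itself never shrinks — exactly as Kollock says, it is the same jobs rearranged so they can overlap.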


----------



## Twist86

Quote:


> Originally Posted by *ZealotKi11er*
> 
> GTX 980 Ti 1380MHz lol. I hope this is not true.


I do. I want AMD to get strong so Nvidia won't keep milking us like Intel is currently doing with processors.


----------



## Remij

Quote:


> "I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented."


Someone then asks Nvidia and they say

https://twitter.com/PellyNV/status/702601030094041088

It should be case closed until further information is provided.


----------



## Mahigan

Quote:


> Originally Posted by *Remij*
> 
> Yea, he knows that the driver supports the feature, yet says to ask Nvidia how it works.
> Someone then asks Nvidia and they say
> 
> https://twitter.com/PellyNV/status/702601030094041088
> 
> It should be case closed until further information is provided.


Yes, Kollock knows that the Microsoft GPU profiling tool indicates that the NVIDIA drivers report the Async Compute feature as being available. That's from a developer's perspective.

A user asks NVIDIA and they don't answer why their driver exposes the feature, only that the feature itself is not available in their driver.

So, for some reason, NVIDIA's driver exposes a feature as being available when it isn't, and Microsoft still grants them WHQL status. This is rather interesting, no?

On the flip side, NVIDIA have been telling us to wait for 8 months now, which is a rather long waiting period for a simple feature to make its way into a driver.

The argument, put forward by some folks, is that there's no reason to implement Async Compute right now because no games use the feature.

Odd, considering NVIDIA have conservative rasterization available in their driver even though it has never been used in a game or publicly available benchmark so far...

Curious, no?


----------



## Remij

Quote:


> Originally Posted by *Mahigan*
> 
> Yes, Kollock knows that the Microsoft GPU profiling tool indicates that the NVIDIA drivers report the Async Compute feature as being available. That's from a developer's perspective.
> 
> A user asks NVIDIA and they don't answer why their driver exposes the feature, only that the feature itself is not available in their driver.
> 
> So, for some reason, NVIDIA's driver exposes a feature as being available when it isn't, and Microsoft still grants them WHQL status. This is rather interesting, no?
> 
> On the flip side, NVIDIA have been telling us to wait for 8 months now, which is a rather long waiting period for a simple feature to make its way into a driver.
> 
> The argument, put forward by some folks, is that there's no reason to implement Async Compute right now because no games use the feature.
> 
> Odd, considering NVIDIA have conservative rasterization available in their driver even though it has never been used in a game or publicly available benchmark so far...
> 
> Curious, no?


Maybe there's something wrong with Microsoft's GPU profiling tool.

And it's not really curious that ConRast hasn't been used in a game yet. But don't worry, I'm sure when it is, you'll be right there to blame AMD's terrible performance on it.


----------



## pengs

Quote:


> Originally Posted by *boredgunner*
> 
> Advanced physics can greatly affect gameplay, depending on the game in question.


Which was my point exactly.
Quote:


> PhysX is underrated and could seriously advance game development.


How so, other than aesthetically? I'm not sure PhysX itself can be implemented in any meaningful way to actually affect gameplay; that is the problem with it.


----------



## EightDee8D

Quote:


> Originally Posted by *Remij*
> 
> Maybe there's something wrong with Microsoft's GPU profiling tool.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And it's not really curious that ConRast hasn't been used in a game yet. But don't worry, I'm sure when it is, you'll be right there to blame AMD's terrible performance on it.


Of course, I guess that's why Nvidia isn't enabling async compute yet: Microsoft's GPU profiling tool doesn't work, so they can't really test it.

It will be implemented once Pascal launches and everyone will move on to it. Nobody will even remember this.


----------



## boredgunner

Quote:


> Originally Posted by *pengs*
> 
> Which was my point exactly.
> How so, other than aesthetically? I'm not sure PhysX itself can be implemented in any meaningful way to actually affect gameplay; that is the problem with it.


It's a physics engine, just the best one. Are you saying you can't see how physics can affect gameplay? Think of physics based games like Half-Life 2 and the Crysis games. Whether it's solving physics puzzles or using the environment against your enemies in procedural fashion. Or perhaps much better physics based spells in fantasy RPGs. Or a shooter in a war setting with advanced, procedural destruction, compensating for windage, and other things like this that have been attempted. PhysX would make them a lot better since it's by far the best game physics engine out there.


----------



## inedenimadam

Nvidia has a new line of cards coming out soon. What would the incentive to upgrade be if everybody still loved their 900 series and how well it runs under DX12?


----------



## Mahigan

Quote:


> Originally Posted by *Remij*
> 
> Maybe there's something wrong with Microsoft's GPU profiling tool.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And it's not really curious that ConRast hasn't been used in a game yet. But don't worry, I'm sure when it is, you'll be right there to blame AMD's terrible performance on it.


Provided that a game uses multi-threaded rendering (unlike GoW UE), software-based CR can be used in place of NVIDIA's tier 1 CR. Any voxel-based operations relying on CR are compute bound, so CR will alleviate some CPU overhead in the initial calculations required for voxel-based shadows or ambient occlusion.

I won't blame AMD's terrible performance unless the engine is gimped on purpose (no multi-threaded rendering). I look forward to seeing VXGI in action.


----------



## Maintenance Bot

Several cards were tested on the DX11 version. The reviewer notes in a footnote that they are all overclocked.

https://www.youtube.com/watch?v=3OKuvDr39PI


----------



## Mampus

I'm curious about frame times though. Hopefully DigitalFoundry will do the bench for the PC version (the PS4 and X1 versions are already out).
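For reference, frame time is just the inverse of the frame rate; outlets like DigitalFoundry report it in milliseconds because it exposes stutter that an fps average hides. A quick conversion in Python (the fps values are arbitrary examples):

```python
def fps_to_frametime_ms(fps):
    # A steady N fps means each frame has 1000/N milliseconds to render.
    return 1000.0 / fps

def frametime_ms_to_fps(ms):
    # The inverse mapping, useful when reading frame-time graphs.
    return 1000.0 / ms

print(round(fps_to_frametime_ms(60), 2))  # 16.67 ms
print(round(fps_to_frametime_ms(30), 2))  # 33.33 ms
print(frametime_ms_to_fps(16.0))          # 62.5 fps
```

A single frame spiking from ~16.7 ms to 50 ms is a visible hitch even if the per-second average still reads "60 fps".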


----------



## looniam

As fascinating as all this talk about NV drivers in AotS is, I'm not sure it's relevant:
(translated)
Quote:


> Unfortunately, we celebrated too early. The frame lock under DirectX 12 on Fiji (Radeon R9 Fury / Nano) remains, and Tonga (R9 380 [X], 285) is also affected. On Hawaii (R9 390 [X] / 290 [X]), however, frame rates are uncapped just as under DirectX 11. Possibly this is due to a driver error. The measurements are therefore of limited value and a retest will be necessary. We have already informed AMD and are waiting for feedback. We are supplementing our benchmarks at this point with the R9 280X, but the old Tahiti GPU cannot really benefit from DirectX 12, as it is only capable of feature level 11_0. . .
> . . . More graphics cards, including the GTX 970, GTX 780 Ti and R9 280 / 270X, will follow tomorrow. We also hope that by then we will be shown an approach to the frame-lock problem on Fiji and Tonga. The performance of the Hawaii GPUs at least suggests strong potential for gains under DirectX 12; unfortunately this cannot currently be validated with results from other Radeon GPUs.


just saying.


----------



## keikei

Quote:


> Originally Posted by *inedenimadam*
> 
> Nvidia has a new line of cards coming out soon. What would the incentive to upgrade be if everybody still loved their 900 series and how well it runs under DX12?


That is one hell of a business move and frankly makes _perfect_ business sense. I'll give Nvidia one thing: they know how to make $$.


----------



## SuperZan

Quote:


> Originally Posted by *Remij*
> 
> Apparently it's easy though, so why aren't these initial DX12 games more asynchronously computed than they are?


Obvious viridescence aside, and at the risk of tu quoque, I feel I have to ask: were you equally concerned with PhysX implementation? PhysX is at home with CUDA-equipped Nvidia, whilst AMD cards run PhysX on the CPU at a performance cost. PhysX was completely proprietary before 2015, and yet async compute as discussed in this thread is something that could have been implemented at the hardware level by Nvidia as well. They chose not to do so. Now people like @Kollock who are actively developing in DX12 tell us that it's a logical thing to implement if available, but we should all feel bad for Nvidia?


----------



## pengs

Quote:


> Originally Posted by *boredgunner*
> 
> It's a physics engine, just the best one. Are you saying you can't see how physics can affect gameplay? Think of physics based games like Half-Life 2 and the Crysis games. Whether it's solving physics puzzles or using the environment against your enemies in procedural fashion. Or perhaps much better physics based spells in fantasy RPGs. Or a shooter in a war setting with advanced, procedural destruction, compensating for windage, and other things like this that have been attempted. PhysX would make them a lot better since it's by far the best game physics engine out there.


No, what I'm saying is that PhysX itself is not able to affect gameplay in any way, which is down to how it functions; it's purely aesthetic. The Source engine used Havok physics and Crysis used Sandbox, I believe, both of which were implemented at the engine level.

For physics to affect the gameplay, world, player, etc., it has to be implemented at the engine level, which the PhysX API is not suited for. It's basically a drop-in built for aesthetics. No doubt it could have some real uses, and may even have been implemented at a lower level by a few developers, but its main purpose has no impact on gameplay.


----------



## EightDee8D

Quote:


> Originally Posted by *looniam*
> 
> as much as all this talk about NV drivers in AotS is fascinating but not sure its relevant:
> (translated)
> just saying.


Yes, someone already posted that before. Hopefully a hotfix/driver update will fix this, as Hawaii has no such limitations.


----------



## boredgunner

Quote:


> Originally Posted by *pengs*
> 
> No, what I'm saying is that PhysX itself is not able to affect gameplay in any way, which is down to how it functions; it's purely aesthetic. The Source engine used Havok physics and Crysis used Sandbox, I believe, both of which were implemented at the engine level.
> 
> For physics to affect the gameplay, world, player, etc., it has to be implemented at the engine level, which the PhysX API is not suited for. It's basically a drop-in built for aesthetics. No doubt it could have some real uses, and may even have been implemented at a lower level by a few developers, but its main purpose has no impact on gameplay.


I am of course speaking of PhysX used as the sole physics engine for a game, and GPU accelerated PhysX at that. This of course isn't used anymore, I'm just speaking hypothetically of what could have been.


----------



## looniam

Quote:


> Originally Posted by *EightDee8D*
> 
> yes someone already posted that before. hopefully a hotfix/ driver update will fix this. as Hawaii has no such limitations.


Yeah, I saw a mention that got ignored (for the most part). So until a hotfix/benchmark update, _just more gum flapping and posturing_?

Yeah, sounds right.

I'll just exit stage left (unsub).


----------



## EightDee8D

Quote:


> Originally Posted by *looniam*
> 
> Yeah, I saw a mention that got ignored (for the most part). So until a hotfix/benchmark update, _just more gum flapping and posturing_?
> 
> Yeah, sounds right.
> 
> I'll just exit stage left (unsub).


Isn't that how almost all game bench threads go?


----------



## SuperZan

Quote:


> Originally Posted by *EightDee8D*
> 
> Isn't that how almost all game bench threads go?


I often imagine Khrushchev banging the shoe on the dais in these threads.


----------



## KarathKasun

Quote:


> Originally Posted by *boredgunner*
> 
> I am of course speaking of PhysX used as the sole physics engine for a game, and GPU accelerated PhysX at that. This of course isn't used anymore, I'm just speaking hypothetically of what could have been.


No, PhysX was not well suited to general-purpose physics calculations, last I checked. It's really good at cloth and water type interactions when run on the GPU, but it is anywhere from marginally faster to marginally slower than a moderately fast CPU when doing whole-world physics interactions.

Havok in its OpenCL variants was arguably better, but not nearly as dev-friendly. Still, you can't build games on Nvidia-only tech (PhysX) without cutting out a decent chunk of the market; that is the main reason Havok is better in my eyes.


----------



## k3rast4se

https://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-AsynchronousBufferTransfers.pdf

Could we say asynchronous buffer transfers and asynchronous shaders are equivalent? After going through this document, both seem to be related to transferring work queues from the CPU to the GPU.


----------



## boredgunner

Quote:


> Originally Posted by *KarathKasun*
> 
> No, PhysX was not well suited to general-purpose physics calculations, last I checked. It's really good at cloth and water type interactions when run on the GPU, but it is anywhere from marginally faster to marginally slower than a moderately fast CPU when doing whole-world physics interactions.
> 
> Havok in its OpenCL variants was arguably better, but not nearly as dev-friendly. Still, you can't build games on Nvidia-only tech (PhysX) without cutting out a decent chunk of the market; that is the main reason Havok is better in my eyes.


I've never seen PhysX implemented on a huge scale, but judging purely on realism I've never found anything better for any kind of physics interaction. Maybe a mixed GPU and CPU implementation would be best, I can't say. But yes, PhysX would have to be OpenCL based and open standard for it to be truly relevant. Open standard and NVIDIA... 0% chance. Back to 10+ year old physics we go.


----------



## ZealotKi11er

Quote:


> Originally Posted by *boredgunner*
> 
> I've never seen PhysX implemented on a huge scale, but judging purely on realism I've never found anything better for any kind of physics interaction. Maybe a mixed GPU and CPU implementation would be best, I can't say. But yes, PhysX would have to be OpenCL based and open standard for it to be truly relevant. Open standard and NVIDIA... 0% chance. Back to 10+ year old physics we go.


There are two different kinds of PhysX: the GPU-accelerated effects Nvidia used to have in games like Batman, Mafia 2, Borderlands etc., and the one that does in-game physics, as in pCARS. The GPU-accelerated PhysX is basically part of GameWorks, from my understanding. The other one runs on the CPU.


----------



## boredgunner

Quote:


> Originally Posted by *ZealotKi11er*
> 
> There are two different kinds of PhysX: the GPU-accelerated effects Nvidia used to have in games like Batman, Mafia 2, Borderlands etc., and the one that does in-game physics, as in pCARS. The GPU-accelerated PhysX is basically part of GameWorks, from my understanding. The other one runs on the CPU.


"GameWorks" is fairly new, I'm not even sure if it has a concrete meaning since it's largely marketing. Then there's "FleX" which also stems from PhysX.


----------



## ZealotKi11er

Quote:


> Originally Posted by *boredgunner*
> 
> "GameWorks" is fairly new, I'm not even sure if it has a concrete meaning since it's largely marketing. Then there's "FleX" which also stems from PhysX.


Actually, it's almost 2 years old now; it was introduced with Maxwell. It's basically a way for Nvidia to hand all their IP to game devs for ease of use. Nvidia had to keep a strong grasp on game development after losing both consoles. If GW did not exist, market share would be different. PhysX never affected AMD and was a nice thing.


----------



## boredgunner

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Actually, it's almost 2 years old now; it was introduced with Maxwell. It's basically a way for Nvidia to hand all their IP to game devs for ease of use. Nvidia had to keep a strong grasp on game development after losing both consoles. If GW did not exist, market share would be different. PhysX never affected AMD and was a nice thing.


Time really flies then. What I meant, though, is that I don't think GameWorks refers to any specific effects; it just means the game will be using some NVIDIA technology (HBAO+ perhaps being the most common). And yes, PhysX was largely ignored even when it was heavily promoted. GameWorks, of course, is damaging.


----------



## ZealotKi11er

Quote:


> Originally Posted by *boredgunner*
> 
> Time really flies then. What I meant, though, is that I don't think GameWorks refers to any specific effects; it just means the game will be using some NVIDIA technology (HBAO+ perhaps being the most common). And yes, PhysX was largely ignored even when it was heavily promoted. GameWorks, of course, is damaging.


The suite used right now is called VisualFX. It includes HairWorks, HBAO+, fire, smoke etc., and VisualFX can run on both AMD and Nvidia.
PhysX is also part of GameWorks, but last I checked they have not used it. Just using some of the VisualFX stuff is already overloading developers.
Some of the effects are cool, but I think devs should either use open stuff or take Nvidia's stuff and make it their own. Most devs do not need to rely on AMD and Nvidia effects; I feel like the ones that want money do so. The Witcher 3 devs made the wrong choice to support GameWorks. I think they added GW at the very end for some fast cash...


----------



## boredgunner

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Witcher 3 dev made the wrong choice to support GameWorks. I think they added GW in the very end for some fast cash...


Game devs are also lazy and will take any opportunity to avoid doing more work themselves. Another common example of this is with audio APIs: devs not using OpenAL because there is more work involved (but the results can be better).


----------



## ZealotKi11er

Quote:


> Originally Posted by *boredgunner*
> 
> Game devs are also lazy and will take any opportunity to avoid doing more work themselves. Another common example of this is with audio APIs and devs not using OpenAL because there is more work involved (but the results are better).


TrueAudio never saw the light of day. I think that's for the better, since it was exclusive to AMD. Devs after a point are mostly porting to PC, and even when they take their time they will only go to certain lengths before calling it quits.


----------



## infranoia

Quote:


> Originally Posted by *ZealotKi11er*
> 
> TrueAudio never saw the light of day. I think that's for the better, since it was exclusive to AMD. Devs after a point are mostly porting to PC, and even when they take their time they will only go to certain lengths before calling it quits.


The only saving grace for TrueAudio is that there is a Wwise plugin for it, so it's easy to drop-in, and it can save up to 10% CPU usage.*

*according to some made-up stuff I read on Wikipedia.


----------



## Mahigan

Quote:


> Originally Posted by *boredgunner*
> 
> Game devs are also lazy and will take any opportunity to avoid doing more work themselves. Another common example of this is with audio APIs and devs not using OpenAL because there is more work involved (but the results are can be better).


I think it is unfair to claim that game developers are lazy. I don't think they're lazy; I think the time constraints placed upon them by their publishers play a large role here.

I think that Kickstarter and other such mediums help developers free themselves from publishers. This in turn grants developers the ability to focus on their art and deliver in-house solutions, rather than having their hands tied by a Ubisoft or EA crew of executives. I don't think developers like GameWorks all that much, but the publishers certainly do: GameWorks cuts down on development time. Although I'm sure some less skilled developers certainly like the idea of premade libraries.


----------



## degenn

After reading through the *32 pages of drivel in 12hrs* that is this thread, I can't help but laugh... I mean wow, OCN...


That original thread-title was hilarious, too.


----------



## PostalTwinkie

Quote:


> Originally Posted by *Remij*
> 
> He's the same guy that talks like he knows more about Nvidia's drivers than they do, right?


Eh, it was a legitimate mistake... see below.
Quote:


> Originally Posted by *Mahigan*
> 
> NVIDIA's Direct3D driver exposes Async Compute as being supported when using the Microsoft developer tools. So from a developer's p.o.v., Async Compute is exposed by the NVIDIA driver as being functional. He was right.
> 
> NVIDIA are the ones doing something fishy: they expose the feature as being available when it isn't. NVIDIA claim that the feature will be made available in a future driver update.


----------



## Unkzilla

No doubt AMD has a big lead in this title.

Seems my 980 will level up to the 390 with an overclock. It will be interesting to see more DX12 titles and whether Nvidia can close the gap at all with some driver revisions.

This game hasn't been high on my to-buy list, so I'm happy to wait it out anyway.

And as I have commented on all recent new releases: I don't see how the visuals warrant the performance achieved on either side.


----------



## Dargonplay

Quote:


> Originally Posted by *Unkzilla*
> 
> No doubt AMD has a big lead in this title.
> 
> Seems my 980 will level up to the 390 with an overclock. It will be interesting to see more DX12 titles and whether Nvidia can close the gap at all with some driver revisions.
> 
> This game hasn't been high on my to-buy list, so I'm happy to wait it out anyway.
> 
> And as I have commented on all recent new releases: I don't see how the visuals warrant the performance achieved on either side.


The amount of polygons, real-time calculations, assets and scripts happening on screen is absolutely stunning. Every asset has its own physics processed in real time; there are thousands of assets at any given moment, each with its own individually calculated shooting trajectory that respects all other in-game objects and their physics. Every one of them has its own turret with complex physics and highly detailed characteristics, all of them are interactive, and the number doing all of this at the same time is staggering.

The game is probably one of the most advanced graphical examples available, and its performance is superb. What happens is that people often equate eye candy with the FPS genre's quest for "realism".


----------



## YellowBlackGod

290/390 series GPUs are the best value-for-money cards for DirectX 12, performing almost equal to the 980 Ti. Sooner or later DX12 will dominate as the main API, along with Vulkan perhaps.


----------



## blue1512

Can anyone necro my thread

http://www.overclock.net/t/1561394/dx-12-will-bring-290x-to-the-level-of-980ti


----------



## Tonza

What exactly is the fuss? I see 10% improvements in FPS compared to the DX11 renderer, and I thought this new Hitman was supposed to have the "best usage of async compute" ever. Is there even a difference in graphics between DX11 and DX12? Not to mention this is an AMD Gaming Evolved title; if it were GameWorks, everyone would go mad.


----------



## mtcn77

Quote:


> Originally Posted by *Tonza*
> 
> What exactly is the fuss? I see 10% improvements in FPS compared to the DX11 renderer, and I thought this new Hitman was supposed to have the "best usage of async compute" ever. Is there even a difference in graphics between DX11 and DX12? Not to mention this is an AMD Gaming Evolved title; if it were GameWorks, everyone would go mad.


Well... yes, they would. It would be quite stupid to argue otherwise - no offense.


----------



## Tonza

Quote:


> Originally Posted by *mtcn77*
> 
> Well... yes, they would. It would be quite stupid to argue otherwise - no offense.


Care to explain what those two pictures are supposed to show? All I see is Rise of the Tomb Raider (2016) and the old Tomb Raider from 2013.


----------



## mtcn77

Quote:


> Originally Posted by *Tonza*
> 
> Care to explain what those two pictures are supposed to show? All I see is Rise of the Tomb Raider (2016) and the old Tomb Raider from 2013.


Exactly. Gaming Evolved versus GameWorks versions make for stark contrasts.


----------



## Kana-Maru

Quote:


> Originally Posted by *Tonza*
> 
> What exactly is the fuss? I see 10% improvements in FPS compared to the DX11 renderer, and I thought this new Hitman was supposed to have the "best usage of async compute" ever. Is there even a difference in graphics between DX11 and DX12? Not to mention this is an AMD Gaming Evolved title; if it were GameWorks, everyone would go mad.


No one would care about GameWorks if it didn't have a well-known crap history with AMD and even Nvidia's own GPUs. I found myself disabling GW since it wasn't worth the frame rate/frame time loss. If GW can be disabled I'm usually fine with that, but time and time again I'm seeing GW titles have serious issues at launch, which prompts several update patches, and by then everyone gives AMD a thumbs down due to crappy releases while they are still optimizing for their GPUs.

DX11 vs DX12 isn't really the way to think about it. DX12 offers beneficial improvements that will push PC gaming forward: it will help reduce power consumption and it provides a much-needed parallel pipeline, among other benefits.

I'm pretty sure if this were a GameWorks title with Nvidia behind it, it would be DirectX 11 only.

You are also missing the fact that an AMD GPU that's at least $320 cheaper than the 980 Ti is only 15% behind it. The 980 Ti has a huge overclocking advantage as well. In other words, the 980 Ti costs nearly 100% more than the R9 390 and is only showing a 15% performance increase. An extra $320+ for 7-8 fps is laughable. 980 Ti users aren't gaming at 1080p, unless they just have money to waste on a low resolution.
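To put rough numbers on that comparison (a sketch only; the $330/$650 prices and the 50 fps baseline are assumptions chosen to match the "$320 gap, 15% ahead, 7-8 fps" figures quoted above):

```python
# Hypothetical price/performance check for the R9 390 vs GTX 980 Ti claim.
price_390, price_980ti = 330.0, 650.0   # US$, assumed street prices
fps_390 = 50.0                          # hypothetical R9 390 baseline
fps_980ti = fps_390 * 1.15              # "only 15% ahead" per the post

extra_cost = price_980ti - price_390    # 320.0 dollars
extra_fps = fps_980ti - fps_390         # ~7.5 fps, inside the 7-8 fps range
print(round(extra_cost / extra_fps, 1))  # 42.7 dollars per additional frame
```

The absolute dollars-per-fps figure shifts with the chosen baseline, but the shape of the argument — roughly double the price for a mid-teens percentage gain — holds under any reasonable fps assumption.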


----------



## Unkzilla

Quote:


> Originally Posted by *Dargonplay*
> 
> The amount of polygons, real-time calculations, assets and scripts happening on screen is absolutely stunning. Every asset has its own physics processed in real time; there are thousands of assets at any given moment, each with its own individually calculated shooting trajectory that respects all other in-game objects and their physics. Every one of them has its own turret with complex physics and highly detailed characteristics, all of them are interactive, and the number doing all of this at the same time is staggering.
> 
> The game is probably one of the most advanced graphical examples available, and its performance is superb. What happens is that people often equate eye candy with the FPS genre's quest for "realism".


Fair enough. Seems like one of those games that might be a bit more impressive in motion, then.

Will pick it up once I get through The Division and a few others.


----------



## blue1512

Anyone remember the shady move by nVidia to delay the DX10 patch for Assassin's Creed 2? Just because their flagship at the time ate dust in DX10.

Luckily, this time M$ needs DX12 to save the crappy XB1, so they will push it hard enough to prevent such moves from nVidia.


----------



## Kana-Maru

Quote:


> Originally Posted by *blue1512*
> 
> Anyone remember the shady move by nVidia to delay the DX10 patch for Assassin's Creed 2? Just because their flagship at the time ate dust in DX10.
> 
> Luckily, this time M$ needs DX12 to save the crappy XB1, so they will push it hard enough to prevent such moves from nVidia.


Anyone remember when Nvidia had Ubisoft remove DirectX 10.1 support from Assassin's Creed 1? Nvidia didn't support 10.1, but ATI did. DX 10.1 required fewer passes to get the most out of the quality settings [running things like AA etc.]. Assassin's Creed was an Nvidia TWIMTBP title. Similar to Intel's contracts with vendors like Dell, Nvidia's TWIMTBP titles pretty much show that money talks. They can't stand to see someone else ahead, even if it's a legit win for ATI/AMD.


----------



## caswow

Some people say TR2016 had DX12 and async.







Good thing it's coming back soon.


----------



## Charcharo

Quote:


> Originally Posted by *boredgunner*
> 
> Advanced physics can greatly affect gameplay, depending on the game in question. PhysX is underrated and could seriously advance game development.


It can, but it's never used correctly. And since it's proprietary... it won't get much use. An unfortunate casualty of dev stupidity, PC gamer stupidity, AMD incompetence, and Nvidia's behavior.









Such is life in gaming.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Kana-Maru*
> 
> Anyone remember when Nvidia had Ubisoft remove DirectX 10.1 from Assassin's Creed 1. Nvidia didn't support 10.1, but ATI did. DX 10.1 required fewer passes when getting the most out of the quality [running things like AA etc.] Assassin's Creed was an Nvidia TWIMTBP title. Similar to Intel contracts with vendors like Dell, Nvidia TWIMTBP titles pretty much shows that money talks. They can't stand to see someone else ahead even if it's a legit win for ATI\AMD.


Actually, you can think of ARK: Survival Evolved... "we can't give DX12 support because... of reasons."


----------



## Pro3ootector

Does anyone remember the GeForce 5900 Ultra? My first high-end GPU choice was between it and the Radeon 9800 Pro.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Charcharo*
> 
> It can, but it is never used correctly. And since its proprietary ... it wont get much use. An unfortunate casualty of dev stupidity, PC gamer stupidity, AMD incompetence and Nvidia's behavior
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Such is life in gaming.


Actually, you need to rethink whether PhysX is even usable at all when Havok is in pretty much every game:
http://www.havok.com/showcase/


----------



## Charcharo

Quote:


> Originally Posted by *airfathaaaaa*
> 
> actually you need to rethink about physx and if its useable at all while there is havok on pretty much every game
> http://www.havok.com/showcase/


I read more posts after that.

Havok seems good, but I will still say the same.

For some stupid reason gamers pay more attention to graphics than they do to AI or physics. Which is sad.

Havok can still do greater things.


----------



## k3rast4se

Quote:


> Originally Posted by *Kana-Maru*
> 
> No one would care about Gameworks if it didn't have a well known crap history with AMD and even Nvidia's own GPUs. I found myself disabling GWs since it wasn't worth the Frame Rate\Time loss. If GWs can be disabled I'm usually fine with that, but time and time again I'm seeing GWs titles have some serious issues at launch. Which prompts several update patches and by then everyone gives AMD a thumb down due to crap releases while trying to optimize their GPUs.
> 
> DX11 vs DX12 isn't really the way to think about it. DX12 offers more beneficial improvements that will push PC gaming forward. DX12 will help reduce power consumption and provides a well needed parallel pipeline. Of course there's other benefits.
> 
> I'm pretty sure if this was a Gameworks title it would be DirectX 11 for sure if Nvidia was behind it.
> 
> You are also missing the fact that a AMD GPU that's at least $320+ cheaper than the 980 Ti is only 15% behind it. The 980 Ti has a huge overclock advantage of the card as well. In other words the 980 Ti cost nearly 100+% more than than the R9 390 and is only showing a 15% performance increase. An extra $320+ for 7-8fps is laughable. 980Ti users aren't gaming @ 1080p, well unless they just have money to waste on a low resolution.


Money to waste and low resolution?!? The 980 Ti is one of the only cards able to max the latest games @ 60 FPS at 1080p. I don't see that as wasting money. 4K gaming might be there, but the hardware is not ready. Unless Pascal/Polaris offer 3-4x the processing power with 8GB of HBM2 memory (which I highly doubt), the next generation won't be enough for 4K/60fps @ max settings.


----------



## huzzug

Quote:


> Originally Posted by *k3rast4se*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Kana-Maru*
> 
> No one would care about Gameworks if it didn't have a well known crap history with AMD and even Nvidia's own GPUs. I found myself disabling GWs since it wasn't worth the Frame Rate\Time loss. If GWs can be disabled I'm usually fine with that, but time and time again I'm seeing GWs titles have some serious issues at launch. Which prompts several update patches and by then everyone gives AMD a thumb down due to crap releases while trying to optimize their GPUs.
> 
> DX11 vs DX12 isn't really the way to think about it. DX12 offers more beneficial improvements that will push PC gaming forward. DX12 will help reduce power consumption and provides a well needed parallel pipeline. Of course there's other benefits.
> 
> I'm pretty sure if this was a Gameworks title it would be DirectX 11 for sure if Nvidia was behind it.
> 
> You are also missing the fact that a AMD GPU that's at least $320+ cheaper than the 980 Ti is only 15% behind it. The 980 Ti has a huge overclock advantage of the card as well. In other words the 980 Ti cost nearly 100+% more than than the R9 390 and is only showing a 15% performance increase. An extra $320+ for 7-8fps is laughable. 980Ti users aren't gaming @ 1080p, well unless they just have money to waste on a low resolution.
> 
> 
> 
> Money to waste and low resolution?!? The 980ti is one of the only cards able to max latest game @ 60 fps in 1080p. I don't see this as wasting money. 4k gaming might be there but hardware is not ready. Unless pascal/polaris offer 3/4x processing power with 8gb of hbm2 memory (which I highly doubt), next generatiom won't be enough for 4k/60fps @ max settings.

You've literally placed goalposts across the entire playing field, rendering any argument offside. Defend your stance before moving to the next one, rather than splitting hairs.


----------



## Serios

Quote:


> Originally Posted by *ZealotKi11er*
> 
> GTX 980 Ti 1380MHz lol. I hope this is not true.


And I think it goes even higher than 1380MHz.


----------



## sblantipodi

Quote:


> Originally Posted by *caswow*
> 
> some people say TR2016 had dx12 and async
> 
> 
> 
> 
> 
> 
> 
> good thing its coming back soon


What is TR2016?
I have finished Tomb Raider 2016 and it has no DX12.


----------



## k3rast4se

Quote:


> Originally Posted by *huzzug*
> 
> You literally have placed goalposts on the entire playing field rendering any arguments on the offside. Defend your stance then move to the next rather than splitting grass.


My stance has been the same since the beginning. Here's one phrase to sum it up.

What matters is today's games at today's resolution, aka 1080p.


----------



## Silent Scone

Quote:


> Originally Posted by *iLeakStuff*
> 
> Yes but kinda useless without Fury X numbers


More like the Fury X numbers are useless. When was the last time you saw reasonable stock levels?


----------



## huzzug

Quote:


> Originally Posted by *k3rast4se*
> 
> Quote:
> 
> 
> 
> Originally Posted by *huzzug*
> 
> You literally have placed goalposts on the entire playing field rendering any arguments on the offside. Defend your stance then move to the next rather than splitting grass.
> 
> 
> 
> My stence is the same since the beginning. Here's one phrase to resume it.
> 
> What matter is today games at today resolution aka 1080p.

Oohh, didn't know we were in 2001. If Nvidia & AMD made cards only for today's screens, we'd never move to the future. Besides, all of us are saying that anything above a 970 is high end, which sort of dictates the grunt required to run today's games at future resolutions. If you want to talk about 1080p, talk about the 970, 390, 380, 960, 950, etc. Even there, Nvidia seems to lack performance compared to AMD in these DX12 benches. Your move!!


----------



## caswow

I actually wish that people who say Nvidia doesn't need DX12 or anything related to it would get only DX11 paths, so they can stay where Nvidia wants them to be, just like Kepler is and Maxwell will be.









This whole thread is out of this world. A psychologist's goldmine right there.


----------



## Devnant

Has anybody noticed the worse DX12 performance on NVIDIA above 1080p compared to DX11? It's the same thing with Ashes.

For instance, at 4K the 980 Ti goes from 37.2 FPS on DX11 to 36.7 FPS on DX12.

DX12 is a net negative for NVIDIA, as seen first in Ashes and now in Hitman.


----------



## rage fuury

Update 3 (with a workaround for Fiji cards):
http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


----------



## Noufel

Quote:


> Originally Posted by *Devnant*
> 
> Anybody noticed the worse DX12 performance on NVIDIA going above 1080p resolutions comparing to DX11? It´s the same thing with Ashes.
> 
> For instance, at 4K, the 980 TI goes from 37.2 FPS on DX11 to 36.7 FPS on DX12.
> 
> There are negative DX12 benefits for NVIDIA, as seen first on Ashes and now Hitman.


The same goes for the Fury at 2K: it goes from 59.1 FPS on DX11 to 58.6 FPS on DX12.








Bottom line: I think we will see the true DX12 benefits in the upcoming Polaris and Pascal GPUs.


----------



## killerhz

Quote:


> Originally Posted by *SuperZan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *killerhz*
> 
> win what? release day benchmarks?
> 
> well i have a Ti and hope to hell it plays this game smoothly.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Not for nothing but those day-of benchmarks have been central to Nvidia's marketing strategy.
> 
> Ti's a great card though, you shouldn't have any issue.

True. I myself don't put much stock in day-one benchmarks. Being an Nvidia fanboy, I know all too well: all seems good, then boom, the game doesn't go so well.
Either way, I've been waiting for this game; I just wish it weren't episodic, waiting around forever for the next episode and so on.

Can't wait to get home to play it today. Will make sure to post my benchmarks here.


----------



## Devnant

Quote:


> Originally Posted by *Noufel*
> 
> the same goes for the fury at 2K it goes from 59.1 FPS on DX11 to 58.6 FPS on DX12
> 
> 
> 
> 
> 
> 
> 
> 
> bottom line : i think we will see true DX12 beneficts in the upcomming polaris and pascal GPUs


Yeah, and that's really weird. But above 2K the Fury still gets big benefits from DX12 (at 4K, from 33.3 FPS on DX11 to 38.1 FPS on DX12).

It's just the 980 Ti that is consistently worse off in DX12 above 1080p.


----------



## Noufel

I think Mahigan can give us some explanations for these weird results: why Maxwell GPUs take a benefit from DX12 (and I suppose async too) at 1080p like the Hawaii and Fiji GPUs do, and why the Fiji GPU takes a hit at 2K.
At 4K the FPS are too low to use as a reference; the 980 Ti, Fury and 390X are within the same margin, +/- 2 FPS.
Async is a weird thing.


----------



## Devnant

Quote:


> Originally Posted by *Noufel*
> 
> i think Mahigan can give us some explanations for those weird results, why maxwell gpus are taking benefict from DX12 ( and i suppose Async to ) in 1080p as the hawaii and fiji GPUs and for the hit that takes the fiji gpu at 2K.
> at 4K the fps are too low to take it as a refference 980ti, fury and the 390X are in the same margine +/- 2 fps.
> Async is a weird thing


You know, I've tested the Ashes of the Singularity benchmark many times at home. Even with async compute disabled, DX11 performance is still better than DX12. I think Mahigan has mentioned before that NVIDIA's DX11 drivers are already very close to the metal, so there's nothing to gain.

Fact is, if you have an NVIDIA GPU and play above 1080p, you're better off sticking with DX11 in these first two titles.


----------



## NvNw

Quote:


> Originally Posted by *ZealotKi11er*
> 
> There are 2 different PhysX. The one Nvidia used to have in games like Batman, Mafia 2, Borderlands etc. and the one that does in game physics like in PCars. The GPU Accelerated PhysX is basically GameWorks from my understanding. The other one runs on CPU.


Not really, PhysX in those games is the same. The difference is that in games like Batman, Mafia 2, etc., PhysX is an optional setting you can turn on or off depending on whether you have support... whereas in Project Cars the physics engine that runs the game (a core element of the process) is optimized to run on PhysX. On AMD cards that same physics work is done on the CPU, which makes AMD cards slower because of how tightly the developer tied the game to PhysX.


----------



## Noufel

Quote:


> Originally Posted by *Devnant*
> 
> You know, I´ve tested back home many times the Ashes of the Singularity benchmark. Even disabling async compute, DX11 performance is still better than DX12. Think Mahigan has mentioned before that NVIDIA´s DX11 drivers are already very close to the metal, so there are no benefits because of that.
> 
> Fact is, if you have a NVIDIA GPU, and play above 1080p, you´re better off sticking with DX11 on these first two titles.


Exactly what I was thinking.








I hope that in future DX12 titles the benefits won't be this marginal for either vendor.


----------



## Devnant

Quote:


> Originally Posted by *Noufel*
> 
> exactly what i was thinking
> 
> 
> 
> 
> 
> 
> 
> 
> i hope that in future DX12 titles the beneficts won't be this marginal for both venders


I disagree a little bit. There are great benefits for AMD right now going to DX12. I mean, looking at the Fury at 4K in Hitman, going from DX11 to DX12 is about a 15% performance improvement! That's pretty good!
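For what it's worth, the quoted uplift checks out; a quick sanity check using the 4K Fury figures cited above:

```python
# Sanity check of the quoted uplift: Fury at 4K in Hitman,
# 33.3 FPS (DX11) -> 38.1 FPS (DX12), per the numbers above.
dx11_fps = 33.3
dx12_fps = 38.1
uplift_pct = (dx12_fps - dx11_fps) / dx11_fps * 100
print(f"DX12 uplift: {uplift_pct:.1f}%")  # -> DX12 uplift: 14.4%
```

Call it ~15% in round numbers.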


----------



## rage fuury

Another test for Hitman DX12 @computerbase.de:

http://www.computerbase.de/2016-03/hitman-benchmarks-directx-12/2/#diagramm-hitman-mit-directx-12-3840-2160


----------



## Noufel

Quote:


> Originally Posted by *Devnant*
> 
> I disagree a little bit. There are great benefits for AMD right now going to DX12. I mean, looking at that Fury at 4K on Hitman, going from DX11 to DX12 is about 15% better performance! That´s pretty good!


Yes indeed, but the percentage of 4K users is very small; that's what I was referring to.


----------



## BradleyW

Hopefully we see some more performance reviews today.


----------



## Devnant

Quote:


> Originally Posted by *rage fuury*
> 
> Another test for Hitman DX12 @computerbase.de:
> 
> http://www.computerbase.de/2016-03/hitman-benchmarks-directx-12/2/#diagramm-hitman-mit-directx-12-3840-2160


As I'd imagined, comparing stock vs. stock, the 390X is already ahead of the 980 Ti.


----------



## PontiacGTX

Quote:


> Originally Posted by *rage fuury*
> 
> Another test for Hitman DX12 @computerbase.de:
> 
> http://www.computerbase.de/2016-03/hitman-benchmarks-directx-12/2/#diagramm-hitman-mit-directx-12-3840-2160


An R9 390X in DX11 is faster than a 980 Ti in DX12, even at 1080p.


----------



## Devnant

Quote:


> Originally Posted by *PontiacGTX*
> 
> A R9 390x in DX11 is faster than a 980Ti on DX12,even on 1080


AMD Gaming *Devolved* for NVIDIA hahaha


----------



## BradleyW

Those 1080p results are very good for AMD. It shows that driver overhead is not much of a problem.

DX12 can now tap into everything GCN has to offer, and it shows how limited Nvidia GPUs look when they aren't fed the single-threaded workloads they're tuned for. I'm sure Nvidia's next GPUs will do better at DX12, but we'll have to wait and see what comes of all this.


----------



## GorillaSceptre

It also proves what Oxide and Chris Roberts said: optimizing/refactoring for DX12 will also benefit games in DX11. A win-win situation for AMD going forward.


----------



## zealord

Hey guys I am back.

I will update the OP now with the benchmark results.

Sorry that it took so long!


----------



## sblantipodi

Quote:


> Originally Posted by *rage fuury*
> 
> Update 3 (with a workaround for Fiji cards):
> http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


This is another proof of what I said previously.
Async compute will improve things but will not revolutionize anything.

The Fury Nitro remains comparable to the 980 Ti in async titles and remains well behind in other titles.
Async is like Mantle: good, but overrated technology.

There is no "free upgrade" from unlocking this feature, so stay calm and keep playing.


----------



## zealord

Updated OP with computerbase benchmarks. Great comparison for DX11/DX12 directly. Probably stock cards ! (before you guys freak out)


----------



## Devnant

Quote:


> Originally Posted by *Stige*
> 
> That is just fanboy talk when a 390X almost matches your overpriced 980Ti
> 
> 
> 
> 
> 
> 
> 
> 
> 
> DX12 is the future, and Nvidia is far behind in it.


Well, the current trend is that DX12 makes the 980 Ti perform worse in all AMD Evolved titles.

So you may actually be right, at least as far as AMD Evolved titles are concerned.


----------



## Stige

Quote:


> Originally Posted by *Devnant*
> 
> Well, the current trend is: DX12 makes 980 TI perform worse on all AMD Evolved titles.
> 
> So you may actually be right, at least as far as AMD Evolved titles are concerned.


Excuses









It won't perform any better


----------



## OneB1t

The Fury and Fury X have power efficiency comparable to the GTX 980 Ti/980.


----------



## sblantipodi

Quote:


> Originally Posted by *Stige*
> 
> That is just fanboy talk when a 390X almost matches your overpriced 980Ti
> 
> 
> 
> 
> 
> 
> 
> 
> 
> DX12 is the future, and Nvidia is far behind in it.


I'm not a fanboy. I've had many AMD/ATI cards and AMD CPUs in the past, I like having a choice, and I'm glad that AMD is choosing
open standards over closed ones, in contrast to Nvidia.

I know that the 390X is much cheaper than my GTX 980 Ti; surely it obliterates my 980 Ti on price/performance, but that is not the point of my discussion.

*Async compute is not a game changer; in fact it changed nothing even on AMD.
*DX12 performance is roughly equal to DX11's.

The R9 390X is this fast because it is a good card, and it is comparable with a GTX 980 Ti even without async compute.
The only scenarios where AMD cards are not comparable to Nvidia's are when there is GameWorks or heavy tessellation, and that is not the case here.

As I said, Hitman in DX12 demonstrated only one thing: AMD cards are excellent when not using GameWorks/tessellation, with or without async,
and async simply changed nothing.

So please stop saying that async is the new game changer. There will be no revolution in this sense.


----------



## huzzug

Quote:


> Originally Posted by *sblantipodi*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Stige*
> 
> That is just fanboy talk when a 390X almost matches your overpriced 980Ti
> 
> 
> 
> 
> 
> 
> 
> 
> 
> DX12 is the future, and Nvidia is far behind in it.
> 
> 
> 
> I'm not a fanboy, I had many AMD/Ati cards and AMD cpus in past, I like to choose and I'm glad that AMD is choosing
> open standards over close one in comparison to nvidia.
> 
> I now that 390X is really cheaper than my GTX980 Ti, surely it obliteratte my 980 Ti for the price/quality ratio but this is not the point of my discussion.
> 
> *Async Compute is not a game changer, infact it changed nothing even on AMD.
> *DX12 performance are quite equal to the DX11 one.
> 
> R390X is as fast because it is a good card and it is comparable with a GTX980 Ti even without async compute.
> The only scenarios where AMD cards are not comparable to the nvidia one is when there is gameworks or heavy tessellation but this is not the case.
> 
> As I saied hitman in dx12 demonstrated only one things, AMD cards are excellent when not using gameworks/tessellation with or without async
> and async simply changed nothing.
> 
> So please stop saying that async is the new game changer. There will be revolution in this sense.

You got the 980 and the 980 Ti mixed up really badly. The 390X competes with the 980 (which is not in this benchmark; its FPS would be so far down the chart that they didn't bother including it).


----------



## PontiacGTX

Quote:


> Originally Posted by *Devnant*
> 
> Well, the current trend is: DX12 makes 980 TI perform worse on all AMD Evolved titles.
> 
> So you may actually be right, at least as far as AMD Evolved titles are concerned.


Indeed, it's asynchronous shaders that aren't handled properly by Maxwell, Kepler, and Fermi.
Quote:


> Originally Posted by *sblantipodi*
> 
> *Async Compute is not a game changer, infact it changed nothing even on AMD.
> *DX12 performance are quite equal to the DX11 one.
> 
> R390X is as fast because it is a good card and it is comparable with a GTX980 Ti even without async compute.


This game is optimized in part for AMD, but DX12 allows better performance in CPU-bound scenarios, as well as in scenarios where the API was the issue.


----------



## pengs

Note the almost twofold boost it's giving AMD's FX processors.


Quote:


> Originally Posted by *computerbase.de*
> Since AMD's DirectX 11 driver has more problems at the CPU limit than Nvidia's, the Radeon R9 Fury X gains as much as 75 percent on the FX-8370. And the disturbing stuttering present under DirectX 11 no longer occurs with DirectX 12.


----------



## GorillaSceptre

Quote:


> Originally Posted by *sblantipodi*
> 
> I'm not a fanboy, I had many AMD/Ati cards and AMD cpus in past, I like to choose and I'm glad that AMD is choosing
> open standards over close one in comparison to nvidia.
> 
> I now that 390X is really cheaper than my GTX980 Ti, surely it obliteratte my 980 Ti for the price/quality ratio but this is not the point of my discussion.
> 
> *Async Compute is not a game changer, infact it changed nothing even on AMD.
> *DX12 performance are quite equal to the DX11 one.
> 
> R390X is as fast because it is a good card and it is comparable with a GTX980 Ti even without async compute.
> The only scenarios where AMD cards are not comparable to the nvidia one is when there is gameworks or heavy tessellation but this is not the case.
> 
> As I saied hitman in dx12 demonstrated only one things, AMD cards are excellent when not using gameworks/tessellation with or without async
> and async simply changed nothing.
> 
> So please stop saying that async is the new game changer. There will be revolution in this sense.


See my earlier post: the reason AMD is good in DX11 here is that Hitman is optimized for DX12. If DX12 weren't in this game at all, AMD would still suffer from overhead as usual.

Members like @Mahigan would be able to explain it far better, but you can look up statements from Oxide and Chris Roberts on why DX12 games also benefit cards running in DX11 mode.


----------



## PontiacGTX

Quote:


> Originally Posted by *pengs*
> 
> The almost twice over boost it's giving AMD's FX processors.


A CPU benchmark of Hitman would paint a slightly clearer picture.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> See my earlier post, the reason why AMD is good in DX11 is because Hitman is optimized for DX12. If DX12 wasn't in this game at all then AMD would still suffer with overhead as usual.
> 
> Members like @Mahigan would be able to explain it far better, but you can look up statements from Oxide and Chris Roberts why DX12 games also benefit cards running in DX11 mode.


Don't DX11 and DX12 take different paths?


----------



## Devnant

Quote:


> Originally Posted by *PontiacGTX*
> 
> indeed is asynchronous shaders that arent handled properly by maxwell,kepler,fermi


Even with async OFF, the 980 Ti performs worse in DX12 than in DX11 in Ashes of the Singularity. I just don't know anymore.
Quote:


> Originally Posted by *PontiacGTX*
> 
> indeed is asynchronous shaders that arent handled properly by maxwell,kepler,fermi
> this game is optimized in part for amd but DX12 allowed better performance kn cpu bound scenarios,aswell in scenarios the API was the issue


Yes. I'm starting to think that those Hitman benchmarks on NVIDIA have async forced ON. Turning it OFF should yield better performance in DX12. I'll report back after I get access to the game.


----------



## sblantipodi

Quote:


> Originally Posted by *GorillaSceptre*
> 
> See my earlier post, the reason why AMD is good in DX11 is because Hitman is optimized for DX12. If DX12 wasn't in this game at all then AMD would still suffer with overhead as usual.
> 
> Members like @Mahigan would be able to explain it far better, but you can look up statements from Oxide and Chris Roberts why DX12 games also benefit cards running in DX11 mode.


I know, and you are right, but the improvement you are talking about is very marginal.
The reason AMD plays this well in Hitman is simply that these are great cards and Hitman does not hit their weakness: tessellation.

Most games use tessellation and AMD is very weak at it; this is why AMD suffers in some games
and why AMD does well in Hitman.
Hitman does not use brutal tessellation.


----------



## PontiacGTX

Quote:


> Originally Posted by *Devnant*
> 
> Even with async OFF, the 980 TI performs worse on DX12 than on DX11 on Ashes of the Singularity. I just don´t know anymore.
> Yes.I´m starting to think that those Hitman benchmarks on NVIDIA have async forced ON. Turning them OFF should yield better performance on DX12. I´ll report back after I get acces to the game.


the computerbase test has the comparison async shader on/off and DX11 vs DX12

http://www.computerbase.de/2016-02/ashes-of-the-singularity-directx-12-amd-nvidia/3/
http://www.computerbase.de/2016-02/ashes-of-the-singularity-directx-12-amd-nvidia/5/


----------



## MonarchX

Quote:


> Originally Posted by *BradleyW*
> 
> Those 1080p results are very good for AMD. Shows that overhead is not much of a problem.
> 
> DX12 can now tap into everything GCN has to offer, and shows how limited Nvidia GPU's are when they don't have limited single thread operations thrown at them. I'm sure Nvidia's next GPU's will do better at DX12, but we'll have to wait and see what comes of all this.


If you take a look at my 2 posts in this thread (long, but good reads IMO) here (Part I) and here (Part II), you'll see that releasing cards with great potential for low-level APIs worked out great for the customer, but badly for AMD. The 290 and 290X were released too early. Think about it! A ton of people already own a 290 or 290X, which now makes little sense to upgrade from. *No need to upgrade = fewer sales for AMD, which in turn = lower profit,* which in turn = lack of resources and overall decline for the company. This is one of AMD's downfalls - they don't know how to make money...

Everyone keeps comparing Maxwell II to AMD's newest cards, but it doesn't make much sense because *NVidia is not yet competing for DirectX 12 superiority.* It was a better business decision to make Maxwell II a great DirectX 11 performer for the DirectX 11 era. Too few games use DirectX 12 and most other AAA games use DirectX 11. Although AMD wins the DirectX 11 battle in this game, that isn't the case for the majority of other games, where NVidia usually has better performance and stability. 2016 is the year of early adoption and transition to DirectX 12. 2017 is when DirectX 12 will be on its way to becoming mainstream, and NVidia is already releasing Pascal with INSANELY high performance improvement over Maxwell II.


----------



## Devnant

Quote:


> Originally Posted by *PontiacGTX*
> 
> the computerbase link has the comparison DX11 vs DX12


Yes, but what I´m talking is

DX12 ASYNC ON
vs
DX12 ASYNC OFF
vs
DX 11

Like this:


----------



## sblantipodi

Quote:


> Originally Posted by *PontiacGTX*
> 
> the computerbase test has the comparison async shader on/off and DX11 vs DX12
> 
> http://www.computerbase.de/2016-02/ashes-of-the-singularity-directx-12-amd-nvidia/3/
> http://www.computerbase.de/2016-02/ashes-of-the-singularity-directx-12-amd-nvidia/5/


As you can see, there are improvements, but no game changer.
We are talking about a few FPS of difference in particular scenarios.


----------



## GorillaSceptre

Quote:


> Originally Posted by *PontiacGTX*
> 
> A cpu benchmark of hitman would show a slighly clear image
> DX11 and DX12 dont take different path?


No idea.. I recall things like MSAA paths being virtually unchanged from DX11 to DX12.

I can't find Kollock's comments on what I said; I was so sure it was him who said it. Disregard what I said for now, I'll keep looking for where I heard that DX12 optimizations also benefit DX11 use. I know Chris Roberts said it, but it was in a video and I have no idea which one..









Edit.

http://www.gamersnexus.net/gg/2114-chris-roberts-star-citizen-on-dx12-vulkan-and-tech

"Some of the refactoring will help no matter what with Dx11, because just being better at organizing your resources and data and jobifying it, even though you may still be bottlenecked on the draw call issue, there's plenty of other stuff that can be parallized on the engine."- Chris Roberts.


----------



## Potatolisk

I don't see a problem with DX12 performing worse than DX11 in certain scenarios. The drivers/optimizations for DX12 aren't as mature. If you are not getting significant gains from async compute or lower CPU overhead, I would definitely expect DX12 to perform worse for now. It will get better as time goes on.


----------



## Semel

*MonarchX*
Quote:


> \with INSANELY high performance improvement over Maxwell II.


Sorry, but where is the proof of this? Pretty pictures and diagrams? They couldn't even show a real prototype back then....


----------



## sblantipodi

IMHO users are losing sight of the real benefit of DX12:
the possibility of using two cards even when they are not the same card,
the possibility of pairing an AMD card with an Nvidia one, or a GTX 980 with a GTX 970.

Why are there no tests of this yet?


----------



## MonarchX

Quote:


> Originally Posted by *Potatolisk*
> 
> I don't see a problem in DX12 performing worse than DX11 in certain scenarios. *The the drivers/optimizations for DX12 aren't as mature*. If you are not getting significant gains from async compute or lower CPU overhead, I would definitely expect DX12 to perform worse for now. It will get better as time goes on.


Drivers play a far lesser role when it comes to low-level APIs. This is why one of AMD's greatest weaknesses, software (drivers), is fading away with the arrival of low-level APIs. The heat is on, and NVidia is concerned, worried, but not yet sweating.

The one and ONLY possibility for Maxwell II to get a performance increase in DirectX 12 is for NVidia to figure out how to bypass, adjust, or relocate its partial async shader support onto something that can handle it better. I think it is possible, and I don't think NVidia has yet released drivers that contain that improvement, because there was no special announcement.

I DO see a problem with DirectX 12 performing worse in general, with async shaders and without. It was supposed to be a generally optimized low-level API that would increase performance across all DirectX 11 cards by some 20% or so. It's not happening... not at all. Someone tried to explain it to me, but I don't understand the technicalities of the DirectX 12 API...


----------



## pengs

Quote:


> Originally Posted by *Devnant*
> 
> Even with async OFF, the 980 TI performs worse on DX12 than on DX11 on Ashes of the Singularity. I just don´t know anymore.
> Yes.I´m starting to think that those Hitman benchmarks on NVIDIA have async forced ON. Turning them OFF should yield better performance on DX12. I´ll report back after I get acces to the game.


Kollock summed this up:
Quote:


> Originally Posted by *Kollock*
> 
> I suspect it's not quite that simple. I believe the windows 10 scheduler can dispatch jobs from virtual queues to actual hardware queues sort of like multi-tasking on a single CPU, though I don't believe any current GPUs have pre-emption. The tasks have fences and signals on them, and I believe as part of the D3D12 specification a fence ends up flushing the GPU, which could mean a 100 us stall or so. TLDR is that adding a command to a compute queue or any queue , or rather the act of synchronizing it will have a tiny bit of overhead. Thus, if the hardware doesn't have some intrinsic gain from doing it parallel, you'll likely end up with a tiny loss. Even an architecture that can do them in parrelel will likely lose a little bit. It's just that the net gain is more then the loss.
> 
> Unfortunately, fences in D3D12 are a bit expensive because they operate at an OS level. I don't believe Vulkan would have this limitation. Fine grained synchronization probably won't be expensive if the hardware supports it.


http://www.overclock.net/t/1592431/anand-ashes-of-the-singularity-revisited-a-beta-look-at-directx-12-asynchronous-shading/780_20#post_24969132

Maxwell is stalling because it is given a request to do parallel work which it then has to sync, as it doesn't meet the requirements. It's a very small loss.

I see what you're saying now, even with async off. AMD's cards take the hit as well, but the benefit of running DX12 on those cards outweighs the small negative impact - I believe it's a fencing issue, which is OS level, so it takes a hit regardless of async being used.
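Kollock's trade-off can be put into rough numbers. Below is a toy Python model (every figure is invented for illustration, including reusing the ~100 us fence cost he mentions) of why hardware that can't overlap queues sees a tiny loss from the sync overhead, while hardware that can overlap still comes out well ahead:

```python
# Toy model of the fence trade-off: submitting work to a separate compute
# queue costs a small fixed synchronization overhead, so the net result
# depends on how much of the compute work the GPU can truly overlap with
# graphics. All numbers are made up for illustration.

def frame_time_us(gfx_us, compute_us, overlap_fraction, fence_cost_us=100):
    """Estimated frame time when compute is moved to its own queue.

    overlap_fraction: share of the compute work the GPU actually runs in
    parallel with graphics (0.0 = fully serialized, as on hardware that
    has to sync; closer to 1.0 = good async compute support).
    """
    hidden = compute_us * overlap_fraction        # work hidden under graphics
    return gfx_us + (compute_us - hidden) + fence_cost_us

serial = 8000 + 2000                              # single-queue baseline (us)
no_overlap = frame_time_us(8000, 2000, 0.0)       # pays the fence, gains nothing
good_overlap = frame_time_us(8000, 2000, 0.9)     # hides most of the compute

print(serial, no_overlap, good_overlap)
```

With no overlap the frame is slightly slower than the single-queue baseline (the "tiny loss"); with good overlap the gain dwarfs the fence cost.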


----------



## MonarchX

Quote:


> Originally Posted by *Semel*
> 
> *MOnarch*
> Sorry but where is the proof of this? Pretty pictures and diagrams? They couldn't even show a real prototype back then....


It's as much as we're going to get. I based my assumption on existing evidence. I'm not God; I can't see the future and neither can you. If you were to read my other Part I and Part II posts, you'd understand why NVidia is very likely to make the difference in performance between Maxwell II and Pascal much greater than the difference between Kepler and Maxwell II. I do not want to be redundant, saying the same thing over and over again.


----------



## pengs

Quote:


> Originally Posted by *sblantipodi*
> 
> IMHO users are losing the real benefit of DX12:
> the possibility to use two cards even if they are not the same cards,
> the possibility to use an AMD card with an Nvidia one, or a GTX 980 with a GTX 970.
> 
> Why is there no test of this yet?


That's an insanely large rock you've been under


http://www.extremetech.com/gaming/216880-ashes-of-the-singularity-demonstrates-nvidia-amd-gpus-working-side-by-side


----------



## MonarchX

Quote:


> Originally Posted by *pengs*
> 
> That's an insanely large rock you've been under
> 
> 
> http://www.extremetech.com/gaming/216880-ashes-of-the-singularity-demonstrates-nvidia-amd-gpus-working-side-by-side


Yeah, but if you look at frame TIME, you'll see Multi-GPU (AMD + NVidia) provides a stuttery mess instead of a smooth and consistent frame rate and frame time. Those FPS don't mean much if there's no fluid motion.


----------



## sblantipodi

Quote:


> Originally Posted by *pengs*
> 
> That's an insanely large rock you've been under
> 
> 
> http://www.extremetech.com/gaming/216880-ashes-of-the-singularity-demonstrates-nvidia-amd-gpus-working-side-by-side


We are in the Hitman thread; I'm talking about Hitman.


----------



## huzzug

You won't get multi-GPU support if the developer decides not to implement it. Simple


----------



## pengs

Quote:


> Originally Posted by *sblantipodi*
> 
> IMHO users are losing the real benefit of DX12:
> the possibility to use two cards even if they are not the same cards,
> the possibility to use an AMD card with an Nvidia one, or a GTX 980 with a GTX 970.
> 
> Why is there no test of this yet?


Quote:


> Originally Posted by *sblantipodi*
> 
> We are in the Hitman thread; I'm talking about Hitman.


lol
It's not going to be supported by every game.


----------



## clao

Quote:


> Originally Posted by *Imouto*
> 
> Oh, we are not going to see taking any advantage of this feature until Nvidia is ready. Be sure of that.
> 
> By the time Async Compute is on prime the current and older GCN cards will be obsolete and all this fuss will be for nothing. Nvidia way smarter than AMD in this regard: implement something only when needed or create that need.
> 
> The story just repeats again and again and again. By the time 64 bit software and OSs were common the Athlon 64 generation was *ancient*.


I kind of doubt that. Since AMD has control of the console space, they may have a say in getting AAA games there to use DX12, so any games that are ports will sooner or later support DX12.


----------



## Lass3

Quote:


> Originally Posted by *Devnant*
> 
> Anybody noticed the worse DX12 performance on NVIDIA above 1080p resolutions compared to DX11? It's the same thing with Ashes.
> 
> For instance, at 4K, the 980 Ti goes from 37.2 FPS on DX11 to 36.7 FPS on DX12.
> 
> There are negative DX12 benefits for NVIDIA, as seen first in Ashes and now Hitman.


And both are AMD sponsored titles. AotS is even in alpha/beta state. Hitman just released with no Nvidia game-ready driver available, for the first time this year I think. Let's see a retest when Nvidia puts out a driver.

Meanwhile Nvidia wins big in GoW @ DX12, which is a neutral title, even after AMD's driver optimization for the game (Crimson 16.3).
http://www.overclock3d.net/reviews/gpu_displays/gears_of_war_ultimate_edition_pc_performance_retest_with_amd_crimson_16_3_drivers/1
Looking forward to seeing more neutral DX12 benchmarks. The 3DMark DX12 patch should be out soon.

But I probably won't be playing any of these games. I would love to see a DX12 patch for Rise of the Tomb Raider though, maybe even with a built-in benchmark like the one from 2013 had. The menu suggested this.

But DX11 performance will matter for years to come, and I'm not sure why some AMD owners celebrate these results. It's not like DX12 changes the fact that pretty much all upcoming titles are based on DX11 throughout 2016 and deep into 2017. AMD needs to fix their DX11 performance.


----------



## clao

Quote:


> Originally Posted by *Lass3*
> 
> And both are AMD sponsored titles. AotS is even in alpha/beta state. Hitman just released with no Nvidia game-ready driver available, for the first time this year I think. Let's see a retest when Nvidia puts out a driver.
> 
> Meanwhile Nvidia wins big in GoW @ DX12, which is a neutral title, even after AMD's driver optimization for the game (Crimson 16.3).
> http://www.overclock3d.net/reviews/gpu_displays/gears_of_war_ultimate_edition_pc_performance_retest_with_amd_crimson_16_3_drivers/1
> Looking forward to seeing more neutral DX12 benchmarks. The 3DMark DX12 patch should be out soon.
> 
> But I probably won't be playing any of these games. I would love to see a DX12 patch for Rise of the Tomb Raider though, maybe even with a built-in benchmark like the one from 2013 had. The menu suggested this.
> 
> But DX11 performance will matter for years to come, and I'm not sure why some AMD owners celebrate these results. It's not like DX12 changes the fact that pretty much all upcoming titles are based on DX11 throughout 2016 and deep into 2017. AMD needs to fix their DX11 performance.


Friendly reminder: Nvidia did have a driver update for Hitman.

http://www.overclock.net/t/1594180/nvidia-geforce-drivers-v364-51

http://www.overclock.net/t/1593827/nvidia-geforce-drivers-v364-47


----------



## sblantipodi

Quote:


> Originally Posted by *pengs*
> 
> lol
> It's not going to be supported by every game.


I know, but is there any news on Hitman supporting it?


----------



## caswow

Since everyone and their grandmother is calling out AMD's bad DX11 perf, could someone show me really bad results of the so-called AMD overhead?


----------



## Tgrove

The amount of butthurt and denial these results are generating is kind of sad. This is the time we've all been waiting for. Our 6 and 8 core CPUs will finally give scalable gains instead of being a waste for gaming. GPUs finally taken advantage of - what a time to be a PC gamer. The almighty node shrink is finally here, and we're finally getting away from DX11. We should be happy AMD is looking poised for success. Have we forgotten how much we need them?


----------



## NuclearPeace

Quote:


> Originally Posted by *caswow*
> 
> Since everyone and their grandmother is calling out AMD's bad DX11 perf, could someone show me really bad results of the so-called AMD overhead?


I experience it everyday with my 380x and my i3 4370.


----------



## Mahigan

Quote:


> Originally Posted by *Lass3*
> 
> And both are AMD sponsored titles. AotS is even in alpha/beta state. Hitman just released with no Nvidia game-ready driver available, for the first time this year I think. Let's see a retest when Nvidia puts out a driver.
> 
> Meanwhile Nvidia wins big in GoW @ DX12, which is a neutral title, even after AMD's driver optimization for the game (Crimson 16.3).
> http://www.overclock3d.net/reviews/gpu_displays/gears_of_war_ultimate_edition_pc_performance_retest_with_amd_crimson_16_3_drivers/1
> Looking forward to seeing more neutral DX12 benchmarks. The 3DMark DX12 patch should be out soon.
> 
> But I probably won't be playing any of these games. I would love to see a DX12 patch for Rise of the Tomb Raider though, maybe even with a built-in benchmark like the one from 2013 had. The menu suggested this.
> 
> But DX11 performance will matter for years to come, and I'm not sure why some AMD owners celebrate these results. It's not like DX12 changes the fact that pretty much all upcoming titles are based on DX11 throughout 2016 and deep into 2017. AMD needs to fix their DX11 performance.


GoW is one title that is not representative of DX12 at all...

GoW uses the UE3 engine from the DX9 days (around 2006). GoW does not even make use of multi-threaded rendering. Basically GoW is a straight port from DX9 to DX12. A straight port grants you multi-threaded D3D runtime benefits (command list recording and maybe deferred rendering), but those are available under DX11 already.

If you simply port a DX9/10/11 engine to DX12, you get only one tweak, and that's the general D3D runtime multi-thread optimizations, which are the explicit CPU/GPU synchronizations.

In order to port UE3 to DX12 properly, you would have to rewrite all of the shaders and move some of your rendering work away from the main thread and onto separate worker threads - at the very least multi-thread your shadow maps, moving them off the main thread.

The main thread would retain the main scene cmd lists while the worker threads would execute the shadow map cmd lists in parallel. That would at least save you some CPU time.
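That recording split can be sketched in a few lines. This is a toy Python analogy, not real D3D12 code - the names and draw counts are invented - but it shows the shape of the idea: workers record their command lists in parallel while the main thread records the main scene, and submission stays centralized:

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_list(name, draws):
    # Stand-in for filling a command list: recording is CPU-side work,
    # so spreading it across threads saves main-thread time.
    return [f"{name}:draw{i}" for i in range(draws)]

with ThreadPoolExecutor() as pool:
    # Worker threads record the shadow-map command lists in parallel...
    shadow_futures = [pool.submit(record_command_list, name, 4)
                      for name in ("shadow_cascade0", "shadow_cascade1")]
    # ...while the main thread records the main scene command list.
    main_list = record_command_list("main_scene", 8)
    # Submission order is still controlled in one place (in D3D12 this
    # batch would be handed to ExecuteCommandLists).
    submission = [f.result() for f in shadow_futures] + [main_list]

print(len(submission))  # 3 command lists
```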

Another tweak would be execute indirect. You buffer multiple draw calls into an indirect draw call buffer and then execute them all with a single API call, so each call executes many draw calls.

Another improvement could be achieved by using DX12's shader cache (dev controlled), which GoW could use.

Then of course you have the resource binding, where you could bundle up to 1 million resources on NV hardware (or unlimited/limited only by memory on AMD hardware, aka full heap), but none of that appears to be used, as it would require a significant rewrite of the UE3 engine (might as well use UE4).

I mean there are a ton of optimizations that could have been used for GoW if it weren't built on an ancient DX9 UE3 engine and that's ignoring full multi-engine support.

So no, GoW UE is not at all a good representation of DX12. It's not even a good representation of DX11. It really is the worst possible example of DX12 imaginable (and I do not say that lightly).

Heck there aren't even Conservative Rasterization or ROVs (two NVIDIA DX12 FL 12_1 rendering features) tweaks used in the game.

GoW not using multi-threaded rendering? This is multi-threaded rendering:


Spoiler: Warning: Spoiler!

[image]

This is GoW:


Spoiler: Warning: Spoiler!

[image]

PCWorld looked into it here:
http://www.pcworld.com/article/3039552/hardware/tested-how-many-cpu-cores-you-really-need-for-directx-12-gaming.html

And concluded:
Quote:


> To be fair to Gears of War, my testing was done solely in the game's built-in performance benchmark. While multi-core efficiency is one of the feature achievements of DirectX 12, other aspects of the new API would give Gears of War the DirectX 12 check-off. All I know is the performance benchmark doesn't seem to improve as you increase CPU cores.





Spoiler: Warning: Spoiler!

[image]

GoW stops scaling at 2 cores with HT, so GoW truly cannot be propped up as an example of DX12.


----------



## Tgrove

Quote:


> Originally Posted by *caswow*
> 
> Since everyone and their grandmother is calling out AMD's bad DX11 perf, could someone show me really bad results of the so-called AMD overhead?


The only honest negative experience I've had in that case is GTA V. Can't turn on all the advanced graphics options because of the overhead, where Nvidia can. Nvidia was better in DX11, that's fine, but now the tables have turned.


----------



## PontiacGTX

Quote:


> Originally Posted by *NuclearPeace*
> 
> I experience it everyday with my 380x and my i3 4370.


which games and how?


----------



## Charcharo

Guys, is there a reason for Hitman's fairly low performance numbers even on AMD DX12 benches?
The game looks... well OK, but nothing that should be getting such low results.

Is it advanced AI or some type of simulation? Is there a reason?


----------



## DeathMade

Guys, BTW, GoW was fixed by AMD because they disabled async in the driver, which was causing all the issues you saw in GoW.


----------



## Stige

Quote:


> Originally Posted by *NuclearPeace*
> 
> I experience it everyday with my 380x and my i3 4370.


Cheap parts.


----------



## Assirra

Quote:


> Originally Posted by *caswow*
> 
> Since everyone and their grandmother is calling out AMD's bad DX11 perf, could someone show me really bad results of the so-called AMD overhead?






From 7:15 you see a perfect example.


----------



## BradleyW

Quote:


> Originally Posted by *caswow*
> 
> Since everyone and their grandmother is calling out AMD's bad DX11 perf, could someone show me really bad results of the so-called AMD overhead?


Yes.

http://www.overclock.net/t/1573982/amd-gpu-drivers-the-real-truth


----------



## ZealotKi11er

Quote:


> Originally Posted by *caswow*
> 
> Since everyone and their grandmother is calling out AMD's bad DX11 perf, could someone show me really bad results of the so-called AMD overhead?


Most review sites use the fastest Intel CPU. If you use a 6700K OCed, then overhead will be minimal, but not everyone has a 6700K. Also, overhead does not always hit in every part of the game.

https://www.youtube.com/watch?v=cMUbHdQBSiQ


----------



## NuclearPeace

Quote:


> Originally Posted by *PontiacGTX*
> 
> which games and how?


It's pretty noticeable in Battlefield 4 and Dragon Age: Inquisition when I am in DX11 mode. When I run both games in Mantle it's noticeably smoother, minimums improve a ton, and I even get a bit higher maximum framerate.
Quote:


> Originally Posted by *Stige*
> 
> Cheap parts.


I don't have the money to blow $300 on a new i7 every generation like most people on this forum. Also, it's not completely unreasonable to use a $150 MSRP CPU with a $230 MSRP GPU... I've seen worse CPUs paired with faster GPUs.

Look at the video posted a few posts above. Even with an i5 2500k overclocked to 4.6GHz, the 390 loses badly to the 970 in Tomb Raider.


----------



## Sleazybigfoot

Quote:


> Originally Posted by *Lass3*
> 
> And both are AMD sponsored titles. AotS is even in alpha/beta state. Hitman just released with *no Nvidia game-ready driver available*, for the first time this year I think. Let's see a retest when Nvidia puts out a driver.
> 
> Meanwhile Nvidia wins big in *GoW @ DX12*, which is a neutral title, even after AMD's driver optimization for the game (Crimson 16.3).
> http://www.overclock3d.net/reviews/gpu_displays/gears_of_war_ultimate_edition_pc_performance_retest_with_amd_crimson_16_3_drivers/1
> Looking forward to seeing more neutral DX12 benchmarks. The 3DMark DX12 patch should be out soon.
> 
> But I probably won't be playing any of these games. I would love to see a *DX12 patch for Rise of the Tomb Raider* though, maybe even with a built-in benchmark like the one from 2013 had. The menu suggested this.
> 
> But DX11 performance will matter for years to come, and I'm not sure why some AMD owners celebrate these results. It's not like DX12 changes the fact that pretty much all upcoming titles are based on DX11 throughout 2016 and deep into 2017. AMD needs to fix their DX11 performance.


You're funny.


----------



## PontiacGTX

Quote:


> Originally Posted by *NuclearPeace*
> 
> It's pretty noticeable in Battlefield 4 and Dragon Age: Inquisition when I am in DX11 mode. When I run both games in Mantle it's noticeably smoother, minimums improve a ton, and I even get a bit higher maximum framerate.
> I don't have the money to blow $300 on a new i7 every generation like most people on this forum. Also, it's not completely unreasonable to use a $150 MSRP CPU with a $230 MSRP GPU... I've seen worse CPUs paired with faster GPUs.
> 
> Look at the video posted a few posts above. Even with an i5 2500k overclocked to 4.6GHz, the 390 loses badly to the 970 in Tomb Raider.


Those two games are already CPU limited with a Core i3, and given that Mantle improves CPU-limited scenarios, it should show a performance gain. Now try something without Mantle that works just fine with 4 threads, and compare with someone using the same hardware but an Nvidia GPU.

BF4 DX11.1
BF4 Mantle

Also, you could buy a Sandy Bridge i7, which performs within 3% of a 3770K, 7-10% of a 4770K, and 20% of a 6700K.


----------



## Forceman

Quote:


> Originally Posted by *caswow*
> 
> Since everyone and their grandmother is calling out AMD's bad DX11 perf, could someone show me really bad results of the so-called AMD overhead?


I think this post shows it pretty clearly. Nearly double the performance on the AMD processor.
Quote:


> Originally Posted by *pengs*
> 
> The almost twice over boost it's giving AMD's FX processors.


----------



## gamervivek

Quote:


> Originally Posted by *caswow*
> 
> Since everyone and their grandmother is calling out AMD's bad DX11 perf, could someone show me really bad results of the so-called AMD overhead?


It's not so much CPU overhead as their inability to use more than one CPU thread for the graphics driver, leading to a CPU bottleneck at lower resolutions, especially with CPUs with low single-core performance. They have improved the driver substantially over the past year; people were seeing 50% improvements in the 3DMark draw calls test.

DX12 improves it further still, while showing next to no improvement, or even regression, for Nvidia.
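The single-submission-thread bottleneck described above is easy to see with back-of-the-envelope numbers. A toy Python sketch (the per-call cost and draw-call count are invented for illustration, not measured):

```python
# If the driver submits all draw calls from one thread, CPU-side frame time
# scales with draw_calls * per_call_cost on a single core, no matter how many
# cores the machine has; spreading submission across threads (the DX12 model)
# divides that cost.

def cpu_frame_ms(draw_calls, per_call_us, submit_threads):
    return draw_calls * per_call_us / submit_threads / 1000.0

calls = 10_000
single = cpu_frame_ms(calls, 4.0, 1)  # single submission thread
multi = cpu_frame_ms(calls, 4.0, 4)   # submission spread over 4 threads

# 40 ms of pure submission caps the CPU side below 25 fps;
# 10 ms leaves room for ~100 fps.
print(single, multi)
```

This is also why a fast single core (an overclocked i7) masks the problem while a slower one (an i3 or FX chip) exposes it.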


----------



## ZealotKi11er

Quote:


> Originally Posted by *gamervivek*
> 
> It's not so much CPU overhead as their inability to use more than one CPU thread for the graphics driver, leading to a CPU bottleneck at lower resolutions, especially with CPUs with low single-core performance. They have improved the driver substantially over the past year; people were seeing 50% improvements in the 3DMark draw calls test.
> 
> DX12 improves it further still, while showing next to no improvement, or even regression, for Nvidia.


We will not see a true DX12 game until the DX11 pipeline is removed. Right now even DX12 games are built with DX11 in mind. Not sure why that is, considering people with older GCN or Kepler GPUs should be upgrading and getting DX12 anyway. Why don't devs just push DX12 by not including DX11 at all? It's the Windows 7 users' fault.


----------



## fewness

Just a FYI, Rise of the Tomb Raider updated today to support DX12 and built-in benchmark.


----------



## zGunBLADEz

So in the end the performance difference between Fury and 980 Ti is not that big, so this is no big miracle even with Nvidia and its issues with async... The only win here is for mid-tier adopters... GG AMD, waiting for the next revision of your cards.


----------



## PontiacGTX

Quote:


> Originally Posted by *ZealotKi11er*
> 
> We will not see a true DX12 game until the DX11 pipeline is removed. Right now even DX12 games are built with DX11 in mind. Not sure why that is, considering people with older GCN or Kepler GPUs should be upgrading and getting DX12 anyway. Why don't devs just push DX12 by not including DX11 at all? It's the Windows 7 users' fault.


http://www.techspot.com/news/63797-quantum-break-pc-requires-windows10-dx12.html
Windows cross-platform compatibility


----------



## bigjdubb

Quote:


> Originally Posted by *ZealotKi11er*
> 
> We will not see a true DX12 game until the DX11 pipeline is removed. Right now even DX12 games are built with DX11 in mind. Not sure why that is, considering people with older GCN or Kepler GPUs should be upgrading and getting DX12 anyway. Why don't devs just push DX12 by not including DX11 at all? It's the Windows 7 users' fault.


This is why I would like to see devs use the Vulkan API; the Windows 10 requirement for DX12 will hold everything back for too long. I wouldn't have a problem with devs choosing to push DX12 and leave non-Win10 gamers in the dust, though.


----------



## pengs

Quote:


> Originally Posted by *Lass3*
> 
> Meanwhile Nvidia wins big in GoW @ DX12, which is a neutral title, even after AMD's driver optimization for the game (Crimson 16.3).
> http://www.overclock3d.net/reviews/gpu_displays/gears_of_war_ultimate_edition_pc_performance_retest_with_amd_crimson_16_3_drivers/1


You mean the DX9 game running in a DX12 wrapper?
Quote:


> Originally Posted by *Charcharo*
> 
> Guys, is there a reason for Hitman's fairly low performance numbers even on AMD DX12 benches?
> The game looks... well OK, but nothing that should be getting such low results.
> 
> Is it advanced AI or some type of simulation? Is there a reason?


My take on it is that there has been a massive shift over the last few years toward deferred rendering methods and full-blown real-time lighting with many light sources, which is quite different from the old approach (and seems to have coincided with the next-gen consoles); at least that's the trend with most games: The Division, NFS 2016, RS Siege, Assassin's Creed.

There's an immediate demand for more power as lighting techniques advance, especially for great-looking volumetric lighting and tracing algorithms. It also seems to be more efficient to use these methods from the developer's point of view, because it provides an upgrade path for better lighting and lessens the need to force lighting or shading artistically as with forward rendering - a better lighting technique also removes the need for grossly inaccurate effects like screen-space ambient occlusion and baked light maps (an advanced lighting method will do occlusion naturally).


----------



## mirzet1976

Quote:


> Originally Posted by *NuclearPeace*
> 
> It's pretty noticeable in Battlefield 4 and Dragon Age: Inquisition when I am in DX11 mode. When I run both games in Mantle its noticeably smoother, minimums improve a ton, and I even get a bit higher maximum framerate.
> I don't have the money to blow $300 on a new i7 every generation like most people on this forum. Also, its not completely unreasonable to use a $150 MSRP CPU with a $230 MSRP GPU...i've seen worse CPUs with faster GPUs.
> 
> Look at the video posted a few posts above. Even with an i5 2500k overclocked to 4.6GHz, the 390 loses badly to the 970 in Tomb Raider.


Yes, Mantle improves min FPS by 100% on my FX-8320 and R9 290.


----------



## Stige

Quote:


> Originally Posted by *mirzet1976*
> 
> Yes, Mantle improves min FPS by 100% on my FX-8320 and R9 290.


As if that is gonna make any difference lol, just luck of the draw.


----------



## gamervivek

Quote:


> Originally Posted by *Forceman*
> 
> I think this post shows it pretty clearly. Nearly double the performance on the AMD processor.


It's not that much better with the 980 Ti, which gets 41 fps in DX11 on the 8370.


----------



## Kriant

Hmmm, Hitman doesn't use SLI in DX12 mode?


----------



## Forceman

Quote:


> Originally Posted by *gamervivek*
> 
> It's not that much better with the 980 Ti, which gets 41 fps in DX11 on the 8370.


Edit: I see what you are saying now. The 980 Ti gets almost the same gains from DX12 with the 8370, so it isn't just AMD cards that benefit in that situation.


----------



## brandonb21

not bad..


----------



## Glottis

so game is out and as expected from beta, final version performance is atrocious as well, especially on Nvidia GPUs.


----------



## BradleyW

Could someone check to see if DXTory works with HITMAN?


----------



## sugarhell

Quote:


> Originally Posted by *Glottis*
> 
> so game is out and as expected from beta, final version performance is atrocious as well, especially on Nvidia GPUs.


Welcome. Please pick an excuse from 1-7:

1. Nvidia didn't have game-ready drivers.
2. This game favors AMD.
3. Nvidia will release new drivers to rekt AMD as always.
4. DX12 is AMD's master plan to kill Nvidia.
5. Mantle wasn't open (this is true actually).
6. AMD drivers are unstable.
7. No G-Sync.

It is a joke.



----------



## magnek

Quote:


> Originally Posted by *sugarhell*
> 
> Welcome. Please pick an excuse from 1-7:
> 
> 1. Nvidia didn't have game-ready drivers.
> 2. This game favors AMD.
> 3. Nvidia will release new drivers to rekt AMD as always.
> 4. DX12 is AMD's master plan to kill Nvidia.
> 5. Mantle wasn't open (this is true actually).
> 6. AMD drivers are unstable.
> 7. No G-Sync.
> 
> It is a joke.


I'll take all 7 please.









But yeah, AMD biased game is biased, Gaming Evolved is cancer, and nVidia still has much better drivers (in DX11) and doesn't make space heaters.

it's actually a dig at the fanboys


----------



## BradleyW

Quote:


> Originally Posted by *magnek*
> 
> I'll take all 7 please.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> But yeah, AMD biased game is biased, Gaming Evolved is cancer, and nVidia still has much better drivers (in DX11) and doesn't make space heaters.
> 
> not being serious of course


Until now, every Gaming Evolved title has run well on both Nvidia and AMD. Gameworks, on the other hand...


----------



## magnek

See updated minuscule text


----------



## Glottis

Oh yes, this game is so well optimized - 40 fps on medium settings on a GTX 970 at 1080p, and constant crashes.


----------



## Kriant

Don't have an issue with "Gaming Evolved"; Gameworks, on the other hand, is a form of cancer lol.

Also, DX11 "SLI" (which, according to the drivers, is just forced alternate frame rendering) gives me the same avg fps as DX12 single-card performance o_0.
No G-Sync, so that sucks, yes.

Ah, and yes, I am very disappointed in Nvidia for screwing up with async compute; if only the Fury X had 8 GB of HBM and not 4 GB... that would've been a no-brainer for me.


----------



## BradleyW

Quote:


> Originally Posted by *Kriant*
> 
> Don't have an issue with "Gaming Evolved"; Gameworks, on the other hand, is a form of cancer lol.
> 
> Also, DX11 "SLI" (which, according to the drivers, is just forced alternate frame rendering) gives me the same avg fps as DX12 single-card performance o_0.
> No G-Sync, so that sucks, yes.
> 
> Ah, and yes, I am very disappointed in Nvidia for screwing up with async compute; if only the Fury X had 8 GB of HBM and not 4 GB... that would've been a no-brainer for me.


If Pascal does not support async well, it could be years of waiting for Nvidia to develop an architecture similar to GCN and its deep and vastly complex pipeline.


----------



## fewness

Where is the result after running the benchmark?


----------



## ZealotKi11er

Quote:


> Originally Posted by *BradleyW*
> 
> If Pascal does not support async well, it could be years of waiting for Nvidia to develop an architecture similar to GCN and its deep and vastly complex pipeline.


Nvidia started all this with Fermi, and then they found out that if you make a general-purpose card it runs too hot. AMD probably looked at Fermi and made GCN. Nvidia already had plans to make gaming-only cards.


----------



## Kriant

Quote:


> Originally Posted by *BradleyW*
> 
> If Pascal does not support async well, it could be years of waiting for Nvidia to develop an architecture similar to GCN and its deep and vastly complex pipeline.


Seems like the level of cow manure (no cursing on overclock.net, I remember that, I do, I swear) coming from Nvidia is incredible. First the 970's 4 GB ended up being 3.5 GB + a slow 512 MB. Then the "we support DX12 fully, unlike the competition" turns out to be false (hello async). Kinda having second thoughts on my Titan Xs =\.


----------



## Kriant

P.S. I wonder whether pcgameshardware ran a custom benchmark of sorts, because a) the benchmark built into the game just quits without results when it ends, and b) the min fps that my cards always register is around 4ish in DX12, similar to what the METRO benchmark used to do, so I'm not sure where pcgameshardware gets their minimums of 30 and above.


----------



## fewness

Quote:


> Originally Posted by *Kriant*
> 
> P.S. I wonder whether pcgameshardware ran a custom benchmark of sorts, because a) the benchmark built into the game just quits without results when it ends, and b) the min fps that my cards always register is around 4ish in DX12, similar to what the METRO benchmark used to do, so I'm not sure where pcgameshardware gets their minimums of 30 and above.


So it does just quit without letting me write down the number? WHY!


----------



## Semel

1) The game gets stuck at a black screen when starting it in DX12 fullscreen mode. (I had to use windowed mode and then switch to fullscreen.)

2) The game freezes at exactly the same place in the training mission in DX12 mode.

3) The AMD Fury is locked to 60 fps whether you have vsync enabled or disabled.

And they claim this is not Early Access? Just look at the official Steam forums lol...


----------



## Kriant

Quote:


> Originally Posted by *fewness*
> 
> So it does just quit without letting me write down the number? WHY!


Yup. And no clue. The event log doesn't have any errors or anything, and it does the same with both the DX11 and DX12 renderers, so I assume that's intended.


----------



## BradleyW

Quote:


> Originally Posted by *Semel*
> 
> 1) The game gets stuck at a black screen when starting it in DX12 fullscreen mode. (I had to use windowed mode and then switch to fullscreen.)
> 
> 2) The game freezes at exactly the same place in the training mission in DX12 mode.
> 
> *3) The AMD Fury is locked to 60 fps whether you have vsync enabled or disabled.*
> 
> And they claim this is not Early Access? Just look at the official Steam forums lol...


That ties in with what PC Per said about DX12 locking to 60 fps, even when the game is not from the Windows Store!


----------



## fewness

We really are just beta testers nowadays, and we are paying the companies to beta test!

Should only buy games at Steam year end sale......less cost, better games....


----------



## BIGTom

Quote:


> Originally Posted by *fewness*
> 
> So it does just quit without letting me write down the number? WHY!


Benchmark results are saved in the following location:

[OSDRIVE]:\Users\[Username]\hitman\profiledata.txt


----------



## fewness

Quote:


> Originally Posted by *BIGTom*
> 
> Benchmark results are saved in the following location:
> 
> [OSDRIVE]:\Users\[Username]\hitman\profiledata.txt


Thank you!









And that is a very detailed result file. Good job! My participation in the beta test is justified now.


----------



## Semel

Quote:


> Originally Posted by *BradleyW*
> 
> That links to what PC Per said about DX12 locking to 60 frames, even if the game is not under the Windows Store!


Switching to windowed mode removes the 60 fps lock.

Fullscreen and exclusive fullscreen have the 60 fps lock.


----------



## ZealotKi11er

Quote:


> Originally Posted by *fewness*
> 
> We really are just beta testers nowadays, and we are paying the companies to beta test!
> 
> Should only buy games at Steam year end sale......less cost, better games....


The way they make games now is different. I really would like to play a game without having to wait for the next patch or driver update. We can always wait, but it just seems wrong to begin with. They used to have dedicated beta testers for games. They should really do something like Blizzard with closed beta testing. You always have to have the core game working before adding new features. Overwatch, when I tried it back in December, did not feel like a beta, and it's still launching 5 months later. Most studios are thinking about the next game before the current one is even out.


----------



## Kriant

Spoiler: Warning: Spoiler!



Graphics Settings:
RESOLUTION: 2560 x 1440
ResolutionWidth = 2560
ResolutionHeight = 1440
Refreshrate = 60
Fullscreen = 1
ExclusiveFullscreen = 0
VSync = 0
VSyncInterval = 1
Monitor = 0
Adapter = 0
Aspectratio = 0
WindowPosX = 0
WindowPosY = 0
WindowWidth = 2560
WindowHeight = 1440
Stereoscopic = 0
Stereo_Depth = 3.000000
Stereo_Strength = 0.030000
WindowMaximized = 0
FocusLoss = 0
UseGdiCursor = 0
ShadowQuality = 3
ShadowResolution = 2
TextureResolution = 2
TextureFilter = 4
SSAO = 1
MirrorQuality = 1
AntiAliasing = 2
LevelOfDetail = 3
MotionBlur = 0
Bokeh = 0
SuperSampling = 1.000000
Gamma = 1.000000
QualityProfile = 4

Benchmark Results:
---- CPU ----
7432 frames
65.14fps Average
9.57fps Min
247.28fps Max
15.35ms Average
4.04ms Min
104.49ms Max
---- GPU ----
7432 frames
65.39fps Average
8.96fps Min
105.34fps Max
15.29ms Average
9.49ms Min
111.64ms Max



Here's my quick bench
The game doesn't support SLI in dx12, so it's 1 card, stock clocks.
Settings Maxed out via menu. Super Sampling at 1.00
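One handy sanity check on these dumps: the fps and frame-time figures are just reciprocals of each other, so you can verify a result file is self-consistent. A quick sketch, using the CPU numbers above as the example:

```python
def avg_frametime_ms(avg_fps):
    """Average frame time in milliseconds implied by an average fps figure."""
    return 1000.0 / avg_fps

# The CPU section above reports 65.14 fps average and 15.35 ms average,
# and 1000 / 65.14 does indeed come out to ~15.35 ms.
print(round(avg_frametime_ms(65.14), 2))  # 15.35
```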


----------



## Noufel

Why is the 390X so close to a Fury, even at 2K and 4K in DX12? Isn't it because there are fewer ACE units in Fiji GPUs?


----------



## ZealotKi11er

Quote:


> Originally Posted by *Noufel*
> 
> why is the 390X so much close to a fury even at 2and 4K in DX12? isn't cause there are less ACE units in fijy GPUs ?


Well, apart from HBM1, Fury only has a bit more SPs: 2816 vs. 3584, roughly 27% more, which doesn't really translate directly into performance. I think they both have a similar number of ACEs.


----------



## Kana-Maru

*Incoming:*


----------



## PontiacGTX

Quote:


> Originally Posted by *Noufel*
> 
> why is the 390X so much close to a fury even at 2and 4K in DX12? isn't cause there are less ACE units in fijy GPUs ?


An overclocked 2816-SP card against a stock 3584-SP card, and the higher resolution reduces the fps a bit and closes the gap.
Quote:


> Originally Posted by *ZealotKi11er*
> 
> Well apart from HMB1 Fury only has a bit more SP. 2816 vs 3584 ~ 25% more SP which do not really translate into performance. They both I think have similar amount of ACEs.


Some people have posted diagrams showing 4 ACEs and 2 HWS.


----------



## SuperZan

I haven't seen any extreme concern trolling directed towards Nvidia yet..


----------



## mcg75

*Attention guys.

Please refrain from personal call outs of members.

It's not constructive to the thread.

Thanks.*


----------



## mcg75

Reopened.


----------



## Felix39

I don't get some people, especially those indirectly saying we should all drop DX12 because Nvidia GPUs don't run well on it.
Keeping in mind what DX12 could mean for gamers, even if it's not at full potential yet (not even on AMD cards), like better management of PC resources and getting more with less, dismissing DX12 sounds illogical. So we should hold gaming back and dismiss that fairly big potential because Nvidia doesn't like it?
The same Nvidia that brought us the 4GB that isn't really 4GB, or the "100% DX12 ready" GPUs?
It's not DX12's fault that Nvidia runs unexpectedly weak on it, nor is it AMD's, nor are the developers to blame for using better tech. Asynchronous compute is better tech, make no mistake. The fault lies with Nvidia alone, because they did not know, predict, or care what impact AMD's better implementation would have.
There seems to be a majority who think Pascal will fix this. I'm not so sure: Pascal is almost done, and late changes are hard to implement. I'd even bet it won't be that groundbreaking in Pascal, though I might be wrong. So what then? Drop the advances and stay stuck in time so Nvidia is happy? Why?


----------



## SuperZan

Hear, hear. This is an API that both makers can build around; it's not a hidden black box of code dropped into a game at the last second. AMD didn't optimise for DX11 and greenies were all too happy to point that out, even to those of us who found DX11 performance under AMD acceptable if not remarkable. Now the tables are turned, however briefly, and it's as though Raja cleft the universe and unleashed the next set of Biblical plagues.

AMD built for async, the cards do well in async, and according to Kollock not using it feels like leaving performance on the table. That should be enough to quiet the uproar.


----------



## sblantipodi

Quote:


> Originally Posted by *Kriant*
> 
> Hmmm Hitman doesn't use SLI in dx12 mode?


just tested it, no.


----------



## looniam

welp color me disappointed. i keep looking and looking but i just don't see *any card* becoming "playable" between DX11/DX12. seems the only "winners" are AMD cpus.

maybe [H]ocp will give an indepth review and point out any magic i'm missing . .









oh yeah, i put on my flame suit before posting


----------



## ZealotKi11er

Quote:


> Originally Posted by *looniam*
> 
> welp color me disappointed. i keep looking and looking but i just don't see *any card* becoming "playable" between DX11/DX12. seems the only "winners" are AMD cpus.
> 
> maybe [H]ocp will give an indepth review and point out any magic i'm missing . .
> 
> 
> 
> 
> 
> 
> 
> 
> 
> oh yeah, i put on my flame suit before posting


I do not think this is the "DX12" game to worry about. They are just testing it. At minimum, DX12 will have to remove the CPU overhead with AMD. Other things are not going to happen unless developers support them. Dual GPU is a feature devs can choose to implement.


----------



## Cyclops

As an Nvidia user, I agree with Nvidia bashing. It's time for them to wake up and put on their big boy pants.

They should welcome the challenge instead of frown and boycott.


----------



## BradleyW

Is crossfire working?


----------



## magnek

Quote:


> Originally Posted by *Cyclops*
> 
> As an Nvidia user, I agree with Nvidia bashing. It's time for them to wake up and put on their big boy pants.
> 
> They should welcome the challenge instead of frown and boycott.


As an nVidia user who bashes nVidia, I approve of this post.









And such objectivity from an nVidia user is pretty rare these days. *looks at location* Oh that makes much more sense now.


----------



## 12Cores

Is the intro running in Directx 12?


----------



## Cyclops

Quote:


> Originally Posted by *magnek*
> 
> As an nVidia user who bashes nVidia, I approve of this post.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And such objectivity from an nVidia user is pretty rare these days. *looks at location* Oh that makes much more sense now.


Oh, it's not just Nvidia. Look at Intel settling into their cold dark cave, producing locked processors and forcing motherboard manufacturers to lock their stuff down too. They are getting away with it, so screw them both.

The key to objectivity is not being a fanboy of anything. Criticize them all when their products have problems, lack features that should be there, or when the company itself is being downright shady or anti-competitive.


----------



## yesitsmario

A 390X is an 8GB 290X, right? Crazy how relevant a 290X can still be nowadays.


----------



## jologskyblues

The 290 and 390 series are looking to be great GPUs to have with the first wave of DX12 titles coming out. I'm surprised at how they're pulling away from the direct competition and, interestingly, how close they perform to the Fury series.

I'm guessing the games are more readily optimized out of the gate because Hawaii's GCN revision shares more similarities with the GCN in the PS4 and XB1, the lowest common denominator. Fiji, being a newer revision of GCN, may still need further optimization to bring out the best in it.

As for Maxwell 2... meh.

After watching Tom Petersen's interview in PCPer's 12-hour stream, his vague replies to questions about DX12 and async performance did not instill confidence in me as a GTX 980 Ti owner.


----------



## zealord

Quote:


> Originally Posted by *yesitsmario*
> 
> A 390x is an 8gb 290x right? Crazy how relevant a 290x can still be nowadays.


Basically, yes.

Slightly higher memory and core clocks and 8GB of VRAM. From a perception standpoint it was a good move by AMD, since the 290X had a bad reputation for being way too hot and loud due to the poor reference design the card launched with.
More than a few people think the R9 390X and 390 are some sort of new cards that perform quite well and challenge the 980/970. The real winners are the people who grabbed a cheap 290(X) shortly before or around the time the 390(X) cards released.


----------



## 12Cores

Really looking forward to the 14/16nm GPUs this summer; my GPUs are 4 years old. I will happily pay $300 for a single card as fast as these two. I don't think we will see any major benefits from DirectX 12 until that stuff hits; we may have to wait until BF5.


----------



## huzzug

Can anyone with a Fermi card bench this game with compute on, to see what kind of performance they get? Fermi was a fairly compute-heavy card from Nvidia.


----------



## josephimports

Quote:


> Originally Posted by *BradleyW*
> 
> Is crossfire working?


Not for me although 16.3 driver states crossfire profile available.


----------



## SkyNetSTI

And what about SLI support and GTX 780 series performance?
I don't understand why the 770 represents the 700 series in the benchmark chart.


----------



## airfathaaaaa

Quote:


> Originally Posted by *SkyNetSTI*
> 
> And what about sli support and gtx 780 series performance?
> Don't understand why in benc chart 770 represents 7 series.


Because the 780 Ti on the bench is actually where the 980 would have been, and that would be a big embarrassment for Nvidia.


----------



## Kana-Maru

I have uploaded my Fury X benchmarks to my blog. If anyone wants to see my results, or how well it's running on my first-generation X58 system, check here:

http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks

I will be updating with more info soon. It takes a while to go through all of the data, but I think this is enough for now.


----------



## Remij

Quote:


> Originally Posted by *Kana-Maru*
> 
> I have uploaded my Fury X benchmarks to my blog. If anyone want's to see my results or how well it's running on my 1st generation X58 check here:
> 
> http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks
> 
> I will be updating with more info soon. It takes awhile to go through all of the data, but think this is enough for now.


Nice job!


----------



## Mahigan

Quote:


> Originally Posted by *Kana-Maru*
> 
> I have uploaded my Fury X benchmarks to my blog. If anyone want's to see my results or how well it's running on my 1st generation X58 check here:
> 
> http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks
> 
> I will be updating with more info soon. It takes awhile to go through all of the data, but think this is enough for now.


Nice







What's your CPU usage like?


----------



## Kana-Maru

Quote:


> Originally Posted by *Remij*
> 
> Nice job!


Thanks.
Quote:


> Originally Posted by *Mahigan*
> 
> Nice
> 
> 
> 
> 
> 
> 
> 
> What's your CPU usage like?


Thanks. I still have more things to upload and benchmark. I have the CPU+HT chart as well. I haven't uploaded it to my page yet, but all of my cores were definitely working together. My chart showed all 12 logical cores, but I eliminated the HT threads since they made the chart look REALLY crazy.

Here is the CPU usage chart I didn't upload to my article yet:

Click Me


----------



## xenophobe

Quote:


> Originally Posted by *magnek*
> 
> Slightly OT but I see a lot of people OC memory to 8000 and call it a day. Thing is with GDDR5, because of its error correcting ability, errors that start occurring due to instability but not serious enough to show up as artifacts or crash altogether simply get buried, but still negatively impact performance. The reason I run my memory at 7800 is because after some benchmarking, I found that performance actually regressed slightly (about 3%) if I push memory to 8000 instead of 7800. So make sure you're not getting negative scaling with that memory OC.
> 
> Of course I could be preaching to the choir here, in which case


I've benchmarked a few things; nothing but improvement up to 1528 / 8000-something, but at those speeds I do get a rare system freeze. I'm inclined to believe it wants a little more power, but I don't think I can adjust the voltage on my MSI 980, so when that happens I lower the memory to 7900 and it's fine.


----------



## Mahigan

Quote:


> Originally Posted by *Kana-Maru*
> 
> Thanks.
> Thanks. I still have more things to upload and benchmark. I have the CPU+HT chart as well. I haven't uploaded on my page, but all of my cores were definitely working together. My chart showed all 12 logical cores, but I eliminated the HT since it made the chart look REALLY crazy.
> 
> Here is the CPU usage chart I didn't upload to my article yet:
> 
> Click Me


Very nice and clear multi-threaded rendering


----------



## Dargonplay

Quote:


> Originally Posted by *xenophobe*
> 
> I've benchmarked a few things, nothing but improvement to 1528 / 8000something, but at those speeds I do get a rare system freeze. I'm inclined to believe that it wants a little more power, but I don't think I can adjust voltage on my MSI 980, so if that happens, I lower the memory to 7900 and ti's fine.


I can confirm what he said about memory OC: past a certain threshold, performance actually regresses as you push the memory frequency up, even if you don't experience freezes, artifacts, or other common glitches associated with overclocking. There's a ~300MHz window on my card where I'd be 100% stable but performance goes backwards. This doesn't apply to the core clock, of course.

I always wondered why this happened and never found any solid information on the matter, so I had assumed it was the result of trading latency for bandwidth, to the point where latency rises enough to hurt performance and outweigh the benefit of the extra bandwidth.

I can see how I was wrong; his explanation makes more sense. We'd need to test this thoroughly, though.
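A simple way to catch this kind of silent regression is to sweep memory clocks and flag any step where the average fps goes down instead of up (GDDR5's error correction can mask instability as lost performance rather than artifacts). A rough sketch, with made-up sweep numbers purely for illustration:

```python
def find_negative_scaling(results):
    """Given [(mem_clock_mhz, avg_fps), ...] sorted by clock, return the
    clocks where fps dropped versus the previous step -- a hint that
    error correction retries are eating the bandwidth gain."""
    regressions = []
    for (prev_clk, prev_fps), (clk, fps) in zip(results, results[1:]):
        if fps < prev_fps:
            regressions.append(clk)
    return regressions

# Hypothetical sweep: fps scales up to 7800, then regresses at 8000
sweep = [(7000, 90.1), (7400, 92.3), (7800, 93.0), (8000, 90.2)]
print(find_negative_scaling(sweep))  # [8000]
```

Run the same benchmark at each clock step and feed the averages in; any flagged clock is one to back off from even if the card looks "stable".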


----------



## Olivon

Quote:


> Unfortunately, we celebrated too early. The frame lock under DirectX 12 also affects Fiji (Radeon R9 Fury / Nano) and Tonga (R9 380 [X], 285). On Hawaii (R9 390 [X] / 290 [X]), however, frame rates are unlimited, as under DirectX 11. Possibly this is due to a driver error. The measurements are therefore of limited value, and a retest would be necessary. We have already informed AMD and are waiting for feedback.


Quote:


> With a pinch of investigative journalism and a lively exchange of ideas with AMD, we were able to develop a somewhat exotic workaround for the frame-lock problem. Here is how it works: a 144 Hz display must be connected in addition to the UHD display of our test system; its maximum resolution does not matter. The 144 Hz monitor is set as the primary display, and then Hitman's launcher is placed on the secondary display. This eliminates the 60 fps lock in exclusive fullscreen mode. Whether this is a driver error or a bug in the game is still unclear, but the workaround does eliminate the problem. Either a game update or a hotfix for Radeon Software 16.3 should therefore appear in the foreseeable future. We can now provide benchmark values for AMD's latest GCN GPUs, beginning with a Sapphire Radeon R9 Fury Nitro / 4G, which delivers convincing performance at Full HD.


http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/
Quote:


> Even though we managed to get performance figures out of the game's DirectX 12 mode, we chose not to use them, for one simple reason: the actual gameplay experience was disastrous.


Quote:


> Our hope is to return to the title at a later stage, when both the developers and the graphics card makers have got their machinery in order. Until then, we would strongly advise against Hitman's DirectX 12 mode.


http://www.sweclockers.com/test/21855-snabbtest-grafikprestanda-i-hitman/4#content

Well, I expected way better from the DX12 launch. So far, we haven't seen a single good DX12 game launch.


----------



## Kana-Maru

^ I've been enjoying the game in DX12 @ 4K with no issues. I just ran a quick gameplay test at 1080p and got great fps. I'm sure we will get a patch soon, but my experience has been nothing less than pleasant. DX12 has been great for me so far. Next up is Tomb Raider once I finish messing around in Hitman.


----------



## xenophobe

Quote:


> Originally Posted by *Dargonplay*
> 
> Quote:
> 
> 
> 
> Originally Posted by *xenophobe*
> 
> I've benchmarked a few things, nothing but improvement to 1528 / 8000something, but at those speeds I do get a rare system freeze. I'm inclined to believe that it wants a little more power, but I don't think I can adjust voltage on my MSI 980, so if that happens, I lower the memory to 7900 and ti's fine.
> 
> 
> 
> I can confirm what he said about memory OC, after certain threshold performance actually regress as you pump the memory frequency up even if you don't experience freezes, artifacts or other common glitches associated with overclocking, there's this 300MHz threshold on my card where I would be 100% stable but performance goes backwards, this doesn't apply to core clock of course.
> 
> Always wondered why this happened, never found any solid information on the matter so I had assumed it was the result of trading Latency for Bandwitch to the point latency goes down enough to hurt performance outweighing the benefits of the extra bandwitch.
> 
> I can see how I was wrong, his explanation have more sense, we'd need to test this thoroughly though.
Click to expand...

Yeah, I never noticed a performance degradation and never got any artifacting or overheating before a hard freeze, so I assume that it was due to trying to push the stock voltage too hard. In the few games that it does affect, I can either run 1528 core or 8000+ memory but not both... so far the only game I recollect this happening to is Far Cry 4, no other games seemed to have an issue. So I'm not really sure why. FWIW, I only OC when I'm running a game that has some <60fps drops.

So I don't know, I haven't tried testing too much.


----------



## jologskyblues

Quote:


> Originally Posted by *xenophobe*
> 
> Yeah, I never noticed a performance degradation and never got any artifacting or overheating before a hard freeze, so I assume that it was due to trying to push the stock voltage too hard. In the few games that it does affect, I can either run 1528 core or 8000+ memory but not both... so far the only game I recollect this happening to is Far Cry 4, no other games seemed to have an issue. So I'm not really sure why. FWIW, I only OC when I'm running a game that has some <60fps drops.
> 
> So I don't know, I haven't tried testing too much.


Interesting. I also noticed that my 3DMark Firestrike Extreme scores go down a bit when I overclock the memory past 7.6GHz effective.

My scores do increase in Valley/Heaven as memory overclocks go up though, so those benchmarks do not seem to be affected with this anomaly.


----------



## Semel

Considering DX12 in this game is a bit of a mess atm, I see no reason to benchmark it.

Let's see what we get in the DX11 benchmark.

[email protected],16GB @1600, amd fury @1120/560, win10, the game is installed on hdd

1920x1080, everything maxed out:
Quote:


> 85.35fps Average


Pretty good... but the game doesn't look stunning; if anything it looks dated, so not really surprising.

Another thing: this avg fps number doesn't represent performance in Paris, for instance. It is considerably lower there.

Quote:


> Originally Posted by *Kana-Maru*
> 
> So once again I highly suggest that AMD Fury X users keep the Render Target Reuse Enabled..


When I had it on Auto, my fps was locked to 60. When I disabled it, the fps lock was removed but the game had micro-stutters or something.

I'm gonna give "Enabled" a try later.

PS: Nope... still fps locked...

Auto: 60 fps lock
Enabled: 60 fps lock
Disabled: no fps lock, but performance worse than with Enabled in windowed mode (which removes the fps lock)

PPS: Ahhh, I see... you used windowed mode for your tests.


----------



## ku4eto

Quote:


> Originally Posted by *zealord*
> 
> Basically yes.
> 
> Slightly higher mem and core clocks and 8GB Vram. From a perception standpoint it was a good move from AMD since the 290X had a bad reputation of being way too hot and loud due to the bad reference design they presented the card with.
> More than a few think the R9 390X and 390 are some sort of new cards that perform quite well and challenge 980/970. The real winners are the people that grabbed a cheap 290(X) shortly before or around the time the 390(X) cards released.


I bought a Sapphire Tri-X R9 290 for $180 (330 leva in local currency), used. It overclocks perfectly: 10% on the core and +125 on the memory at stock voltage. Best bang for the buck ever (almost wrote the F word here).
Also, why aren't reviewers testing with Phenom IIs? I want to know how well they fare. I can imagine 30% CPU gains in this game just from DX12.


----------



## magnek

Quote:


> Originally Posted by *Dargonplay*
> 
> I can confirm what he said about memory OC, after certain threshold performance actually regress as you pump the memory frequency up even if you don't experience freezes, artifacts or other common glitches associated with overclocking, there's this 300MHz threshold on my card where I would be 100% stable but performance goes backwards, this doesn't apply to core clock of course.
> 
> Always wondered why this happened, never found any solid information on the matter so I had assumed it was the result of trading Latency for Bandwitch to the point latency goes down enough to hurt performance outweighing the benefits of the extra bandwitch.
> 
> I can see how I was wrong, his explanation have more sense, we'd need to test this thoroughly though.


Quote:


> Originally Posted by *xenophobe*
> 
> Yeah, I never noticed a performance degradation and never got any artifacting or overheating before a hard freeze, so I assume that it was due to trying to push the stock voltage too hard. In the few games that it does affect, I can either run 1528 core or 8000+ memory but not both... so far the only game I recollect this happening to is Far Cry 4, no other games seemed to have an issue. So I'm not really sure why. FWIW, I only OC when I'm running a game that has some <60fps drops.
> 
> So I don't know, I haven't tried testing too much.


Quote:


> Originally Posted by *jologskyblues*
> 
> Interesting. I also noticed that my 3DMark Firestrike Extreme scores go down a bit when I overclock the memory past 7.6GHz effective.
> 
> My scores do increase in Valley/Heaven as memory overclocks go up though, so those benchmarks do not seem to be affected with this anomaly.


If you guys are interested AnandTech has a small piece on GDDR5's error correcting ability.


----------



## xenophobe

Quote:


> Originally Posted by *jologskyblues*
> 
> Interesting. I also noticed that my 3DMark Firestrike Extreme scores go down a bit when I overclock the memory past 7.6GHz effective.
> 
> My scores do increase in Valley/Heaven as memory overclocks go up though, so those benchmarks do not seem to be affected with this anomaly.


I did use Valley and in-game benchmarks, Tomb Raider being one... I don't recall the others offhand; random stuff from my Steam library. I didn't use 3DMark, so perhaps I would experience the same thing. I only briefly tested when I got my card over a year ago and called it good. Perhaps that was a little premature, I dunno.


----------



## Semel

I tried the DX12 benchmark in windowed mode (removes the fps lock):

[email protected],16GB @1600, amd fury @1120/560, win10, the game is installed on hdd
Quote:


> *DX12*, 1920x1080,windowed mode, everything maxed out
> 
> *96.46fps* Average


Quote:


> *DX11*, 1920x1080, exclusive fullscreen, everything maxed out
> 
> *85.35 fps* Average


The Paris level is considerably more demanding, though, so the average fps above doesn't represent performance there.
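For anyone wanting to put a number on the API difference, the relative uplift is just the ratio of the two averages. A quick sketch using the figures above (DX11 85.35 fps vs. DX12 windowed 96.46 fps):

```python
def pct_uplift(baseline_fps, new_fps):
    """Percent change of new_fps relative to baseline_fps."""
    return (new_fps / baseline_fps - 1.0) * 100.0

# DX11 fullscreen vs. DX12 windowed on the same Fury system
print(round(pct_uplift(85.35, 96.46), 1))  # 13.0
```

So roughly a 13% average uplift here, with the caveat that one run is windowed and the other exclusive fullscreen.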


----------



## ku4eto

Quote:


> Originally Posted by *Semel*
> 
> I tried DX12 benchmark in a windowed mode(removes fps lock :
> 
> [email protected],16GB @1600, amd fury @1120/560, win10, the game is installed on hdd
> 
> Paris level is considerably more demanding though as in the average fps above doesn't represent performance there.


Mind that windowed mode puts more strain on AMD GPUs; they don't run as well there.


----------



## Semel

Yeah, I know. I never use it when gaming.

I wonder though.. Will this fps lock be fixed any time soon?


----------



## Cybertox

Isn't this an AMD game? Not surprised seeing such results.


----------



## Stige

This isn't the first time people have said there's an fps lock in DX12, but I don't have one? In fullscreen.


----------



## looniam

Quote:


> Originally Posted by *Stige*
> 
> This isn't the first time people say there is a FPS lock in DX12 but I don't have one? In fullscreen.


the workaround was using a refresh rate higher than 60Hz.

and yours is... 120Hz? do you see frames higher than that?


----------



## Kana-Maru

Just finished running some benchmarks at 7680x2160!

http://s26.postimg.org/xtiy2ly93/Hitman_Kana_Maru_7680x2160_small.jpg

The game is actually playable, and even more so after I lowered some settings. I didn't think the game would run this well, but it did.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Cybertox*
> 
> Isn't this an AMD game? Not surprised seeing such results.


This is an AMD-sponsored game; it doesn't have any funky black-box API extensions bolted onto it.


----------



## PlugSeven

Quote:


> Originally Posted by *Cybertox*
> 
> Isn't this an AMD game? Not surprised seeing such results.


I disagree; this actually bucks the trend. Normally in DX11 the cards fall where you would expect them to in Gaming Evolved titles. Here, however, the Radeons have a bigger advantage because they're simply better suited to this API than any current GeForce. I was actually expecting more of a rout than this, TBH.


----------



## jezzer

lol @ a 980Ti on Dx12


----------



## mtcn77

Quote:


> Originally Posted by *PlugSeven*
> 
> I disagree, this actually bucks the trend, normally in dx11, the cards fall were you would expect them in GE titles. Here however, the radeons are advantaged more because they're simply better suited to this API than any current geforce. I was actually expecting more of rout than this TBH.


It suits their CPUs more, TBH.


----------



## Cybertox

Quote:


> Originally Posted by *PlugSeven*
> 
> I disagree, this actually bucks the trend, normally in dx11, the cards fall were you would expect them in GE titles. Here however, the radeons are advantaged more because they're simply better suited to this API than any current geforce. I was actually expecting more of rout than this TBH.


What is there to disagree with? I haven't made any particular statement.

Yes, you are correct. The latest AMD cards are indeed better suited to DX12 than the ones from Nvidia. However, this game doesn't represent DX12 performance that well, mainly because it favors AMD and is most likely more optimized for those cards. If the game didn't favor either vendor and were optimized equally well for both, the difference in performance would be smaller and not as drastic.


----------



## PlugSeven

Quote:


> Originally Posted by *Cybertox*
> 
> What is there to disagree with? I havent made any particular statement.


Well then, perhaps you should have fleshed out the quote below more.
Quote:


> Originally Posted by *Cybertox*
> 
> Isn't this an AMD game? Not surprised seeing such results.


----------



## Stige

Quote:


> Originally Posted by *looniam*
> 
> the work around was using a refresh rate higher than 60Hz.
> 
> and your's is . . .120Hz? do you see frames higher than that?


Well that explains why it's not capped for me heh. 120Hz it is.


----------



## infranoia

Quote:


> Originally Posted by *Cybertox*
> 
> What is there to disagree with? I havent made any particular statement.
> 
> Yes, you are correct. The latest AMD cards are indeed better suited for DX12 than the ones from Nvidia. However this game doesn't represent DX12 performance that well mainly due to the fact that it is AMD favored and most likely more optimized for these cards. If the game wouldn't favor anyone and would have been optimized for both equally well, the difference in performance would be lesser and not as drastic.


But isn't this whole assumption incorrect? Didn't I read that DX12 doesn't lend itself to vendor-specific optimizations or discrete rendering paths?

So far the only thing that makes these DX12 releases "AMD games" is that they don't have GameWorks, and they use DX12.

I mean, isn't it just a little bit telling that we haven't seen an "Nvidia-sponsored" game written from the ground-up in DX12?


----------



## Remij

Quote:


> Originally Posted by *infranoia*
> 
> But isn't this whole assumption incorrect? Didn't I read that DX12 doesn't lend itself to vendor-specific optimizations or discrete rendering paths?
> 
> So far the only thing that makes these DX12 releases "AMD games" is that they don't have GameWorks, and they use DX12.
> 
> I mean, isn't it just a little bit telling that we haven't seen an "Nvidia-sponsored" game written from the ground-up in DX12?


Not really. Games in DX12, just like in DX11, can be more suited to one type of architecture than another. The reason you're seeing AMD-sponsored, built-from-the-ground-up DX12 titles is that AMD has the most to gain from it and is actively pushing devs to adopt it. Not to mention AMD has already done a lot of the work that Nvidia now seems to be catching up on.

The problem I have is that, according to "the other side", any game that has GameWorks code in any sense, whether integrated into the engine or applied on top of it, hardware-accelerated or not, is labeled Nvidia trash that hinders the competition. Of course, any good example of both code paths working well on both vendors is dismissed and attributed to the developer working with the vendor from the start to maintain parity, such as The Division, for example.

In my opinion, that should be the standard. Devs should feel compelled by the vendors to implement and code well for their hardware and to ensure things work properly at release.


----------



## Cybertox

Quote:


> Originally Posted by *infranoia*
> 
> But isn't this whole assumption incorrect? Didn't I read that DX12 doesn't lend itself to vendor-specific optimizations or discrete rendering paths?
> 
> So far the only thing that makes these DX12 releases "AMD games" is that they don't have GameWorks, and they use DX12.
> 
> I mean, isn't it just a little bit telling that we haven't seen an "Nvidia-sponsored" game written from the ground-up in DX12?


No, as I mentioned in my post: a game that doesn't favor either of the two vendors and is equally optimized for both will portray true DX12 performance. Better optimization is not necessarily achieved through alternative or discrete rendering paths, so what you read doesn't really apply. I reckon the same thing was said about DX11, and that was obviously not the case.


----------



## KarathKasun

Quote:


> Originally Posted by *Cybertox*
> 
> No, as I mentioned in my post. A game which does not favor any of the two vendors, equally optimized for both, is going to portray true DX12 performance. Better optimization is not necessarily achieved through usage of alternative or discrete rendering paths, so what you read does not really apply. I reckon the same thing was said about DX11 but that was obviously not the case.


Async compute as implemented by AMD is vastly superior to what NV is currently doing. Allowing fast context switching at the cluster scale rather than at the macro/chip-wide scale is the way forward. Kind of like OoO versus in-order CPUs: it eats extra power but allows far superior performance for equal coding effort.
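To make the analogy concrete, here is a toy Python sketch of why overlapping a compute workload with graphics work can shorten a frame. The millisecond figures and the fixed overlap fraction are invented for illustration; real scheduling depends on how much shader capacity the graphics pass leaves idle.

```python
# Toy model of async compute: with a single queue, compute work runs
# after graphics work; with a separate compute queue, part of the
# compute work can hide behind the graphics work already in flight.
# All figures are made-up milliseconds, purely illustrative.

def serial_frame(graphics_ms: float, compute_ms: float) -> float:
    """One queue: the compute pass waits for the graphics pass."""
    return graphics_ms + compute_ms

def async_frame(graphics_ms: float, compute_ms: float,
                overlap: float = 0.8) -> float:
    """Two queues: a fraction of the compute pass overlaps graphics,
    so only the remainder extends the frame time."""
    hidden = min(compute_ms * overlap, graphics_ms)
    return graphics_ms + compute_ms - hidden

if __name__ == "__main__":
    g, c = 12.0, 4.0  # hypothetical graphics and compute passes
    print(f"serial: {serial_frame(g, c):.1f} ms")  # 16.0 ms
    print(f"async:  {async_frame(g, c):.1f} ms")   # 12.8 ms
```

The point of the sketch is the KarathKasun analogy: the overlapped frame is shorter for the same total work, but only when there is slack to hide the compute in, which is also why a fully loaded GPU gains little from it.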


----------



## Cybertox

Quote:


> Originally Posted by *KarathKasun*
> 
> Async compute as is implemented by AMD is vastly superior to what NV is currently doing. Allowing fast context switching on a cluster scale rather than macro/chip wide scale is the way forward. Kinda like OoO versus in order CPU's, it eats extra power but allows far superior performance for equal coding effort.


Again, refer to my original post. AMD cards are better suited for DX12 due to, as you mentioned, more advanced async compute, but this is not the only factor contributing to the overall picture. I stated that if the effort put into optimization had been the same, the difference in DX12 performance between AMD and Nvidia would not be as significant as what can be seen here.

There is no doubt that current AMD GPUs outperform Nvidia GPUs in DX12, but not to such a drastic degree as here.


----------



## Assirra

Quote:


> Originally Posted by *infranoia*
> 
> But isn't this whole assumption incorrect? Didn't I read that DX12 doesn't lend itself to vendor-specific optimizations or discrete rendering paths?
> 
> So far the only thing that makes these DX12 releases "AMD games" is that they don't have GameWorks, and they use DX12.
> 
> I mean, isn't it just a little bit telling that we haven't seen an "Nvidia-sponsored" game written from the ground-up in DX12?


Does it show the "gaming evolved" logo on startup? This is an honest question since I don't know.
If it does, it is an AMD game in the sense of AMD having had better contact with the developers and better access to the game while it was in the making. Look at what happened with Tomb Raider in 2013, for instance: Nvidia got the final code really late, which resulted in a mess on launch for Nvidia cards.
Remember that AMD and Nvidia games existed before GameWorks was a thing.


----------



## infranoia

Quote:


> Originally Posted by *Cybertox*
> 
> No, as I mentioned in my post. A game which does not favor any of the two vendors, equally optimized for both, is going to portray true DX12 performance. Better optimization is not necessarily achieved through usage of alternative or discrete rendering paths, so what you read does not really apply. I reckon the same thing was said about DX11 but that was obviously not the case.


Well so far, the only Nvidia-specific vendor path optimization performed in AotS-- as one of the two released DX12 titles-- was to turn off Async Compute. And there are no vendor-specific paths in AotS for AMD, per @Kollock.
Quote:


> Originally Posted by *Kollock*
> 
> Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only 'vendor' specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path.


In other words, AotS isn't path optimized for either vendor with the exception of Async Compute on/off, so it likely represents a clean-slate comparison of the two architectures under DX12.

Optimizing for AMD and optimizing for Nvidia with vendor-specific paths might be a better bench for a real-world game experience, but it stacks the deck on either end with dirty tricks, and isn't a *real* architecture-level DX12 benchmark.

...But then you can respond by saying, "but that's the way the world works, son!" and you'd be absolutely right. I only question what tricks Nvidia has with DX12 that could improve their showing, if any are possible. I honestly have no idea, nor does anyone here who isn't buried deep in Nvidia's driver group.


----------



## Defoler

Quote:


> Originally Posted by *infranoia*
> 
> But isn't this whole assumption incorrect? Didn't I read that DX12 doesn't lend itself to vendor-specific optimizations or discrete rendering paths?
> 
> So far the only thing that makes these DX12 releases "AMD games" is that they don't have GameWorks, and they use DX12.
> 
> I mean, isn't it just a little bit telling that we haven't seen an "Nvidia-sponsored" game written from the ground-up in DX12?


And what makes DX11 favour one vendor or the other?

It doesn't matter which API you are developing for. The way you make the calls, the amount of data you send to the API, the order of calls: there are a hell of a lot of little things which can make or break performance.
A developer who works closely with one vendor or the other will actively make the API run better for them, and since the two vendors differ greatly in how they implement their hardware and drivers, you can really mess things up on one vendor and make your game very hard to optimise later for the other.

DX12 does not change that at all. It only makes a few more things depend on the developer rather than on the vendor in terms of how the API runs.

I guess the reason we haven't seen Nvidia rushing into DX12 is that they are still working on it. While AMD has been concentrating on DX12 as its goal, Nvidia has not. Hopefully Nvidia will push more into DX12 as time goes on. For 2016 I'm not sure it really matters that much, as the DX12 titles due in 2016 can be counted on one hand right now (with all the recent delays and issues). Maybe by the end of the year, and by then, Pascal might change a few things for Nvidia (like Polaris for AMD).


----------



## PontiacGTX

Quote:


> Originally Posted by *Assirra*
> 
> Does it show the "gaming evolved" logo on startup? This is an honest question since I don't know.
> If it does, it is an AMD game in the sense of AMD having had better contact with the developers and better access to the game while it was in the making.


more than this? http://www.guru3d.com/news-story/nvidia-wanted-oxide-dev-dx12-benchmark-to-disable-certain-settings.html


----------



## BradleyW

Based on the benchmark between DX11 and DX12 @ 1080p, the R9 200 series does not seem to benefit from DX12. However, the R9 300 series does.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> Based on the benchmark between DX11 and DX12 @ 1080p, the R9 200 series does not seem to benefit from DX12. However, the R9 300 series does.


Interesting. I reflashed my 290x to a 390x just yesterday, so I'll have to compare benches once/if I get the game.

It's not like the old 6800GS to GT flash, though-- the 290x still shows up as a 290x, just with 390x bios and timings.


----------



## Defoler

Quote:


> Originally Posted by *infranoia*
> 
> Well so far, the only Nvidia-specific vendor path optimization performed in AotS-- as one of the two released DX12 titles-- was to turn off Async Compute. And there are no vendor-specific paths in AotS for AMD, per @Kollock.
> In other words, AotS isn't path optimized for either vendor with the exception of Async Compute on/off, so it likely represents a clean-slate comparison of the two architectures under DX12.
> 
> Optimizing for AMD and optimizing for Nvidia might be a better bench for a real-world game experience, but it stacks the deck on either end with dirty tricks, and isn't a *real* architecture-level DX12 benchmark.


The problem is that you don't need a vendor-specific path to make the game optimised for one vendor.
All you need to do is work with the API in a way that favours one vendor or the other, and you tip the scales one way or the other.

There are several examples in this thread showing both Nvidia and AMD taking advantage of this in some games.

DX12 will not be different at all in that respect. In the future we will most likely see DX12 games favouring Nvidia instead of AMD because of how they are made.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> Interesting. I reflashed my 290x to a 390x just yesterday, so I'll have to compare benches once/if I get the game.
> 
> It's not like the old 6800GS to GT flash, though-- the 290x still shows up as a 290x, just with 390x bios and timings.


That would be great if you could test.


----------



## Defoler

Quote:


> Originally Posted by *PontiacGTX*
> 
> more than this? http://www.guru3d.com/news-story/nvidia-wanted-oxide-dev-dx12-benchmark-to-disable-certain-settings.html


But that was before async drivers and before DX12-optimised drivers from Nvidia.

Nvidia's official statement back then was that once the game was out, there would be good DX12-optimised drivers with full async support, and that for development and benchmarking tests they asked for certain things to be disabled in order to make a decent comparison, since comparing optimised vs. non-optimised drivers is, well... cheating in a way.

I guess everyone forgot that a couple of months later Nvidia provided a beta DX12 update which showed them neck and neck with AMD, even with a bit of async in the mix.

Overall, both vendors will progress and fix DX12 issues (hopefully, as Microsoft seems to be trying to put sticks in DX12's wheels).


----------



## Semel

Quote:


> Originally Posted by *Assirra*
> 
> Look at what happened with Tomb Raider 2013 for instance. Nvidia had the final code really late which resulted in a mess on launch for nvdia cards.


I remember that. I had a Titan back then. The difference is that it was fixed, and quickly. Unlike most if not all GimpWorks games that still have bad performance on anything except a 980 Ti (in the latest games).

Add to that closed source vs. open source...

Add to that HairWorks' disastrous performance vs. open-source TressFX 3.0's outstanding performance..

Yeah..


----------



## Carniflex

So, has anyone figured out whether the DX12 version of the game actually _*shows*_ any frames above 60 fps? Or does it just drop them and happily report ~5% higher fps than DX11, which goes the extra step of actually displaying those frames?


----------



## Tgrove

So now people are comparing DX12 to GameWorks/proprietary tech just because Nvidia hasn't positioned itself to take advantage....

I seem to have two GTX 970 boxes that have DirectX 12 listed on the front.

So how is it proprietary if their cards are DX12 enabled? Oh, you mean Nvidia plays the short game, so now it's coming back to haunt them?

When I started PC gaming, and still to this day, the biggest problem I see is our hardware not being taken full advantage of.

That day seems to finally be upon us, but people are not happy like they should be.

Maybe they shouldn't have marketed their cards as DX12?


----------



## Assirra

Quote:


> Originally Posted by *Semel*
> 
> I remember that. I had a titan back then. The difference is it was fixed and quite fast. Unlike most if not all gimpworks games that still have bad performance on anything except for 980ti (latest games).
> 
> Add to that closed source vs open source...
> 
> Add to that nvidia pubic hair's disastrous performance vs open source tressfx 3.0 outstanding performance..
> 
> Yeah..


I am not saying GameWorks is good. Sadly, it had potential but was never used well. Those animals in The Witcher 3 look gorgeous with HairWorks, but sadly it tanks performance pretty hard.

My point was that the whole "AMD game" and "Nvidia game" thing existed way before then.
From my point of view, the post I quoted was implying there is no such thing, and that the only reason a game gets called an "AMD game" is that it doesn't use GameWorks, when in the past it just meant which vendor had more access to the game during development and worked closer with the devs.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Defoler*
> 
> Problem is that you don't need to "path optimised" to make the game optimised to one vendor.
> All you need to do is work with the API in a way that favours one vendor or the other, and you tip the scales on way or the other.
> 
> There are several examples in this thread showing both nvidia and amd taking advantage of it on some games.
> 
> DX12 will not be different at all in that aspect. In the future we will most likely see DX12 games favouriting nvidia instead of amd because how they are being done.


DX12 by default will lock out all externally injected APIs; it's something MS did and has said since the beginning, and this is what makes it "pure" in a sense.
Now, Vulkan, being open source, will actually be very biased towards Nvidia games. I mean, heck, even the demos Nvidia made for Vulkan cannot be run on AMD cards, which is stupid considering that Vulkan is Mantle..


----------



## killerhz

Well, finally got to install Windows 10 and Hitman, and the game and benchmark just keep crashing, ugh...


----------



## Kana-Maru

For those who missed my post late last night, I have my Fury X benchmarks up on my blog. I have actual in-game benchmarks as well.

*Hitman DX 12 Fury X Benchmarks*

Check out my results here:
http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks

Next up will be the Paris level benchmarked - 100% maxed @ 4K.


----------



## fewness

Had to disable SLI to get DX12 mode running.....


Spoiler: Settings



RESOLUTION: 3840 x 2160
ResolutionWidth = 3840
ResolutionHeight = 2160
Refreshrate = 60
Fullscreen = 1
ExclusiveFullscreen = 0
VSync = 0
VSyncInterval = 1
Monitor = 0
Adapter = 0
Aspectratio = 0
WindowPosX = 0
WindowPosY = 0
WindowWidth = 3840
WindowHeight = 2160
Stereoscopic = 0
Stereo_Depth = 3.000000
Stereo_Strength = 0.030000
WindowMaximized = 0
FocusLoss = 0
UseGdiCursor = 0
ShadowQuality = 3
ShadowResolution = 2
TextureResolution = 2
TextureFilter = 4
SSAO = 1
MirrorQuality = 0
AntiAliasing = 2
LevelOfDetail = 3
MotionBlur = 0
Bokeh = 0
SuperSampling = 1.000000
Gamma = 1.000000
QualityProfile = 4



Benchmark Results:
---- CPU ----
4908 frames
42.97fps Average
9.34fps Min
96.40fps Max
23.27ms Average
10.37ms Min
107.11ms Max
---- GPU ----
4908 frames
43.07fps Average
8.74fps Min
89.84fps Max
23.22ms Average
11.13ms Min
114.38ms Max
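As a side note, the fps and frame-time columns above are two views of the same capture: average fps is frames divided by total time (i.e. 1000 divided by the mean frame time in ms), and min fps corresponds to the worst, longest frame. A quick sketch of that conversion with made-up frame times, not the actual capture data:

```python
# Convert a list of per-frame times (ms) into the kind of summary the
# Hitman benchmark prints. Sample values are invented for illustration.

frame_times_ms = [20.0, 25.0, 40.0, 15.0]

avg_ms = sum(frame_times_ms) / len(frame_times_ms)
avg_fps = 1000.0 / avg_ms               # frames / total seconds
min_fps = 1000.0 / max(frame_times_ms)  # slowest frame
max_fps = 1000.0 / min(frame_times_ms)  # fastest frame

print(f"{avg_fps:.2f}fps Average")  # 40.00fps Average
print(f"{min_fps:.2f}fps Min")      # 25.00fps Min
print(f"{max_fps:.2f}fps Max")      # 66.67fps Max
```

The posted GPU numbers are consistent with this: 1000 / 23.22 ms average frame time is about 43.07 fps.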


----------



## Glottis

This game's launch is a disaster; its performance and optimization are a complete disaster. I'm surprised how few people here are talking about that. I guess everyone is too focused on reading websites' DX11 vs. DX12 benchmarks to notice what a disastrous launch this game had.

"49% of the 2341 reviews are positive." So much for AMD games being well optimized.


----------



## sugarhell

Quote:


> Originally Posted by *Glottis*
> 
> This game's launch is a disaster, this game's performance and optimization is a complete disaster. Surprised how little people here talk about that, i guess everyone is too focused on reading websites benchmark dx11 vs 12 to see what disastrous launch this game had.
> 
> "49% of the 2341 reviews are positive." So much for AMD games being well optimized.


Stop trolling.

Just because a game is optimized doesn't mean it's a good game.


----------



## Kana-Maru

Quote:


> Originally Posted by *Glottis*
> 
> This game's launch is a disaster, this game's performance and optimization is a complete disaster. Surprised how little people here talk about that, i guess everyone is too focused on reading websites benchmark dx11 vs 12 to see what disastrous launch this game had.
> 
> "49% of the 2341 reviews are positive." So much for AMD games being well optimized.


Do we REALLY need to go down the list of Nvidia-sponsored titles from 2008 up until now? No, we don't need to tackle that long list. I'm actually playing while benchmarking and enjoying the game. Just like all games released in the digital age, we will have to wait for patches before all issues are addressed. Man, you guys act like AMD developed the game, or like patches, day-one patches, and hotfixes aren't common. The devs will address the issues some people are having.


----------



## Glottis

Quote:


> Originally Posted by *sugarhell*
> 
> Stop trolling.
> 
> Because it is optimized doesnt mean it is a good game.


Stop denial.

It's neither optimized nor is it good. http://store.steampowered.com/app/236870/


----------



## GoLDii3

Quote:


> Originally Posted by *Glottis*
> 
> Stop denial.
> 
> It's neither optimized nor is it good. http://store.steampowered.com/app/236870/


Your point being? Thanks, Captain Obvious.


----------



## airfathaaaaa

Quote:


> Originally Posted by *Glottis*
> 
> Stop denial.
> 
> It's neither optimized nor is it good. http://store.steampowered.com/app/236870/


Since when do opinions form a factual result about whether a game is good or not?


----------



## valentyn0

By that logic, nothing should matter; if opinion is not considered, then nothing is considered!

Now, if we talk about statistics, they are made up of opinions that result in a conclusion (a fact).

Should I spell it out for you?


----------



## airfathaaaaa

Quote:


> Originally Posted by *valentyn0*
> 
> By that logic then nothing should matter, if opinion is not considered then nothing is considered !
> 
> Now if we talk about statistics, they both are made up of opinions that result in a conclusion (fact).
> 
> Should i spell it out loud for you?


So if 100 people say it's bad and I actually enjoy it, does that mean the 100 were right and I was wrong? Since when? Since when does YOUR opinion matter more than someone else's?


----------



## killerhz

Hope they patch or update this game so I can play it.


----------



## Stige

Quote:


> Originally Posted by *Glottis*
> 
> This game's launch is a disaster, this game's performance and optimization is a complete disaster. Surprised how little people here talk about that, i guess everyone is too focused on reading websites benchmark dx11 vs 12 to see what disastrous launch this game had.
> 
> "49% of the 2341 reviews are positive." So much for AMD games being well optimized.


I can run it at constant 60-65+ with my 390 at 1440p by lowering both shadow settings to Medium, rest at max. AA off.

Seems pretty acceptable to me at 1440p, and it looks decent enough if you ask me.


----------



## Glottis

Quote:


> Originally Posted by *Stige*
> 
> I can run it at constant 60-65+ with my 390 at 1440p by lowering both shadow settings to Medium, rest at max. AA off.
> 
> Seems pretty acceptable to me at 1440p, and it looks decent enough if you ask me.


My friend with an OCed 970 and an OCed 4790K gets 40 fps at medium settings in Hitman at 1080p in DX11, and DX12 isn't even playable for him, a constant crashfest. IMO this is absurd performance on such specs.


----------



## sugarhell

Quote:


> Originally Posted by *Glottis*
> 
> My friend with a 970 OCed and 4790K OCed gets 40fps with medium settings in Hitman at 1080p in DX11 and DX12 for him isn't even playable, constant crashfest. IMO this is absurd performance on such specs.


I have a friend with a 970 OCed and 4790k OCed and he gets 120 fps @4k all maxed out


----------



## Stige

Quote:


> Originally Posted by *Glottis*
> 
> My friend with a 970 OCed and 4790K OCed gets 40fps with medium settings in Hitman at 1080p in DX11 and DX12 for him isn't even playable, constant crashfest. IMO this is absurd performance on such specs.


Nvidia hah

Quote:


> Originally Posted by *sugarhell*
> 
> I have a friend with a 970 OCed and 4790k OCed and he gets 120 fps @4k all maxed out


If you are gonna troll, at least try harder.


----------



## Cybertox

Individual performance experiences and overall game impressions vary to great degrees. However, the statistical majority is what gives us credible reference points which partially reflect the actual quality of a game, whether in terms of technical optimization or game design.


----------



## GorillaSceptre

Steam performance reviews will always be useless to me until they show/verify what systems people are running on. At this point about 90% of Steam users use 980 Ti's..

Reading some of the reviews and what people expect from their 280X's and 780m's makes me...

The always-online is funny as hell though; what morons thought that was a good idea?

I wouldn't be surprised if that has completely broken the game since the benches were posted..


----------



## Catscratch

One thing is sure: AMD CPUs greatly benefit from DX12. Intel is in trouble.

That's probably why DX12 titles show AMD on par with Nvidia even in DX11 mode: the game is written for DX12, so it uses more resources, which gives the lost potential back to AMD.


----------



## Catscratch

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Steam performance reviews will always be useless to me until they show/verify what systems people are running on. At this point about 90% of Steam users use 980 Ti's..
> 
> Reading some of the reviews and what people expect from their 280X's and 780m's makes me
> 
> The always online is funny as hell though, what morons thought that was a good idea?
> 
> I wouldn't be surprised if that has completely broken the game since benches were posted..


What 280X users should expect is that "our" cards will be left behind by the 380 and 285 in DX12 mode, but should still gain a few more fps over DX11. Otherwise the game's DX12 implementation is just for show.


----------



## BradleyW

It seems the R9 290 did not benefit from Direct-X 12 in Hitman. I'd say AMD have reserved their performance gains for 300/Fury series only, *just to screw us 200 users into buying a new GPU.*


----------



## Catscratch

Quote:


> Originally Posted by *BradleyW*
> 
> It seems the R9 290 did not benefit from Direct-X 12 in Hitman. I'd say AMD have reserved their performance gains for 300/Fury series only, *just to screw us 200 users into buying a new GPU.*


People forget how a game is coded directly affects performance too, regardless of DX12. Remember there are DX11 titles that favor AMD.

The first time I heard about DX12 and async compute, and the Star Swarm tech demo, I just thought this could be big for RTS games, where you have many units and the AI has to issue lots of commands. I never thought FPS games would benefit that much. Then again, it also depends on how the game is coded. This is just the beginning; no title represents the best of DX12 yet.


----------



## killerhz

Quote:


> Originally Posted by *Glottis*
> 
> My friend with a 970 OCed and 4790K OCed gets 40fps with medium settings in Hitman at 1080p in DX11 and *DX12 for him isn't even playable, constant crashfest.* IMO this is absurd performance on such specs.


This is what happens to me, and I refuse to play it until it's fixed. A $60 game just taking up much-needed space on my SSD and pissing me off...


----------



## Cybertox

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Steam performance reviews will always be useless to me until they show/verify what systems people are running on. At this point about 90% of Steam users use 980 Ti's..
> 
> Reading some of the reviews and what people expect from their 280X's and 780m's makes me
> 
> The always online is funny as hell though, what morons thought that was a good idea?
> 
> I wouldn't be surprised if that has completely broken the game since benches were posted..


Always online is a counter-measure against piracy, and that is the sole reason why certain games are online only. It's not about it being a good or a bad idea; it's about how to prevent or decrease piracy. You must admit it's been pretty effective so far: without constant online verification, there's no way for pirates to play the game. Some game assets are even streamed through the internet connection.


----------



## GorillaSceptre

Quote:


> Originally Posted by *Catscratch*
> 
> What 280X users should expect is that "our" cards will be left behind by the 380 and 285 in DX12 mode, but should still gain a few more fps over DX11. Otherwise the game's DX12 implementation is just for show.


I agree, I meant the context in which people are talking about them. I saw one guy in the Hitman reviews say, "My 780m runs the Division at 180p/60fps Ultra." Yeah, right..

Quote:


> Originally Posted by *BradleyW*
> 
> It seems the R9 290 did not benefit from Direct-X 12 in Hitman. I'd say AMD have reserved their performance gains for 300/Fury series only, *just to screw us 200 users into buying a new GPU.*


That's unacceptable if true.. They won't get away with it if that's the case.

Got a link?
Quote:


> Originally Posted by *Cybertox*
> 
> Always online is a counter-measure for piracy. And that is the sole reason why certain games are online only. Its not about a good or a bad idea but its about how to prevent or decrease piracy. You must admit, its been pretty effective so far. No constant online verification, no-way for pirates to play the game. Some game assets are even streamed trough the internet connection.


I know, but there are other measures that can be taken, Denuvo etc. Making people's games crash and forcing them to be online is garbage, and i'll never touch a SP game that does it. It seems like it's a big cause of the performance issues in this title.


----------



## BradleyW

Quote:


> Originally Posted by *GorillaSceptre*
> 
> I agree, i was meaning the context in which people are talking about them. I saw one guy in the Hitman reviews say; "My 780m runs the Division at 180p/60fps Ultra" Yeah right..
> 
> That's unacceptable if true..
> 
> They won't get away with it if that's the case.
> 
> Got a link ?


It is my guess based on the performance review in the OP. Click the link, go to the 1080p result and compare 290 performance on DX11 and DX12. Then compare again with the 390 and Fury. You'll see little to no gain on the R9 200 series. (link at top of OP).
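One way to make that chart comparison less eyeball-driven is to compute the DX11-to-DX12 uplift per card. A minimal sketch; the fps figures below are hypothetical placeholders, not the review's actual numbers:

```python
# Percentage uplift from DX11 to DX12 for a set of cards.
# All fps values are made-up placeholders for illustration.

def uplift_pct(dx11_fps: float, dx12_fps: float) -> float:
    """Relative change in average fps from DX11 to DX12, in percent."""
    return (dx12_fps - dx11_fps) / dx11_fps * 100.0

results = {
    "R9 290 (example)": (55.0, 55.5),  # barely moves
    "R9 390 (example)": (58.0, 64.0),  # clear gain
}

for card, (dx11, dx12) in results.items():
    print(f"{card}: {uplift_pct(dx11, dx12):+.1f}%")
```

A gain of a percent or less, as in the first example row, is within typical run-to-run noise, which is the pattern being described for the 200 series here.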


----------



## sugarhell

Quote:


> Originally Posted by *Stige*
> 
> Nvidia hah
> If you are gonna troll, atleast try harder.


I thought my sarcasm was obvious..


----------



## GorillaSceptre

Quote:


> Originally Posted by *BradleyW*
> 
> It is my guess based on the performance review in the OP. Click the link, go to the 1080p result and compare 290 performance on DX11 and DX12. Then compare again with the 390 and Fury. You'll see little to no gain on the R9 200 series. (link at top of OP).


Yeah, I don't get it.. The same thing happened with the 290X/390X in Blops 3, even though they're pretty much identical. I wonder if it's the drivers or the game itself? The 200 series is fine in Ashes, so I don't think it's intentional, whatever it is.


----------



## EightDee8D

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Yeah, i don't get it.. The same thing happened with the 290X/390X in Blops3 even though they're pretty much identical. I wonder if it's drivers or the game itself? The 200 series is fine in Ashes so i don't think it's intentional whatever it is.


I think it's the 8 GB of VRAM and faster memory speeds. Anyway, someone with a 290X could overclock and try this game to see if it improves performance to the same level as the 390X.


----------



## BradleyW

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Yeah, i don't get it.. The same thing happened with the 290X/390X in Blops3 even though they're pretty much identical. I wonder if it's drivers or the game itself? The 200 series is fine in Ashes so i don't think it's intentional whatever it is.


Yeah, BO3 was a driver issue.
Ashes might be fine, but AMD might have limited the 200 series to boost Polaris and current-gen sales.


----------



## Cybertox

For those wondering about the 290X / 390.


----------



## fewness

Guys, how do you get DX12 mode to work on a Fury X? It's either an instant crash or a black screen for me so far....


----------



## BradleyW

Quote:


> Originally Posted by *Cybertox*
> 
> For those wondering about the 290X / 390.


Well this is all obvious stuff. Unless the 290 is running out of VRAM, I don't see any reason why it's not showing a gain in DX12, especially compared to the 390.


----------



## MonarchX

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Steam performance reviews will always be useless to me until they show/verify what systems people are running on. *At this point about 90% of Steam users use 980 Ti's*..
> 
> Reading some of the reviews and what people expect from their 280X's and 780m's makes me
> 
> The always online is funny as hell though, what morons thought that was a good idea?
> 
> I wouldn't be surprised if that has completely broken the game since benches were posted..


I assume you're being sarcastic, right? Less than 1% own a GTX 980 Ti. However, Steam reviews are not useless. I often find them to be more accurate than GameRankings and/or Metacritic reviews because Steam reviews are raw user opinions. Also, most of the opinions voiced about recent games are mostly about gameplay, not visuals and/or performance. The Division, for example, is getting high scores on GameRankings, but the game is incredibly boring, shallow, and not that pretty. Steam reviews (some 68% Mixed rating) are DEAD-ON. Critic reviews say the game is about 8/10 or even 8.5/10. The same critics gave the same high ratings to the latest COD game...


----------



## BradleyW

Quote:


> Originally Posted by *fewness*
> 
> Guys, how do you get DX12 mode for Fury X to work? it's either instant crash or black screen for me so far....


Make sure "Render Target Reuse" is Enabled in the options.


----------



## fewness

Quote:


> Originally Posted by *BradleyW*
> 
> Make sure "Render Target Reuse" is Enabled in the options.


That's enabled already. Auto or enabled, both crash.


----------



## Mahigan

Quote:


> Originally Posted by *Cybertox*
> 
> No, *as I mentioned in my post. A game which does not favor any of the two vendors, equally optimized for both, is going to portray true DX12 performance*. Better optimization is not necessarily achieved through usage of alternative or discrete rendering paths, so what you read does not really apply. I reckon the same thing was said about DX11 but that was obviously not the case.


Ashes of the Singularity.


----------



## BradleyW

Quote:


> Originally Posted by *fewness*
> 
> That's enabled already. Auto or enabled, both crash.


Could be a driver issue then.
Tried disabling it?


----------



## fewness

Quote:


> Originally Posted by *BradleyW*
> 
> Could be a driver issue then.
> Tried disabling it?


Well, others have benchmarked it, so there's got to be a way to run it...

Actually, I just found that it will run in windowed mode, but the resolution is stuck at 1080p... no matter what resolution I select, it runs at 1080p....


----------



## BradleyW

Quote:


> Originally Posted by *fewness*
> 
> Well others have benchmarked it so there got have a way to run...
> Actually I just found that in Window mode it will run, but resolution stuck at 1080p....no matter what resolution I select it runs at 1080p....


This "might" help with that, maybe....
Quote:


> UPDATE 3: Workaround for the frame lock on Fiji and Tonga
> With a bit of investigative journalism and a lively exchange of ideas with AMD, we were able to develop a somewhat exotic workaround for the frame-lock problem. It works like this: a 144 Hz display is connected in addition to our test system's UHD display (its maximum resolution doesn't matter). The 144 Hz monitor is set as the primary display, and the Hitman launcher is then started on the secondary display. This eliminates the 60 fps lock in exclusive full-screen mode. Whether this is a driver error or a bug in the game is still unclear, but the workaround does the job, so either a game update or a hotfix for Radeon Software 16.3 should appear in the foreseeable future. We can now also provide benchmark values for AMD's latest GCN GPUs, beginning with a Sapphire Radeon R9 Fury Nitro/4G, which delivers convincing full-HD performance. To make the game's memory hunger a little more transparent, we also tested a Sapphire R9 290 Tri-X OC/4G, plus AMD's full Tonga chip in the form of the slightly overclocked PowerColor R9 380X Myst/4G and Nvidia's cut-down GM204 in the form of the aggressively boosting Gainward GTX 970 Phoenix/3.5+0.5G. Interestingly, both of these cards perform worse under DirectX 12 than under DirectX 11; in the case of the GTX 970, this is in line with most other Nvidia GPUs. The R9 380X may simply be very well utilized under DX11 already, so much so that asynchronous compute slightly slows it down. That would certainly be a logical explanation: if a GPU is already at its capacity limit, splitting and offloading operations brings little or no performance gain, and may even cost some. In this respect, asynchronous compute is not unlike Hyper-Threading.


http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


----------



## Charcharo

Quote:


> Originally Posted by *valentyn0*
> 
> By that logic then nothing should matter, if opinion is not considered then nothing is considered !
> 
> Now if we talk about statistics, they both are made up of opinions that result in a conclusion (fact).
> 
> Should i spell it out loud for you?


If 1,000 people say something is bad but I consider it good, and it is a subjective piece of art... then that means 1,000 people are wrong









Sure I can and will listen to their arguments, but that majority opinion should not be used as a hammer. Hitman is anti-consumer it seems and half-baked. All valid points to dislike it. As for whether the ACTUAL game is good? I have no idea. That is for everyone to decide for themselves.

Only informed, well-argued, and thought-out opinions should be used in discussions though.

Still, all this is about the game's performance. Must it be so demanding? Again, I do not know, but remember, there is more than just graphics that can make a game demanding.


----------



## GorillaSceptre

Quote:


> Originally Posted by *MonarchX*
> 
> I assume you're being sarcastic, right? Less than 1% own GTX 980 Ti. However, Steam reviews are not useless. I often find them to be more accurate than GameRankings and/or Metacritic reviews because Steam reviews are raw user opinions. Also, most of the opinions voiced about recent games are mostly about gameplay, not visuals and/or performance. Divison, for example, is getting high scores on GameRankings, but the game is incredible boring, shallow, and not that pretty. Steam reviews (some 68% Mixed rating) are DEAD-ON. Critic reviews say the game is about 8/10 or even 8.5/10. The same critics gave the same high ratings to the latest COD game...


Yup, i was being sarcastic.

The Division looks fun to me, a few of my friends are enjoying it so i may pick it up. That's fine if you appreciate people's opinions on the game itself, but i leave subjective things up to me. I mainly use reviews for performance reasons, except for reviewers i know enjoy the same types of things as i do, and performance reviews mean nothing when i don't know/can't be sure what hardware they're on.


----------



## infranoia

Ugh. You guys are going to make me buy this murder simulator just to bench my 290x -> 390x reflash.

I'm gonna have to think about this. Hitman's never been my thing. My bench nerd is up against my gaming nerd.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> Ugh. You guys are going to make me buy this murder simulator just to bench my 290x -> 390x reflash.
> 
> I'm gonna have to think about this. Hitman's never been my thing. My bench nerd is up against my gaming nerd.


Do it


----------



## Cybertox

I highly doubt that a flashed 290X is going to get the same results as the 390 in the showcased benchmarks. The differences in specifications are more likely to be the changing factors, not the BIOS.


----------



## fewness

Quote:


> Originally Posted by *BradleyW*
> 
> This "might" help with that, maybe....
> http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


Thanks but that's not the problem I'm facing. But I figured out how to change the window resolution now...you JUST NEED TO MAXIMIZE IT!







Full screen is still a no but at least I can benchmark in windowed mode now.


----------



## Kriant

Sooo wait, the R9 390 and 390X rebrands get a hefty boost, but the R9 290 and 290X don't show any meaningful gains? o_0. Did the devs mess up somewhere, or did AMD just take a page out of Nvidia's "dirty tricks to force ppl to upgrade" book?


----------



## zealord

Quote:


> Originally Posted by *Kriant*
> 
> Sooo wait, the R9 390 and 390X rebrands get a hefty boost, but the R9 290 and 290X don't show any meaningful gains? o_0. Did the devs mess up somewhere, or did AMD just take a page out of Nvidia's "dirty tricks to force ppl to upgrade" book?


pardon me what benchmark are you actually referring to?


----------



## Kriant

Quote:


> Originally Posted by *zealord*
> 
> pardon me what benchmark are you actually referring to?


Hitman 2016 bench. Looking at pcgameshardware.de it seems that where the R9 390 gains 7-10 fps, the R9 290 gains 1-2 o_0 by virtue of switching to DX12


----------



## Kana-Maru

^The 390X can perform more concurrent commands than the 290X, right? The 390X is an updated 290X, pretty much.


----------



## Remij

Quote:


> Originally Posted by *BradleyW*
> 
> It seems the R9 290 did not benefit from Direct-X 12 in Hitman. I'd say AMD have reserved their performance gains for 300/Fury series only, *just to screw us 200 users into buying a new GPU.*


You mean to screw 200 users into buying essentially the same gpu again.


----------



## zealord

Quote:


> Originally Posted by *Kriant*
> 
> Hitman 2016 bench. Looking at pcgameshardware.de it seems that where the R9 390 gains 7-10 fps, the R9 290 gains 1-2 o_0 by virtue of switching to DX12


They are a little bit different. The 390 has a slightly higher core clock, a much higher memory clock, and twice the RAM, so naturally it has to perform better. If I were to take a guess, it has something to do with DX12 being less limited by the memory clock and VRAM size, but that is a total guess.

It is something that we should keep an eye on when the sample size gets bigger, but as for now it might be an inconsistency or be related to what I said above.

AMD said :
Quote:


> AMD is pleased to bring you the new R9 390 series which has been in development for a little over a year now. To clarify, the new R9 390 comes standard with 8GB of GDDR5 memory and outpaces the 290X. Some of the areas AMD focused on are as follows:
> 
> 1) Manufacturing process optimizations allowing AMD to increase the engine clock by 50MHz on both 390 and 390X while maintaining the same power envelope
> 
> 2) New high density memory devices allow the memory interface to be re-tuned for faster performance and more bandwidth
> · Memory clock increased from 1250MHz to 1500MHz on both 390 and 390X
> · Memory bandwidth increased from 320GB/s to 384GB/s
> · 8GB frame buffer is standard on ALL cards, not just the OC versions
> 
> 3) Complete re-write of the GPUs power management micro-architecture
> · Under "worse case" power virus applications, the 390 and 390X have a similar power envelope to 290X
> · Under "typical" gaming loads, power is expected to be lower than 290X while performance is increased"


It is optimized, probably working better with DX12 because of that. They are pretty close though.
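AMD's quoted memory figures can be sanity-checked: Hawaii/Grenada carries a 512-bit bus, and GDDR5 transfers 4 bits per pin per memory clock, so 1250 MHz works out to 320 GB/s and 1500 MHz to 384 GB/s, exactly as the statement says. A quick arithmetic sketch (Python, purely for illustration):

```python
# Sanity check on AMD's bandwidth figures. Hawaii/Grenada use a 512-bit
# bus, and GDDR5 transfers 4 bits per pin per memory clock.

def gddr5_bandwidth_gbs(bus_width_bits: int, mem_clock_mhz: int) -> float:
    """Peak bandwidth in GB/s: bytes per transfer x effective rate (GT/s)."""
    bytes_per_transfer = bus_width_bits / 8        # 512 bit -> 64 bytes
    effective_rate_gts = mem_clock_mhz * 4 / 1000  # quad data rate
    return bytes_per_transfer * effective_rate_gts

print(gddr5_bandwidth_gbs(512, 1250))  # 290/290X: 320.0 GB/s
print(gddr5_bandwidth_gbs(512, 1500))  # 390/390X: 384.0 GB/s
```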


----------



## Mahigan

Check the memory usage. It's probably related to the framebuffer usage. I think that an 8GB 290x would gain a boost.


----------



## looniam

Quote:


> Originally Posted by *Remij*
> 
> Quote:
> 
> 
> 
> Originally Posted by *BradleyW*
> 
> It seems the R9 290 did not benefit from Direct-X 12 in Hitman. I'd say AMD have reserved their performance gains for 300/Fury series only, *just to screw us 200 users into buying a new GPU.*
> 
> 
> 
> You mean to screw 200 users into buying essentially the same gpu again.

ouch. i was going to say something like, "welcome to the green side, we have cookies, enjoy your stay."

but that was plain mean.









i like it


----------



## Remij

Quote:


> Originally Posted by *looniam*
> 
> ouch. i was going saying something, "welcome to the green side, we have cookies and enjoy your stay."
> 
> but that was plain mean.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> i like it


Tis a cruel cruel world in the land of discrete gpus.


----------



## OneB1t

nope 290X will not get 390 results

because it will get 390X results


----------



## infranoia

Is the benchmark included in the Intro Pack? 15 bucks is a bit easier to swallow than $60 for a game I'll never play.


----------



## Remij

Quote:


> Originally Posted by *infranoia*
> 
> Is the benchmark included in the Intro Pack? 15 bucks is a bit easier to swallow than $60 for a game I'll never play.


Yeah it is.


----------



## steadly2004

Quote:


> Originally Posted by *infranoia*
> 
> Is the benchmark included in the Intro Pack? 15 bucks is a bit easier to swallow than $60 for a game I'll never play.


Quote:


> Originally Posted by *Remij*
> 
> Yeah it is.


good to know. downloading now just to see what my setup does and how it is less powerful than furyX.... lol


----------



## fewness

OK, my results. Tested at 4K max settings. The only difference is that the Titan X was running in full screen and the Fury X in windowed mode, because full screen on the Fury X always crashes.
Results of the built-in benchmark.

For TTX to catch up with [email protected] using Auto: you need oc it to 1180
For TTX to catch up with [email protected] using Enable: oc to 1282

For TTX to catch up with [email protected] using Auto: oc TTX to 1258
For TTX to catch up with [email protected] using Enable: oc TTX to 1361

And just for fun... if the trends hold for both cards, at a core frequency above 1735 or 2369 MHz, depending on your choice of Render Target Reuse, TTX will eventually win by delivering more fps/MHz...
just for fun
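The extrapolation above assumes fps scales linearly with core clock, so you can solve for the clock that matches a fixed target. A minimal sketch of that assumption (the numbers here are hypothetical, not fewness's actual results):

```python
# Hedged sketch of the extrapolation above: assume a card's fps scales
# linearly with core clock (a strong assumption), then solve for the
# clock needed to match a fixed target fps. Example numbers are
# hypothetical, not taken from the post.

def clock_to_match(base_clock_mhz: float, base_fps: float,
                   target_fps: float) -> float:
    """Clock (MHz) needed if fps per MHz stays constant."""
    fps_per_mhz = base_fps / base_clock_mhz
    return target_fps / fps_per_mhz

# A card doing 50 fps at 1000 MHz would need ~1200 MHz to reach 60 fps.
print(round(clock_to_match(1000, 50.0, 60.0)))
```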


----------



## infranoia

My results, 290x reflashed to 390x and bumped to 1100/1350. All settings maxed per resolution. On a Haswell OC to 4.7GHz.

DirectX 11 @1920 x 1080:
Benchmark Results:

Code:
---- CPU ----
9294 frames
 81.32fps Average
 10.86fps Min
263.57fps Max
 12.30ms Average
  3.79ms Min
 92.08ms Max

Well, so much for that. Sorry folks, live and learn. I cannot get DX12 to start without a pixel glitch and a CTD, even dropping down to stock speeds.

Since Ashes of the Singularity runs any number of times on these clocks in DX12 (not to mention Hitman's own DX11 path), I know just where to point the finger at.


----------



## steadly2004

ok, I downloaded the intro pack and can run the benchmark.

The DX11 run works just fine, but at the end I don't get a final result? Do I have to look at the top left of the screen and hope I can see the numbers just before the test shuts off?

Also SLI was broken until updating to newest driver. 362.00 didn't work, but 364.51 was fine.

DX12 runs and completes but just shows a black screen until the end, at which point it returns to the desktop... but I know it's running because I can hear the audio in the background.

*note* I'm testing at 4k


----------



## infranoia

Quote:


> Originally Posted by *steadly2004*
> 
> ok, I downloaded the intro pack and can run the benchmark.
> 
> The DX11 runs just fine, but at the end I don't get a final result? do I have to look at the top left of the screen and hope I can see the numbers just before the test shuts off?
> 
> Also SLI was broken until updating to newest driver. 362.00 didn't work, but 364.51 was fine.
> 
> DX12 runs and completes but just shows a black screen until the end, at which point it returns to the desktop... but I know it's running because I can hear the audio in the background.
> 
> *note* I'm testing at 4k


I can't even get that far on DX12. Does it write a %userdir%\hitman\profiledata.txt file?


----------



## steadly2004

Quote:


> Originally Posted by *infranoia*
> 
> I can't even get that far on DX12. Does it write a %userdir%\hitman\profiledata.txt file?


I don't see it in the hitman folder.... ah I found it. It's in the user file like you said. Same place the pictures and downloads folder is located.

OK, DX12 showed while it was benchmarking this time.

GPUs at 1440 MHz or so.
With SLI on FPS average:
DX11- 78
DX12- 55

and for some reason on DX12 it never ramped up the 2nd GPU, but apparently used it some, because it sat at ~900 MHz instead of 135 MHz or whatever the idle freq was.


----------



## OneB1t

sli not working


----------



## BradleyW

Looks like I won't be playing this crap in DX12 mode.


----------



## OneB1t

it has very good results with AMD CPUs














(hello, console low-level optimization)


----------



## magnek

Quote:


> Originally Posted by *fewness*
> 
> OK, my results. Tested at 4K max settings. Only difference is TitanX was running in full screen, FuryX was in windowed mode, because full screen FuryX always crashes.
> Results of the built-in benchmark.
> 
> For TTX to catch up with [email protected] using Auto: you need oc it to 1180
> For TTX to catch up with [email protected] using Enable: oc to 1282
> 
> For TTX to catch up with [email protected] using Auto: oc TTX to 1258
> For TTX to catch up with [email protected] using Enable: oc TTX to 1361
> 
> And just for fun....if the trends hold for both cards, at Core frequency > 1735, or 2369, depends on your choice of Render Target Reuse, TTX will eventually win by delivering more fps/MHz
> just for fun


Stop giving AMD's marketing team ideas dammit!


----------



## GoLDii3

So i bought the intro pack just for fun...gonna get this game sometime but not now. brb waiting for the rest of the missions. No thanks.

And i have had absolutely no problems playing in DX12 with my 7970...

I played it on my 7970 at 1130 MHz core with those settings

Medium textures - VRAM does not go over 2.5 GB. Anyway, the textures look legit top-notch.
SMAA
Shadows on High and Shadow resolution to medium
SSAO
LOD High
1080p

Only hit 30 FPS in areas with a lot of NPCs; most of the time it was at least 45 FPS. This was all in the intro level and the Russian commander level.

AND the Steam overlay FPS counter worked just fine.

Benchmark DX11 vs DX12 1080p



Spoiler: DX11



HD 7970 @1140 MHz/GDDR5 1560 MHz i5 4440 W10

Benchmark Results:
---- CPU ----

51.94fps Average
1.95fps Min
183.25fps Max





Spoiler: DX12



HD 7970 @1140 MHz/GDDR5 1560 MHz i5 4440 W10

---- CPU ----

55.32fps Average
4.92fps Min
203.97fps Max

---- GPU ----

55.77fps Average
4.74fps Min
116.10fps Max


----------



## BradleyW

AMD, we all know what you are up to! I'm willing to call you out right now.

HITMAN BENCHMARK 1080P
Quote:


> : 290 DX11 min fps 52
> : 290 DX12 min fps 53
> 
> : 390 DX11 min fps 52
> : 390 DX12 min fps 59


Source

Stop trying to hinder the R9 200 performance! We won't fall for it, nor stand for it.
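The quoted minimums work out to very different relative gains for the two cards. A quick sketch of the arithmetic (Python, purely to illustrate):

```python
# Relative DX11 -> DX12 gain in minimum fps, from the quoted numbers.

def pct_gain(dx11_min: float, dx12_min: float) -> float:
    return (dx12_min - dx11_min) / dx11_min * 100.0

print(round(pct_gain(52, 53), 1))  # R9 290: ~1.9 %
print(round(pct_gain(52, 59), 1))  # R9 390: ~13.5 %
```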


----------



## OneB1t

flash 390 bios into 290 and performance will be same


----------



## Mahigan

Quote:


> Originally Posted by *BradleyW*
> 
> AMD, we all know what you are up to! I'm willing to call you out right now.
> 
> HITMAN BENCHMARK 1080P
> Source
> 
> Stop trying to hinder the R9 200 performance! We won't fall for it, nor stand for it.


Code running on Hawaii and Grenada would behave the same. So there's no way AMD could be hindering performance.

It is more likely that the game uses more VRAM than the 4GB buffer on the 290 series. This has been the case for many games being released, AMD or NVIDIA supported.

So more memory optimizations are in order here


----------



## OneB1t

there are memory-related differences in the 390 BIOS







that's why it performs better


----------



## Kriant

Quote:


> Originally Posted by *Mahigan*
> 
> Code running on Hawaii and Grenada would behave the same. So there's no way AMD could be hindering performance.
> 
> It is more likely that the game uses more VRAM than the 4GB buffer on the 290 series. This has been the case for many games being released, AMD or NVIDIA supported.
> 
> So more memory optimizations are in order here


More than 4 GB of VRAM being used at 1080p? I highly doubt that.


----------



## magnek

Quote:


> Originally Posted by *Mahigan*
> 
> Code running on Hawaii and Grenada would behave the same. So there's no way AMD could be hindering performance.
> 
> It is more likely that the game uses more VRAM than the 4GB buffer on the 290 series. This has been the case for many games being released, AMD or NVIDIA supported.
> 
> So more memory optimizations are in order here


This can be easily tested by using a 290X 8GB card.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Mahigan*
> 
> Code running on Hawaii and Grenada would behave the same. So there's no way AMD could be hindering performance.
> 
> It is more likely that the game uses more VRAM than the 4GB buffer on the 290 series. This has been the case for many games being released, AMD or NVIDIA supported.
> 
> So more memory optimizations are in order here


Nope, or the Fury would suffer the same here.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> Since Ashes of the Singularity runs any number of times on these clocks in DX12 (not to mention Hitman's own DX11 path), *I know just where to point the finger at.*


Nvidia?








Quote:


> Originally Posted by *Mahigan*
> 
> Code running on Hawaii and Grenada would behave the same. So there's no way AMD could be hindering performance.
> 
> It is more likely that the game uses more VRAM than the 4GB buffer on the 290 series. This has been the case for many games being released, AMD or NVIDIA supported.
> 
> So more memory optimizations are in order here


52 FPS min doesn't sound like it is hitting a VRAM limit. The 19 FPS (and stuttering) you see in RotR, on the other hand.


----------



## Remij

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Nope or Fury would fall same here.


Yea, something fishy is going on.

This whole thing is interesting though.

Still, across the board (besides AotS) I feel DX12 benchmarks are unreliable.


----------



## BradleyW

AMD also withheld performance optimizations from the 290 series with their tessellation improvements for the first two weeks. After much outcry, they combined the optimization for both 200/300 cards. Also the scaler is the same between the 200/300 series, yet the 300 series allows higher resolutions through VSR. So AMD has a history of withholding things without good reason.


----------



## Forceman

Quote:


> Originally Posted by *BradleyW*
> 
> AMD also withheld performance optimizations from the 290 series with their tessellation improvements for the first two weeks. After much outcry, they combined the optimization for both 200/300 cards. Also the scaler is the same between the 200/300 series, yet the 300 series allows higher resolutions through VSR. So AMD has a history of withholding things without good reason.


That sounds suspiciously like you are saying AMD is, gasp, _gimping_ older cards.


----------



## looniam

Quote:


> Originally Posted by *Forceman*
> 
> Quote:
> 
> 
> 
> Originally Posted by *BradleyW*
> 
> AMD also withheld performance optimizations from the 290 series with their tessellation improvements for the first two weeks. After much outcry, they combined the optimization for both 200/300 cards. Also the scaler is the same between the 200/300 series, yet the 300 series allows higher resolutions through VSR. So AMD has a history of withholding things without good reason.
> 
> 
> 
> That sounds suspiciously like you are saying AMD is, gasp, _gimping_ older cards.


----------



## Remij

Quote:


> Originally Posted by *Forceman*
> 
> That sounds suspiciously like you are saying AMD is, gasp, _gimping_ older cards.


Both are guilty of this. But of course they are gonna incentivize their newer cards over their older cards... especially when they are rebrands. It is a business after all.


----------



## magnek

Quote:


> Originally Posted by *Forceman*
> 
> That sounds suspiciously like you are saying AMD is, gasp, _gimping_ older cards.


Not providing further optimizations =/= intentional gimping

_oh wait..._


----------



## mtcn77

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Nope or Fury would fall same here.


Check @sugarhell's post:
Quote:


> Originally Posted by *sugarhell*
> 
> Hawaii improved the tessellation vs the first generation.


If you look closely, the new "drivers" set a precedent between GCN 1.1 series parts. PCGH's special commentary on the subject is that they *couldn't* get the 290X to run on the new driver(as they did it on the R9 285 with the help of a special modded inf file). Fiji is GCN 2.0, FYI. Totally different reasons.







Quote:


> Catalyst 15.15 with strong tessellation improvements
> We had already indicated that AMD improved tessellation performance in the Catalyst 15.15 Beta. Now we have the results in black and white. Using our tessellation tests from the DirectX SDK (extended to tessellation factors up to 64), we took another close look at the performance of the R9 390X, R9 380, R9 290X, and R9 285. For the 200-series models we used a 15.15 driver with a modified inf file, and benched again with the 15.5 Beta for comparison. The tests with the 300-series Radeons show that the improvements here are purely a driver matter.


However...
Quote:


> The Catalyst 15.15 Beta published with the Radeon 300 series is *only* suitable for the new 300 generation and the Fury X. We could *not get* an *R9 290X* *up and running* with this driver; for that card, the Catalyst 15.6 Beta remains the first choice. AMD has promised, however, that with a future Omega driver all 200-series Radeons should receive full VSR support - until then, the only help is to wait.


----------



## BradleyW

Quote:


> Originally Posted by *mtcn77*
> 
> Check @sugarhell's post:
> If you look closely, the new "drivers" set a precedent between GCN 1.1 series parts. PCGH's special commentary on the subject is that they *couldn't* get the 290X to run on the new driver(as they did it on the R9 285 with the help of a special modded inf file). Fiji is GCN 2.0, FYI. Totally different reasons.
> 
> 
> 
> 
> 
> 
> 
> 
> However...


After this they combined the drivers again. However, I believe that certain hard-coded optimisations are only for the 300/Fury series, even though the 200 series can run them too.


----------



## ZealotKi11er

Nvidia users may be OK with Nvidia's practices, but if AMD gimps my card they will burn in Hell.


----------



## BradleyW

I better not receive the GTX 780 treatment on my R9 290X's.


----------



## infranoia

OK, do-overs. Knowing now that AMD cards can't do fullscreen DX12 without crashing (at least, this one can't, nor can the Fury apparently), and are limited to window rendering only, everything below is in windowed mode. To recap, 290x masquerading as 390x:



All settings maxed per resolution. On a Haswell OC to 4.7GHz.

DX11 @ 1920 x 1080 (Windowed):

Code:
Benchmark Results:
---- CPU ----
9316 frames
 81.55fps Average
  9.55fps Min
232.41fps Max
 12.26ms Average
  4.30ms Min
104.72ms Max


DX12 @ 1920 x 1080 (Windowed):

Code:
Benchmark Results:
---- CPU ----
9833 frames
 86.04fps Average
  9.50fps Min
232.20fps Max
 11.62ms Average
  4.31ms Min
105.23ms Max
---- GPU ----
9817 frames
 86.23fps Average
  1.61fps Min
142.92fps Max
 11.60ms Average
  7.00ms Min
621.25ms Max

*M m m m e e e h h h h . . .*









Gonna wait this one out until they fix it. AotS this is not.
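One sanity check on logs like the ones above: the benchmark's average frame time should be the reciprocal of its average fps, which the windowed numbers satisfy. A quick sketch:

```python
# Consistency check: the benchmark's average frame time (ms) should be
# the reciprocal of its average fps. Figures from the windowed runs above.

def avg_frametime_ms(avg_fps: float) -> float:
    return 1000.0 / avg_fps

print(round(avg_frametime_ms(81.55), 2))  # DX11 CPU: reported 12.26 ms
print(round(avg_frametime_ms(86.04), 2))  # DX12 CPU: reported 11.62 ms
```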


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> OK, do-overs. Knowing now that AMD cards can't do fullscreen DX12 without crashing (at least, this one can't, nor can the Fury apparently), and are limited to window rendering only, everything below is in windowed mode. To recap, 290x masquerading as 390x:
> 
> 
> 
> All settings maxed per resolution. On a Haswell OC to 4.7GHz.
> 
> DX11 @ 1920 x 1080 (Windowed):
> 
> Code:
> Benchmark Results:
> ---- CPU ----
> 9316 frames
> 81.55fps Average
> 9.55fps Min
> 232.41fps Max
> 12.26ms Average
> 4.30ms Min
> 104.72ms Max
> 
> 
> DX12 @ 1920 x 1080 (Windowed):
> 
> Code:
> Benchmark Results:
> ---- CPU ----
> 9833 frames
> 86.04fps Average
> 9.50fps Min
> 232.20fps Max
> 11.62ms Average
> 4.31ms Min
> 105.23ms Max
> ---- GPU ----
> 9817 frames
> 86.23fps Average
> 1.61fps Min
> 142.92fps Max
> 11.60ms Average
> 7.00ms Min
> 621.25ms Max
> 
> *M m m m e e e h h h h . . .* Regression? Seriously?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Gonna wait this one out until they fix it. AotS this is not.


Now we need the same test with your 290X on its stock BIOS to see if there's a change.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> Now we need the same test with your 290X on its stock BIOS to see if there's a change.


I thought the point was to prove delta. Isn't this enough to map against the 390x improvements?

TL;DR : DX11 to DX12 on an OC 290X masquerading with 390x BIOS and voltages = LITTLE TO NO IMPROVEMENT, but in windowed mode I might expect that.

Don't pull out the pitchforks until Exclusive Fullscreen works.


----------



## infranoia

The other thing that really chaps my hide about this benchmark is that it is different every time you run it.

The backstage crowd scenes have various numbers of NPCs walking around, doing different things in each run of the bench. It's fundamentally a lousy benchmark, only intended to give a rough estimate of your own gameplay experience with it, not to compare across devices. There is no control variable, it's all just higgledy-piggledy.
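Given that run-to-run variation, the usual remedy is to average several runs and report the spread rather than trusting a single number. A minimal sketch (the run values below are hypothetical):

```python
# With a non-deterministic benchmark (crowd behaviour differs each run),
# a single result is noisy: average several runs and report the spread.
# The run values below are hypothetical, not measured.
from statistics import mean, stdev

runs_fps = [81.3, 84.9, 79.8, 83.1, 82.4]  # e.g. five DX11 average-fps runs
print(f"{mean(runs_fps):.1f} +/- {stdev(runs_fps):.1f} fps")
```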


----------



## looniam

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Nvidia user may be ok with Nvidias practices but if AMD gimps my card they will burn in Hell.


easy now, where did anyone say being ok with anything?

wow touchy people.


----------



## ZealotKi11er

Quote:


> Originally Posted by *looniam*
> 
> easy now, where did anyone say being ok with anything?
> 
> wow touchy people.


It was a joke. It's People vs. Corporations. We are on the same side. AMD is not better.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> I thought the point was to prove delta. Isn't this enough to map against the 390x improvements?


It would make the test accurate and fair. Comparing results between PC's can lead to inaccurate conclusions due to the difference in variables. It's just my opinion.

Edit: So the benchmark is slightly different with each run?
I'd do 3 runs of DX11 and DX12 with the flashed BIOS, then repeat with stock BIOS. It may just prove to unlock some answers, or leave us with more questions. Either way, It's progress.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> It would make the test accurate and fair. Comparing results between PC's can lead to inaccurate conclusions due to the difference in variables. It's just my opinion.


All that would bench is how a stock 290x on my system compares to a modded 390x BIOS, which isn't really the topic at hand. It would give no insight into whether the 290x DX11 to DX12 improvements match those of the 390x if it's pretending to be one.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> All that would bench is how a stock 290x on my system compares to a modded 390x BIOS, which isn't really the topic at hand. It would give no insight into whether the 290x DX11 to DX12 improvements match those of the 390x if it's pretending to be one.


But using the 390X BIOS could trick hard-coded optimizations within the driver into running on your 290X - if you test between the stock and the 390X BIOS and see distinct improvements, of course. It would just be interesting, and it's all theory and guesses right now.


----------



## ZealotKi11er

Quote:


> Originally Posted by *infranoia*
> 
> All that would bench is how a stock 290x on my system compares to a modded 390x BIOS, which isn't really the topic at hand. It would give no insight into whether the 290x DX11 to DX12 improvements match those of the 390x if it's pretending to be one.


I do not think the 390X BIOS even acts as a 390X BIOS on a 290X. Most of the stuff you change does not affect gaming, other than 3DMark.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> But using the 390X BIOS could trick hard coded optimizations within the driver to run on your 290X, if you test between stock and 390X BIOS and see distinct improvements of course.


Well, based on my numbers, we know that's not true.









Ah, I see what you're getting at. You think perhaps both DX11 and DX12 numbers are up from the 290x due to the reflash. Well, again-- this is about Hitman, not about whether the BIOS flash is a good thing or not. That discussion is probably best for this thread: http://www.overclock.net/t/1564219/modded-r9-390x-bios-for-r9-290-290x-updated-02-16-2016/1100_100#post_24983770


----------



## OneB1t

nope, the 390X BIOS is just faster as it has tighter memory-controller timings








driver-side it's the same for both 290X/390X


----------



## Forceman

Quote:


> Originally Posted by *BradleyW*
> 
> But using the 390X BIOS could trick hard coded optimizations within the driver to run on your 290X, if you test between stock and 390X BIOS and see distinct improvements of course. It would just be interesting and it's all theory and guesses right now.


I didn't think flashing the 390X BIOS makes the system think it is a 390X though; it is still recognized as a 290X. So hard-coded fixes wouldn't work. Or do the new BIOSes actually make it look like a 390X?


----------



## infranoia

Quote:


> Originally Posted by *Forceman*
> 
> I don't think flashing the 390X makes the system think it is a 390X though, it is still recognized as a 290X. So hard-coded fixes wouldn't work. Or do the new BIOSes actually make it look like a 390x?


No, it's not like the old 6800GS to GT flash. It's still a 290x. The BIOS string and internal voltages all change, but the card is still a 290x and represents itself as such.


----------



## BradleyW

Quote:


> Originally Posted by *Forceman*
> 
> I didn't think flashing the 390X makes the system think it is a 390X though, it is still recognized as a 290X. So hard-coded fixes wouldn't work. Or do the new BIOSes actually make it look like a 390x?


That's exactly what I'm wondering, provided these hard coded optimizations exist. Of course it is just a theory at this stage.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Forceman*
> 
> I didn't think flashing the 390X makes the system think it is a 390X though, it is still recognized as a 290X. So hard-coded fixes wouldn't work. Or do the new BIOSes actually make it look like a 390x?


They changed something in the card. You could flash an HD 7970 to a 280X, but you can't with the 290X.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> No, it's not like the old 6800GS to GT flash. It's still a 290x. The BIOS string and internal voltages all change, but the card is still a 290x and represents itself as such.


This might put us back to square one since we can't trick the driver into thinking we have a definite 390X in the system. However I do recognise that you see an improvement between DX11 and 12 on the benchmark. Everything is very open at the moment without true answers.


----------



## mtcn77

There is a clear explanation for this, though it might seem counterintuitive.
Quote:


> Radeon 300: Game Performance
> We pitted AMD's new lineup against a diverse field of predecessors and against Nvidia's Maxwell and Kepler high-end teams, respectively. Unfortunately, in the case of the Radeons we had to resort to two different drivers, because the current Catalyst 15.15 Beta runs exclusively on the newcomers. *AMD has supposedly revised the microcontroller*, which would explain the incompatibility with older offshoots. We expect that future driver releases will again be available for all GCN-based GPUs.


----------



## pengs

Quote:


> Originally Posted by *mtcn77*
> 
> Check @sugarhell's post:
> If you look closely, the new "drivers" set a precedent between GCN 1.1 series parts. PCGH's special commentary on the subject is that they *couldn't* get the 290X to run on the new driver(as they did it on the R9 285 with the help of a special modded inf file). Fiji is GCN 2.0, FYI. Totally different reasons.
> 
> 
> 
> 
> 
> 
> 
> 
> However...


Not sure why; I ran the modded 15.15 for months and saw quite an improvement in GTA5 with it.

That driver also brought a lot of DX11 draw-call improvements, which was demonstrated on Guru3D's forum. It makes sense that they would release the DX11 driver improvements they had been working on alongside a few new GPUs.


----------



## infranoia

Sheep in wolf's clothing. Unfortunately the reflash turned my Sapphire (hurray!) into an XFX (boo!). Apart from the BIOS string and the internal voltages it's still a 290X; the clocks don't even change until you change them, and even Crimson believes the baseline clock is the stock Sapphire's standard 1000/1250.

GPU Shark v0.9.7
(C)2013 Geeks3D - www.geeks3d.com

- Elapsed time: 00:00:20
- Windows 10 64-bit build 10586
- OpenGL info:
- GL_VERSION: 4.5.13431 Compatibility Profile/Debug Context 16.150.0.0 (# ext: 309)
- GL_RENDERER: AMD Radeon R9 200 Series
GPU 1 - AMD Radeon R9 290X
- GPU: Hawaii(GCN1.1)
- Bus ID: 1
- Device ID: 1002-67B0
- Subvendor: XFX (1682-9395)
- Driver version: 16.150.0.0 atig6pxx.dll
-- Crimson 16.3
-- Crimson rel. ver.: 16.15-160307a-300321E
-- Crimson rel date: 3-7-2016
*- Bios version: 113-GRENADA_XT_C671_D5_8GB_HY_W83
*- GPU memory size: 4095MB
- GPU memory type: GDDR5
- GPU temp: 28.0°C (min:28.0°C - max:30.0°C)
- Fan speed: 20.0 RPM
- GPU cores: 2816
- GPU ROPs: 64
- GPU Texture units: 176
- TDP: 250 Watts
- # Pstates: 2
- Pstate
- GPU: 300.0MHz
- Texture fillrate: 52.8 GTexel/s
- Pixel fillrate: 19.2 GPixel/s
- Mem: 150.0MHz
- Pstate
- GPU: 1100.0MHz
- Texture fillrate: 193.6 GTexel/s
- Pixel fillrate: 70.4 GPixel/s
- Mem: 1350.0MHz
- Current Pstate
- GPU: 305.0MHz
- Texture fillrate: 53.7 GTexel/s
- Pixel fillrate: 19.5 GPixel/s
- Mem: 150.0MHz
- GPU usage:
- GPU: 0.0%, max: 54.0%
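Side note: the fillrate lines in a dump like that are just derived values (core clock × texture units, and core clock × ROPs). A quick sketch, using the Hawaii unit counts GPU Shark reports above (176 TMUs, 64 ROPs), reproduces the numbers for both Pstates:

```python
# Fillrates reported by GPU Shark are derived values:
#   texture fillrate = core clock * texture units (TMUs)
#   pixel fillrate   = core clock * ROPs
# Unit counts taken from the dump above (Hawaii: 176 TMUs, 64 ROPs).

def fillrates(core_mhz, tmus=176, rops=64):
    """Return (GTexel/s, GPixel/s) for a given core clock in MHz."""
    gtexel = core_mhz * tmus / 1000.0
    gpixel = core_mhz * rops / 1000.0
    return gtexel, gpixel

print(fillrates(300))   # idle Pstate  -> (52.8, 19.2), matching the dump
print(fillrates(1100))  # boost Pstate -> (193.6, 70.4), matching the dump
```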


----------



## OneB1t

An old 290 + 390 BIOS = 390 performance.
The card ID will stay the same since it's fused (the only allowed ID change is between 290 and 290X).


----------



## mtcn77

Quote:


> Originally Posted by *pengs*
> 
> Not sure why. I ran the modded 15.15 for months. Saw quite an improvement in GTA5 with them.
> 
> That driver also brought a lot of DX11 draw call improvements with it which was demonstrated on Guru3D's forum. It makes sense that they would release the DX11 driver improvements they had been working with the release of a few new GPU's.


I appreciate it. An R9 290X plus Fujipoly seems better than ever: fire up those Asynchronous Compute Engines! I can take 0.4 kW, I swears.








I'll need a backplate, though. That heat is not going to insulate itself.


----------



## looniam

Quote:


> Originally Posted by *ZealotKi11er*
> 
> It was a joke. Its People vs Corporations. We are in the same side. AMD is not better.


oops. someone turned off the sarcasm setting on my monitor, sorry.


----------



## infranoia

A bit off-topic, but there's no way I can test 4K with this BIOS-- it completely hoses my Club3D active 4K/60Hz adapter and sends the card into a TDR loop. If I cared more about 4K I'd have to revert to stock BIOS, which works well enough with the Club3D, but I like the 1080p stability at 1100/1350 with 390x voltages way too much to revert.

Anyway 4K is for the next card.


----------



## ZealotKi11er

Maybe we should let PCPer know and they can test both cards.


----------



## BradleyW

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Maybe we should let PCPer know and they can test both cards.


Well, it's about 4 AM here in the UK (bedtime for me), so if someone could email them the link to the Hitman benchmarks along with a short description of what to look for, that would be great.


----------



## Fyrwulf

AMD gimping cards? Really? With the 290X beating Titan Xs in AotS under DX12? How about: the developer just wrote lousy code? If both the Fury and reference 290Xs can't run DX12 fullscreen, then the game is hemorrhaging memory like a sonovagun.


----------



## infranoia

Don't attribute to malice what ignorance explains... or something...

Hitman isn't the only DX12 bench; why would AMD gimp the 290X here and not in AotS? Well, they wouldn't.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> Don't attribute malice when ignorance fits... or something...
> 
> Hitman isn't the only DX12 bench, why would AMD gimp 290x here and not in AotS? Well, they wouldn't.


Did anyone actually test both the 290X and 390X in DX11/DX12 in the same review? Anand didn't, and I can't find any other sites that did either.


----------



## ZealotKi11er

Quote:


> Originally Posted by *infranoia*
> 
> Don't attribute malice when ignorance fits... or something...
> 
> Hitman isn't the only DX12 bench, why would AMD gimp 290x here and not in AotS? Well, they wouldn't.


Gimp? More like not optimize 290/290X. There is a difference.


----------



## Forceman

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Gimp? More like not optimize 290/290X. There is a difference.


Tell that to the "Nvidia is gimping Kepler" crowd.


----------



## Kana-Maru

I've updated my Hitman DirectX 12 Fury X benchmarks with the in-game Paris level info.

Check it out here: [on the 3rd page]
http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks

You'll notice that I speak about lowering ONE setting. This setting *increased my FPS performance by 30.6% @ 4K*! I'll have to play around more with the settings. I have more charts and in-game info, but it's not ready yet. I'm about to download Rise of the Tomb Raider and provide some Fury X DX12 info as well.
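For anyone following along, a gain quoted that way is relative to the original framerate, so it composes the usual way; a tiny sketch (the 40 FPS baseline is a made-up example, not the article's actual figure):

```python
# Percentage gains like the 30.6% quoted above are relative to the
# original framerate: new = old * (1 + gain/100).
# The baseline below is a made-up example, not the article's figure.

def apply_gain(fps, gain_pct):
    return fps * (1 + gain_pct / 100.0)

baseline = 40.0                              # hypothetical 4K average FPS
print(round(apply_gain(baseline, 30.6), 1))  # -> 52.2
```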


----------



## ZealotKi11er

Quote:


> Originally Posted by *Forceman*
> 
> Tell that to the "Nvidia is gimping Kepler" crowd.


I mean, Nvidia can be excused a bit, but AMD cannot, because the 290X and 390X are identical.


----------



## magnek

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Gimp? More like not optimize 290/290X. There is a difference.


Quote:


> Originally Posted by *Forceman*
> 
> Tell that to the "Nvidia is gimping Kepler" crowd.


I'm psychic










Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *magnek*
> 
> Not providing further optimizations =/= intentional gimping
> 
> _oh wait..._


----------



## mtcn77

Thank you Husky Tester! i5 4690K@4.2GHz & 290@1000/1260 MHz.



Quote:


> Frame-rate Comparison - Ultra 1080p
> Windows 10 game test, DX12
> Radeon Crimson driver 16.3
> Intel Core i5 4690K@4.2GHz
> RAM: 16 GB DDR3 @ 2400 MHz
> Asus R9 290 4GB 512-bit DC2OC (1000MHz/1260MHz)
> 
> Game Settings
> Ultra settings @ 1920x1080
> +10 FPS without recording
> 
> Test Spec
> CPU: i5 4690K @ 4.2GHz
> COOLER: CM Seidon 120V Plus
> VGA: Asus R9 290 4GB DC2OC
> M/B: ASRock Z97 Extreme4
> RAM: Team Xtreem 16GB @ 2400 CL10
> HDD: 2TB WD Black
> PSU: 750W Super Flower 80+ Gold
> 
> OS
> Windows 10 Pro 64-bit, Build 10586


----------



## ZealotKi11er

Quote:


> Originally Posted by *mtcn77*
> 
> Thank you Husky Tester! i5 4690K@4.2GHz & 290@1000/1260 MHz.


Given that in DX11 the GPU is already at 99% usage, there is no CPU overhead to remove, so the only thing DX12 can really add is async compute. The difference seems very small.


----------



## Assirra

Quote:


> Originally Posted by *mtcn77*
> 
> Thank you Husky Tester! i5 4690K@4.2GHz & 290@1000/1260 MHz.


Maybe it's just me, but I find DX12 worse with the constantly changing framerate. I'd rather have a consistently lower framerate than a constantly changing one, especially when it can change as much as 10 FPS every couple of seconds.


----------



## mtcn77

Quote:


> Originally Posted by *Assirra*
> 
> Maybe me but i find the DX12 worse with the constant changing framerate. I rather have a consistent lower than a constant changing one, espcially when it can change as much as 10fps every couple seconds.


Yes, I felt that too; however, there's a scene where DirectX 11 drops to 32 FPS that DirectX 12 avoids, albeit with fluctuations as low as 28, from what I've seen of the 30p stream.


----------



## Dargonplay

Quote:


> Originally Posted by *mtcn77*
> 
> Thank you Husky Tester! i5 4690K@4.2GHz & 290@1000/1260 MHz.


Why would the DX12 version be jumpier in its framerate? I find that odd, given that the more multithreaded and parallel software is, the more stable and consistent it usually performs; this seems contradictory.

Maybe DX12 is optimized for Hyper-Threading? Hyper-Threading has been mostly useless across games using DirectX 11. I myself have disabled Hyper-Threading on my 5820K several times for latency-related reasons and found no difference in performance in any game I've benched; popular benchmark sites will also confirm that Hyper-Threading doesn't increase performance in games. BUT...

As someone else posted in a benchmark graph, Hyper-Threading does improve performance tremendously under DirectX 12, greatly improving min FPS as well as average and max FPS. That got me thinking: what if the "jumpy" min FPS we see in Husky's video of Hitman running under DirectX 12 is caused by the lack of Hyper-Threading on his overclocked i5?

Maybe DirectX 12 runs worse, with lower min FPS than DirectX 11, when both run without Hyper-Threading.

DirectX 12 does seem to like Hyper-Threading, so much so that it now makes an i7 a worthy processor to have over an i5 for gaming. Maybe Hyper-Threading will become a variable in the recommended and minimum requirements for DX12 games this generation.

Considering Zen is coming with a new (hopefully vastly improved) implementation of simultaneous multithreading, I believe Intel will finally change its CPU lineup and add Hyper-Threading to its i5 processors.

Someone told me I was nuts to recommend a 5820K over a 6700K in another thread; it seems I wasn't that crazy after all.
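One way to reason about this is a back-of-the-envelope Amdahl's-law model of a CPU-submission-bound frame. Every number below is invented purely for illustration (real DX12 scaling depends entirely on the engine); the point is just that extra logical threads can lift the framerate when the parallel portion dominates:

```python
# Toy Amdahl's-law model of a CPU-bound frame: a fixed serial part plus
# a parallel part that scales with worker threads. All timings here are
# invented for illustration; real DX12 scaling depends on the engine.

def fps(serial_ms, parallel_ms, threads, ht_yield=1.0):
    # ht_yield < 1 models logical (hyper)threads being weaker than cores
    effective = threads * ht_yield
    frame_ms = serial_ms + parallel_ms / effective
    return 1000.0 / frame_ms

i5 = fps(4.0, 48.0, threads=4)                  # 4 cores, no HT
i7 = fps(4.0, 48.0, threads=8, ht_yield=0.65)   # 8 logical, ~65% yield
print(round(i5, 1), round(i7, 1))               # -> 62.5 75.6
```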


----------



## mtcn77

Quote:


> Originally Posted by *Dargonplay*
> 
> Why would the DX12 version be more Jumpy regarding its Framerate? I find that odd given how the more multi threaded and parallel something is the more stable and consistent it performs, this seems contradictory.
> 
> Maybe DX12 is optimized for Hyperthreading? Hyperthreading has been mostly useless over all games using DirectX 11. I myself have disabled Hyperthreading on my 5820K several times for latency-related uses and found no difference in performance in any game I've benched; benchmarks will also confirm that Hyperthreading doesn't increase performance BUT...
> 
> As someone else posted in a Benchmark Graph, Hyperthreading does improve performance tremendously when running over DirectX 12, improving Min FPS greatly as well as average and Max FPS which got me thinking, what if these "Jumpy" Min FPS we see on Husky's video of Hitman running over DirectX 12 are caused by the lack of Hyperthreading on his OCed i5?
> 
> Someone else told me I was nuts to recommend a 5820K over a 6700K in another thread, it seems I wasn't that crazy after all.


It's the exception that proves the rule.








Perhaps it's the i5s, which aren't stronger than the FX chips in this title, that cause this issue.


----------



## Fyrwulf

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I mean Nvidia can be excused a bit but AMD can not because 290X and 390X are identical.


No, they're not. Beyond the architectural tweaks, there's the little issue of the 390X having double the VRAM. As has been pointed out, the Fury X is having the same issues that the 290X is having, and the only thing those two cards have in common is the 4GB of VRAM. This should surprise nobody, since the huge crowds in this game are going to punish the hell out of any card that does not have massive amounts of VRAM.


----------



## Majin SSJ Eric

Don't worry guys, if AMD really is gimping DX12 improvements on the 290(X) you can bet Ryan Shrout and PCPer will be on the scene with a 25 page editorial slamming them for it faster than you can say "Gimp!"


----------



## EightDee8D

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Don't worry guys, if AMD really is gimping DX12 improvements on the 290(X) you can bet Ryan Shrout and PCPer will be on the scene with a 25 page editorial slamming them for it faster than you can say "Gimp!"


----------



## Defoler

Quote:


> Originally Posted by *Fyrwulf*
> 
> No, they're not. Beyond the architectural tweaks, there's the little issue of the 390X having double the VRAM. As has been pointed out, the Fury X is having the same issues that the 290X is having, and the only thing those two cards have in common is the 4GB of VRAM. This should surprise nobody, since the huge crowds in this game are going to punish the hell out of any card that does not have massive amounts of VRAM.


There are also 8GB 290X cards, which are pretty much identical to the 390X.
Also, at 1920x1080 a 4GB card should not run into memory performance issues unless the game is really punishing the cards.
Quote:


> Originally Posted by *mtcn77*
> 
> Thank you Husky Tester! i5 4690K@4.2GHz & 290@1000/1260 MHz.


DX12 looks really jumpy even when it gets more FPS than, or similar FPS to, DX11.
I wonder what the FCAT results will look like on the 290X.


----------



## MerkageTurk

Someone try a 290X 8GB.

If true, AMD deserves the backlash.


----------



## Stige

Quote:


> Originally Posted by *Kriant*
> 
> Sooo wait, the R9 390 and 390X rebrands get a hefty boost, but the R9 290 and 290X don't show any meaningful gains? o_0. Did the devs mess up somewhere, or did AMD just take a chapter out of Nvidia's "dirty tricks to force people to upgrade" book?


They are not rebrands, that's where you are wrong. More like a refresh.


----------



## Defoler

Quote:


> Originally Posted by *Stige*
> 
> They are not rebrands, that's where you are wrong. More like a refresh.


The core is the same core. The tales about "better manufacturing so we can give 50MHz more" are just silly, tbh.
The difference is just the memory. The 390X memory is more dense, hence the 8GB and higher clocks. But they could have done the same thing with the current 290X simply by putting better memory on it.

A refresh is taking brand A and calling it A-improved (just like the Haswell refresh).
A rebrand is taking brand A, calling it B, and saying B is better than A.

It is a rebrand, not a refresh. AMD keeps saying the 300 series is not the 200 series, but we know that is not really true. Hence it is a rebrand and not a refresh. If they had called them 295X, it would have been a refresh.


----------



## Kriant

Quote:


> Originally Posted by *Stige*
> 
> They are not rebrands, that's where you are wrong. More like a refresh.


Yeah yeah, I know that out-of-the-box core and memory clocks are a bit faster (well, mostly memory clocks), but other than that, the "highlighted" features AMD's PR department chose as the strong points of the product don't include any big architectural changes beyond revised power consumption.

http://www.hardocp.com/article/2015/06/18/msi_r9_390x_gaming_8g_video_card_review/9#.VuUucZKV02Y <---- according to the "original" clock-for-clock comparison, they are pretty much identical. So why the R9 290 doesn't get the same FPS boost from DX12 as the R9 390 in that benchmark is just strange. I don't believe Hitman is hitting RAM limits; otherwise we'd see a glorious swapfest with FPS dips into the single digits.


----------



## huzzug

Quote:


> Originally Posted by *Defoler*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Stige*
> 
> They are not rebrands, that's where you are wrong. More like a refresh.
> 
> 
> 
> The core is the same core. The tales about "better manufacturing so we can give 50mhz more" is just silly tbh.
> The difference is just the memory. The 390x memory is more dense, hence the 8GB and higher clocks. But they could have done the same thing with the current 290x just putting on it better memory.
> 
> A refresh is taking brand A and calling it A-improved (just like haswell refresh).
> A rebrand is taking brand A and calling it B and say B is better than A.
> 
> It is a rebrand, not a refresh. AMD keeps saying that 300 series is not the 200 series, but we know it is not really true. Hence it is a rebrand, and not a refresh. If they called them 295x, it would have been a refresh.
Click to expand...

So tell me, if both A & B perform the same but B consumes less than A (~10% efficiency) with a higher core clock, more memory & faster memory frequency, how is it a rebrand? Shouldn't a rebrand consume the same watts as the previous chip if all variables are equal? _(*cough_ GTX 770 _cough*)_


----------



## Charcharo

There was a good article on Anandtech that talks about what has been done on the 390/390X.
http://www.anandtech.com/show/9387/amd-radeon-300-series/3


----------



## SRV

No matter what was done to R9 390/390X, R9 290/290X should get bigger gains in DX12.

At the time of the R9 390/390X launch, AMD intentionally enabled new driver optimizations just for Grenada chips, in order to show Grenada in a better light. On clocks alone, Grenada is about 10% faster than Hawaii, but when you add the new driver optimizations, that went up to 20%. Later, all cards got the new drivers, and the difference between Hawaii and Grenada was just the pure megahertz difference plus some gains from memory timings.
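Those two figures compose multiplicatively, so the driver-only contribution would be the leftover factor; a quick sanity check using only the rough percentages from the post above:

```python
# If Grenada is ~10% faster on clocks alone and ~20% faster overall,
# the driver-only contribution is the leftover multiplicative factor.
# Both inputs are the rough figures from the post, not measurements.

clock_factor   = 1.10   # ~10% from clocks/timings
overall_factor = 1.20   # ~20% total at launch
driver_factor  = overall_factor / clock_factor

print(f"driver-only gain = {(driver_factor - 1) * 100:.1f}%")  # -> 9.1%
```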


----------



## Kuivamaa

Quote:


> Originally Posted by *SRV*
> 
> No matter what was done to R9 390/390X, R9 290/290X should get bigger gains in DX12.
> 
> At the time of R9 390/390X launch, AMD intentionally enabled new driver optimizations just for Grenada chips, in order to show Grenada in better light. Clocks alone Grenada is about 10% faster than Hawaii, but when you add new driver optimizations, that went up to 20%. Later all cards got new drivers and difference between Hawaii and Grenada was just pure megahertz difference + some gains got from memory timings.


From PCGH I see the 390 having only 10-15% more performance than the 290, which sounds about right when the former has +10MHz on core, +400 on VRAM, and tighter timings.


----------



## Kriant

Quote:


> Originally Posted by *Kuivamaa*
> 
> From PCGH I see the 390 having only 10%+15% oerf over the 290 which sounds about right when the first has +10Mhz on core, 400 on vram and tighter timings.


The point we are trying to make is that a jump from dx11 to dx12 render on r9 290 does not scale the same as on r9 390, which it should, because the architecture is the same.
We are not saying that r9 290 and r9 390 should pump out same performance fps-wise.


----------



## cowie

Quote:


> Originally Posted by *huzzug*
> 
> So tell me, if both A & B perform the same but B consumes less that A (~10% efficiency) with higher core clock, more memory & faster mem frequency, how is it a rebrand. Shouldn't a rebrand consume same Watts as the previous chip if all variables are equal _(*cough_ GTX 770 _cough*)_



cough cough how lame









A fan and a few BIOS settings; it was a simple refresh/rebrand, just like the 770 with a flu.


----------



## OneB1t

Quote:


> Originally Posted by *Kriant*
> 
> The point we are trying to make is that a jump from dx11 to dx12 render on r9 290 does not scale the same as on r9 390, which it should, because the architecture is the same.
> We are not saying that r9 290 and r9 390 should pump out same performance fps-wise.



My 290X gets the same FPS as a 390X.
There are memory controller timing differences in the 390X BIOS.

By flashing the 390X BIOS onto a 290X you get the same performance everywhere (except when more than 4GB of memory is needed).


----------



## Defoler

Quote:


> Originally Posted by *huzzug*
> 
> So tell me, if both A & B perform the same but B consumes less that A (~10% efficiency) with higher core clock, more memory & faster mem frequency, how is it a rebrand. Shouldn't a rebrand consume same Watts as the previous chip if all variables are equal _(*cough_ GTX 770 _cough*)_


Please try to troll less.

The definitions of what is a rebrand and what is a refresh are pretty clear, and they are as I stated earlier.

Both a refresh and a rebrand can be done with updated parts.
For example, the 4790K is a Haswell refresh, because it uses the same brand series, "Haswell". If they had called it "Shazwell" instead, it would have been a rebrand, because they used a new name to give it a new meaning: a new brand, *new*.

Re*brand* and refresh are not about whether the product is new or updated, but about the *brand*. R9 300 series is a *brand*. Learn the difference.

Similar product, new brand = rebrand.
Similar product, same brand = refresh.

And in your example, since they are using the same core, and B (390X) runs faster in both core and memory compared to A (290X), while taking 8-10% more power in some reviews (depending on the version of the cards), it looks like the same card, only a bit polished. It could have been a refresh if they had used the same *brand* name, as in the 200 series. But they did not.


----------



## looniam

for all you hawaii cats out there in guest land:

*Hawaii Bios Editing ( 290 / 290X / 295X2 / 390 / 390X )*


----------



## huzzug

Quote:


> Originally Posted by *Defoler*
> 
> Quote:
> 
> 
> 
> Originally Posted by *huzzug*
> 
> So tell me, if both A & B perform the same but B consumes less that A (~10% efficiency) with higher core clock, more memory & faster mem frequency, how is it a rebrand. Shouldn't a rebrand consume same Watts as the previous chip if all variables are equal _(*cough_ GTX 770 _cough*)_
> 
> 
> 
> Please try to troll less.
> 
> The definition of what is rebrand and what is a refresh are pretty clear and they are in a way as I stated earlier.
> 
> Both a refresh and a rebrand can both be done with updated things.
> For example the 4790K is a haswell refresh, because it is using the same brand series, "haswell". If they called it "shazwell" instead, it would have been a rebrand because they used a new name to give it a new meaning, a new brand, *new*.
> 
> Re*brand* and refresh are not because of the product is new or not updated, but because of the *brand*. R9 300 series is a *brand*. Learn the difference.
> 
> Similar product, new brand = rebrand.
> Similar product, same brand = refresh.
> 
> And in your example, since they are using the same core, and B(390X) is running faster in both core and memory compared to A(290X), and it staking 8-10% more power on some reviews (depends on the version of the cards), it looks like the same card, only a bit polished. It could be a refresh, if they used the same *brand* name, as in 200 series. But they did not.
Click to expand...

Gotcha!! I'll link people to this for future reference.


----------



## mtcn77

*Pub stomp.* [Source]


----------



## Kriant

*Achtung!! Achtung!! Alarme! Alarme!* Abandon Ship!

If Deus Ex and Doom show the same tendencies, I guess my trip to the "green" side will be very short-lived.


----------



## zealord

Quote:


> Originally Posted by *Kriant*
> 
> *Achtung!! Achtung!! Alarme! Alarme!* Abandon Ship!
> 
> If Deus Ex and Doom will show same tendencies, I guess my trip to the "green" side will be very short lived.


We already have some DOOM alpha benchmarks









Although I think they are not indicative of the final game with final optimization and final drivers.


----------



## Kriant

Quote:


> Originally Posted by *zealord*
> 
> We already have some DOOM alpha benchmarks
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Although I think they are not indicative of the final game with final optimization and final drivers.


Lol, the Alpha doesn't count because it's not a released product and because Nvidia hasn't yet made a single driver with Doom optimizations in its "highlights"... unlike the Hitman "optimizations" in the latest driver.


----------



## Kriant

Besides, their numbers are off. I might have been in the Alpha, and I might have tested it on a stock Titan X, and I might have gotten an average FPS above 40 at 4K.


----------



## zealord

Quote:


> Originally Posted by *Kriant*
> 
> Lol, Alpha doesn't count because it's not a released product and because Nvidia hasn't made a single driver yet that would have Doom optimizations in "highlights"...unlike Hitman "optimizations" in the latest driver.


Quote:


> Originally Posted by *Kriant*
> 
> Besides their numbers are off. I might have been in Alpha, and I might have tested it on stock titan X, and I might have gotten avg fps above 40 at 4k.


Oops, someone is getting defensive very fast. I said those numbers probably aren't indicative.

Also, you say Nvidia doesn't have a driver for DOOM, but does AMD have one?


----------



## Kriant

Idk, who knows what those "reds" have in their drivers







)))).


----------



## zealord

Quote:


> Originally Posted by *Kriant*
> 
> Idk, who knows what those "reds" have in their drivers
> 
> 
> 
> 
> 
> 
> 
> )))).


you definitely don't and it looks like you never ever will lol.







))))))


----------



## looniam

ahem:
Quote:


> And the main graphics API priority for the DOOM Alpha is OpenGL.


http://gamegpu.com/action-/-fps-/-tps/doom-alpha-test-gpu.html


----------



## Shogon

Really depressing seeing so many people buying this game over Ashes. Maybe the cheap price is the reason for that, or people just don't like RTS games.


----------



## GoLDii3

Quote:


> Originally Posted by *Shogon*
> 
> Really depressing seeing so many people buying this game over Ashes. Maybe the cheap price is the reason for that, or people just don't like RTS games.


Or maybe they want to play Hitman...instead of Ashes. Pretty sure no one is buying this just because it has a DX12 version. The same for Ashes.


----------



## Assirra

Quote:


> Originally Posted by *Shogon*
> 
> Really depressing seeing so many people buying this game over Ashes. Maybe the cheap price is the reason for that, or people just don't like RTS games.


What do those games even have in common besides both using DX12?


----------



## infranoia

Quote:


> Originally Posted by *Assirra*
> 
> What do those games even have in common besides both using DX12?


AotS is doing well enough. Oxide has stated that it broke all their sales projections, and will be an ongoing property for them (think Sins of a Solar Empire).


----------



## Assirra

Quote:


> Originally Posted by *infranoia*
> 
> AotS is doing well enough. Oxide has stated that it broke all their sales projections, and will be an ongoing property for them (think Sins of a Solar Empire).


Well of course it did, it was the first DX12 game to show its face. That alone put them in a huge spotlight. If the same game came out with DX11 nobody would even know about it except the actual RTS crowd.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Assirra*
> 
> Well of course it did, it was the first DX12 game to show its face. That alone put them in a huge spotlight. If the same game came out with DX11 nobody would even know about it except the actual RTS crowd.


It's the only game where they genuinely tried to use DX12. All the other games are DX12 ports.


----------



## Assirra

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Its the only game where they genuinely tried to use DX12. All other games are DX12 port.


Yeah, I know; my point was that AotS is doing great sales-wise more because it's DX12 than because it's an actually good game.
I'm not saying it isn't good, though; I haven't tried it.
Also, shouldn't Deus Ex also have DX12 from the ground up?


----------



## ZealotKi11er

Quote:


> Originally Posted by *Assirra*
> 
> Yea i know, my point was that the AoS is doing great salewise more because it is DX12 rather than being an actual good game.
> I am not saying it aint good tough, have not tried it.
> Also, shouldn't Deus Ex also have DX12 from the ground up?


About the same as Hitman, but they got more time. Hopefully BF5 will be DX12; there we will see true gains.


----------



## bossie2000

Played some of the AotS beta and I'd say it's a pretty awesome RTS!


----------



## infranoia

Quote:


> Originally Posted by *Assirra*
> 
> Yea i know, my point was that the AoS is doing great salewise more because it is DX12 rather than being an actual good game.
> I am not saying it aint good tough, have not tried it.
> Also, shouldn't Deus Ex also have DX12 from the ground up?


If you haven't tried it, you might do so before talking about its qualities as a game. It's pretty fun, and the devs are from the Civ 5 team (the first threaded DX11 engine). Their gold standard was Supreme Commander and it looks like a worthy successor.

I guess you'd have to ask if it's a crossover hit. Not sure. I can't stand RTS in general but I really love Sins, so we'll see what it's like at launch. So far AotS is a nice diversion, but not a main course.

But that name. Geez, what is it with Stardock and ridiculous video game names?


----------



## Forceman

Interesting theory in the Ashes thread about power limit's impact on why the 390 gains so much more than the 290 in DX12. So in the name of science (of course) I got the intro pack and tested it.

Code:


PL        DX11         DX12 CPU         DX12 GPU
100%      76.60          77.16             77.60
150%      76.78          76.70             77.22
80%       67.45          68.67             69.06

So power limit has no effect on DX11/DX12 gains, and DX12 shows basically no gain at all. That's on a 290X at 1100/1375.
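Working the relative deltas out of that table shows how flat it really is; a quick sketch using the averages above:

```python
# DX11 vs DX12 averages from the table above (290X @ 1100/1375).
runs = {
    "100%": (76.60, 77.16, 77.60),   # (DX11, DX12 CPU, DX12 GPU)
    "150%": (76.78, 76.70, 77.22),
    "80%":  (67.45, 68.67, 69.06),
}

# Relative DX12 (GPU-focused) gain over DX11 at each power limit.
gains = {pl: (dx12_gpu / dx11 - 1) * 100
         for pl, (dx11, _dx12_cpu, dx12_gpu) in runs.items()}

for pl, g in gains.items():
    print(f"PL {pl}: DX12 GPU vs DX11 = {g:+.2f}%")
```

All three deltas land in the low single digits, which matches the "basically no gain" conclusion.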


----------



## BradleyW

Quote:


> Originally Posted by *Forceman*
> 
> Interesting theory in the Ashes thread about power limit's impact on why the 390 gains so much more than the 290 in DX12. So in the name of science (of course) I got the intro pack and tested it.
> 
> Code:
> 
> PL        DX11         DX12 CPU         DX12 GPU
> 100%      76.60          77.16             77.60
> 150%      76.78          76.70             77.22
> 
> So power limit has no effect on DX11/DX12 gains, and DX12 shows basically no gain at all. That's on a 290X at 1100/1375.


Looks like my theory still stands for now. I truly believe AMD wants to limit R9 200 DX12 performance to justify their blatant 300-series rebrand.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> Looks like my theory still stands at this time. I truly believe AMD want to limit R9 200 DX12 performance to justify their blatant 300 series rebrand.


Zero sense in that. You're pretending AotS doesn't exist.


----------



## infranoia

Can someone please explain the CPU/GPU bench statistics in DX12, when the DX11 bench has only CPU?


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> Zero sense in that. You're pretending AotS doesn't exist.


I get the feeling Oxide's devs wouldn't play along with gimping. Polaris is on the way; how can AMD force 290 users to upgrade? That's right: stomp on the only thing that would save them from having to upgrade.


----------



## SuperZan

Quote:


> Originally Posted by *BradleyW*
> 
> I get the feeling Oxide Dev's wouldn't play along with gimping. Polaris is on the way. How can AMD force the 290 users to upgrade? That's right, stomp on the only thing that'll save them from having to upgrade.


Except that doesn't square with anything that AMD has done in the past 5 years. They've given benefits to the 7970 repeatedly and that card came out with the Magna Carta. I'm not saying *"YOU'RE WRONG!"* but I think we should wait a bit for more evidence before we storm Raja's compound.


----------



## zealord

Maybe a bit unrelated, but does anyone know:

- Does the R9 290/290X support VSR higher than 3200x1800?

- What about the R9 390/390X?

I haven't checked for a while, but a couple of months ago my 290X could only downsample up to 3200x1800.


----------



## Kriant

Quote:


> Originally Posted by *zealord*
> 
> you definitely don't and it looks like you never ever will lol.
> 
> 
> 
> 
> 
> 
> 
> ))))))


Well... I've only had a Radeon 9200, X600, X1900 XT, HD 3870... 4850... 4870... 4870X2... 5870... 5970, 7970, R9 290, and R9 290X on my hands from the red camp, so it's unlikely I will ever know ))))))) lol.


----------



## BradleyW

Quote:


> Originally Posted by *SuperZan*
> 
> Except that doesn't square with anything that AMD has done in the past 5 years. They've given benefits to the 7970 repeatedly and that card came out with the Magna Carta. I'm not saying *"YOU'RE WRONG!"* but I think we should wait a bit for more evidence before we storm Raja's compound.


I do agree it doesn't match AMD's previous decision-making, but a clean background doesn't make a company immune to doing something "wrong".


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> Zero sense in that. You're pretending AotS doesn't exist.


Have we seen 290/290X DX11/DX12 comparisons in Ashes? All I could find were 300 series. And it's not like AMD didn't try this before, with the tessellation driver at launch, and with VSR resolution support.
Quote:


> Originally Posted by *infranoia*
> 
> Can someone please explain the CPU/GPU bench statistics in DX12, when the DX11 bench has only CPU?


I was wondering that also.


----------



## OneB1t

Quote:


> Originally Posted by *BradleyW*
> 
> Looks like my theory still stands at this time. I truly believe AMD want to limit R9 200 DX12 performance to justify their blatant 300 series rebrand.


Flash a 390X BIOS and check again.

Some 290X cards already ship with a higher limit from the vendor; in that case the 290/390 difference will not show.


----------



## OneB1t

But the card can already have enough at 0% settings. There are UBER and SILENT BIOSes, and each card ships with a different one.

If you flash the same BIOS onto a 390X and a 290X, they will produce the same results.

see these 2 threads for more info
http://www.overclock.net/t/1564219/modded-r9-390x-bios-for-r9-290-290x-updated-02-16-2016
http://www.overclock.net/t/1561372/hawaii-bios-editing-290-290x-295x2-390-390x


----------



## Shogon

Quote:


> Originally Posted by *GoLDii3*
> 
> Or maybe they want to play Hitman...instead of Ashes. Pretty sure no one is buying this just because it has a DX12 version. The same for Ashes.


You're right. They are buying it to have a replica of the previous Hitman title that ran terribly and perpetuated the stigma of console-ported PC games. More money for Square Enix, though.








Quote:


> Originally Posted by *Assirra*
> 
> What do those games even have in common besides both using DX12?


Look at the quote below. Square Enix certainly doesn't put heart and soul into a game that people will buy anyway because of the title. They can release the same garbage they did last time and rely on "fans" to consume their product without a second thought.
Quote:


> Originally Posted by *looniam*
> 
> gets the red team cheerleaders to wave their pom poms?


That's the only thing I can think of as well. At least Ashes isn't a terrible port with a DX12 sticker on the box.
Quote:


> Originally Posted by *ZealotKi11er*
> 
> Its the only game where they genuinely tried to use DX12. All other games are DX12 port.


So sad, yet true. Hopefully more companies will actually try and utilize the new API from the ground up, but I have my doubts.


----------



## Forceman

Quote:


> Originally Posted by *OneB1t*
> 
> flash 390X bios and check again
> 
> 
> 
> 
> 
> 
> 
> 
> there are some 290X which have higher limit allready from vendor in this case 290/390 difference will not show


Quote:


> Originally Posted by *OneB1t*
> 
> but card can have allready enough with 0% settings


Keep trying. Here's 80% power:

DX11: 67.45
DX12: 68.67 (CPU) and 69.06 (GPU).


----------



## OneB1t

@Forceman: I don't really care about your results, as I developed the Hawaii BIOS reader and know the differences between the 290/390 series.

If you are getting the same result at 80% power limit and 100% power limit, then there is some other problem with your PC.

There is internal info from a guy under NDA with AMD about the whole rebrand:
Quote:


> There is no physical difference between project 215-08520xx ("Hawaii") and 215-08800xx ("Grenada") ASICs.
> 
> The only difference between the full variants (i.e. 44 CUs and 64 ROPs) is binning and fuse configuration.
> 
> 215-0852000 - Hawaii XT (67B0h DID, MA-RID 0h, MI-RID 0h)
> 215-0852022 - Hawaii XT(L) (67B9h DID, MA-RID 0h, MI-RID 0h)
> 215-0880004 - Grenada XT (67B0h DID, MA-RID 8h, MI-RID 0h)
> 
> The XT(L) bin is the highest quality of Hawaii family ASICs.
> The dies are binned for lowest possible leakage characteristics in order to reduce the power consumption and dissipated heat. It is only used in 295X2 cards.
> 
> "Grenada XT" has the highest leakage characteristics of Hawaii family.
> Higher leakage makes it possible to reach higher clocks without violating the design electrical characteristics of the die, in terms of voltage. The downsides of using high-leakage ASICs are the higher temperatures through the board and higher power consumption (due to temperatures and lower VRM efficiency).
> 
> Any performance difference between "Hawaii" and "Grenada" cards is caused by the display driver. AMD changed the fused configuration in a way the driver can tell the difference between "the old" Hawaii and "the new" Grenada cards. They both obviously share the DeviceID but "Grenada" cards have their revision number bumped to 8h (instead of 0h on Hawaii). The revision number is configured by the fuse configuration during manufacturing and cannot be altered by the bios on retail cards.
> 
> If you are testing differences between "Hawaii" and "Grenada" bioses, note that "Grenada" bioses have higher clocks thru the different performance states than "Hawaii" bioses. This means that if at some point the card reaches its power limit and starts to shuffle through the different performance states, the "Grenada" cards might appear to be faster than "Hawaii" simply because of the higher clocks. If you are testing the differences between the two bioses, make sure you have the power limit set to the maximum setting (+50%) on both of them.
> 
> ps. Internally "Grenada" has been called as "Hawaii Refresh".
> 
> If you know how much difference there actually is in previous "refreshes" by AMD, then it is futile to argue whether "Grenada" is the same ASIC or not ;)


----------



## Forceman

Quote:


> Originally Posted by *OneB1t*
> 
> @forceman: i dont really care about your results as i developed hawaii bios reader and know differences between 290/390 series
> 
> 
> 
> 
> 
> 
> 
> if you getting same result for 80% powerlimit and 100% powerlimit then there is some different problem with your PC
> 
> there is internal info from guy who have NDA with AMD about whole rebrand


So you will ignore actual testing that shows 0% gain from DX12 on a 290X at all power ranges from 80% to 150% because you know the BIOS? Very scientific.

Edit: And I don't get the same performance at 80% and 100%, but neither of them show any gain at all from DX12, and neither does the computerbase benchmarking.


----------



## OneB1t

I also ran the test in Ashes of the Singularity, getting the exact same FPS as a 390X on my 290X.

Yes, I'm going to ignore your results; they are wrong for some reason.

(It could be that you haven't installed the latest driver correctly, or your CPU is simply powerful enough to keep the card fully loaded under DX11, as there is some added overhead in DX12's parallel workload.)


----------



## Forceman

Quote:


> Originally Posted by *OneB1t*
> 
> i also runned test in ashes of singularity getting exact FPS as 390X on my 290X
> 
> 
> 
> 
> 
> 
> 
> 
> 
> yes im going to ignore your results
> 
> 
> 
> 
> 
> 
> 
> they are wrong for some reason
> 
> (it could be that you have not installed last driver correctly or your cpu is simply powerfull enought to keep card fully loaded under DX11 as there is some added overhead in DX12 parallel workload)


Not sure how Ashes testing is relevant to Hitman results.


----------



## OneB1t

Underclock your CPU to 2 GHz and run the benchmark again, or use a lower resolution; then there will be a difference between DX11 and DX12.


----------



## GoLDii3

Quote:


> Originally Posted by *Shogon*
> 
> You're right. They are buying it to have a replica of the previous hitman title that ran terribly, and perpetuated the stigma of console ported PC games. More money for Square Enix though


Well, OK then... but what does this have to do with this thread at all? The game does not run terribly. Have you actually played it?

Because I have, and no, it does not "run bad at all". I was getting 45-60 FPS easily on my old 7970.

And neither did Absolution run badly, for that matter.

Also, FYI, Square Enix is the publisher; the coding was actually done by IO Interactive. So if there's anyone to blame, it's them.


----------



## infranoia

290x reflashed to 390x BIOS, +50% voltage limit, 1100/1350, 2560 x 1440.
A lot of variance in the Max/Min and frametimes, although the average sure seems to be locked to 60 in spite of the min/max results. Can anyone help me with that? Why is the average apparently locked to 60? I'm using DVI for this (can't find my @#$% DP cable), but if it's a framebuffer limitation, wouldn't max frames be locked to 60 as well?

DirectX 11:

Code:



Run  Frames  Average Min     Max     FT Average      FT Min  FT Max
1       6833    59.78   12.35   198.24  16.73   5.04    80.98
2       6807    59.56   12.57   214.44  16.79   4.66    79.54
3       6769    59.23   9.39    256.93  16.88   3.89    106.54
4       6803    59.53   12.31   220.37  16.8    4.54    81.21
5       6844    59.88   12.51   229.8   16.7    4.35    79.95
Var.    1.10%   1.10%   33%     30%     1.10%   30%     34%



DirectX 12, Render Target Reuse Active:

Code:



Run  Frames  Average Min     Max     FT Average      FT Min  FT Max
1       6848    59.92   12.23   189.61  16.69   5.27    81.75
2       6466    56.59   4.33    187.74  17.67   5.33    231.18
3       6691    58.56   12.28   145.42  17.08   6.88    81.43
4       6788    59.4    12.07   132.85  16.83   7.53    82.84
5       6745    59.02   10.88   186.92  16.94   5.35    91.89
Var.    6%      6%      183%    43%     6%      43%     184%



DirectX 12, Render Target Reuse Inactive:

Code:



Run  Frames  Average Min     Max     FT Average      FT Min  FT Max
1       6848    59.92   12.23   189.61  16.69   5.27    81.75
2       6466    56.59   4.33    187.74  17.67   5.33    231.18
3       6691    58.56   12.28   145.42  17.08   6.88    81.43
4       6788    59.4    12.07   132.85  16.83   7.53    82.84
5       6745    59.02   10.88   186.92  16.94   5.35    91.89
Var.    6%      6%      183%    43%     6%      43%     184%
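For what it's worth, the "Var." row in these tables appears to be (max - min) / min per column; this is my inference from the numbers, not the benchmark tool's documented formula. A quick sketch:

```python
# Sketch: reproduce the "Var." row from the benchmark tables above.
# Assumption (inferred from the numbers): Var = (max - min) / min per column.

def run_variance(values):
    """Spread of a column across runs, relative to the lowest run."""
    return (max(values) - min(values)) / min(values)

# DX11 "Frames" column from the five runs above
dx11_frames = [6833, 6807, 6769, 6803, 6844]
# DX12 "Min" FPS column, where run 2's 4.33 fps outlier blows up the spread
dx12_min_fps = [12.23, 4.33, 12.28, 12.07, 10.88]

print(f"DX11 frames variance:  {run_variance(dx11_frames) * 100:.1f}%")   # ~1.1%
print(f"DX12 min-fps variance: {run_variance(dx12_min_fps) * 100:.1f}%")  # ~183.6%, table truncates to 183%
```

On that definition, one bad run (the 4.33 fps minimum) is enough to explain the huge DX12 variance numbers.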


----------



## BradleyW

I've read the game is locked to 60 FPS unless you run windowed mode (affects Fury and 300 series). That "might" have something to do with it.
More info here:
http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> I've heard the game is locked to 60 FPS unless you run Window mode (effects Fury and 300 series). That "might" be the issue you are seeing here.


And I had discounted my windowed results because exclusive fullscreen tends to be higher performance overall with GCN. But I guess those results stand, if that's the case-- which makes this 290x flashed to 390x only a 5.5% increase in average framerate for DX12 over DX11.

Should note, the above bench was with Exclusive Fullscreen. Switching from this HDMI Sony 65" screen to my Asus P278 makes DX12 fullscreen work.


----------



## BradleyW

I agree that full screen often performs better than windowed mode.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> And I had discounted my windowed results because exclusive fullscreen tends to be higher performance overall with GCN. But I guess those results stand, if that's the case-- which makes this 290x flashed to 390x only a 5.5% increase in average framerate for DX12 over DX11.
> 
> Should note, the above bench was with Exclusive Fullscreen. Switching from this HDMI Sony 65" screen to my Asus P278 makes DX12 fullscreen work.


If you flash it back to a 290X does it go above 60 FPS? I got above 60 in all three modes.


----------



## sugarhell

Quote:


> Originally Posted by *Forceman*
> 
> If you flash it back to a 290X does it go above 60 FPS? I got above 60 in all three modes.


If you have a monitor over 60hz it goes over 60fps.


----------



## paralemptor

So... do we have any results in Ashes that would show the 290s and 390s compared?
I'm thinking (and I'm probably wrong







) if Hitman might be using some other optimisations for the 390-series?


----------



## Forceman

Quote:


> Originally Posted by *sugarhell*
> 
> If you have a monitor over 60hz it goes over 60fps.


Oh, duh.


----------



## infranoia

Quote:


> Originally Posted by *sugarhell*
> 
> If you have a monitor over 60hz it goes over 60fps.


But Vsync is off. And this wouldn't be the DX12 Vsync 'problem', as the "average FPS" result is locked to 60FPS in DX11 as well.

Seriously getting confused.

All right, so if @Forceman has a higher refresh monitor and Average FPS shows a higher score, and I have a 60Hz monitor that is locked to 60 in DX11 regardless of the Vsync setting, then it would appear that Hitman is forcing Vsync on regardless of API, for Fullscreen results. Which is pretty much what @BradleyW says above.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> But Vsync is off. And this wouldn't be the DX12 Vsync 'problem', as the "average FPS" result is locked to 60FPS in DX11 as well.
> Seriously getting confused.


It is said that frames above the refresh rate are dropped when Vsync is disabled. Not sure if this would affect your results.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> It is said that frames above the refresh rate are dropped when Vsync is disabled. Not sure if this would effect your results.


That's only in the New World Order with DX12. This is the first I've heard that this problem exists with DX11.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> That's only in the New World Order with DX12. This is the first I've heard that this problem exists with DX11.


Maybe a dumb question, but have you tried changing the graphics settings and seeing if it just happens to be running at 60 and not actually locked there?


----------



## infranoia

Quote:


> Originally Posted by *Forceman*
> 
> Maybe a dumb question, but have you tried changing the graphics settings and seeing if it just happens to be running at 60 and not actually locked there?


Not a dumb question, but would be a heck of a coincidence. Note my windowed results earlier in the thread. Average FPS in a window gets up to 86 to 90 FPS.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> Not a dumb question, but would be a heck of a coincidence. Note my windowed results earlier in the thread. Average FPS in a window gets up to 86 to 90 FPS.


How did you get it to run windowed at 1440p? I could only do 1080p, even though it was set for 1440p. But I didn't try all that hard to figure it out.


----------



## infranoia

Quote:


> Originally Posted by *Forceman*
> 
> How did you get it to run windowed at 1440p? I could only do 1080p, even though it was set for 1440p. But I didn't try all that hard to figure it out.


You're right. My earlier windowed results were 1080p. Setting it to a 1440 window opens a 1080 window.

This is ridiculous. I'm stepping away from this complete cluster of a benchmark. Anyone who tries to generate any kind of theory based on its results is working from a garbage dataset. Maybe one day the Hitman team will get their act together, but it's just not an accurate bench by any stretch of the imagination.

Seriously, has anyone even gotten DX12 to work through HDMI? At all? My testing shows it CTDs regardless of the display.


----------



## pengs

Quote:


> Originally Posted by *sugarhell*
> 
> This is an theory with a lot of speculation. There is a single review that is showing a 290x to have 5% improvement vs 10% of a 390x.
> 
> First what model of 290x? If it is a reference then i can easily say that is throttling very hard


Yeah, this could easily be the case, and adding in the memory differences you could easily see 10-15% going from a 290X to a 390X, more depending on how sensitive a game is to memory speed.

The compute side may be getting a bump from the memory rehaul on the 390x.


----------



## infranoia

Quote:


> Originally Posted by *pengs*
> 
> Yeah this could easily be the case and adding the memory differences and you could easily return 10-15% from a 290x to a 390x, more depending on how sensitive a game is to memory speed.
> 
> The compute side may be getting a bump from the memory rehaul on the 390x.


FWIW my 290x is watercooled; I've never seen it throttle since adding the HG10. It maintains 1100/1350 in the benches above.

That said, the 390x is 1500 mem, which is out of reach for me.


----------



## pengs

Quote:


> Originally Posted by *infranoia*
> 
> FWIW my 290x is watercooled; I've never seen it throttle since adding the HG10. It maintains 1100/1350 in the benches above.
> 
> That said, the 390x is 1500 mem, which is out of reach for me.


Is that the benchmark in question? Sorry, I'm just coming in on the claims that the 290 and 290X are much slower than the 390 and 390X. I guess more testing is needed.


----------



## OneB1t

With a 390X BIOS you can achieve 1500 MHz on the memory easily.

It's basically because of the changed timings for the memory controller.


----------



## BradleyW

Quote:


> Originally Posted by *OneB1t*
> 
> with 390X bios you can achieve 1500 on memory easily
> 
> 
> 
> 
> 
> 
> 
> 
> its basicly because of changed timings for memory controller


I have Elpida ICs. Would this BIOS not be a good option for me?


----------



## infranoia

Quote:


> Originally Posted by *OneB1t*
> 
> with 390X bios you can achieve 1500 on memory easily
> 
> 
> 
> 
> 
> 
> 
> 
> its basicly because of changed timings for memory controller


Hate to break it to you, but that 1500 mem clock is heavily dependent upon the bin and the quality of the card. This modded 290x is a poor overclocker, always has been, and that doesn't change with the 390x voltage and mem timings. 1100/1350 is maximum velocity on this one.

If you mean "in general, with aftermarket binned cards" then I might agree with you, but "easily" is not a word I'd use to describe hitting 1500 mem on a 290x.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> Hate to break it to you, but that 1500 mem clock is heavily dependent upon the bin and the quality of the card. This modded 290x is a poor overclocker, always has been, and that doesn't change with the 390x voltage and mem timings. 1100/1350 is maximum velocity on this one.
> 
> If you mean "in general, with aftermarket binned cards" then I might agree with you, but "easily" is not a word I'd use to describe hitting 1500 mem on a 290x.


Same issue here. I am also using Elpida ICs.


----------



## Forceman

Quote:


> Originally Posted by *pengs*
> 
> Is that the benchmark in question? Sorry I'm just coming in with claims that the 290 and X are much slower than the 390, 390X. I guess more testing is needed.


The issue isn't that the 290 is slower than the 390 (of course it is), the issue is that the 390 shows a 10% gain going from DX11 to DX12, but the 290/290X shows a 0% gain from DX11 to DX12.


----------



## infranoia

Quote:


> Originally Posted by *Forceman*
> 
> The issue isn't that the 290 is slower than the 390 (of course it is), the issue is that the 390 shows a 10% gain going from DX11 to DX12, but the 290/290X shows a 0% gain from DX11 to DX12.


Well, I show a 5.5% gain in a window, so it's probably defensible.

What kills me is that no one believes AMD or Anandtech when they describe the actual differences between a 390x and a 290x, which can easily account for a 5% difference between DX11/DX12. 5% is within the margin of error. Not to mention that Grenada consistently outperforms Hawaii in nearly every DX11 bench out there.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> Same issue here. I am also using Elpida IC's.


And this is a stock Sapphire with Hynix memory. I mean, as far as stock cards go, this was the gold standard I think. But then the much better-binned aftermarket cards showed up with the high default clocks, so there's that.


----------



## OneB1t

You guys just flash the 390X BIOS, then increase core voltage by about 50 mV, and it's easy to achieve 1500 MHz stable on the memory.

In fact, my card can go all the way up to 1650 MHz.

This is not possible with the stock 290X BIOS; you need increased core voltage and the 390X BIOS to get a bigger memory overclock.


----------



## pengs

Quote:


> Originally Posted by *infranoia*
> 
> Hate to break it to you, but that 1500 mem clock is heavily dependent upon the bin and the quality of the card. This modded 290x is a poor overclocker, always has been, and that doesn't change with the 390x voltage and mem timings. 1100/1350 is maximum velocity on this one.
> 
> If you mean "in general, with aftermarket binned cards" then I might agree with you, but "easily" is not a word I'd use to describe hitting 1500 mem on a 290x.


It's pretty much 50/50 in the Hawaii thread, with most people having success as the thread goes on. I believe Sapphire and the rest of the vendors switched over to Hynix in the first few months after Hawaii was released, because Elpida was having either QC or supply issues. But at this point 1500 is close to being guaranteed. May have something to do with drivers also.

I've got a late 2013 early release model iirc so it's pretty lucky in that it has Hynix memory.


----------



## infranoia

Quote:


> Originally Posted by *OneB1t*
> 
> you guys just flash 390X bios


Done.
Quote:


> Originally Posted by *OneB1t*
> 
> then increase core voltage by like 50mV


Done.
Quote:


> Originally Posted by *OneB1t*
> 
> and its easy to achieve 1500mhz stable on memory
> 
> 
> 
> 
> 
> 
> 
> in fact my card can go all way up to 1650mhz


Hard lockup and reboot, memory and VRM cooled by AIO water.
Quote:


> Originally Posted by *OneB1t*
> 
> this is not possible with stock 290X bios
> you need increased core voltage and 390X bios to have bigger memory overclock


Throughout this discussion you haven't really been paying attention: I've already flashed to the modded 390x BIOS and increased the voltage limits.


----------



## pengs

Quote:


> Originally Posted by *infranoia*
> 
> And this is a stock Sapphire with Hynix memory. I mean, as far as stock cards go, this was the gold standard I think. But then the much better-binned aftermarket cards showed up with the high default clocks, so there's that.


Whoops. I thought you said it was Elpida; disregard the last post.

Have you tried aux voltage?


----------



## infranoia

Quote:


> Originally Posted by *pengs*
> 
> Woops. Thought you said it was Elpida, disregard the last post.
> 
> Have you tried aux voltage?



Hey, thanks man. It looks like TriXX and bumping up the offset voltage helps. I am getting massive IQ problems at 1500, but it's up there now, and this gives me something to horse around with.
Quote:


> Originally Posted by *OneB1t*
> 
> you guys just flash 390X bios then increase core voltage by like 50mV and its easy to achieve 1500mhz stable on memory
> 
> 
> 
> 
> 
> 
> 
> in fact my card can go all way up to 1650mhz
> 
> this is not possible with stock 290X bios
> you need increased core voltage and 390X bios to have bigger memory overclock


@OneB1t, my apologies. I knew it was something obvious I was missing-- can't just rely on Crimson and a voltage limit boost, I'll need to play with the offsets as well.


----------



## OneB1t

Crimson is just hopeless; use MSI Afterburner for all changes. (Even better is to hardcode the changes into the BIOS with Hawaii BIOS Reader.)


----------



## infranoia

Quote:


> Originally Posted by *OneB1t*
> 
> crimson is just hopeless use msi afterburner for all changes (even better is to hardcode changes into bios with hawaii bios reader)


Agreed. I used to use Afterburner all the time but bailed on it when it wasn't compatible with the early Crimson releases, and Crimson has been good enough for 1350. Time to get back to it.

Still doesn't change the fact that this is a bad benchmark though, if I'm not able to log averages above 60FPS in fullscreen.


----------



## OneB1t

They cut out a new DX12 feature.


----------



## BradleyW

I will be testing Hitman myself in a few days in CPU and GPU bound locations within the game. Stay tuned for the settings, results, locations and usage.


----------



## Mahigan

Regarding Hawaii and Memory,

Hawaii was memory bandwidth bottlenecked:


We also know that AMD shrank the Hawaii memory controller logic going from Tahiti to Hawaii and made it wider as well. As a result, the new memory controller in Hawaii cannot handle higher-clocked memory that well. It really takes fiddling with the memory timings to get a decent overclock.
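To put rough numbers on the bandwidth angle, here's a back-of-envelope sketch. It assumes Hawaii's 512-bit bus and the standard GDDR5 4x data rate per pin, and ignores the compression, controller, and timing effects discussed in this thread:

```python
# Sketch: raw GDDR5 bandwidth on Hawaii's 512-bit bus at the memory clocks
# discussed in this thread. Assumes the usual GDDR5 4x transfer rate per pin.

def gddr5_bandwidth_gbs(mem_clock_mhz, bus_width_bits=512):
    # bytes/s = (bus width / 8 bits per byte) * effective transfer rate
    return (bus_width_bits / 8) * (mem_clock_mhz * 4) / 1000  # GB/s

stock   = gddr5_bandwidth_gbs(1250)  # reference 290X clock
oc_1350 = gddr5_bandwidth_gbs(1350)
oc_1500 = gddr5_bandwidth_gbs(1500)  # 390X-class clock

print(f"1250 MHz: {stock:.1f} GB/s")    # 320.0 GB/s
print(f"1350 MHz: {oc_1350:.1f} GB/s")  # 345.6 GB/s
print(f"1500 MHz: {oc_1500:.1f} GB/s ({(oc_1500 / oc_1350 - 1) * 100:.1f}% over 1350)")
```

So the 1350 to 1500 jump is about an 11% raw bandwidth increase, which lines up with bandwidth-sensitive workloads seeing single-digit gains.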


----------



## infranoia

Now we're cooking with grease.

290x modded 390x BIOS, ramped voltages, DX11 / DX12 windowed results at 1920 x 1080, GPU 1100 / Memory 1500 (thanks guys!)

DX11 --> DX12

Code:



9341 frames --> 9964 frames : 6.7% increase
 81.78fps Average --> 87.23 fps Average : 6.7% fps average increase
 12.30fps Min --> 11.74fps Min : 4.6% worse fps minimum
299.48fps Max --> 280.73fps Max : 6.3% worse fps maximum
 12.23ms Average --> 11.46ms Average : 6.3% better frametime average
  3.34ms Min --> 3.56ms Min : 6.6% worse frametime minimum
 81.31ms Max --> 85.20ms Max : 4.8% worse frametime maximum

So minimums / maximums seem to suffer a bit from DX11 to DX12, but all the averages improve.
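For anyone checking the math, the percentages above are plain relative deltas; a quick sketch using the figures from this run:

```python
# Sketch: recompute the DX11 -> DX12 percentage deltas listed above.
# Positive = higher under DX12; note that for frame times, lower is better.

def pct_change(dx11, dx12):
    return (dx12 - dx11) / dx11 * 100

metrics = {
    "frames":           (9341, 9964),
    "avg fps":          (81.78, 87.23),
    "min fps":          (12.30, 11.74),
    "avg frametime ms": (12.23, 11.46),
}

for name, (dx11, dx12) in metrics.items():
    print(f"{name}: {pct_change(dx11, dx12):+.1f}%")
```

That reproduces the +6.7% on frames/averages and the -4.6% on minimum FPS.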


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> Now we're cooking with grease.
> 
> 290x modded 390x BIOS, ramped voltages, DX11 / DX12 windowed results at 1920 x 1080, GPU 1100 / Memory 1500 (thanks guys!)
> 
> DX11 --> DX12
> 
> Code:
> 
> 
> 
> 9341 frames --> 9964 frames : 6.7% increase
> 81.78fps Average --> 87.23 fps Average : 6.7% fps average increase
> 12.30fps Min --> 11.74fps Min : 4.6% worse fps minimum
> 299.48fps Max --> 280.73fps Max : 6.3% worse fps maximum
> 12.23ms Average --> 11.46ms Average : 6.3% better frametime average
> 3.34ms Min --> 3.56ms Min : 6.6% worse frametime minimum
> 81.31ms Max --> 85.20ms Max : 4.8% worse frametime maximum
> 
> So minimums / maximums seem to suffer a bit from DX11 to DX12, but all the averages improve.


Low-level APIs are all about giving a higher minimum FPS in limited situations. I feel even worse after seeing those results. I really hoped Hitman would be a true built-from-the-ground-up DX12 game.


----------



## Mahigan

Quote:


> Originally Posted by *infranoia*
> 
> Now we're cooking with grease.
> 
> 290x modded 390x BIOS, ramped voltages, DX11 / DX12 windowed results at 1920 x 1080, GPU 1100 / Memory 1500 (thanks guys!)
> 
> DX11 --> DX12
> 
> Code:
> 
> 
> 
> 9341 frames --> 9964 frames : 6.7% increase
> 81.78fps Average --> 87.23 fps Average : 6.7% fps average increase
> 12.30fps Min --> 11.74fps Min : 4.6% worse fps minimum
> 299.48fps Max --> 280.73fps Max : 6.3% worse fps maximum
> 12.23ms Average --> 11.46ms Average : 6.3% better frametime average
> 3.34ms Min --> 3.56ms Min : 6.6% worse frametime minimum
> 81.31ms Max --> 85.20ms Max : 4.8% worse frametime maximum
> 
> So minimums / maximums seem to suffer a bit from DX11 to DX12, but all the averages improve.


So the memory bandwidth increased the performance from DX11 to DX12, right?


----------



## sugarhell

Quote:


> Originally Posted by *BradleyW*
> 
> Low level API's are all about giving a higher minimum FPS in limited situations. I feel even worse after seeing those results. I really hoped Hitman would be a true built from the ground up DX12 game.


How do you increase your minimum FPS when you are GPU bound?


----------



## infranoia

Quote:


> Originally Posted by *Mahigan*
> 
> So the memory bandwidth increased the performance from DX11 to DX12 right?


Yes, the first windowed test at 1100/1350 had a 5.5% increase between DX11 and DX12. Up the memory to 1500 and it's a 6.7% increase.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> Yes, the first windowed test at 1100/1350 had a 5.5% increase between DX11 and DX12. Up the memory to 1500 and it's a 6.7% increase.


Can you check again at 1440p and see if it makes a difference there also? I can do 1500 but it artifacts.


----------



## infranoia

Quote:


> Originally Posted by *Forceman*
> 
> Can you check again at 1440p and see if it makes a difference there also? I can do 1500 but it artifacts.


Can't do it. I'm locked to 60FPS in fullscreen in any Hitman API, and as you found yourself, windowed mode is only 1080.


----------



## ZealotKi11er

Quote:


> Originally Posted by *sugarhell*
> 
> How to increase your minimum FPS when you are gpu bound?


Async


----------



## BradleyW

Quote:


> Originally Posted by *sugarhell*
> 
> How to increase your minimum FPS when you are gpu bound?


Low-level APIs can increase performance when GPU bound, but not by much. Async can help in GPU-bound situations, but it's all about when you're CPU limited; that's when a low-level API can shine (provided it's coded into the application well).


----------



## sugarhell

Quote:


> Originally Posted by *BradleyW*
> 
> Low level API's can increase performance when GPU bound but not by much. It's all about when your CPU limited, provided the application is coded well.


By the way, DX12 is not a low-level API.

Again, on my question: if you are GPU limited, don't expect an increase in minimums vs. DX11, especially now that the engines heavily favor the DX11 pipeline.

DX11 is good when you are GPU limited. You are missing fancy stuff like async, but it's still an API with a lot of optimizations behind it from all these years.


----------



## BradleyW

Quote:


> Originally Posted by *sugarhell*
> 
> By the way dx12 is not a low level API.
> Again on my question. You are gpu limited dont expect an increase on minimums vs dx11 especially now that the engines are heavily favor dx11 pipeline.
> Dx11 is good when you are gpu limited. You are missing fancy stuff like async but its still an API with a lot of optimizations all these years.


OK then, "lower" level API, compared to D3D11.

Then again, MS themselves refer to it as a low level API and then some.
Quote:


> Direct3D 12 does not replace Direct3D 11. The new rendering features of Direct3D 12 are available in Direct3D 11.3. Direct3D 11.3 is a low level graphics engine API, and Direct3D 12 goes even deeper.


Source

DX12 can still boost GPU-limited situations through async if coded correctly, as suggested by others and myself just now. But like I said, the main focus of such an API is to overcome CPU limitations.


----------



## Mahigan

Quote:


> Originally Posted by *infranoia*
> 
> Yes, the first windowed test at 1100/1350 had a 5.5% increase between DX11 and DX12. Up the memory to 1500 and it's a 6.7% increase.


Cool,

That jibes with my last post above.

And the Fury X doesn't gain much more because it becomes ROP bound.


----------



## ZealotKi11er

Quote:


> Originally Posted by *sugarhell*
> 
> By the way dx12 is not a low level API.
> 
> Again on my question. You are gpu limited dont expect an increase on minimums vs dx11 especially now that the engines are heavily favor dx11 pipeline.
> 
> Dx11 is good when you are gpu limited. You are missing fancy stuff like async but its still an API with a lot of optimizations all these years.


OK, GPU bound, but that does not explain why the 390 gets a boost in DX12.


----------



## Ha-Nocri

Fury X is up to 25% faster


----------



## TFL Replica

Anyone know why the DX12 version looks much darker compared to DX11?


----------



## mtcn77

Quote:


> Originally Posted by *Mahigan*
> 
> Regarding Hawaii and Memory,
> 
> *Hawaii was memory bandwidth bottlenecked:*
> 
> 
> We also know that AMD shrank the Hawaii memory controller logic going from Tahiti to Hawaii and made it wider as well. As a result of this the new memory controller in Hawaii cannot handle higher clocked memory that well. It really takes fiddling with the memory timings in order to get a decent overclock.


I don't think so. Those longer bars are not as significant as implied. In fact, memory compression is only 40% as important as actual bandwidth, as both vendors use the same patents and recycle them to give them new flair. AMD states it straightforwardly, while Nvidia adds a few inverse "special effects" to the jargon.

You can literally compare the theoretical ideals and find the same benefit: a +40% gain is the same as a -29% bandwidth reduction, taken reciprocally (1.4^-1 ≈ 0.71 ≈ 100% - *29%*). Which also means the shorter bars are 2.5x more vital than the compression values. If we take this info into account:

290X (the Asus DCII OC variant at 1050/1350): 263GB/s
980: 218GB/s (60/40 actual/compressed split)
980 Ti: 286GB/s
Fury X: 355GB/s
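The reciprocal arithmetic in that post can be double-checked in a few lines (the per-card GB/s figures above are the poster's own estimates, not recomputed here):

```python
# A +40% effective gain from compression is the same as needing ~29%
# less raw bandwidth for the same work: 1/1.4 ~= 0.71.
gain = 0.40
equivalent_reduction = 1.0 - 1.0 / (1.0 + gain)
print(round(equivalent_reduction, 3))  # 0.286, i.e. ~29%

# And if compression is only 40% as important as raw bandwidth, the raw
# ("shorter bar") figure is 1/0.4 = 2.5x more vital.
weight_ratio = 1.0 / 0.40
```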


----------



## OneB1t

As you can see here, even the 290 gets a boost from DX11 to DX12.
This guy is still running a pretty fast CPU, so the difference is small.

On my PC I expect the DX11 vs DX12 difference to be about 30-40% at 1080p and 5-10% at 4K.


----------



## BradleyW

When the higher FPS was needed for those large crowds, DX12 struggled against DX11.


----------



## OneB1t

This will be patched in upcoming releases, I think.

It's a bug.


----------



## zealord

Quote:


> Originally Posted by *TFL Replica*
> 
> Anyone know why the DX12 version looks much darker compared to DX11?


are you playing DX12 in (borderless) windowed and DX11 in Fullscreen?


----------



## infranoia

Remember, Mantle didn't give much back to those of us with fast processors either. Thief was a smidge faster (~5%) going from DX11 to Mantle on this 4770K at 4.7GHz. We're seeing the same thing here.

AMD procs are going to show a higher gain DX11 to DX12 than these i7s do.


----------



## Mahigan

Quote:


> Originally Posted by *mtcn77*
> 
> I don't think so. Those longer bars are not as significant as implied. In fact, memory compression is only 40% as important as actual bandwidth as both vendors use the same patents and recycle them to give a new flair. AMD says it straight forward while Nvidia add a few inverse "special effects" into the jargon.
> Take for instance:
> 
> You can literally compare the theoretical ideals and find the same benefit. +40% gain is the same as -29% bandwidth reduction, only reciprocally(1.4ˆ-1≈0.71≈100%-*29%*). Which also means, the shorter bars are 2.5x times more vital than compression values. If we take this info into account,
> 
> 290X(The Asus DCIIOC variant with 1050/1350) is 263GB/s,
> 980 is 218GB/s(60/40 actual/compressed split),
> 980Ti is 286GB/s.
> and Fury X is 355GB/s.


Color compression only affects colored textures in the test above. I'm not even looking at those figures. I'm looking at straight up random texture figures.

In that scenario Fiji, Hawaii, Grenada all have 64 ROps. Hawaii and Grenada are held back from fully utilizing their ROps by memory bandwidth whereas Fiji is held back from fully utilizing its memory bandwidth by having only 64 ROps.

Hawaii was memory bandwidth bottlenecked, just as Tech Report stated, rather than ROP bound. Fiji, on the other hand, is ROP bound rather than memory bandwidth bottlenecked.


----------



## mirzet1976

Quote:


> Originally Posted by *infranoia*
> 
> Remember, Mantle didn't give much back to those of us with fast processors either. Thief was about a smidge faster (~5%) going from DX11 to Mantle on this 4770K at 4.7GHz. We're seeing the same thing here.
> 
> *AMD procs are going to show a higher gain DX11 to DX12 than these i7s do.*


Yes, but in the min FPS boost. FX + R9 290,
left: DX - right: Mantle


----------



## mtcn77

Quote:


> Originally Posted by *Mahigan*
> 
> Color compression only affects colored textures in the test above. I'm not even looking at those figures. I'm looking at straight up random texture figures.
> 
> In that scenario Fiji, Hawaii, Grenada all have 64 ROps. Hawaii and Grenada are held back from fully utilizing their ROps by memory bandwidth whereas Fiji is held back from fully utilizing its memory bandwidth by having only 64 ROps.
> 
> Hawaii was memory bandwidth bottlenecked, just as Tech Report stated, rather than ROp bound. Fiji, on the other hand, is ROp bound rather than memory bandwidth bottlenecked


May I ask which figures are those, except the one you posted?


----------



## OneB1t

For me as an FX user it's awesome, as DX12 uses all 8 cores.

This is a 4GHz FX vs a 4GHz Skylake (we can ignore turbo as it's a multithreaded load).
My FX-8320 will probably do a lot better, as I have APM disabled, CPU-NB at 2600MHz, PCI-E at 150MHz, and other tweaks...


----------



## Mahigan

Quote:


> Originally Posted by *mtcn77*
> 
> May I ask which figures are those, except the one you posted?


The figures I posted have both. Top bar is colored textures, bottom bar is random textures. I'm only looking at the random textures (bottom bar).

What you posted in response only affects the top bar (Color Compression).


----------



## TFL Replica

Quote:


> Originally Posted by *zealord*
> 
> are you playing DX12 in (borderless) windowed and DX11 in Fullscreen?


I'm referring to the DX11 vs DX12 side-by-side comparison video that was posted here. If it's borderless window vs fullscreen, then I guess that could be why.


----------



## mtcn77

Quote:


> Originally Posted by *Mahigan*
> 
> The figures I posted have both. Top bar is colored textures, bottom bar is random textures. I'm only looking at the random textures (bottom bar).
> 
> What you posted in response only affects the top bar (Color Compression).


I think we have a bit of a misunderstanding there.
I take both colour and random texture ROP performances relative to their theoretical maximums, as far as the representations allow, because it isn't selectable. Hawaii does not have memory compression, but that is not a handicap. It is almost neck and neck with big Maxwell, as the latest benchmarks show in full swing.


----------



## infranoia

Quote:


> Originally Posted by *mirzet1976*
> 
> Yes but in min FPS boost . FX+R9 290
> left DX - right Mantle


...on an FX-- yep. On Thief DX11 / Mantle I look like this:

Min FPS 46.5 / 45.7 : 2% worse Min FPS
Max FPS 60.7 / 83.7 : 38% better Max FPS
Average FPS 56.0 / 58.4 : 4.3% better Average FPS

Max is bumped, but no real benefits for Mantle over DX11 on a fast Intel. This is pretty similar to the Hitman results for minimums and averages.


----------



## Mahigan

Quote:


> Originally Posted by *mtcn77*
> 
> I think we have a bit of a misunderstanding there.
> I take both colour and random texture rop performances according to their theoretical maximums as representations allow because it isn't electable. Hawaii does not have memory compression, but that is not a handicap. It is almost neck and neck with big Maxwell which the latest benchmarks show in full swing.


Oh, I know that, but I'm not comparing AMD to NV, simply AMD to AMD, because folks were wondering why the R9 290X wasn't benefitting as much as a 390X in Hitman DX11 vs DX12. I suggested the memory bandwidth difference between the two as the likely culprit. DX11 Hitman isn't that CPU intensive to begin with (hence why AMD cards perform well relative to NVIDIA even in DX11). So I hypothesised that the R9 290X wasn't gaining much from DX12 and async compute due to a memory bandwidth bottleneck holding back its ROP performance.

Infranoia tested this by flashing his 290X into a 390X and clocking up his memory, and voila... that was the issue.

So rather than this being about AMD gimping the 200 series, it's more about memory bandwidth holding back ROP performance.

If we were to look at the NVIDIA cards, which I wasn't, you'd see the same pattern under Hitman, which matches those figures. A 390X matches a GTX 980 Ti.

Seems that Hitman might be memory bandwidth bound relative to how it makes use of ROPs.
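A back-of-the-envelope version of that ROP-vs-bandwidth argument (color writes only, no blending or Z, so it is a loose lower bound; clocks are approximate):

```python
# Bandwidth needed just to write out what the ROPs can fill per second.
def fill_bw_gbs(rops, core_mhz, bytes_per_pixel=4):
    # pixels/s = rops * clock; bytes/s = pixels/s * bytes_per_pixel
    return rops * core_mhz * bytes_per_pixel / 1000.0

# Hawaii-like: 64 ROPs at ~1000 MHz want ~256 GB/s for color writes
# alone, uncomfortably close to its ~320 GB/s raw bandwidth.
hawaii_need = fill_bw_gbs(64, 1000)   # 256.0 GB/s
# Fiji-like: the same 64 ROPs at ~1050 MHz against ~512 GB/s of HBM
# leave bandwidth to spare, so the ROPs themselves become the limit.
fiji_need = fill_bw_gbs(64, 1050)     # 268.8 GB/s
```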


----------



## ZealotKi11er

Quote:


> Originally Posted by *Mahigan*
> 
> The figures I posted have both. Top bar is colored textures, bottom bar is random textures. I'm only looking at the random textures (bottom bar).
> 
> What you posted in response only affects the top bar (Color Compression).


You'd like to see the 390X in there: 1500MHz vs the 290X's 1250MHz, plus better timings. That's a 20% increase in theoretical bandwidth.
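For reference, the 20% figure falls straight out of the GDDR5 bandwidth formula, since both Hawaii cards share a 512-bit bus:

```python
# GDDR5 moves 4 bits per pin per memory clock; divide by 8 for bytes
# and by 1000 for GB/s.
def gddr5_bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    return mem_clock_mhz * 4 * bus_width_bits / 8 / 1000

r9_290x = gddr5_bandwidth_gbs(1250, 512)  # 320.0 GB/s
r9_390x = gddr5_bandwidth_gbs(1500, 512)  # 384.0 GB/s
print(round(r9_390x / r9_290x - 1, 2))    # 0.2 -> the quoted 20%
```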


----------



## infranoia

Quote:


> Originally Posted by *infranoia*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Forceman*
> 
> Can you check again at 1440p and see if it makes a difference there also? I can do 1500 but it artifacts.
> 
> 
> 
> Can't do it. I'm locked to 60FPS in fullscreen in any Hitman API, and as you found yourself, windowed mode is only 1080.

Running the Thief bench again after so many years gave me the clue I needed. Exclusive Fullscreen is locked to refresh in Thief as well. So just use 'Fullscreen'. And yeah, I should have known that.

290x flashed to 390x, 1100/1500 at 2560 x 1440:

DX 11 to DX12

Code:

7179 frames --> 7428 frames : 3.5% increase
 62.82fps Average --> 65.04fps Average : 3.5% FPS increase
 11.88fps Min --> 10.47fps Min : 12% worse minimum FPS
202.28fps Max --> 173.61fps Max : 14% worse maximum FPS
 15.92ms Average --> 15.37ms : 3.5% better average frametime
  4.94ms Min --> 5.76ms Min : 17% worse minimum frametime
 84.18ms Max --> 95.49ms Max : 14% worse maximum frametime

Not as much of a boost between APIs at 1440p.
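The percentages in that dump can be reproduced mechanically from the raw figures, e.g.:

```python
def delta_pct(old, new):
    # signed percent change going from the DX11 figure to the DX12 one
    return (new / old - 1.0) * 100.0

print(round(delta_pct(62.82, 65.04), 1))    # 3.5   (average FPS)
print(round(delta_pct(11.88, 10.47), 1))    # -11.9 (the ~12% worse min)
print(round(delta_pct(202.28, 173.61), 1))  # -14.2 (the ~14% worse max)
```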


----------



## PontiacGTX

Quote:


> Originally Posted by *Mahigan*
> 
> Cool,
> 
> That jiives with my last post above.
> 
> And FuryX doesn't gain much more because it becomes ROps bound.


What about lower-end/midrange cards like the 265/270X/370? Those would be hitting a bandwidth wall, wouldn't they? And aren't the HD 7970/280X and the 380X bottlenecked at all?
Quote:


> Originally Posted by *infranoia*
> 
> Remember, Mantle didn't give much back to those of us with fast processors either. Thief was about a smidge faster (~5%) going from DX11 to Mantle on this 4770K at 4.7GHz. We're seeing the same thing here.
> 
> AMD procs are going to show a higher gain DX11 to DX12 than these i7s do.


But if you are using multiple GPUs you could get a boost, no? Or is the frame queue not an issue with multiple GPUs?
Quote:


> Originally Posted by *infranoia*
> 
> Running the Thief bench again after so many years gave me the clue I needed. Exclusive Fullscreen is locked to refresh in Thief as well. So just use 'Fullscreen'. And yeah, I should have known that.
> 
> 290x flashed to 390x, 1100/1500 at 2560 x 1440:
> 
> DX 11 to DX12
> 
> Code:
> 
> 7179 frames --> 7428 frames : 3.5% increase
> 62.82fps Average --> 65.04fps Average : 3.5% FPS increase
> 11.88fps Min --> 10.47fps Min : 12% worse minimum FPS
> 202.28fps Max --> 173.61fps Max : 14% worse maximum FPS
> 15.92ms Average --> 15.37ms : 3.5% better average frametime
> 4.94ms Min --> 5.76ms Min : 17% worse minimum frametime
> 84.18ms Max --> 95.49ms Max : 14% worse maximum frametime
> 
> Not as much of a boost between APIs at 1440p.


1440p is far from CPU bound


----------



## diggiddi

Quote:


> Originally Posted by *Defoler*
> 
> Please try to troll less.
> 
> The definition of what is rebrand and what is a refresh are pretty clear and they are in a way as I stated earlier.
> 
> Both a refresh and a rebrand can both be done with updated things.
> For example the 4790K is a haswell refresh, because it is using the same brand series, "haswell". If they called it "shazwell" instead, it would have been a rebrand because they used a new name to give it a new meaning, a new brand, *new*.
> 
> Re*brand* and refresh are not because of the product is new or not updated, but because of the *brand*. R9 300 series is a *brand*. Learn the difference.
> 
> Similar product, new brand = rebrand.
> Similar product, same brand = refresh.
> 
> And in your example, since they are using the same core, and B(390X) is running faster in both core and memory compared to A(290X), and it staking 8-10% more power on some reviews (depends on the version of the cards), it looks like the same card, only a bit polished. It could be a refresh, if they used the same *brand* name, as in 200 series. But they did not.


As per this definition it is both a rebrand and a refresh


----------



## infranoia

Quote:


> Originally Posted by *PontiacGTX*
> 
> 1440p is far from CPU bound


I agree. And I think between Hitman and Thief we're seeing games that have little to no Asynchronous Compute in them. If Hitman did have ASC, we'd see a wider spread for GPU-bound scenarios like the above.

Contrast this with AotS, which does have it, and shows a marked improvement between APIs on fast processors.

We saw this coming, by the way. FPS is not the showcase for ASC. RTS games are tailor-made for massively parallel tasks.

But then, look at me come up with a theory based on a whacked benchmark.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> I agree. And I think between Hitman and Thief we're seeing games that have little to no Asynchronous Compute in them. If Hitman did have ASC, we'd see a wider spread for GPU-bound scenarios like the above.


Except for this quote from AMD, which says Hitman has the best use of async yet.
Quote:


> Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs-called asynchronous compute engines-to handle heavier workloads and better image quality without compromising performance. PC gamers may have heard of asynchronous compute already, and Hitman demonstrates the best implementation of this exciting technology yet. By unlocking performance in GPUs and processors that couldn't be touched in DirectX 11, gamers can get new performance out of the hardware they already own.


----------



## infranoia

Eeeeeyeahhh. Well...

Show me, AMD, where that matters to *me*.


----------



## PontiacGTX

Quote:


> Originally Posted by *Forceman*
> 
> Except for this quote from AMD, which says Hitman has the best use of async yet.


If it were async compute heavy, then the 2 ACEs on the 280X would be hitting some wall, no?


----------



## Forceman

Quote:


> Originally Posted by *PontiacGTX*
> 
> if it were async compute heavy then 2 ACEs on the 280x would be hitting some wall then?


In the computerbase test the 280X had no gain from DX12 at all, even at 1080p, so maybe it is?

Of course, best implementation yet isn't saying much considering the alternatives, but at least it must use some async.


----------



## Petet1990

Anyone getting crashes? Mine crashes on startup, and in the settings I can't even select 4K; it only lets me go up to 1920x1200.


----------



## PontiacGTX

Quote:


> Originally Posted by *Forceman*
> 
> In the computerbase test the 280X had no gain from DX12 at all, even at 1080p, so maybe it is?
> 
> Of course, best implementation yet isn't saying much considering the alternatives, but at least it must use some async.


But it is only getting 40FPS:

http://www.purepc.pl/karty_graficzne/test_wydajnosci_hitman_directx_11_vs_directx_12_killer_optymalizacji?page=0,5

http://www.purepc.pl/karty_graficzne/test_wydajnosci_hitman_directx_11_vs_directx_12_killer_optymalizacji?page=0,6

There is a performance gain at lower resolutions there, so the ComputerBase testing has something wrong; maybe they used the highest quality detail?


----------



## Forceman

Quote:


> Originally Posted by *PontiacGTX*
> 
> but it is only getting 40FPS
> 
> http://www.purepc.pl/karty_graficzne/test_wydajnosci_hitman_directx_11_vs_directx_12_killer_optymalizacji?page=0,5
> 
> http://www.purepc.pl/karty_graficzne/test_wydajnosci_hitman_directx_11_vs_directx_12_killer_optymalizacji?page=0,6
> 
> there is a performance gain on lower resolution then the testing on computerbase has something wrong, maybe using highest quality detail?


I'm seeing no gain for the 280X even at low details (35 to 36 FPS) - am I missing something?

Edit: that site shows the Fury X losing FPS when going to DX12 at high and ultra settings? Seems odd.


----------



## BradleyW

AMD's marketing team ran wild with Hitman's Async feature. If anything, AOTS has the very best DX12 implementation yet.


----------



## Mahigan

Quote:


> Originally Posted by *Forceman*
> 
> I'm seeing no gain for the 280X even at low details (35 to 36 FPS) - am I missing something?
> 
> Edit: that site shows the Fury X losing FPS when going to DX12 at high and ultra settings? Seems odd.


They're probably using a 60Hz panel.


----------



## Forceman

Quote:


> Originally Posted by *Mahigan*
> 
> They're probably using a 60Hz panel.


Their FPS is in the mid-50s, so I doubt that's the issue, especially considering they get the same frame rate in DX11 where there are no Vsync issues.

Edit: And one of the two monitors they list in the platform specs is a 144Hz model (the other is the 4K monitor).

Starting to wonder if Ashes is the outlier, and not a good indicator of what DX12 is going to be in practice, at least in the near-term.


----------



## infranoia

Quote:


> Originally Posted by *Forceman*
> 
> Their FPS is in the mid-50s, so I doubt that's the issue, especially considering they get the same frame rate in DX11 where there are no Vsync issues.


There are Vsync issues in exclusive fullscreen. It's locked to refresh regardless of the Vsync setting, even for DX11, as I show in that first bench above. "Fullscreen" is a borderless window and shows the unlocked averages, so hopefully that's the mode they benched in.









EDIT: Probably just me, as PCG.de seemed to be able to run Hawaii full-bore in exclusive fullscreen mode.


----------



## diggiddi

Quote:


> Originally Posted by *OneB1t*
> 
> for me as FX user its awesome as DX12 using all 8 cores
> 
> this is 4ghz fx vs 4ghz skylake (we can ignore turbo as its multithread load)
> my fx-8320 will probably do alot better as i have APM disabled, CPU-NB to 2600mhz, PCI-E to 150mhz and other tweaks...


What are these tweaks you speak of? I thought PCI-E overclocking was a surefire way to corrupt your OS?


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> There are Vsync issues in exclusive fullscreen. It's locked to refresh regardless of Vsync setting, even for DX11, as I show in that first bench above. Fullscreen is borderless window and shows the unlocked averages, so hopefully that's the mode they benched in..
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT: Probably just me, as PCG.de seemed to be able to run Hawaii full-bore in exclusive fullscreen mode.


That should only be an issue if you are above refresh, and since they used a 144Hz monitor (and were below 60 FPS anyway) I don't see how it caused any problems. I get the same results in both fullscreen and exclusive fullscreen on my 96Hz monitor, although that is with a 290X which didn't seem to have issues like the Fury.

But at 45~55 FPS Vsync shouldn't be the cause of any problems (although it could be). Ashes ran internally at full speed and just dropped the extra frames, which should mean the internal benchmark is still good.

In any case, Hitman doesn't appear to be the DX12 showcase some people were hyping it to be.


----------



## Majin SSJ Eric

I think you guys are just going to have to accept that it is simply much too early to start trying to make concrete assumptions about DX12 and how much of an advantage we will end up getting over DX11 with various cards. There is just no where near enough data available to make meaningful conclusions and that probably won't change for another year or so.


----------



## mtcn77

Quote:


> Originally Posted by *ZealotKi11er*
> 
> You like to see 390X in there. 1500MHz vs 1250MHz of 290X and better timings. That's 20% increase in theoretical bandwidth.


The 290X in this instance is running 1350MHz ICs; it's the Asus DCII OC model. I think any one of us can run the tests ourselves - is the Beyond3D Test Suite that exclusive?


----------



## Stige

Quote:


> Originally Posted by *BradleyW*
> 
> I've read the game is locked to 60 FPS unless you run Window mode (effects Fury and 300 series). That "might" have something to do with it
> .
> More info here:
> http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


Mine is not locked at fullscreen DX12.

If you have a refresh rate higher than 60, the FPS is unlocked.


----------



## sblantipodi

Is there any reason to use DX12 now?
As far as I can see it runs slower, it doesn't support SLI, and there are no other improvements.


----------



## ku4eto

Quote:


> Originally Posted by *sblantipodi*
> 
> is there any reason on using DX12 now?
> as far as I can see it runs slower, it doesn't support SLI and there is no other improvements.


Slower... not really, it's just a bit faster. I suspect there could be an image quality difference between DX11 and DX12; supposedly there are more effects under DX12 mode.


----------



## Kana-Maru

Quote:


> Originally Posted by *sblantipodi*
> 
> is there any reason on using DX12 now?
> as far as I can see it runs slower, it doesn't support SLI and there is no other improvements.


I'm seeing gains in DX12.


----------



## Defoler

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think you guys are just going to have to accept that it is simply much too early to start trying to make concrete assumptions about DX12 and how much of an advantage we will end up getting over DX11 with various cards. There is just no where near enough data available to make meaningful conclusions and that probably won't change for another year or so.


One of the issues is that there is too fast a push to DX12 before all the problems have been sorted out.

Even DX11 took a couple of years to enter AAA games after it became fully available to developers. Both AMD and Nvidia took very careful steps to make sure everything was ready.
DX12 is being pushed out just a year after the API became fully available, and I think they are really not working on polishing it; instead AMD and Microsoft are pushing it too fast (AMD in order to sell their cards, and Microsoft to push Windows 10 and Xbox/Windows cross-platform to fight Sony).

I hope it will not make the polishing and bug fixing too slow if they are more concerned about putting games out fast instead of putting them out well.
Anyway, I would wait at least another year to see how DX12 comes along. Right now there are too many issues to say whether it is actually better.


----------



## CalinTM

No doubt Hitman is an AMD-seller game; I am sure it was made to show these results on purpose, just to sell the DX12 idea. So almost all the FPS improvements come from CPU optimizations in DX12? I see roughly the same FPS difference as in Hitman with my 980 in Rise of the Tomb Raider: 6-10 FPS more in certain situations, even more in others, but that's the average I get. And that comes strictly from the CPU side?


----------



## Charcharo

Quote:


> Originally Posted by *Shogon*
> 
> Really depressing seeing so many people buying this game over Ashes. Maybe the cheap price is the reason for that, or people just don't like RTS games.


Strategy games are niche









It is sad that PC gaming's greatest genre, and the only thing gaming has with no direct and superior alternative in other media, is in this position.

Such is life though. It's one of the reasons I am so jaded about gaming and PC gamers these days.


----------



## Kana-Maru

I've updated my Hitman Fury X review with more info and an optimization guide that should help some people enjoy the experience.
http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks

I've also uploaded my Tomb Raider DX11 \ DX12 benchmarks:
http://www.overclock-and-game.com/news/pc-gaming/43-rise-of-the-tomb-raider-fury-x-benchmarks


----------



## mtcn77

One thing I will say: Ashes has fantastic sprites(I like the blue one), I hope they will deliver further on that front. Just copy paste Tiberian Sun guys - better yet, just reform WestWood Studios with all their developers. I like deformable ground, too. Voxels, please! RTS games deserve that.


----------



## sblantipodi

Quote:


> Originally Posted by *Kana-Maru*
> 
> I've updated my Hitman Fury X review with more info and a optimization guide that should help some people enjoy the experience.
> http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks
> 
> I've also uploaded my Tomb Raider DX11 \ DX12 benchmarks:
> http://www.overclock-and-game.com/news/pc-gaming/43-rise-of-the-tomb-raider-fury-x-benchmarks


The Tomb Raider benchmark results vary too much from one run to another to be used to show the difference between DX11 and DX12.


----------



## Kana-Maru

Quote:


> Originally Posted by *sblantipodi*
> 
> tomb benchmark results varies too much from a run to another to be used as a benchmark to show difference between DX11 and DX12


Did you even read my article????

I also run my own in-game benchmarks anyway. They will be coming up soon, when I get more time to go through all of the data.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> I've updated my Hitman Fury X review with more info and a optimization guide that should help some people enjoy the experience.
> http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks
> 
> I've also uploaded my Tomb Raider DX11 \ DX12 benchmarks:
> http://www.overclock-and-game.com/news/pc-gaming/43-rise-of-the-tomb-raider-fury-x-benchmarks


+1 Thank you for your benchmarks.


----------



## raghu78

http://gamegpu.com/action-/-fps-/-tps/hitman-v-directx-12-test-gpu.html








Gamegpu results are out. The DX11 version is faster than DX12, which means there is still room for improvement in the DX12 version. Somehow, other than AOTS, no game has utilized DX12 to the fullest. The AOTS engine was written from the ground up for low-level APIs like DX12/Mantle, and that shows in the performance benchmarks and multithreaded scaling.


----------



## ku4eto

Man, even on DX11 the 970 looks bad. I wonder if people are regretting their purchases.


----------



## Assirra

Some Nvidia cards are getting less than half the framerate in DX12.
What in the world?


----------



## Olivon

Quote:


> Originally Posted by *ku4eto*
> 
> Man, even on DX11, the 970 looks bad. Wonder if people are regretting their purchases.


Lawl. The 970 doesn't perform well in one game and people need to regret it? You're funny.









Maybe a driver update can improve the situation, no?


----------



## PontiacGTX

Quote:


> Originally Posted by *raghu78*
> 
> http://gamegpu.com/action-/-fps-/-tps/hitman-v-directx-12-test-gpu.html
> 
> 
> 
> 
> 
> 
> 
> 
> Gamegpu results are out. DX11 version is faster than DX12. This means there is still room for improvement in DX12 version. Somehow other than AOTS no game has utilized DX12 to the fullest. AOTS engine was written from the ground up for low level APIs like DX12/Mantle and thats seen in the performance benchmarks and multithread perf scaling


Testing CPU scaling on Nvidia under DX12, when their driver has the higher overhead:
http://gamegpu.com/images/stories/Test_GPU/Action/Hitman_Beta/Hitman_в_DirectX_12_/test/hit_proz12.jpg


----------



## magnek

Quote:


> Originally Posted by *PontiacGTX*
> 
> Testing cpu scaling on nvidia while using DX12....fail
> http://gamegpu.com/images/stories/Test_GPU/Action/Hitman_Beta/Hitman_в_DirectX_12_/test/hit_proz12.jpg


The 2600K has worse minimums than an i3-4330









Edit: They left the 2600K at stock, but it still shouldn't be single-thread limited, not under DX12 anyway...


----------



## BradleyW

290X gets a big frame rate reduction in DX12 mode.


----------



## PontiacGTX

Quote:


> Originally Posted by *BradleyW*
> 
> 290X gets a big frame rate reduction in DX12 mode.


something is wrong with their testing


----------



## zealord

Quote:


> Originally Posted by *PontiacGTX*
> 
> something is wrong with their testing


yeah. Maybe they confused DX11/DX12 ?

It should be the other way around.

What the hell is going on


----------



## BradleyW

Quote:


> Originally Posted by *PontiacGTX*
> 
> something is wrong with their testing


Something's not right. Could be their render target reuse setting. Even the Fury X and 980 Ti see large decreases in FPS when moving over to DX12.
Quote:


> Originally Posted by *zealord*
> 
> yeah. Maybe they confused DX11/DX12 ?
> It should be the other way around.
> What the hell is going on


Someone posted a DX11 vs DX12 video on here a few pages back. It also showed significantly lower FPS in DX12 mode on the Fury X in large crowded areas of the game, so maybe their testing simply reflects what is really going on. I'm not sure; I've yet to test myself.


----------



## HexagonRabbit

I'm using a 9590 and a Nano under water. I can't wait to give this a run and see what my numbers look like.


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> +1 Thank you for your benchmarks.


Thanks. I will be adding more data for both games soon.

Quote:


> Originally Posted by *raghu78*
> 
> http://gamegpu.com/action-/-fps-/-tps/hitman-v-directx-12-test-gpu.html
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Gamegpu results are out. DX11 version is faster than DX12. This means there is still room for improvement in DX12 version. Somehow other than AOTS no game has utilized DX12 to the fullest. AOTS engine was written from the ground up for low level APIs like DX12/Mantle and thats seen in the performance benchmarks and multithread perf scaling


Yeah.................I'll be running my own benchmarks to see what I get for comparison.


----------



## infranoia

I bet they used the "CPU" output in DX11 (which is the only bench output), and the "GPU" output in DX12 bench (and not the "CPU" output).

Doing this will show a huge regression across the board. You have to use both "CPU" outputs, as that's the one that matches the live output up in the corner of the bench.

I have no idea what that DX12 "GPU" section is all about, but the numbers there are always miserable.


----------



## OneB1t

Quote:


> Originally Posted by *HexagonRabbit*
> 
> I'm using a 9590 and a nano under water. I cant wait to give this a run and see what my numbers look like.


For you the increase will be massive (like 60-70%).


----------



## BradleyW

Steam is downloading a 230mb patch on my system for Hitman.


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> Steam is downloading a 230mb patch on my system for Hitman.



I tried finding info on the patch, but there's nothing official so far from the devs.

Quote:


> Originally Posted by *raghu78*
> 
> http://gamegpu.com/action-/-fps-/-tps/hitman-v-directx-12-test-gpu.html
> 
> 


Ok I've ran my benchmark and uploaded the results. So let's see, they didn't use the internal benchmark, instead they used in-game benchmarks.

So they are running a:
i7 -5960X @ 4.6Ghz - 3rd Generation X99 + DDR4 2666Mhz

and I"m running a:
X5660 @ 4.6Ghz - 1st Generation X58 [2008 tech] + DDR3-1600Mhz

My results are here, at the bottom of the page:
http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks?showall=&start=2

Gamegpu only got *64fps average*. Someone tell me why I got *89fps average*. They are clearly benchmarking the game wrong or something isn't compatible with DX12. It could be that they don't know what they are doing with this game in particular.


----------



## Glottis

Quote:


> Originally Posted by *ku4eto*
> 
> Man, even on DX11, the 970 looks bad. Wonder if people are regretting their purchases.


Why would people regret buying one of the best bang-for-buck GPUs just because one trashy, unoptimized game runs badly?


----------



## infranoia

Patch does nothing for performance with this Haswell; I'm showing a 1.7% delta between DX11 and DX12 at 1440p: 56.64 fps vs. 57.59 fps average (both windowed and exclusive fullscreen land between 56 and 57 fps). Min/max and frametimes all regress from DX11 to DX12, but averages are just slightly better.
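(For anyone checking my math, that 1.7% is just the relative change between the two averages; quick Python sanity check:)

```python
# Relative change between the DX11 and DX12 average framerates quoted above.
dx11_avg = 56.64  # fps average under DX11
dx12_avg = 57.59  # fps average under DX12

delta_pct = (dx12_avg - dx11_avg) / dx11_avg * 100
print(f"DX12 is {delta_pct:.1f}% faster on average")  # → 1.7%
```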

Honestly, it's a wash.

However, DX12 now works with HDMI in Exclusive Fullscreen mode.


----------



## Stige

Quote:


> Originally Posted by *Glottis*
> 
> Why would people regret buying one of the best bang-for-buck GPUs just because one trashy, unoptimized game runs badly?


Because it doesn't have proper DX12 support and is inferior to the 390 today. It might have been on par with it at launch but that is long in the past.


----------



## ku4eto

Quote:


> Originally Posted by *Glottis*
> 
> Why would people regret buying one of the best bang-for-buck GPUs just because one trashy, unoptimized game runs badly?


Best?







You made me lol. I guess you haven't heard about the R9 290/X cards?


----------



## HexagonRabbit

Granted, I'm on my phone, but did I see this game for $14 on Steam? WTH?


----------



## LAKEINTEL

no, that's not a bug


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *HexagonRabbit*
> 
> Granted i'm on my phone but did I see this game for $14 on steam? WTH?


Correct, for a starter pack. Looks like maybe a level or two.

Quote:


> Originally Posted by *LAKEINTEL*
> 
> no, that's not a bug


FALSE// see above


----------



## BradleyW

Quote:


> Originally Posted by *HexagonRabbit*
> 
> Granted i'm on my phone but did I see this game for $14 on steam? WTH?


Is that for the starter pack only?


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *BradleyW*
> 
> Quote:
> 
> 
> 
> Originally Posted by *HexagonRabbit*
> 
> Granted i'm on my phone but did I see this game for $14 on steam? WTH?
> 
> 
> 
> Is that for the starter pack only?

yes


----------



## BradleyW

Quote:


> Originally Posted by *F3ERS 2 ASH3S*
> 
> yes


Cool.


----------



## LAKEINTEL

I should've been more specific


----------



## Glottis

Quote:


> Originally Posted by *ku4eto*
> 
> Best?
> 
> 
> 
> 
> 
> 
> 
> you made me lol. I guess you haven't heard about the R9 290/x cards?


Don't twist my words, I said "one of the best". And yes, the 970 is widely regarded as one of the best price/performance GPUs (numerous publications back my claim). Not sure what's so funny about that?


----------



## looniam

Quote:


> Originally Posted by *Glottis*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ku4eto*
> 
> Best?
> 
> 
> 
> 
> 
> 
> 
> you made me lol. I guess you haven't heard about the R9 290/x cards?
> 
> 
> 
> don't twist my words, i said "one of the best". and yes, 970 is widely regarded as one of the best price/performance GPUs (numerous publications back my claim). not sure what's so funny about that?

People like to forget that the 290/X dropped ~$140 in price because of the 970. So instead of saying thank you, they laugh.


----------



## ZealotKi11er

Quote:


> Originally Posted by *looniam*
> 
> people like to forget that the 290/X dropped ~$140 in price because of the 970. so instead of saying thank you, they laugh.


After the mining craze 290/290X could be had for well under $300. Nobody was buying new 290/290X back then.


----------



## OneB1t

I'm sad that I bought just one 290X back then.


----------



## PontiacGTX

Quote:


> Originally Posted by *OneB1t*
> 
> im sad that i bought just one 290X back then


When there was a $1k Titan?


----------



## looniam

Quote:


> Originally Posted by *ZealotKi11er*
> 
> After the mining craze 290/290X could be had for well under $300. Nobody was buying new 290/290X back then.


Still doesn't change the fact that the MSRP dropped considerably:

http://www.techpowerup.com/reviews/EVGA/GTX_970_SC_ACX_Cooler/
Point taken, but not everyone bought used cards on a forum, especially considering the warranties didn't transfer.

e:
If you want to talk about the mining craze, then how about when they were twice that price? But no, that wouldn't be right, huh?


----------



## Remij

Damn, AMD is doing really well in this game, even in DX11. This goes along with what I've been saying about AMD just needing to invest more in working with devs on their platform.

Still though, it's disappointing to see DX11 outpace DX12... but it's still very early days.


----------



## ZealotKi11er

Quote:


> Originally Posted by *looniam*
> 
> still doesn't change that the MSRP dropped considerably:
> 
> http://www.techpowerup.com/reviews/EVGA/GTX_970_SC_ACX_Cooler/
> point taken but not everyone bought used cards on a forum, esp. considering if the warranties didn't transfer.
> 
> e:
> if you want to talk mining craze than how about when they were twice that price but no, that wouldn't be right, huh?


All very true, sir, but my point still stands, and it's a reflection of why we have an 80/20 market split.

290X/290 were low in stock during the mining craze and double the price, so gamers would not buy them.
After the mining craze was over, used cards were very cheap, but people did not trust them, and at the same time they were not willing to pay double for new parts. All this confusion made the GTX 970 the super simple and easy option. If only people had known about the 3.5GB thing back then.


----------



## looniam

Quote:


> Originally Posted by *ZealotKi11er*
> 
> All very true, sir, but my point still stands, and it's a reflection of why we have an 80/20 market split.
> 
> 290X/290 were low in stock during the mining craze and double the price, so gamers would not buy them.
> After the mining craze was over, used cards were very cheap, but people did not trust them, and at the same time they were not willing to pay double for new parts. All this confusion made the GTX 970 the super simple and easy option. If only people had known about the 3.5GB thing back then.


Oh c'mon now, how many other variables are you going to toss in?

Both the mining craze and the used market after it had no bearing on the *MSRP*, which changed about a quarter after the 970 release and a full half-year before the 390s.


----------



## BradleyW

Launched the game through DX12. Stood on Heli Pad. 60Hz is being forced. I can only get 144Hz to work in "Windowed Mode". Same goes for DX11. How do I get 144Hz to work on exclusive full screen?

Someone asked the same on the Hitman forum. They were told to edit it via an XML file which does not even exist!
http://www.hitmanforum.com/t/refresh-rate-setting/4538


----------



## tweezlednutball

I can't even get DX12 to launch... :'(


----------



## Kana-Maru

Quote:


> Originally Posted by *looniam*
> 
> oh c'mon now, how many other variables are you going to toss in?
> 
> both the mining craze and the used market after had no bearing on the *MSRP,* that changed ~quarter after the 970 release and a full half before the 390s.


Quote:


> Originally Posted by *BradleyW*
> 
> Launched the game through DX12. Stood on Heli Pad. 60Hz is being forced. I can only get 144Hz to work in "Windowed Mode". Same goes for DX11. How do I get 144Hz to work on exclusive full screen?
> 
> Someone asked the same on the Hitman forum. They were told to edit it via an XML file which does not even exist!
> http://www.hitmanforum.com/t/refresh-rate-setting/4538


It does exist. I had no issues finding it.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> It does exist. I had no issues finding it.


That's odd. Here are my game files.



I've searched each folder.


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> That's odd. Here are my game files.
> 
> 
> 
> I've searched each folder.


Here's mine:


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> Here's mine:


Well that's odd! And does your 16.3 Crimson see Hitman in the Game Profile section?


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> Well that's odd! And does Crimson see Hitman in the Game Profile section?


Yes, but I never use those profiles. I delete them. I use global default settings across my games with Textures set to "High" for the best IQ.

Here's a picture



I guess I'll try to use the profile to see if it makes a difference.

I haven't messed around with the XML file at all and I'm getting great results.


----------



## BradleyW

For some reason my game refuses to create the file. No idea why. I've just done a fresh format too! Not happy. Also, CFX won't kick in for DX11 or DX12 mode.


----------



## Stige

Quote:


> Originally Posted by *BradleyW*
> 
> Is that for the starter pack only?


You can get the whole game on India Steam for 13,40€.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *looniam*
> 
> people like to forget that the 290/X dropped ~$140 in price because of the 970. so instead of saying thank you, they laugh.


Lol, "thank" Nvidia eh? AMD would have lowered the price of the 290/X regardless of how the 970 performed simply because it was a year newer card and they were already getting killed in market share. None of that is even relevant anyway because it doesn't change the fact that for the last year or so the 290/X has easily been the real price/performance champ in all sectors. In fact, as time goes on the Hawaii cards continue to get better and better while early Maxwell cards look worse and worse...


----------



## tweezlednutball

Quote:


> Originally Posted by *BradleyW*
> 
> For some reason my game refuses to make such file. No idea why. I have just made a fresh format too! Not happy. Also CFX won't kick in for DX11 or DX12 mode.


Yeah, my CrossFire won't work in DX11 either...


----------



## tweezlednutball

Quote:


> Originally Posted by *tweezlednutball*
> 
> yea my crossfire wont work in dx11 either...


NVM, just fixed it. I installed this and it's working in DX12 now. Still only loading one GPU, though... https://www.microsoft.com/en-us/download/details.aspx?id=48145


----------



## looniam

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Lol, "thank" Nvidia eh? AMD would have lowered the price of the 290/X regardless of how the 970 performed simply because it was a year newer card and they were already getting killed in market share. None of that is even relevant anyway because it doesn't change the fact that for the last year or so the 290/X has easily been the real price/performance champ in all sectors. In fact, as time goes on the Hawaii cards continue to get better and better while early Maxwell cards look worse and worse...


ya makes total sense to drop prices while you're losing money.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *looniam*
> 
> ya makes total sense to drop prices while you're losing money.


Um, it does when your product is a year old and your competitor is releasing a new product.


----------



## SuperZan

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Um, it does when your product is a year old and your competitor is releasing a new product.


And that coincides with what @ZealotKi11er was saying about the mining craze. AMD had to price against an abnormally high level of resale activity whilst simultaneously preparing for a new Nvidia product launch in the price segment. It's just not as simple as saying "thanks Nvidia!".


----------



## looniam

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Um, it does when your product is a year old and your competitor is releasing a new product.


Which brings you right back to adjusting the price because of a competitor.


----------



## sblantipodi

I can't understand why they added DX12 if they don't use a single DX12 feature; done this way, it brings no improvement.


----------



## spyshagg

Quote:


> Originally Posted by *Stige*
> 
> You can get the whole game on India Steam for 13,40€.


how? I would like that


----------



## Assirra

Quote:


> Originally Posted by *spyshagg*
> 
> how? I would like that


I assume a VPN from India so steam thinks you are from there.


----------



## p00q

DX12 is stuttery at higher settings, even at 1080p. It just loads up the dynamic memory, so most likely it's caching even though it's not supposed to. DX11 is much smoother because it keeps that under control. Lower the settings that affect vRAM usage and DX12 is at least just as smooth, but faster.

Now, I can't say if AMD is gimping the cards on purpose, or if it knows something, given that it equipped the R9 390(X) series with 8GB of vRAM. Gimping the older cards (R9 290(X) and the rest) just to do extra work on the Fury(X)? That doesn't make much sense. Worth mentioning that Johan Andersson did say that WDDM 2.0 would allow better memory optimization than what we currently have, and BF4 under Mantle was facing similar issues when closing in on the vRAM limit. However, I just tested it today and... it's gone. Not sure if a driver from AMD, an update to Mantle (I doubt it), or the many patches the game received fixed that, but it's much better.
Perhaps someone can test Hitman on an NVIDIA card for the memory aspect, with GPU-Z running in the background, and then look at the maximum values recorded? Bad memory management could also explain the different scaling in Tomb Raider, since that game already had its share of predicaments. Going from High texture settings to Medium did reveal a small increase, and could be a factor in a game that, from what I understood, is quite the memory hog.
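(If anyone wants to try that, here's a rough sketch of the same idea done from a script instead of GPU-Z. It assumes an NVIDIA card with `nvidia-smi` on the PATH; AMD users would use GPU-Z's sensor log instead. The CSV parsing is split out so it can be sanity-checked on a canned line.)

```python
import subprocess
import time

def parse_mem_mib(line):
    # nvidia-smi with --format=csv,noheader,nounits prints a bare number, e.g. "3521"
    return int(line.strip().split(",")[0])

def poll_peak_vram(seconds=60, interval=1.0):
    """Poll dedicated VRAM usage while the game runs; return the peak seen, in MiB."""
    peak = 0
    end = time.time() + seconds
    while time.time() < end:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            text=True)
        peak = max(peak, parse_mem_mib(out))
        time.sleep(interval)
    return peak

# Parser sanity check on a canned sample line:
print(parse_mem_mib("3521\n"))  # → 3521
```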

Anyway, the API, just like Mantle, is promising. Showing an increase at max settings even at 5.54Mpx (1080p is 2.07Mpx and 3440x1440 is 4.95Mpx) is no small task. At the same resolution (5.54Mpx), a jump of 30% @ 3.3GHz in a CPU-bound scenario (low settings) is again a wonderful show of strength.
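(The megapixel figures are just width x height divided by a million, if anyone wants to check:)

```python
# Megapixel counts for the resolutions quoted above.
def megapixels(width, height):
    return width * height / 1e6

print(round(megapixels(1920, 1080), 2))  # 1080p → 2.07
print(round(megapixels(3440, 1440), 2))  # 21:9 1440p → 4.95
```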


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> Yes, but I never use those profiles. I delete them. I use global default settings across my games with Textures set to "High" for the best IQ.
> 
> Here's a picture
> 
> 
> 
> I guess I'll try to use the profile to see if it makes a difference.
> 
> I haven't messed around with the XML file at all and I'm getting great results.


Even my settings won't stick. I can't believe it! Can you upload your XML file?


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *looniam*
> 
> which takes you to adjusting the price due to a competitor


My point was they could've released a potato and AMD would've lowered 290/X pricing. No need to "thank" Nvidia.

It cuts both ways. By your rationale, people should be "thanking" AMD for Nvidia releasing the 970 at $330...


----------



## BradleyW

I'm going crazy here. Still can't get 144Hz to work. Fresh format, RSC 16.3, Hitman in DX11 and DX12 mode, exclusive full screen. Only windowed mode works with 144Hz. 16.3 won't detect Hitman either. CFX also not working.


----------



## OneB1t

use DDU to reinstall drivers


----------



## Olivon

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> My point was they could've released a potato and AMD would've lowered 290/X pricing. No need to "thank" Nvidia.
> 
> It cuts both ways. By your rationale, people should be "thanking" AMD for Nvidia releasing the 970 at $330...


Yep. It was another big present from AMD.
By not renewing the Hawaii chip and going rebrand madness, they handed NVIDIA a gift: GM204 was a total home run, and the GTX 970 is one of the biggest successes in GPU history, if not the biggest.


----------



## BradleyW

Quote:


> Originally Posted by *OneB1t*
> 
> use DDU to reinstall drivers


I did better than that. I did a full format.


----------



## Noufel

Quote:


> Originally Posted by *Olivon*
> 
> Yep. It was another big present from AMD.
> By not renewing the Hawaii chip and going rebrand madness, they handed NVIDIA a gift: GM204 was a total home run, and the GTX 970 is one of the biggest successes in GPU history, if not the biggest.


And you are the biggest Nvidia fan in fanboyism history







just kidding


----------



## looniam

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> My point was they could've released a potato and AMD would've lowered 290/X pricing. No need to "thank" Nvidia.
> 
> It cuts both ways. By your rationale, people should be "thanking" AMD for Nvidia releasing the 970 at $330...


Problem with your point is that the "potato" was ahead of the 290 and within 5% of the 290X at the time, while substantially cheaper. _Both the 290 and 290X already had expected price cuts._

There is no rationale like the one you're suggesting; I'm just presenting the facts.


----------



## stahlhart

Quote:


> Originally Posted by *BradleyW*
> 
> I'm going crazy here. Still can't get 144Hz to work. Fresh format, RSC 16.3, Hitman in DX11 and DX12 mode, exclusive full screen. Only windowed mode works with 144Hz. 16.3 won't detect Hitman either. CFX also not working.


Something that I noticed is that the new Hitman has the same executable name (hitman.exe) as the original Codename 47. When in my travels I ran across a hack in Nvidia Inspector to get SLI working with this game, it was actually through modification of the profile for the original, which I thought was strange -- isn't there at least a slight possibility that someone, somewhere is still playing Codename 47? I know that it wasn't the most well received game back in the day, but still.

By the way, the SLI hack worked for DX11, but crashed the demo for DX12. I can't get RTSS OSD working in DX12 mode anyway, so even if the profile did work I'd have no metrics for an indication as to how well.

I'm only bringing up the executable name issue in case it has something to do with the troubles you are experiencing.


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> I did better than that. I did a full format.


Exclusive fullscreen is locked to refresh regardless of Vsync setting, but @Forceman was able to see >60FPS on his 144Hz monitor. This sounds like an EDID / monitor specific problem-- what is your connection interface? Is it DisplayPort?

I'm also locked to 60 in exclusive fullscreen, but my monitor is 60Hz so that makes some sense. The only way in Hitman to show fully unlocked frame averages is to run in borderless window (aka 'fullscreen').

Because of my weird 290x --> 390x overclock I have to DDU every time I shut down, but Hitman shows right up in my Crimson profile every time.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> Exclusive fullscreen is locked to refresh regardless of Vsync setting, but @Forceman was able to see >60FPS on his 144Hz monitor. This sounds like an EDID / monitor specific problem-- what is your connection interface? Is it DisplayPort?
> 
> I'm also locked to 60 in exclusive fullscreen, but my monitor is 60Hz so that makes some sense. The only way in Hitman to show fully unlocked frame averages is to run in borderless window (aka 'fullscreen').
> 
> Because of my weird 290x --> 390x overclock I have to DDU every time I shut down, but Hitman shows right up in my Crimson profile every time.


I too have unlimited FPS, but the game is running at 60Hz. I can tell this in three ways:
1) Tearing.
2) The monitor tells me so.
3) Enabling Vsync locks the FPS to 60.

I am using Display Port (Mini DP) 1.2.
Crimson 16.3 does not pick up the game profile.
Crimson 16.2.1 does.

144hz will only work in "Window Mode" with DX11/12 on "Crimson 16.3".
144hz will work in "full screen" with "DX11 only" when using "Crimson 16.2.1".

What do you think? EDID or game issue?


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> I too have unlimited FPS, but the game is running at 60 Hz. I know this by three ways:
> 1) Tearing.
> 2) The monitor tells me so.
> 3) Enabling Vsync locks the FPS to 60.


Wait-- what? In Exclusive Fullscreen? So then why do you say
Quote:


> Originally Posted by *BradleyW*
> 
> 60Hz is being forced. I can only get 144Hz to work in "Windowed Mode". Same goes for DX11. How do I get 144Hz to work on exclusive full screen?


The Max FPS is always unlocked, but the Average FPS always clocks in at 60Hz. Like I said before, this benchmark is just completely whacked. I'm surprised we're all trying so hard to get it to work, when it's clearly bugged hard.
Quote:


> Originally Posted by *BradleyW*
> 
> I am using Display Port (Mini DP) 1.2.
> Crimson 16.3 does not pick up the game profile.
> Crimson 16.2.1 does.
> 
> 144hz will only work in "Window Mode" with DX11/12 on "Crimson 16.3".
> 144hz will work in "full screen" with "DX11 only" when using "Crimson 16.2.1".
> 
> What do you think? EDID or game issue?


The profile issue is a Crimson software scan issue, for sure. 16.3 doesn't pick up *every* game on my system, even if it gets Hitman. I have games spread over 6 drives and 3 Steam libraries, not to mention GOG, Origin, and disc-based games everywhere, and Crimson sees only 5 games, and it lists a Java-based disk space tool I have installed that isn't even remotely a game. That profile scan is definitely bugged.

You can manually add Hitman to the game profile list, and "Windowed Mode" and "Fullscreen" are the same. Fullscreen = Borderless window.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> *Wait-- what? In Exclusive Fullscreen? So then why do you say*
> The profile issue is a Crimson software scan issue, for sure. 16.3 doesn't pick up *every* game on my system, even if it gets Hitman. I have games spread over 6 drives and 3 Steam libraries, not to mention GOG, Origin, and disc-based games everywhere, and Crimson sees only 5 games, and it lists a Java-based disk space tool I have installed that isn't even remotely a game. That profile scan is definitely bugged.
> You can manually add Hitman to the game profile list.


I'm not sure what's going on but it's true. In "Full Screen" mode and "Exclusive Full Screen" mode, the game does not run / render @ 144Hz. Please note, my FPS appears to be unlimited. The refresh rate is locked at 60Hz, unless I run in ugly "Windowed" mode. I have no idea what's going on.

I wish there was a way to select 144Hz in the options like Rise of the Tomb Raider has. Or at least a config file to force 144Hz.

So basically here is the rundown.
DX11:
Window = unlimited fps - 144Hz
Full Screen = unlimited fps - 60Hz
Exclusive = unlimited fps - 60Hz

DX12:
Window = unlimited fps - 144Hz
Full Screen = unlimited fps - 60Hz
Exclusive = 144 fps limit - 60Hz


----------



## infranoia

Quote:


> Originally Posted by *BradleyW*
> 
> I'm not sure what's going on but it's true. In "Full Screen" mode and "Exclusive Full Screen" mode, the game does not run / render @ 144Hz. Please note, my FPS appears to be unlimited. The refresh rate is locked at 60Hz, unless I run in ugly "Windowed" mode. I have no idea what's going on. I truly don't. I wish there was a way to select 144Hz in the options like Rise of the Tomb Raider has. Or at least a config file to force 144Hz.


Honestly I wouldn't spend any more cycles on it. This is not a proper benchmark, and it isn't possible to establish a control or baseline for performance, as there are so many bugs with Vsync, high variance between runs (due to random NPCs per scene etc.), and questions around DX11 vs. DX12 statistics output (CPU vs. GPU outputs).


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> Honestly I wouldn't spend any more cycles on it. This is not a proper benchmark, and it isn't possible to establish a control or baseline for performance, as there are so many bugs with Vsync, high variance between runs (due to random NPCs per scene etc.), and questions around DX11 vs. DX12 statistics output (CPU vs. GPU outputs).


I just want to play the game in Exclusive Full Screen - 144 Hz, like any other game I have. I can do it with ROTTR in DX12, so why not this pile of crap?

So basically here is the rundown.
DX11:
Window = unlimited fps - 144Hz
Full Screen = unlimited fps - 60Hz
Exclusive = unlimited fps - 60Hz

DX12:
Window = unlimited fps - 144Hz
Full Screen = unlimited fps - 60Hz
Exclusive = 144 fps limit - 60Hz


----------



## infranoia

Ignore Exclusive Fullscreen for a moment.

Windowed mode is limited to a 1080 window. Fullscreen is a full-resolution window. So I'm very confused when you say "unlimited fps" and 60Hz for the fullscreen setting.

Are you posting over 60FPS in Fullscreen mode? If so, how can you say it's running at 60Hz? There is no mode change, it just opens a borderless window on your desktop, so it will simply run at your desktop refresh rate. What is your desktop resolution? Unlocked FPS for 1440p just happens to run about 60FPS for Hawaii/Grenada.


----------



## BradleyW

Quote:


> Originally Posted by *infranoia*
> 
> Ignore Exclusive Fullscreen for a moment.
> 
> Windowed mode is limited to a 1080 window. Fullscreen is a full-resolution window. So I'm very confused when you say "unlimited fps" and 60Hz for the fullscreen setting.
> 
> Are you posting over 60FPS in Fullscreen mode? If so, how can you say it's running at 60Hz? There is no mode change, it just opens a borderless window on your desktop, so it will simply run at your desktop refresh rate. What is your desktop resolution? Unlocked FPS for 1440p just happens to run about 60FPS for Hawaii/Grenada.


When I run in Full Screen mode:

1) Excessive tearing.
2) Vsync locks the FPS to 60 if I enable it.
3) Monitor reports the refresh rate at 60hz.

When I run in Window mode:

1) No tearing.
2) Vsync locks the FPS to 144 if I enable it.
3) Monitor reports the refresh rate at 144hz.

Desktop is set to 2560 x 1080 @ 144hz. Tested in-game on the Heli Pad, mission 1.

Edit: Hitman forum suggests to edit the XML file to allow higher refresh rates, but I don't have the XML file in the stated directory.


----------



## Forceman

Quote:


> Originally Posted by *infranoia*
> 
> Honestly I wouldn't spend any more cycles on it. This is not a proper benchmark, and it isn't possible to establish a control or baseline for performance, as there are so many bugs with Vsync, high variance between runs (due to random NPCs per scene etc.), and questions around DX11 vs. DX12 statistics output (CPU vs. GPU outputs).


And on the plus side (he said sarcastically), once you get into the actual game I can't even keep 60 FPS, so it's kind of a moot point.


----------



## BradleyW

Quote:


> Originally Posted by *Forceman*
> 
> And on the plus side (he said sarcastically), once you get into the actual game I can't even keep 60 FPS, so it's kind of a moot point.


What monitor do you have? If it's 144Hz, does the monitor report 144hz or 60hz in exclusive full screen? Do you have an XML file in the game directory?


----------



## Forceman

Quote:


> Originally Posted by *BradleyW*
> 
> What monitor do you have? If it's 144Hz, does the monitor report 144hz or 60hz in exclusive full screen? Do you have an XML file in the game directory?


It's an overclocked Qnix running 96Hz on DVI. The monitor doesn't report any FPS. There is an XML file in the game directory, but I don't know if it was there the whole time (the modified date is this morning when I played the campaign for the first time).


----------



## BradleyW

Quote:


> Originally Posted by *Forceman*
> 
> It's an overclocked Qnix running 96Hz on DVI. The monitor doesn't report any FPS. There is an XML file in the game directory, but I don't know if it was there the whole time (the modified date is this morning when I played the campaign for the first time).


I assume you have a good eye to see (or feel) between 60Hz and 96Hz? Also can you upload your XML file. My game fails to create one.


----------



## Forceman

Quote:


> Originally Posted by *BradleyW*
> 
> I assume you have a good eye to see (or feel) between 60Hz and 96Hz? Also can you upload your XML file. My game fails to create one.


In game I'm getting less than 60 most of the time, but in the benchmark it isn't locked the way infranoia's appears to be. It's possible it is rendering above 60 and discarding frames like Ashes, but there is no judder to indicate it is throwing out frames. Hard to be certain when it is normally 70-90 FPS though.

Here's the XML file (I have it in DX11 mode because DX12 seemed worse in crowded areas).

GFXSettings.HITMAN.xml 4k .xml file


----------



## BradleyW

Quote:


> Originally Posted by *Forceman*
> 
> In game I'm getting less than 60 most of the time, but in the benchmark it isn't locked the way infranoia's appears to be. It's possible it is rendering above 60 and discarding frames like Ashes, but there is no judder to indicate it is throwing out frames. Hard to be certain when it is normally 70-90 FPS though.
> 
> Here's the XML file (I have it in DX11 mode because DX12 seemed worse in crowded areas).
> 
> GFXSettings.HITMAN.xml 4k .xml file


Thanks for the upload. +1.
You say the FPS does not normally reach 60. That should not matter too much. A higher refresh rate, regardless of the current FPS value, will always look and feel "better". For instance, 30 FPS @ 144Hz feels almost as smooth as a console @ 30 FPS, or 45 FPS @ 60Hz. That's why I love 144Hz. No tearing and smooth gaming.


----------



## Catscratch

Quote:


> Originally Posted by *BradleyW*
> 
> Thanks for the upload. +1.
> You say the FPS does not normally reach 60. That should not matter too much. A higher refresh rate, regardless of the current FPS value, will always look and feel "better". For instance, 30 FPS @ 144Hz feels almost as smooth as a console @ 30 FPS, or 45 FPS @ 60Hz. That's why I love 144Hz. No tearing and smooth gaming.


I should sell everything I can and get a BenQ XL2730Z. My 280X wouldn't keep up with the resolution, but at least the monitor itself has "Display Mode & Smart Scaling" to show real 1080p on screen. I can't do GPU + monitor at once, and I like clear/sharp images more than anything (I never use AA), so my best bet is to get a somewhat future-proof monitor. My eyes don't get better with age anyway.


----------



## Forceman

Quote:


> Originally Posted by *BradleyW*
> 
> Thanks for the upload. +1.
> You say the FPS does not normally reach 60. That should not matter too much. A higher refresh rate, regardless of the current FPS value, will always look and feel "better". For instance, 30 FPS @ 144Hz feels almost as smooth as a console @ 30 FPS, or 45 FPS @ 60Hz. That's why I love 144Hz. No tearing and smooth gaming.


I said that, but now that I'm in the Paris level, DX12 does seem to be giving me more FPS, and I'm in the 80s most of the time. Doesn't really feel like 80+ though, so maybe it is still locked. I switched to exclusive full screen at the same time, so maybe I need to fool around with it some.


----------



## BradleyW

Quote:


> Originally Posted by *Forceman*
> 
> I said that, but now that I'm in the Paris level, DX12 does seem to be giving me more FPS, and I'm in the 80s most of the time. Doesn't really feel like 80+ though, so maybe it is still locked. I switched to exclusive full screen at the same time, so maybe I need to fool around with it some.


OK dude I will let you know how I get on also.









Edit: After reinstalling all the redist packages, my game now creates its own XML file.
Here is the new rundown:

DX11 Full Screen now works @ 144Hz refresh rate.
DX11 Exclusive Screen is stuck on 60Hz refresh rate still.
DX12: Game instantly crashes. Many others report this too.
Editing the XML file did not seem to help with the 144Hz issue.


----------



## prjindigo

I'm not wondering why the nV cards are having so much trouble with the game and framerates... the answer is simple.

The game isn't letting the nV drivers turn down and turn off details to keep an artificially high frame-rate. AMD has never stooped to screwing people on visuals when they're not paying attention.

Look at the hard data-processing numbers of the two cards, how much mathematical power they have. AMD wins every time.

Now Polaris 10 is coming. HBM1 and basically everything they've learned from GCN and Fury. All Nvidia is able to get out the door is a new-architecture equivalent of the 970.

Dis gun be goot!


----------



## spyshagg

Quote:


> Originally Posted by *prjindigo*
> 
> I'm not wondering why the nV cards are having so much trouble with the game and framerates... the answer is simple.
> 
> The game isn't letting the nV drivers turn down and turn off details to keep an artificially high frame-rate. AMD has never stooped to screwing people on visuals when they're not paying attention.
> 
> Look at the hard data-processing numbers of the two cards, how much mathematical power they have. AMD wins every time.
> 
> Now Polaris 10 is coming. HBM1 and basically everything they've learned from GCN and Fury. All Nvidia is able to get out the door is a new-architecture equivalent of the 970.
> 
> Dis gun be goot!


It did happen in the past, but every time this is brought up it should be accompanied by detailed comparisons to back it up.


----------



## Remij

Quote:


> Originally Posted by *prjindigo*
> 
> I'm not wondering why the nV cards are having so much trouble with the game and framerates... the answer is simple.
> 
> The game isn't letting the nV drivers turn down and turn off details to keep an artificially high frame-rate. AMD has never stooped to screwing people on visuals when they're not paying attention.
> 
> Look at the hard data-processing numbers of the two cards, how much mathematical power they have. AMD wins every time.
> 
> Now Polaris 10 is coming. HBM1 and basically everything they've learned from GCN and Fury. All Nvidia is able to get out the door is a new-architecture equivalent of the 970.
> 
> Dis gun be goot!


It is gonna be good. AMD didn't learn much from Fury I'm afraid.


----------



## Noufel

Hilbert from Guru3D found negative scaling in DX12 for the Fury, positive scaling for the 390X, and no gain at all for the Titan X.
Again, weird things happening for Fiji. Mahigan??



http://www.guru3d.com/articles_pages/hitman_2016_pc_graphics_performance_benchmark_review,7.html


----------



## ZealotKi11er

Quote:


> Originally Posted by *Noufel*
> 
> Hilbert from Guru3D found negative scaling in DX12 for the Fury, positive scaling for the 390X, and no gain at all for the Titan X.
> Again, weird things happening for Fiji. Mahigan??
> 
> 
> 
> http://www.guru3d.com/articles_pages/hitman_2016_pc_graphics_performance_benchmark_review,7.html


Async doing its work... I do not think DX12 is even remotely ready. Just tested RoTR with a 290X @ 4K High settings: DX12 - 26 fps, DX11 - 33 fps. Pathetic.


----------



## BradleyW

How do I run the game at a 144Hz refresh rate in exclusive full screen?


----------



## Forceman

Quote:


> Originally Posted by *BradleyW*
> 
> How do I run the game at a 144Hz refresh rate in exclusive full screen?


Have you checked the in-game settings? I noticed last night that even if I set exclusive fullscreen in the launcher, in-game it always shows fullscreen. If you change it in-game it sticks. Not sure if that helps/addresses your problem though. The game might just be borked.


----------



## BradleyW

Quote:


> Originally Posted by *Forceman*
> 
> Have you checked the in-game settings? I noticed last night that even if I set exclusive fullscreen in the launcher, in-game it always shows fullscreen. If you change it in-game it sticks. Not sure if that helps/addresses your problem though. The game might just be borked.


I've noticed this too. But no I'm afraid this didn't resolve the problem. Exclusive Fullscreen has an issue. I hope they include a refresh rate slider as a quick solution at least.


----------



## airfathaaaaa

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Async doing its work... I do not think DX12 is even remotely ready. Just tested RoTR with a 290X @ 4K High settings: DX12 - 26 fps, DX11 - 33 fps. Pathetic.


http://www.dualshockers.com/2016/03/14/directx12-requires-different-optimization-on-nvidia-and-amd-cards-lots-of-details-shared/


----------



## Majin SSJ Eric

It's disappointing to see these issues with DX12. I was still hoping beyond hope that this would be a panacea API...


----------



## BradleyW

Direct-X 11 4 LIFE!


----------



## Remij

It'll get better as developers get better. It's still early days my friends.


----------



## BIGTom

Quote:


> Originally Posted by *BradleyW*
> 
> Launched the game through DX12. Stood on Heli Pad. 60Hz is being forced. I can only get 144Hz to work in "Windowed Mode". Same goes for DX11. How do I get 144Hz to work on exclusive full screen?
> 
> Someone asked the same on the Hitman forum. They were told to edit it via an XML file which does not even exist!
> http://www.hitmanforum.com/t/refresh-rate-setting/4538


AMD just released driver version 16.3.1, and the frame rate lock might be fixed for you. I am now able to run the benchmark at frame rates above 60 FPS on my 3440x1440 60Hz panel.
Hope it works for you, BradleyW.

EDIT: However, I still cannot get the game to run in Exclusive Fullscreen


----------



## BradleyW

Quote:


> Originally Posted by *BIGTom*
> 
> AMD just released driver version 16.3.1 and the frame rate lock might be fixed for you. I am now able to run the benchmark with frames above 60 fps on my 3440x1440 60hz panel.
> Hope it works for you BradleyW
> 
> EDIT: However, I still cannot get the game to run in Exclusive Fullscreen


I don't have a frame rate lock. I have a "refresh rate" lock.


----------



## PontiacGTX

Quote:


> Originally Posted by *BradleyW*
> 
> I don't have a frame rate lock. I have a "refresh rate" lock.


Quote:


> We managed to fix the frame lock, the poor performance on AMD GPUs, and the crashes with Nvidia GPUs under DirectX 12. In the game's start menu for DirectX 12 an additional option is active: if "Render Target Reuse" is set to "automatic", Hitman runs under DirectX 12 as described above. If we disable this option, the game runs smoothly; the AMD frame lock is gone, and the low-level API apparently performs as intended. We are now updating the benchmarks, replacing the old measurements, and revising our verdict - according to initial findings, Hitman runs quite smoothly and quite clearly benefits from DX12. In the first chart we present, the very strong-performing PowerColor R9 390 PCS+/8G is highlighted in color in the benchmarks. We are re-testing all graphics cards tested to date with the adjusted settings and updating the values accordingly.


GCN3
Quote:


> And this is how it works: a 144Hz display must be connected in addition to the UHD display of our test system; its maximum resolution does not matter. The 144Hz monitor is set as the primary display, and then Hitman's launcher is moved onto the secondary display.


http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


----------



## BradleyW

I don't have other monitors to plug in and try.


----------



## Semel

Quote:


> Originally Posted by *BradleyW*
> 
> I don't have other monitors to plug in and try.


The latest AMD driver has fixed the refresh-rate FPS lock.


----------



## BradleyW

Quote:


> Originally Posted by *Semel*
> 
> The latest AMD driver has fixed the refresh-rate FPS lock.


Yes, true, but I've yet to see if it fixes the refresh rate lock (not the FPS lock) in exclusive full screen.


----------



## OneB1t

Here is a nice comparison of DX11 vs. DX12 with CryEngine:
http://s30.postimg.org/9gf07ojq7/overheadtest.png


----------



## ZealotKi11er

Quote:


> Originally Posted by *OneB1t*
> 
> Here is a nice comparison of DX11 vs. DX12 with CryEngine:
> http://s30.postimg.org/9gf07ojq7/overheadtest.png


They should just make Crysis 4 and show people what DX12 really is.


----------



## PontiacGTX

Quote:


> Originally Posted by *ZealotKi11er*
> 
> They should just make Crysis 4 and show people what DX12 really is.


They should develop a Crysis and Crysis Warhead redux on CryEngine V using DX12, and then a version with VR capability, using reliable servers to host the multiplayer. It would be a really good game with great multiplayer, using the latest technology.

But it seems Crytek prefers to invest in something else (like Homefront: The Revolution).


----------



## Catscratch

So umm, where are them benches? Was there a big announcement somewhere that most known sites would be ignoring Hitman?


----------



## BradleyW

Just bring back Mantle and call it a day.


----------



## OneB1t

Quote:


> Originally Posted by *Catscratch*
> 
> So umm, where are them benches? Was there a big announcement somewhere that most known sites would be ignoring Hitman?


This is my custom benchmark I made just for fun to check DX12 in CryEngine 5.


----------



## killerhz

Damn, I can't even get the Hitman benchmark to run with DX12, ffs.
It's getting annoying real fast.

Here are my settings:
CPU i7 4790K at stock clocks, EVGA 980 Ti Classified at stock. What the hell is happening, and why can't I run this?


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *ZealotKi11er*
> 
> They should just make Crysis 4 and show people what DX12 really is.


OMG YES!!!! I would absolutely kill for a new DX12 Crysis 4! If anybody can get a game to show off its graphical splendor, it's Crytek and their Crysis series!


----------



## kzone75

Where do the benchmark results end up after you run 'em? Can't seem to find them anywhere.


----------



## airfathaaaaa

You should disable Render Target Reuse; it's known to cause problems on both vendors' cards.


----------



## degenn

Quote:


> Originally Posted by *magnek*
> 
> nVidia can't DX12 confirmed! I think I might make a thread about this...


By the time DX12 actually matters and developers have a handle on it (probably at least 1-2 years from now), Nvidia, and AMD for that matter, will have no problem with DX12. All this DX12/ASC bickering back & forth between AMD & Nvidia fanboys lately is hilariously pointless. It has been rather amusing to read these types of threads though, I'll give you guys that much.









In case you were being sarcastic, be a little more obvious because with all of the legitimate frothing at the mouth these days it's become hard to tell.


----------



## looniam

ICYMI/FWIW, Guru3D updated its benches - looks like multi-GPU setups need some work:

http://www.guru3d.com/articles-pages/hitman-2016-pc-graphics-performance-benchmark-review,7.html


----------



## Forceman

Quote:


> Originally Posted by *kzone75*
> 
> Where do the benchmark results end up after you run 'em? Can't seem to find them anywhere.


It's in the user/Hitman folder.


----------



## Charcharo

Quote:


> Originally Posted by *ZealotKi11er*
> 
> They should just make Crysis 4 and show people what DX12 really is.


Considering how terrible Crysis 3 was... I hope they don't.

A remake of Crysis 1 is something I'd love, though.


----------



## magnek

Quote:


> Originally Posted by *degenn*
> 
> By the time DX12 actually matters and developers have a handle on it (probably at least 1-2 years from now), Nvidia, and AMD for that matter, will have no problem with DX12. All this DX12/ASC bickering back & forth between AMD & Nvidia fanboys lately is hilariously pointless. It has been rather amusing to read these types of threads though, I'll give you guys that much.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In case you were being sarcastic, be a little more obvious because with all of the legitimate frothing at the mouth these days it's become hard to tell.


I prefer to make my sarcasm subtle


----------



## ZealotKi11er

Quote:


> Originally Posted by *Charcharo*
> 
> Considering how terrible Crysis 3 was... I hope they don't.
> 
> A remake of Crysis 1 is something I'd love, though.


I'd much rather play Crysis 3 than Ubisoft garbage like The Division.


----------



## tweezlednutball

Finally got Crossfire to work in DX12. Seems like it needs to be set to full screen instead of exclusive fullscreen. Much lower CPU overhead than DX11, but the game just randomly locks up everywhere...










Guess I need to wait for some patches.


----------



## ZealotKi11er

Quote:


> Originally Posted by *tweezlednutball*
> 
> Finally got Crossfire to work in DX12. Seems like it needs to be set to full screen instead of exclusive fullscreen. Much lower CPU overhead than DX11, but the game just randomly locks up everywhere...
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Guess I need to wait for some patches.


What do you mean by lock ups?


----------



## tweezlednutball

Quote:


> Originally Posted by *ZealotKi11er*
> 
> What do you mean by lock ups?


The game just freezes solid. Have to force quit the task.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Charcharo*
> 
> Considering how terrible Crysis 3 was... I hope they don't.
> 
> A remake of Crysis 1 is something I'd love, though.


What are you even talking about? Crysis 3 is an awesome game! Sure, it's much less open and free than the original, but I still had a great time playing and beating it...


----------



## tweezlednutball

Yep, it freezes in the same exact spot every time...


----------



## semitope

Quote:


> Originally Posted by *degenn*
> 
> By the time DX12 actually matters and developers have a handle on it (probably at least 1-2 years from now), Nvidia, and AMD for that matter, will have no problem with DX12. All this DX12/ASC bickering back & forth between AMD & Nvidia fanboys lately is hilariously pointless. It has been rather amusing to read these types of threads though, I'll give you guys that much.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> In case you were being sarcastic, be a little more obvious because with all of the legitimate frothing at the mouth these days it's become hard to tell.


Don't really understand this sentiment.

1. We are already seeing DX12 matter this year.
2. People will still have the old GPUs. AMD/Nvidia releasing new GPUs won't change that.


----------



## degenn

Quote:


> Originally Posted by *semitope*
> 
> Don't really understand this sentiment.
> 
> 1. We are already seeing DX12 matter this year.
> 2. People will still have the old GPUs. AMD/Nvidia releasing new GPUs won't change that.


1. It really doesn't matter much, if at all, this year. Unless you consider bad launches and marginal, insignificant gains (sometimes even regressions) in a handful of games so far (one of which is a decade-old DX9 game with a DX12 wrapper) to be a great thing, which I certainly don't. Other than those already out, we have 2 more coming which are locked to the Windows Store, and then Deus Ex/ARK? Meh, it's underwhelming.

2. True, but most people will be on to new GPUs by the time DX12 truly matters. By "truly" I mean when developers have a proper handle on the API (this takes time) and we have 15-20+ AAA _native_ DX12 titles on the market. Neither of those will happen for quite a while yet. DX11 will still be a major focus for many if not all developers for quite some time.


----------



## Devnant

Kinda off the topic but
Quote:


> Originally Posted by *Charcharo*
> 
> Considering how terrible Crysis 3 was... I hope they don't.
> 
> A remake of Crysis 1 is something I'd love, though.


This!


----------



## airfathaaaaa

Quote:


> Originally Posted by *degenn*
> 
> 1. It really doesn't matter much if at all this year. Unless you consider bad launches & marginal, insignificant gains (sometimes even regressions) on a handful (one of which is a decade-old DX9 game w/ DX12 wrapper) of games so far to be a great thing, which I certainly don't. Other than those already out, we have 2 more coming which are locked to the Windows Store, and then Deus Ex/ARK? Meh, it's underwhelming.
> 
> 2. True, but most people will be on to new GPU's by the time DX12 truly matters. By "truly" I mean when developers have a proper handle on the API (this takes time) and we have 15-20+ AAA _native_ DX12 titles on the market. Both of those scenarios won't happen for quite a while yet. DX11 will still be a major focus for many if not all developers for quite some time.


Pretty sure the fact that major game engines have native support for DX12 is already a factor. Not to mention that only one company actually has an excuse to stay on DX11 for longer; the rest of the players that are using, or will be using, DX12/Vulkan this year are much bigger than that company.


----------



## Charcharo

Quote:


> Originally Posted by *ZealotKi11er*
> 
> I'd much rather play Crysis 3 than Ubisoft garbage like The Division.


Well... IDK. I don't buy Ubi games due to Uplay, so I can't say.








Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> What are you even talking about? Crysis 3 is an awesome game! Sure, it's much less open and free than the original, but I still had a great time playing and beating it...


My problems with it were basically:
- Missed opportunities
- Easy ... too easy.
- Problems with gameplay rhythm, Artificial Intelligence, bad design decisions.
- Storyline and actions in the game were terrible







Much worse than almost any CoD game, even; much worse than Crysis 2 and 1.

Sorry for the off-topic. IDK where this should be discussed...


----------



## dubldwn

Having a lot of fun playing Hitman. DX11 with a 980 Ti is great, but I did get a couple of crashes and restarts. I did install Blood Money to see how rose-colored my glasses were (loved that game), but the AI, physics, and graphics are so bad I can't even play it.

OT: because of the conversation here I reinstalled Crysis 1 and hacked it for 1440p. That game still looks great.


----------



## diggiddi

Quote:


> Originally Posted by *dubldwn*
> 
> Having a lot of fun playing Hitman. DX11 with a 980 Ti is great, but I did get a couple of crashes and restarts. I did install Blood Money to see how rose-colored my glasses were (loved that game), but the AI, physics, and graphics are so bad I can't even play it.
> 
> OT: because of the conversation here I reinstalled Crysis 1 and hacked it for 1440p. That game still looks great.


How different is that from using VSR to change it to 1440p?


----------



## dubldwn

Quote:


> Originally Posted by *diggiddi*
> 
> How different is that from using VSR to change it to 1440p?


Well, the big jump was from 1080p to my native res. I did just try 2.25x DSR (4K) with 16% smoothness and it is stunning, especially stuff far away. Some of the best water of any game, and Crysis is almost 9 years old... so many effects didn't even exist back then.
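For anyone puzzled by how a "2.25x" DSR factor lands on 4K: the factor multiplies the total pixel count, so each axis scales by the square root of the factor. A quick illustrative sketch (Python; the function name is my own, not an NVIDIA API):

```python
import math

def dsr_render_resolution(native_w, native_h, factor):
    """DSR/VSR factors multiply total pixel count, so each axis scales by sqrt(factor)."""
    scale = math.sqrt(factor)
    return round(native_w * scale), round(native_h * scale)

# 2.25x DSR on a 2560x1440 panel renders internally at 3840x2160 (4K)
print(dsr_render_resolution(2560, 1440, 2.25))  # → (3840, 2160)
```

The same arithmetic explains why 4x DSR on a 1080p panel is also exactly 4K.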


----------



## diggiddi

So is it the same as hacking it?


----------



## dubldwn

Quote:


> Originally Posted by *diggiddi*
> 
> So is it the same as hacking it?


Oh I see what you're saying. Leave it at 1080 in the config and then use DSR to render at 1440? Seems I would be getting all the disadvantages of DSR and none of the benefits. If I remember tonight I'll try that.

I tried 2160p in Hitman and my framerate tanked, some screens (e.g. loading) didn't display properly, and there was little improvement in image quality.

EDIT: OK, tried it and I see no visual difference between the two, so yeah, I guess you could just use DSR.

Back on topic: so is the verdict, if you're running Nvidia, to just run DX11?


----------



## Stige

Quote:


> Originally Posted by *dubldwn*
> 
> Oh I see what you're saying. Leave it at 1080 in the config and then use DSR to render at 1440? Seems I would be getting all the disadvantages of DSR and none of the benefits. If I remember tonight I'll try that.
> 
> I tried 2160p in Hitman and my framerate tanked, some screens (e.g. loading) didn't display properly, and there was little improvement in image quality.
> 
> EDIT: OK, tried it and I see no visual difference between the two, so yeah, I guess you could just use DSR.
> 
> Back on topic: so is the verdict, if you're running Nvidia, to just run DX11?


Yes. Nvidia cards can't do DX12; even the new upcoming cards seem to have trouble with DX12, lol.


----------



## p4inkill3r

TweakTown's performance review is up: http://www.tweaktown.com/guides/7634/hitman-pc-performance-analysis-directx-12-finest/index.html
Quote:


> Hitman is an exploratory lesson in how to properly implement DirectX 12 into a game in these early stages of the API. It isn't perfect, but offloading a good portion of the special effects work onto the compute queue we can see a startlingly large relative increase in performance for AMD's cards which have a hardware scheduler capable of fully taking advantage of asynchronous compute. The speedup is real here.
> 
> Unfortunately, NVIDIA actually struggles. Due to the difficulty in context-switching with Maxwell, there's actually a performance drop as it's not able to change quickly enough. Using CUDA and their proprietary GameWorks could solve that, actually, as that would allow both queues to be used at the same time. But with DX12, and in Hitman, that isn't the case, and it isn't used, so NVIDIA cards perform worse in DirectX 12.


----------



## Glottis

Quote:


> Originally Posted by *Stige*
> 
> Yes. Nvidia cards can't do DX12; even the new upcoming cards seem to have trouble with DX12, lol.


What are you babbling about? My 980 Ti is playing DX12 games just fine.


----------



## ebduncan

Meh, from what I can see in the charts, people from both the AMD and Nvidia camps can play this game just fine @ 1440p and @ 1080p. I suppose the 390X and 390 would both be better than the GTX 980 and 970, so if all you play is Hitman, then the choice of what card to get/use becomes pretty clear.

DX12 is a mixed bag; it all boils down to how developers want to code their engines. AMD wants to push ACEs, and Nvidia wants to push GameWorks. I just wish the gaming industry was less segmented and more generalized. We PC gamers just want to run our games well, with quality options that exceed the consoles.


----------



## Forceman

Quote:


> Originally Posted by *p4inkill3r*
> 
> TweakTown's performance review is up: http://www.tweaktown.com/guides/7634/hitman-pc-performance-analysis-directx-12-finest/index.html


Their bar for a "startlingly large relative increase" is pretty low, considering the improvement ranges from 0% to 10%.


----------



## mcg75

Quote:


> Originally Posted by *Forceman*
> 
> Their bar for "startlingly large relative increase" is pretty low, considering the improvement ranges from 0% to 10%.


Strange how the Titan X is slower than a 390X at 1440p in DX11 but then is faster at 4K.

I also don't agree with his conclusion about async in this case, but as always I could be wrong.

I think lower CPU overhead is a bigger factor in the AMD jump here, because there is no performance gain at 4K between DX11 and DX12, except that the minimum FPS got a small bump.


----------



## Forceman

Quote:


> Originally Posted by *mcg75*
> 
> I think lower CPU overhead is a bigger factor in the AMD jump here.


That sounds right.


----------



## mtcn77

Quote:


> Originally Posted by *mcg75*
> 
> Strange how the Titan X is slower than a 390X at 1440p in DX11 but then is faster at 4K.
> 
> I also don't agree with his conclusion about async in this case, but as always I could be wrong.
> 
> I think lower CPU overhead is a bigger factor in the AMD jump here, because there is no performance gain at 4K between DX11 and DX12, except that the minimum FPS got a small bump.


The Titan X has 5-10% more bandwidth than the 390X; the latter's strength comes from scheduling.

We cannot call it CPU overhead at this point, since the CPU has been taken out of the GPU pipeline. Pipeline overhead sounds better.


----------



## Kana-Maru

My DX12 results differ from TweakTown's. I ran the internal benchmark test with my Fury X @ stock settings and got:

- 89.43 FPS average @ 1080p DX12 with SMAA [+ max settings], and TechReport only got 79 FPS @ 1080p with FXAA????
- 72.30 FPS average @ 1440p DX12 with SMAA [+ max settings], and TechReport only got 67 FPS average @ 1440p with FXAA????

- 36.02 FPS average @ 2160p DX12 with SMAA [+ max settings], while TechReport got 44 FPS average @ 4K with FXAA.

Well, they were using FXAA @ 4K and I was running SMAA. I'd have to run the 4K test again with FXAA for a better comparison, but their frame rates are still way off from my results. 1440p is sort of close, though, but that's still an 8% difference.

Maybe it has something to do with my system having more CPU cores or something, but I'm using an old 1st-generation X58 platform with only DDR3-1600MHz during these tests. I would think their newer i7-6700K setup would put up better numbers and put my 1st-gen build at a disadvantage.
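For reference, the "8%" gap at 1440p quoted above is just a relative difference between the two averages. A trivial Python sketch of the arithmetic (function name is my own):

```python
def percent_diff(mine, theirs):
    """Relative difference between two average-FPS results, in percent of the reference."""
    return (mine - theirs) / theirs * 100

# Fury X @ 1440p DX12: 72.30 FPS (SMAA) vs. the review site's 67 FPS (FXAA)
print(round(percent_diff(72.30, 67), 1))  # → 7.9
```

So the two 1440p runs really are about 8% apart, AA method differences aside.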


----------



## cowie

Here are some DX11 vs. DX12 percentages for this game.

Use translate if you must:

http://nl.hardware.info/reviews/6626/8/hitman-2016-review-directx-12-game-getest-met-22-gpus-directx-11-vs-directx-12

Looks legit, I guess.


----------



## BradleyW

HITMAN Benchmark

290X - 2560x1080, Max Details, SMAA.

Average FPS:

DX11 = 61.5
DX12 = 68.6


----------



## killerhz

Well, I don't know how long it's been since this game was released. The game still will not run the benchmark in DX12 and crashes when trying to play in DX12.
Not happy about this at all, but I guess I have to wait for the fix.


----------



## Kana-Maru

Quote:


> Originally Posted by *killerhz*
> 
> Well, I don't know how long it's been since this game was released. The game still will not run the benchmark in DX12 and crashes when trying to play in DX12.
> Not happy about this at all, but I guess I have to wait for the fix.


The game was recently updated to 1.03 on Tuesday. I've been running DX11 & DX12 since Day 1. It released on March 11th.


----------



## killerhz

Quote:


> Originally Posted by *Kana-Maru*
> 
> The game was recently updated to 1.03 on Tuesday. I've been running DX11 & DX12 since Day 1. It released on March 11th.


Yeah, I am checking the integrity of my files now, but 2 minutes ago... it crashed in the same place, around the chopper part.

Glad you can and have enjoyed it, but I can't.

My card is a 980 Ti Classified; I wonder if that has to do with it.


----------



## Kana-Maru

Quote:


> Originally Posted by *killerhz*
> 
> Yeah, I am checking the integrity of my files now, but 2 minutes ago... it crashed in the same place, around the chopper part.
> 
> Glad you can and have enjoyed it, but I can't.
> 
> My card is a 980 Ti Classified; I wonder if that has to do with it.


Yeah I'm running a Fury X so I can't speak for the 980 Ti. Have you tried enabling or disabling the "Render Target Reuse" in the settings? Choosing the correct setting could solve your problem.


----------



## killerhz

Quote:


> Originally Posted by *Kana-Maru*
> 
> Yeah I'm running a Fury X so I can't speak for the 980 Ti. Have you tried enabling or disabling the "Render Target Reuse" in the settings? Choosing the correct setting could solve your problem.


Hmm, nope. Let me have a try at that and report back...


----------



## killerhz

Enabled, it crashes at the bar; disabled, it crashes at the chopper...

Maybe I should update my drivers.


----------



## Kana-Maru

Quote:


> Originally Posted by *killerhz*
> 
> Enabled, it crashes at the bar; disabled, it crashes at the chopper...
> 
> Maybe I should update my drivers.


That sucks, man. DX11 is still good in this game, though. Has Nvidia fixed those driver issues they were having recently that crashed users' OSes?


----------



## killerhz

Quote:


> Originally Posted by *Kana-Maru*
> 
> That sucks, man. DX11 is still good in this game, though. Has Nvidia fixed those driver issues they were having recently that crashed users' OSes?


Yeah, there are ones that were released on March 28, but I'm going to stay away from them atm. I just went back to the ones before, so now a reboot, then we'll see if it helps.

Thanks for your help; I will update should everything work.


----------



## BradleyW

Quote:


> Originally Posted by *killerhz*
> 
> Yeah, there are ones that were released on March 28, but I'm going to stay away from them atm. I just went back to the ones before, so now a reboot, then we'll see if it helps.
> 
> Thanks for your help; I will update should everything work.


Run the latest driver and HITMAN build, and install all C++ Redist packs (x86 and x64) from 2015 all the way down to 2008.


----------



## killerhz

Quote:


> Originally Posted by *BradleyW*
> 
> Run the latest driver and HITMAN build, and install all C++ Redist packs (x86 and x64) from 2015 all the way down to 2008.


Right now running 364.51.

Hitman 1.03.

How do I get them C++ gimmicks...


----------



## BradleyW

Quote:


> Originally Posted by *killerhz*
> 
> Right now running 364.51.
> 
> Hitman 1.03.
> 
> How do I get them C++ gimmicks...


They aren't gimmicks. Without them, many games can't run, because they don't have the runtime libraries their code was built against. Go to your Hitman install directory; you'll find the executables for the C++ packages. After installing those, check which ones you have installed, and Google the ones you don't have.

You need 2015, 2013, 2012, 2010 and 2008 (x86, x64).


----------



## mirzet1976

Quote:


> Originally Posted by *BradleyW*
> 
> They aren't gimmicks. Without them, many games can't run, because they don't have the runtime libraries their code was built against. Go to your Hitman install directory; you'll find the executables for the C++ packages. After installing those, check which ones you have installed, and Google the ones you don't have.
> 
> You need 2015, 2013, 2012, 2010 and 2008 (x86, x64).


Link to 2013, 2012, 2010, 2008 and 2005 - http://getintopc.com/softwares/utilities/visual-c-plus-plus-redistributable-packages-free-download/


----------



## killerhz

Quote:


> Originally Posted by *BradleyW*
> 
> They arn't gimmicks. Without them, many games can't run as they don't have a library to consult with, in compliance of their programmed language. Go to your Hitman install directory. You'll find the executables for the C++ packages. After installing those, check which you have installed, and Google the ones you don't have.
> 
> You need 2015, 2013, 2012, 2010 and 2008 (x86, x64).


I didn't mean gimmicks in the true sense, lol... I mean I want to grab them and install them.

Quote:


> Originally Posted by *mirzet1976*
> 
> Link to 2013, 2012, 2010 ,2008 and 2005 - http://getintopc.com/softwares/utilities/visual-c-plus-plus-redistributable-packages-free-download/


Cheers bud, grabbed and installed... going to reboot and test...

Either way, +1 for those who tried to help me. Love my OCNers that take care of each other...

Edit:

Well, thanks again to all, but same results. Tried just about every setting with DX12 enabled and it seems to keep freezing in the same spot... the chopper now.

Guess I'll uninstall and move on.

Just about a month in and I can't play the game using DX12... the only reason I installed Windows 10 was to install this game and play it... pretty disappointing.


----------



## diggiddi

Quote:


> Originally Posted by *killerhz*
> 
> I didn't mean gimmicks in the true sense, lol... I mean I want to grab them and install them.
> Cheers bud, grabbed and installed... going to reboot and test...
> 
> Either way, +1 for those who tried to help me. Love my OCNers that take care of each other...
> 
> Edit:
> 
> Well, thanks again to all, but same results. Tried just about every setting with DX12 enabled and it seems to keep freezing in the same spot... the chopper now.
> 
> Guess I'll uninstall and move on.
> 
> Just about a month in and I can't play the game using DX12... the only reason I installed Windows 10 was to install this game and play it... pretty disappointing.


Are you overclocked and how good is your power supply?
Return everything to stock/turn down your clocks and try again.


----------



## killerhz

Quote:


> Originally Posted by *diggiddi*
> 
> Are you overclocked and how good is your power supply?
> Return everything to stock/turn down your clocks and try again.


No, not overclocked during gaming.

other games are fine with my system the way it is.

update however...

The benchmark tool doesn't work, but I just loaded the game up and it was flawless. Guess we're all good to go now. Must just be the tool not working for me, ha ha...


----------



## BradleyW

HITMAN Perf Test - RSC 16.4.1 - Game Build 1.0.3.

Settings:



Location:



GPU Test (MIN FPS):

290X D3D11 = 37
290X D3D12 = 38
290X CFX D3D11 = 60

CPU Test - CFX - D3D11 (MIN FPS):

HT on = 60
HT off = 55

6c 12t = 60
5c 10t = 60
4c 8t = 60
3c 6t = 59.7
2c 4t = 59.6

IMPORTANT:

1) 290X CFX is bottlenecked by the large AI crowd. Confirmed by overclocking the GPUs to 1120/1375; FPS remained the same.

2) D3D12 mode does not run the game in Exclusive Full Screen mode, despite selecting it in the "in-game" options menu. If you force Exclusive Full Screen through the registry, the game reverts via a fail-safe back to D3D11 mode. This explains why many users don't see much of an FPS increase in D3D12 mode: borderless full screen has historically run slower than exclusive full screen in most games.

For reference, Assassin's Creed Unity with a crowd of 150 to 200 people and all settings max could yield a min FPS of 72.
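As a quick illustration of why the CPU test above points away from a core-count bottleneck, the min-FPS spread across the configurations can be checked directly (numbers copied from the table; this is just arithmetic, not part of the benchmark tool):

```python
# Min FPS per CPU configuration, copied from the CFX D3D11 test above.
results = {"6c12t": 60, "5c10t": 60, "4c8t": 60, "3c6t": 59.7, "2c4t": 59.6}

# If cores were the bottleneck, min FPS would climb with core count.
# Instead the spread is tiny, consistent with the AI-crowd bottleneck.
spread = max(results.values()) - min(results.values())
print(f"min-FPS spread across core counts: {spread:.1f} FPS")  # prints 0.4
```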


----------



## Deout

What's the benefit of using dx12 over dx11? besides the fps boost for AMD. If the game looks exactly the same then there's really no reason to run in dx12 on Nvidia.


----------



## EightDee8D

Quote:


> Originally Posted by *Deout*
> 
> What's the benefit of using dx12 over dx11? besides the fps boost for AMD. If the game looks exactly the same then there's really no reason to run in dx12 on Nvidia.


A performance boost for those weak-IPC, moar-cores CPUs.


----------



## XHellAngelX

Killer Instinct


----------



## EightDee8D

Quote:


> Originally Posted by *XHellAngelX*
> 
> Killer Instinct


That legendary 7970


----------



## PontiacGTX

Quote:


> Originally Posted by *Deout*
> 
> What's the benefit of using dx12 over dx11? besides the fps boost for AMD. If the game looks exactly the same then there's really no reason to run in dx12 on Nvidia.


SFR, multi-adapter, lower overhead, asynchronous shaders, better multithreading; the better performance gives developers headroom to add more graphics effects.

And if NVIDIA can't benefit from it, why would you use it?


----------



## Skinnered

DX12 is useless for me till they start to use the wonderful DX12 multi-GPU features.


----------



## Skinnered

Quote:


> Originally Posted by *Deout*
> 
> What's the benefit of using dx12 over dx11? besides the fps boost for AMD. If the game looks exactly the same then there's really no reason to run in dx12 on Nvidia.


Well, for me it's just an easy way to disable CrossFire...


----------



## fewness

Nvidia has zero luck in Windows Store exclusives....


----------



## TopicClocker

Quote:


> Originally Posted by *BradleyW*
> 
> HITMAN Perf Test - RSC 16.4.1 - Game Build 1.0.3.
> 
> Settings:
> -SNIP-
> 
> Location:
> 
> -Snip-
> 
> 1) 290X CFX is bottlenecked by the large AI crowd. Confirmed by overclocking the GPUs to 1120/1375; FPS remained the same.
> 
> *2) D3D12 mode does not run the game in Exclusive Full Screen mode, despite selecting it in the "in-game" options menu. If you force Exclusive Full Screen through the registry, the game reverts via a fail-safe back to D3D11 mode.* This explains why many users don't see much of an FPS increase in D3D12 mode: borderless full screen has historically run slower than exclusive full screen in most games.
> 
> For reference, Assassin's Creed Unity with a crowd of 150 to 200 people and all settings max could yield a min FPS of 72.


Seriously? Gah that's ridiculous.

Nice testing by the way!

The crowds in Hitman are a lot more detailed than in Assassin's Creed Unity, but that's an interesting comparison!
Quote:


> Originally Posted by *EightDee8D*
> 
> That legendary 7970


The 7970 and 7950 are true legends, 4 years old and still fantastic to this day!

GCN is an amazing architecture!


----------



## incog




----------



## TopicClocker

Quote:


> Originally Posted by *incog*


These DX12 games are really friendly to AMD GPUs; under DX12 the AMD GPUs are outperforming their NVIDIA equivalents most of the time, sometimes even by a sizeable amount.
Even under DX11 in some of the most recent games like The Division and Hitman.

There are probably a ton of factors behind this, but one is possibly that the consoles have AMD hardware in them, so some of those optimizations are rubbing off on the PC versions.
It could even be down to the differences in AMD's GCN, and NVIDIA's Kepler and Maxwell architectures.


----------



## Forceman

Quote:


> Originally Posted by *fewness*
> 
> Nvidia has zero luck in Windows Store exclusives....


None of us have any luck in Windows Store exclusives. It's pretty much a disaster at this point.


----------



## Kana-Maru

Quote:


> Originally Posted by *incog*
> 
> 
> 
> Spoiler: Warning: Spoiler!


Quote:


> Originally Posted by *TopicClocker*
> 
> These DX12 games are really friendly to AMD GPUs; under DX12 the AMD GPUs are outperforming their NVIDIA equivalents most of the time, sometimes even by a sizeable amount.
> Even under DX11 in some of the most recent games like The Division and Hitman.
> 
> There's probably a ton of factors for this, but one of them is possibly due to the consoles having AMD hardware in them so some of the optimizations are possibly rubbing off on the PC versions.
> It could even be down to the differences in AMD's GCN, and NVIDIA's Kepler and Maxwell architectures.


DX12 is friendly to AMD due to the architecture AMD started years ago, which allows their GPUs to perform more work than NVIDIA hardware; this is based on several benchmarks. There's only one true title that was built around the Mantle API and upgraded to Microsoft's DX12 API, and that's Ashes of the Singularity. The Square Enix devs have had some problems with DX12, but Hitman's DX12 performs much better than Rise of the Tomb Raider's DX12. Tomb Raider's DX12 is straight micro-stuttering garbage.

AMD's DX11 drivers aren't the "best", but all of the complaining over the years wasn't necessary and just seemed like the same old complaining. AMD has always been competitive, even when I was running my GTX 400/500/600 series. The 7970 and 7970 GHz were no joke, and the 290X has aged very well. The 295X2 was simply dominating, and the price drop made it even sweeter. For the past year and a half AMD has been reducing their DX11 overhead and increasing performance through drivers alone. Price and performance do matter.

I have no idea what Nvidia will do, but I know they have more than enough money to make something positive happen with their next line of GPUs. I've been enjoying Hitman @ 4K while averaging approximately 45fps. That's more than enough for a game like this @ 4K.


----------



## incog

It was just humor.










I have a 7970 myself and it's aging very, very well. I haven't felt the need to upgrade since I got it.


----------



## Assirra

Quote:


> Originally Posted by *Forceman*
> 
> None of us have any luck in Windows Store exclusives. It's pretty much a disaster at this point.


I don't think I've heard much bad about the Killer Instinct PC version, but maybe it's overshadowed by all the other nonsense.


----------



## TopicClocker

Quote:


> Originally Posted by *Assirra*
> 
> I don't think I've heard much bad about the Killer Instinct PC version, but maybe it's overshadowed by all the other nonsense.


From what I've heard and seen of the game, it's pretty good. I've played a bit of it myself and it runs very well on my GTX 970.

Here's a video of it, in which a number of GPUs are tested.

There was an issue where the game would run at double speed on a 120Hz display instead of at its intended 60, because the fighting-game mechanics are tied to the frame rate.

The workaround was to change your desktop's refresh rate to 60Hz; I'm not sure if the game has been updated to fix this properly yet.


----------



## BradleyW

Quote:


> Originally Posted by *TopicClocker*
> 
> From what I've heard and seen of the game, it's pretty good. I've played a bit of it myself and it runs very well on my GTX 970.
> 
> Here's a video of it, in which a number of GPUs are tested.
> 
> *There was an issue where the game would run at double speed on a 120Hz display instead of at its intended 60, because the fighting-game mechanics are tied to the frame rate.
> 
> *The workaround was to change your desktop's refresh rate to 60Hz; I'm not sure if the game has been updated to fix this properly yet.


Frame rate and refresh rate are two completely different things. I assume the game is FPS-locked to the same value as the refresh rate. Another solution would be to set the screen to 120Hz and use a frame limiter to cap the FPS at 60.
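The 60-FPS-cap-on-a-120Hz-screen idea above is simple to reason about; here is a minimal illustrative sketch (nothing from the game's actual code) of how a frame cap maps a target FPS onto whole display refresh intervals:

```python
# Sketch only: when capping a game at target_fps on a refresh_hz display,
# a simple limiter presents a new frame every N refresh intervals.
# Real limiters use high-resolution timers; round() is a simplification
# that assumes the refresh rate is a clean multiple of the target.
def frames_to_wait(target_fps, refresh_hz):
    """Whole refresh intervals between presented frames."""
    if target_fps <= 0 or refresh_hz < target_fps:
        raise ValueError("cap must be positive and at or below refresh rate")
    return round(refresh_hz / target_fps)

print(frames_to_wait(60, 120))  # 60 FPS on 120Hz: present every 2nd refresh -> 2
print(frames_to_wait(60, 60))   # 60 FPS on 60Hz: present every refresh -> 1
```

This is why capping at 60 on a 120Hz screen sidesteps the double-speed problem: the game logic still ticks 60 times a second even though the display refreshes twice as often.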


----------



## KarathKasun

FPS lock does not work in DX12 AFAIK, at least not properly.


----------



## BradleyW

Quote:


> Originally Posted by *KarathKasun*
> 
> FPS lock does not work in DX12 AFAIK, at least not properly.


I think DXTory works with it.


----------



## ladcrooks

From what I have seen on different tech sites, the difference is not huge - not a Win 10 breaker. Hence Win 7 is still okay.


----------



## PontiacGTX

Quote:


> Originally Posted by *ladcrooks*
> 
> From what I have seen on different tech sites, the difference is not huge - not a Win 10 breaker. Hence Win 7 is still okay.


Win 7 isn't fine, since DX11 is the biggest issue for games and Win 10 is the only way to get better performance. Though for NVIDIA users it probably isn't, since they don't have any DX12 game worth playing.


----------



## BradleyW

On the Paris level in Hitman I see gains as high as 12 fps with d3d12 compared to d3d11 on a single 290X. Lowest gain is about 4 fps.


----------



## Mahigan

Maybe try the new 1.1.0 patch


----------



## Kana-Maru

Hitman 2016 1.1.0 Benchmarks Updated.

I'm performing an "Apples to Apples" benchmark. Check the link above for charts and more info.

*Edit:*
http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks?showall=&start=4

I've run my Apples to Apples benchmark along with my Real Time Benchmarks™, and it appears the patch has broken the game. The vRAM usage showed me everything I needed to know: it appears the settings aren't being saved in the options menu.


----------



## BradleyW

I hope the new map is a lot bigger than Paris. I was bitterly disappointed with Episode 1.
*Edit: Did you test between Render Reuse On and Off?*


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> I hope the new map is a lot bigger than Paris. I was bitterly disappointed with Episode 1.
> *Edit: Did you test between Render Reuse On and Off?*


I enjoyed Paris and I'm still finding new areas when I get a chance to play. It's a pretty large place. There's still so much I haven't done, but I've unlocked a lot of things.

I have "*Render Target Reuse (D3D12)*" Enabled. If you read the second page of my article, I describe why I have it set to Enabled instead of Auto.

Also, I'm waiting to hear from the devs to ensure nothing is "broken" in this patch. My increases have been nothing short of amazing, but I'm still skeptical with games nowadays; there are almost always issues with patches. I'm continuing my benchmarks until I hear something from the devs.

*Edit:*
I ran some Real Time Benchmarks™ and I'm going to look over the data to make sure nothing fishy is going on with the results I got from the internal benchmark.

*Edit:*
I've run my Apples to Apples benchmark along with my Real Time Benchmarks™, and it appears the patch has broken the game. The vRAM usage showed me everything I needed to know: it appears the settings aren't being saved in the options menu.


----------



## Kana-Maru

I was still able to get some decent info from my results. I've updated my article with DX11 vs DX12 results and overclocked results in DX12.

http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks?showall=&start=4

We still have to wait for IO Interactive to fix the graphical settings, but my results were better than nothing I suppose.


----------



## Assirra

Quote:


> Originally Posted by *PontiacGTX*
> 
> Win 7 isn't fine, since DX11 is the biggest issue for games and Win 10 is the only way to get better performance. *Though for NVIDIA users it probably isn't, since they don't have any DX12 game worth playing.*


What does that even mean?
Did I miss a memo? Are DX12 games now exclusive to NVIDIA or AMD? Because that's exactly how you make it sound.


----------



## Mahigan

Quote:


> Originally Posted by *Assirra*
> 
> What does that even mean?
> Did I miss a memo? Are DX12 games now exclusive to NVIDIA or AMD? Because that's exactly how you make it sound.


I'm pretty sure he means that on Maxwell and older architectures it's best to play games using DX11 rather than DX12, because performance drops once the latter is used.


----------



## mtcn77

Quote:


> Originally Posted by *Assirra*
> 
> What does that even mean?
> Did I miss a memo? Are DX12 games now exclusive to NVIDIA or AMD? Because that's exactly how you make it sound.


He is being realistic. DirectX 12 has been a showcase for AMD so far. Why favour DirectX 12 when NVIDIA's average is higher under DirectX 11? And no, NVIDIA still hasn't optimised their drivers for DirectX 12...
Quote:


> Originally Posted by *Mahigan*
> 
> I'm pretty sure he means that on Maxwell and older architectures it's best to play games using DX11 rather than DX12, because performance drops once the latter is used.


The results are self-evident.


Spoiler: Warning: Spoiler!


----------



## zGunBLADEz

I don't know if they are testing this game right or not... but V-Sync doesn't work, and if you enable exclusive fullscreen the performance goes to hell...

The benchmark keeps crashing at the same spot, at the top of the table. I don't want to switch drivers yet, and I'm not sure it's driver-related. But according to the bench results so far, it was over 100 FPS average on the 980 Ti @ 1525 at that moment.

System Info:
CPU Vendor: GenuineIntel
CPU Brand: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
CPU Cores: 8
CPU Speed: 4.00GHz
System Memory: 6.07GB / 31.94GB
GPU: NVIDIA GeForce GTX 980 Ti
GPU Memory: 5.91GB

Graphics Settings:
RESOLUTION: 1920 x 1080
ResolutionWidth = 1920
ResolutionHeight = 1080
Refreshrate = 60
Fullscreen = 1
ExclusiveFullscreen = 0
VSync = 0
VSyncInterval = 1
Monitor = 0
Adapter = 0
Aspectratio = 0
WindowPosX = 0
WindowPosY = 0
WindowWidth = 1920
WindowHeight = 1080
Stereoscopic = 0
Stereo_Depth = 3.000000
Stereo_Strength = 0.030000
WindowMaximized = 0
FocusLoss = 0
UseGdiCursor = 0
ShadowQuality = 3
ShadowResolution = 2
TextureResolution = 2
TextureFilter = 4
SSAO = 1
MirrorQuality = 1
AntiAliasing = 1
LevelOfDetail = 3
MotionBlur = 0
Bokeh = 0
SuperSampling = 1.000000
Gamma = 1.000000
QualityProfile = 4

Benchmark Results:
---- CPU ----
11959 frames
103.89fps Average
6.31fps Min
168.26fps Max
9.63ms Average
5.94ms Min
158.60ms Max
---- GPU ----
11959 frames
104.27fps Average
6.48fps Min
2993.29fps Max
9.59ms Average
0.33ms Min
154.40ms Max

This game has some issues on my end... working on it. Just bought it yesterday.
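For anyone comparing dumps like the one above, the "Key = Value" section is trivial to parse; a small illustrative sketch (not part of the game or its benchmark tool) that pulls out the fullscreen/vsync flags in question:

```python
# Sketch only: parse a "Key = Value" settings dump like the benchmark
# prints, skipping lines without "=" (e.g. "RESOLUTION: 1920 x 1080").
def parse_settings(text):
    settings = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings

# Abbreviated excerpt of the dump above:
dump = """Fullscreen = 1
ExclusiveFullscreen = 0
VSync = 0
Refreshrate = 60"""

cfg = parse_settings(dump)
print(cfg["ExclusiveFullscreen"])  # -> 0 (borderless, not exclusive)
```

That makes it easy to confirm at a glance whether two benchmark runs actually used the same presentation settings.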


----------



## PontiacGTX

Quote:


> Originally Posted by *Assirra*
> 
> What does that even mean?
> Did I miss a memo? Are DX12 games now exclusive to NVIDIA or AMD? Because that's exactly how you make it sound.


Exactly as Mahigan and mtcn77 suggested: NVIDIA GPU owners probably won't buy current DX12 games, because the ones on the market have an AMD sponsor and the AMD GPUs perform better than the NVIDIA ones. DX11 games run just fine on NVIDIA GPUs, so those owners see no reason to upgrade to a platform/OS that isn't favoring them (as well as Microsoft's privacy policy - if they are delivering a free OS, you can expect something hidden). But a platform that uses DX11 only doesn't allow better utilization of hardware resources, and therefore better performance and headroom for improved graphics.


----------



## zGunBLADEz

So what I'm seeing with my own eyes here is a lie, right? 4K screenies, everything maxed out @ DX11, 1525/4001 980 Ti.


----------



## mtcn77

Yes, you will continue to have great DirectX 11 performance. The problem is, NVIDIA claimed they had great performance in DirectX 12, not just in 11. When DirectX 11 was announced, "it wasn't a big deal" for a while, but at the launch of DirectX 12 there were straightforward claims that they had the better DirectX 12 support. Judge for yourself.


----------



## zGunBLADEz

Quote:


> Originally Posted by *mtcn77*
> 
> Yes, you will continue to have great DirectX 11 performance. The problem is, NVIDIA claimed they had great performance in DirectX 12, not just in 11. When DirectX 11 was announced, "it wasn't a big deal" for a while, but at the launch of DirectX 12 there were straightforward claims that they had the better DirectX 12 support. Judge for yourself.


Well, so far the DX12 games that I have played are riddled with bugs, except KI3 - that one runs flawlessly. Gears, Quantum (especially this one) and Hitman are very buggy.


----------



## PontiacGTX

Ashes of the Singularity works fine on DX12; if the games have problems, it is down to the developers.


----------



## Kana-Maru

Quote:


> Originally Posted by *zGunBLADEz*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> I don't know if they are testing this game right or not... but V-Sync doesn't work, and if you enable exclusive fullscreen the performance goes to hell...
> 
> The benchmark keeps crashing at the same spot, at the top of the table. I don't want to switch drivers yet, and I'm not sure it's driver-related. But according to the bench results so far, it was over 100 FPS average on the 980 Ti @ 1525 at that moment.
> 
> System Info:
> CPU Vendor: GenuineIntel
> CPU Brand: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
> CPU Cores: 8
> CPU Speed: 4.00GHz
> System Memory: 6.07GB / 31.94GB
> GPU: NVIDIA GeForce GTX 980 Ti
> GPU Memory: 5.91GB
> 
> Graphics Settings:
> RESOLUTION: 1920 x 1080
> ResolutionWidth = 1920
> ResolutionHeight = 1080
> Refreshrate = 60
> Fullscreen = 1
> ExclusiveFullscreen = 0
> VSync = 0
> VSyncInterval = 1
> Monitor = 0
> Adapter = 0
> Aspectratio = 0
> WindowPosX = 0
> WindowPosY = 0
> WindowWidth = 1920
> WindowHeight = 1080
> Stereoscopic = 0
> Stereo_Depth = 3.000000
> Stereo_Strength = 0.030000
> WindowMaximized = 0
> FocusLoss = 0
> UseGdiCursor = 0
> ShadowQuality = 3
> ShadowResolution = 2
> TextureResolution = 2
> TextureFilter = 4
> SSAO = 1
> MirrorQuality = 1
> AntiAliasing = 1
> LevelOfDetail = 3
> MotionBlur = 0
> Bokeh = 0
> SuperSampling = 1.000000
> Gamma = 1.000000
> QualityProfile = 4
> 
> 
> 
> Benchmark Results:
> ---- CPU ----
> 11959 frames
> *103.89fps Average*
> 6.31fps Min
> 168.26fps Max
> 9.63ms Average
> 5.94ms Min
> 158.60ms Max
> 
> *---- GPU ----*
> 11959 frames
> *104.27fps Average*
> 6.48fps Min
> 2993.29fps Max
> 9.59ms Average
> 0.33ms Min
> 154.40ms Max
> 
> This game has some issues on my end... working on it. Just bought it yesterday.


Based on your results, I ran a test similar to yours, except I used SMAA and you used FXAA. I'm also only running my Fury X @ 1050MHz on the core [everything is basically stock].

My results were:

Fury X @ stock settings [1050Mhz Core] - DirectX 12 100% maxed [broken patch]

Benchmark Results:
---- CPU ----
12526 frames
*108.97fps Average*
13.18fps Min
186.87fps Max
9.18ms Average
5.35ms Min
75.85ms Max

*---- GPU ----*
12525 frames
*109.96fps Average*
15.98fps Min
540.66fps Max
9.09ms Average
1.85ms Min
62.58ms Max

I never use FXAA and that was our only difference.

When I overclocked my CPU to 4.8Ghz I got these results:

*---- GPU ----*
14380 frames
*126.02fps Average*
17.04fps Min
671.75fps Max
7.94ms Average
1.49ms Min
58.69ms Max

At the moment the game is having some serious issues with the graphical settings; we have to wait on another patch to address this. In my article I did notice that DX12 is performing well with my Fury X, but I can't give more accurate results until the issue is fixed. In the meantime I'm still enjoying the game, and it still looks pretty good.

Quote:


> Originally Posted by *zGunBLADEz*
> 
> So what im seeing with my own eyes here is a lie right? 4k screenies every max out @ dx11 1525/4001 980TI
> 
> 
> Spoiler: Warning: Spoiler!


No, it's not a lie; Hitman is a good-looking game. The FPS are higher because some settings aren't actually being applied from the options. I got *52 FPS average @ 4K* when using DX12 + stock Fury X. Playable and fun @ 4K. The frame times have increased as well. We still have to wait for an update though. I've noticed a few bugs here and there, but nothing game-breaking on my end.


----------



## Mahigan

Quote:


> Originally Posted by *zGunBLADEz*
> 
> Well, so far the DX12 games that I have played are riddled with bugs, except KI3 - that one runs flawlessly. Gears, Quantum (especially this one) and Hitman are very buggy.


Gears and Quantum now work perfectly on AMD hardware. In fact Quantum has always worked perfectly on AMD hardware...it has issues on NV hardware.

I'm waiting for Total War: Warhammer personally.


----------



## jologskyblues

DX12 has been a disappointment so far...for both AMD and NVidia hardware.


----------



## BradleyW

Quote:


> Originally Posted by *jologskyblues*
> 
> DX12 has been a disappointment so far...for both AMD and NVidia hardware.


Gears, Ashes, Hitman and Quantum show excellent gains on AMD cards after patches and drivers.


----------



## PontiacGTX

Quote:


> Originally Posted by *jologskyblues*
> 
> DX12 has been a disappointment so far...for both AMD and NVidia hardware.


It is mainly for NVIDIA, because AMD performs better in Hitman, AotS and Quantum Break when using DX12. Whether the games have bugs is a different topic from performance; the only game which works great with no bugs is AotS.


----------



## Liranan

Quote:


> Originally Posted by *jologskyblues*
> 
> DX12 has been a disappointment so far...for both AMD and NVidia hardware.


The biggest disappointment with DX12 has been UWP.


----------



## BradleyW

UWP and Phil Spencer both need to go away, right now!


----------



## PontiacGTX

Quote:


> Originally Posted by *BradleyW*
> 
> UWP and Phil Spencer both need to go away, right now!


That won't solve the bad DX12 console ports.


----------



## BradleyW

Quote:


> Originally Posted by *PontiacGTX*
> 
> That won't solve the bad DX12 console ports.


Erm......I never said it would.......?

My comment is purely in relation to UWP's limitations and Phil's absurd vision for PC gaming.


----------



## zGunBLADEz

Well, I would not take any of these games as showing hardware performance differences.

Especially badly optimized, half-baked games.

Quantum is the worst of the bunch, by the way.


----------



## TopicClocker

Quote:


> Originally Posted by *BradleyW*
> 
> Erm......I never said it would.......?
> 
> My comment is purely in relation to UWP's limitations and Phil's absurd vision for PC gaming.


I loved the original Xbox and the Xbox 360.

However, after they destroyed the Xbox brand with the Xbox One, and all of the failures they made before and after it for PC gamers, I'm pretty sure the only thing they are capable of is malice. No one understands how pissed I am about the Xbox One. The Xbox 360 was my favorite games console of all time, and they destroyed what was meant to be its successor; I will never forgive them for that.

Maybe I'm being too negative; they've made... interesting strides and a push for PC gaming lately. *I understand their master plan behind this push, and with the way it is going, this is going to end poorly for PC gaming and PC gamers.*

Simply put, they want control. They attempted to push for this with the Xbox One and no one was having it; they lost PC gaming to Valve and GFWL failed. Now they're trying again by bringing Xbox games to PC, which has advantages and many disadvantages.

*The Universal Windows Platform also makes it a lot easier for them to release a new Xbox console which can play these games; such a console has been hinted at repeatedly and is highly probable.*

So they're not only thinking about the future of PC Gaming for themselves, but also the future of the Xbox which is in serious jeopardy.
*If they can get both PC Gamers and Xbox Gamers under their wings then they have a solid strategy for the future.*

I was pretty much on-board with what they were doing, but now I've seen what they're trying to do, and realized that the future isn't bright.

Almost everything that came out of the UWP has been terrible, except for Killer Instinct and Rise of the Tomb Raider. (Although that has limitations imposed upon it due to UWP).
They rushed games out like Gears of War Ultimate Edition and Quantum Break. The Coalition and Remedy are mostly to blame, but Microsoft surely played a part in this.
Alongside applying restrictions on-top of these games which may be slightly lifted.

This makes for an absolutely terrible first impression on your potential customers, most of whom are well informed about Microsoft's history with PC gaming.
It's especially bad and embarrassing *when the founder of Epic Games, a company with a long history on Windows and the Xbox, calls for the demise of the Universal Windows Platform.*

The demise of such a platform may not be the best option, as there are many ways of rectifying the problems which are outstanding, providing the ability to shape a brighter future.
But as it currently stands there are serious problems to be sorted out. Can we even trust them again with PC Gaming?

It's good to have competition, as it makes things better for consumers and reduces the chance of one company having a monopoly. But the thing is, *Microsoft already has a monopoly on PC gaming: to play the majority of PC games you need to run the Windows operating system. But that's another complicated matter.*


----------



## mtcn77

Quote:


> Originally Posted by *TopicClocker*
> 
> I loved the original Xbox and the Xbox 360.
> 
> However after destroying the Xbox brand with the Xbox One and all of the failures they made after and before it for PC Gamers I'm pretty sure the only thing they are capable of is malicious, no one understands how pissed I am about the Xbox One. Goddammit, the Xbox 360 was my favorite games console of all time, and they destroyed what was meant to be the successor to it, I will never forgive them for that.
> 
> Maybe I'm being too negative, they've made... Interesting strides and a push for PC Gaming lately, I understand their master plan behind this push, and with the way how it is going, this is going to end poorly for PC Gaming and PC Gamers.
> 
> Simply put, they want control, they attempted to push for this with the Xbox One and no one was having it, they lost PC Gaming to Valve and GFWL failed, now they're trying again, by bringing Xbox games to PC, which has advantages and many disadvantages.
> 
> This Universal Windows Platform also makes it a lot easier for them to release a new Xbox console which can play these games, this new console has been hinted at repeatedly, it is highly probable.
> 
> So they're not only thinking about the future of PC Gaming for themselves, but also the future of the Xbox which is in serious jeopardy.
> If they can get both PC Gamers and Xbox Gamers under their wings then they have a solid strategy for the future.
> 
> I was pretty much on-board with what they were doing, but now I've seen what they're trying to do, and realized that the future isn't bright.
> 
> Almost everything that came out of the UWP has been terrible, except for Killer Instinct and Rise of the Tomb Raider. (Although that has limitations imposed upon it due to UWP).
> They rushed games out like Gears of War Ultimate Edition and Quantum Break. The Coalition and Remedy are mostly to blame, but Microsoft surely played a part in this.
> Alongside applying restrictions on-top of these games which may be slightly lifted.
> 
> This makes for an absolutely terrible first impression to your potential customers, most of which are well informed on the Microsoft's past history with PC Gaming.
> It's especially bad and embarrassing when the founder of Epic Games, a company which has had a long history for Windows and the Xbox calls for the demise of the Universal Windows Platform.
> 
> The demise of such a platform may not be the best option, as there are many ways of rectifying the problems which are outstanding, providing the ability to shape a brighter future.
> But as it currently stands there are serious problems to be sorted out. Can we even trust them again with PC Gaming?
> 
> It's good to have competition, as it makes things better for the consumers, and it reduces the chance of one company having a monopoly. But the thing is, Microsoft already have a monopoly on PC Gaming, to play the majority of PC Games you need to run the Windows operating system, but that's another complicated matter.


Nvidia's failure to bring VR to the Titan had nothing to do with Microsoft. The blame rests squarely on their shoulders, but not so according to the "twimtbp" squad. Microsoft getting some "control" is actually rather good since any ensuing failures will actually be their responsibility in the first place.


----------



## BradleyW

*The Hitman devs have locked out 120/144Hz support via hard-coding. The reg tweaks no longer work after patch 1.1.0! They are forcing us to use 60Hz. They never wanted us to use higher refresh rates in the first place, given that we had to make reg tweaks at all.*

Can someone upload their Hitman XML file please?


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> *The Hitman Devs have locked out 120/144hz support via hard coding. The reg tweaks no longer work after patch 1.1.0! They are forcing us to use 60Hz. They never wanted us to use it in the first place since we had to make reg tweaks.*
> 
> Can someone upload their Hitman XML file please?


They updated the game, but I don't know if they fixed your problem. I know they fixed some bugs and the graphical issues. Have you tried anything after downloading the latest patch to see if it fixed anything? I've never seemed to be limited to 60Hz.

I've updated my article with more info after the patch "fix".
Hitman Patch 1.1.0 [Fix]

The noticeable gain is at 4K, 100% max settings. My FPS average in DX12 increased by 22%! I'm using the latest AMD drivers. The charts are in the link above.

I hope the increases continue.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> They updated the game, but I don't know if they fixed your problem. I know they fixed some bugs and the graphical issues. Have you tried anything after downloading the latest patch to see if it fixed anything? I've never seemed to be limited to 60Hz.
> 
> I've updated my article with more info after the patch "fix".
> Hitman Patch 1.1.0 [Fix]
> 
> The noticeable gain is at 4K, 100% max settings. My FPS average in DX12 increased by 22%! I'm using the latest AMD drivers. The charts are in the link above.
> 
> I hope the increases continue.


The latest patch did not fix the issue. I think you are confusing FPS with refresh rate. Do you play the game on a screen which runs higher than 60Hz, such as a 120Hz screen?


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> The latest patch did not fix the issue. I think you are confusing FPS with refresh rate. Do you play the game on a screen which runs higher than 60Hz, such as a 120Hz screen?


Yes I do. 144Hz.

When I game at 4K it's only 60hz.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> Yes I do. 144Hz.
> 
> When I game at 4K it's only 60hz.


Right, I see.
Anyway, I've found a fix again. I will be making a new thread with the whole story plus the fix.

*Edit: New fix is live!*
http://www.overclock.net/t/1598735/how-to-enable-higher-than-60hz-refresh-rates-in-hitman-2016


----------



## BradleyW

Delete..


----------



## BradleyW

After patch 1.1.2 I am no longer able to get DX12 mode to run.

DX11 boots up fine, however DX12 mode won't start. I don't get a single error message. Hitman.exe just hangs around in task manager, using 12% CPU usage and 0.8GB of RAM. I then have to end the task. I tried running it in Admin mode. This had no effect.

290X.
Windows 10 64 bit.
AMD 16.4.2.


----------



## Kana-Maru

Well, for the record, I'm not confusing frame rate with refresh rate. I simply don't have the same issues you and others are having. Every setup is different, and a lot of the issues I see on Steam don't apply to me either.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> Well, for the record, I'm not confusing frame rate with refresh rate. I simply don't have the same issues you and others are having. Every setup is different, and a lot of the issues I see on Steam don't apply to me either.


When you launch the game, does the screen's OSD report 144 Hz without any need for tweaks? If so, I believe your EDID or monitor driver is able to run the higher refresh rate with customized extension blocks that allow applications to always detect the proper rate.


----------



## Kana-Maru

I just tested it to be 100% sure and yes, OSD reports 144hz. I've never had to tweak anything.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> I just tested it to be 100% sure and yes, OSD reports 144hz. I've never had to tweak anything.


Thank you for checking. And I assume that after Patch 1.1.2 you are still able to launch in DX12 mode? It seems most people can't.


----------



## Kana-Maru

Nope. I can't launch DX12 normally with 1.1.2 without fixes found by users on Steam. There's a topic with the fixes on the Steam forums.

-Run Steam as Admin and DX12 works [it works randomly for me]
-Set Launch Options: -SKIP_LAUNCHER

The game was working fine yesterday after the hotfix [the 1.1.0 hotfix], and I'm not sure what this latest update [1.1.2] was supposed to fix or add. DX11 works fine, but I get more FPS @ 4K with DX12, so I normally play in DX12.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> Nope. I can't launch DX12 normally with 1.1.2 without fixes found by users on Steam. There's a topic with the fixes on the Steam forums.
> 
> -Run Steam as Admin and DX12 works [it works randomly for me]
> -Set Launch Options: -SKIP_LAUNCHER
> 
> The game was working fine yesterday after the hotfix [the 1.1.0 hotfix], and I'm not sure what this latest update [1.1.2] was supposed to fix or add. DX11 works fine, but I get more FPS @ 4K with DX12, so I normally play in DX12.


-SKIP_LAUNCHER forces DX11. See for yourself via Task Manager: right-click the process, click Properties, then look at the .exe's file directory. It leads back to the retail folder for DX11.


----------



## Kana-Maru

I just checked, and it does force DX11. There's nothing wrong with DX11, but I'd like DX12. I can't wait for developers to start focusing solely on the new DX12 API; trying to do both isn't working out. I'd hate to see a game ship with Vulkan support and still offer DX11. It seems like time will be wasted implementing the old, limited DX11.

I guess we'll be waiting another week for a hotfix.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> I just checked, and it does force DX11. There's nothing wrong with DX11, but I'd like DX12. I can't wait for developers to start focusing solely on the new DX12 API; trying to do both isn't working out. I'd hate to see a game ship with Vulkan support and still offer DX11. It seems like time will be wasted implementing the old, limited DX11.
> 
> I guess we'll be waiting another week for a hotfix.


Before the patch, DX11 was fine. Afterwards, many users, including me, reported significant FPS reductions, even Nvidia users. Hopefully DX11 continues to work OK for you. I'm sick of waiting for IO to release hotfix after hotfix for silly little issues.


----------



## Kana-Maru

I haven't played the game since I ran my benchmarks, and that was when I was running the hotfix. I haven't played on 1.1.2, but I've read about some issues. I'd have to play the game to see if I have any performance issues.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> I haven't played the game since I ran my benchmarks, and that was when I was running the hotfix. I haven't played on 1.1.2, but I've read about some issues. I'd have to play the game to see if I have any performance issues.


I only assumed you had no issues because you stated "there's nothing wrong with DX11". Can you see my confusion?


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> I only assumed you had no issues because you stated "there's nothing wrong with DX11". Can you see my confusion?


Yeah I see. I meant there was nothing wrong the last time I played it. I'm going to be playing it shortly so I'll see how good or bad DX11 is.

*Edit:*
I just played nearly 2 hours and had zero problems with DX11. Everything ran perfectly. No glitches or crashes.
This Fury X is spoiling me with the low temps. My temps hovered around 42°C-45°C.


----------



## Rickles

Meh, this 980 Ti owes me nothing, it's been a great card!


----------



## BradleyW

HITMAN Benchmark

290X (16.4.2) - 2560x1080, Max Details, SMAA.

Average FPS:

Patch 1.1.0
DX11 = 61.5
DX12 = 68.6

Patch 1.1.2
DX11 = 55
DX12 = 62

Paris - Fashion show overlooking audience:

Patch 1.1.0
DX11 = 68
DX12 = 72

Patch 1.1.2
DX11 = 49
DX12 = 55

*Performance is getting worse and worse with each patch.*
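To put the regression in numbers, here's a small Python sketch (the `pct_change` helper and the result table are just for illustration) computing the patch-to-patch change from the averages above:

```python
# Percentage change in average FPS between patches 1.1.0 and 1.1.2,
# using the 290X numbers quoted above.
def pct_change(before: float, after: float) -> float:
    """Return the percentage change from `before` to `after`."""
    return (after - before) / before * 100.0

results = {
    "Benchmark DX11": (61.5, 55.0),
    "Benchmark DX12": (68.6, 62.0),
    "Paris DX11": (68.0, 49.0),
    "Paris DX12": (72.0, 55.0),
}

for name, (v110, v112) in results.items():
    print(f"{name}: {pct_change(v110, v112):+.1f}%")
```

The drops work out to roughly 10% in the built-in benchmark and over 20% in the Paris scene, which matches the "worse with each patch" impression.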


----------



## Robenger

Quote:


> Originally Posted by *BradleyW*
> 
> HITMAN Benchmark
> 
> 290X (16.4.2) - 2560x1080, Max Details, SMAA.
> 
> Average FPS:
> 
> Patch 1.1.0
> DX11 = 61.5
> DX12 = 68.6
> 
> Patch 1.1.2
> DX11 = 55
> DX12 = 62
> 
> Paris - Fashion show overlooking audience:
> 
> Patch 1.1.0
> DX11 = 68
> DX12 = 72
> 
> Patch 1.1.2
> DX11 = 49
> DX12 = 55
> 
> *Performance is getting worse and worse with each patch.*


Nvidia threatened to sue.


----------



## Kana-Maru

lol ^

Quote:


> Originally Posted by *BradleyW*
> 
> HITMAN Benchmark
> 
> 290X (16.4.2) - 2560x1080, Max Details, SMAA.
> 
> 
> Average FPS:
> 
> Patch 1.1.0
> DX11 = 61.5
> DX12 = 68.6
> 
> Patch 1.1.2
> DX11 = 55
> DX12 = 62
> 
> Paris - Fashion show overlooking audience:
> 
> Patch 1.1.0
> DX11 = 68
> DX12 = 72
> 
> Patch 1.1.2
> DX11 = 49
> DX12 = 55
>
> *Performance is getting worse and worse with each patch.*


That sucks. They fixed DX12 for 1.1.2 now [thank goodness], but I haven't gotten around to running any benchmarks yet. I hope this patch doesn't ruin my 22% FPS average increase I got while playing @ 4K!

I went from [Day 1] *36.02fps* Average to [Patch 1.1.0] *43.82fps* Average. I hope they didn't kill performance. I was enjoying the increase.

I'll have to run the internal benchmark again to see what my results are.
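For what it's worth, that 22% figure checks out against the two averages quoted above; a quick sketch:

```python
# Day-one vs. patch 1.1.0 average FPS at 4K in DX12, from the post above.
day1, patched = 36.02, 43.82
gain = (patched - day1) / day1 * 100.0
print(f"{gain:.1f}% increase")  # roughly 21.7%, in line with the ~22% quoted
```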


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> lol ^
> That sucks. They fixed DX12 for 1.1.2 now [thank goodness], but I haven't gotten around to running any benchmarks yet. I hope this patch doesn't ruin my 22% FPS average increase I got while playing @ 4K!
> 
> I went from [Day 1] *36.02fps* Average to [Patch 1.1.0] *43.82fps* Average. I hope they didn't kill performance. I was enjoying the increase.
> 
> I'll have to run the internal benchmark again to see what my results are.


Let me know what kind of result you get. The performance issues seem to be widespread on the Hitman IO Interactive forum.


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> Let me know what kind of result you get. The performance issues seem to be widespread on the Hitman IO Interactive forum.


I will post my results for sure. It seems there are issues on Steam forums as well about DX12 performance.


----------



## Kana-Maru

My Hitman 4K results were still the same:

Hitman 100% Max Settings - Benchmark
Patch 1.1.0 = 43.43fps Average
Patch 1.1.2 = 43.61fps Average

There appears to be virtually no difference in my FPS average.

I've uploaded my Episode 2 Sapienza 4K Real Time Benchmark™ [DX12]
http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks?showall=&start=5

I'll run a DX11 benchmark in Sapienza for comparisons. Some of these websites results are all over the place and I'm trying to set the record straight. At least for the Fury X.


----------



## BradleyW

Quote:


> Originally Posted by *Kana-Maru*
> 
> My Hitman 4K results were still the same:
> 
> Hitman 100% Max Settings - Benchmark
> Patch 1.1.0 = 43.43fps Average
> Patch 1.1.2 = 43.61fps Average
> 
> There appears to be virtually no difference in my FPS average.
> 
> I've uploaded my Episode 2 Sapienza 4K Real Time Benchmark™ [DX12]
> http://www.overclock-and-game.com/news/pc-gaming/42-hitman-directx-12-fury-x-benchmarks?showall=&start=5
> 
> I'll run a DX11 benchmark in Sapienza for comparisons. Some of these websites results are all over the place and I'm trying to set the record straight. At least for the Fury X.


I've seen various Fury X owners state that their FPS has dropped after this patch. You can't set any record straight given that each system has a plethora of variables which affect the outcome. So some Fury X owners won't have issues, whilst others most certainly will. This is how it's always been. If all Fury Xs were operating fine, I would have accused AMD of gimping the 290X to boost Polaris sales. I would not put it past them. But that's just me; I see the worst in companies.


----------



## Kana-Maru

Quote:


> Originally Posted by *BradleyW*
> 
> I've seen various Fury X owners state that their FPS has dropped after this patch. You can't set any record straight given that each system has a plethora of variables which affect the outcome. So some Fury X owners won't have issues, whilst others most certainly will. This is how it's always been. If all Fury Xs were operating fine, I would have accused AMD of gimping the 290X to boost Polaris sales. I would not put it past them. But that's just me; I see the worst in companies.


I'm running an old gaming rig based on 2008 architecture with a 2015 high-end GPU. I can't speak for everyone on newer platforms; I can only speak for myself and show my results. I'm setting the record straight with the Fury X results, or shall I say from a "working as expected" Fury X. I expect newer tech to give better results, since the architectures are up to date and the CPUs generally have faster IPC and other beneficial features.

If some gamers are having issues with their Fury X, I completely understand. I'm not having issues, as you can see. My performance did NOT decrease @ 4K in *DX12* with this patch [1.1.2], and I ran my 1080p *DX11* benchmarks [Episode 2 Sapienza]. Looking over the data quickly, it appears I averaged 110fps. I went through different areas of the level and caused chaos. I wasn't just staring at a brick wall or a floor; I actually played the game.

I'm using Crimson 16.4.2 Beta Drivers by the way.


----------



## Thoth420

I have pretty good performance in DX11 and DX12 on my Fury X (99% of the time). The first time I messed with DX12 last night, though, I noticed stutter on any gun zoom, but not the normal zoom, and it only happens periodically. I also had one benchmark run where my minimum FPS was 0.25 (the screen froze for approximately 3 to 4 seconds) instead of the usual 15 to 18. No clue what happened there, but that was DX11 post-patch with everything maxed other than AA (FXAA), and it only happened in one of three runs with those settings.

I had a question for you people playing on PC, since I just recently migrated from the console version (I did not carry my account over, as my PC games are attached to a totally separate email, so everything is locked and I had to start over, which I prefer to do with games anyway): do you all get an error in your Event Viewer every time you initialize the game via Steam or via the .exe that says something along the lines of *failed to install script, runasadmin vdf.(steam game id number)*? I get this every single time I launch, but the game has never crashed on me at all so far. It was missing binaries after install the first time I tried to run it, but installing the redists manually fixed that, no problem. I already tried a reinstall of the game and put it on my secondary drive, which had no effect on the error issue.


----------

