# [VC]GTX 1060 specifications leaked - faster than RX 480



## Sequences

Quote:


> Originally Posted by *iLeakStuff*


Graphs like this should be illegal.


----------



## Shiftstealth

Quote:


> Originally Posted by *Sequences*
> 
> Graphs like this should be illegal.


QFT.

10% more performance represented as double lol.
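To put that in numbers (a hypothetical sketch using the slide's claimed 1.15x ratio, not any measured data), here is how a truncated axis turns a small lead into a doubled bar:

```python
# Hypothetical sketch: how a truncated y-axis exaggerates a small lead.
# The 1.00 / 1.15 values mirror the slide's claimed ratio, nothing measured.

def bar_height(value, axis_min):
    """Visual bar length when the axis starts at axis_min instead of 0."""
    return value - axis_min

rx480, gtx1060 = 1.00, 1.15

# Honest chart: axis starts at 0, so the bars differ by the true 15%.
honest = bar_height(gtx1060, 0) / bar_height(rx480, 0)

# Marketing chart: axis starts at 0.85, so the same 15% gap draws
# one bar twice as tall as the other.
truncated = bar_height(gtx1060, 0.85) / bar_height(rx480, 0.85)

print(round(honest, 2))     # 1.15
print(round(truncated, 2))  # 2.0
```

Start the axis at 0 and the bars show the honest 1.15x; start it at 0.85 and the identical numbers look like double.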


----------



## Artikbot

Quote:


> Originally Posted by *iLeakStuff*
> 
> Oh man, this will turn ugly for AMD


Let's see what they have to offer! Especially since the 1070 and the 1080 are unobtainium in Europe, at just under 500€ and just under 800€ MSRP.

At just 15% faster, this better start at 300€ tops for the base model (assuming they do various variants à la 4GB/8GB) or they will rot on the shelves.


----------



## JunkaDK

This must be a new low from Nvidia, if the graphs come from them directly.


----------



## Klocek001

Quote:


> Originally Posted by *Shiftstealth*
> 
> QFT.
> 
> 10% more performance represented as double lol.


lol nvidia now targets


----------



## azanimefan

Founders Edition for MSRP $330 (actually $350, as it will be a paper launch like the 1080/1070), normal edition for MSRP $230 (actually $270, since, again, mostly a paper launch).


----------



## Aussiejuggalo

Leaked Nvidia graph showing how much better an Nvidia product is against an AMD one. Think I'll wait for the OC3D review.


----------



## TheBlademaster01

Quote:


> Originally Posted by *Artikbot*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Let's see what they have to offer! Especially since the 1070 and the 1080 are unobtainium in Europe, at just under 500€ and just under 800€ MSRP.
> 
> At just 15% faster, this better start at 300€ tops for the base model (assuming they do various variants à la 4GB/8GB) or they will rot on the shelves.


The sad thing is that at €400 people would probably still buy it, which is why we can't have properly priced products. Especially considering how many people think GTX 970/980 performance at $199 is "disappointing", while of course not taking the price tag into account...


----------



## lolfail9001

The graph is probably fake, I think; nV has no reason to compare itself to AMD in marketing its products.


----------



## lolfail9001

Quote:


> Originally Posted by *TheBlademaster01*
> 
> The sad thing is that at €400 people would probably still buy it, which is why we can't have properly priced products. Especially considering how many people think GTX 970/980 performance at $199 is "disappointing", while of course not taking the price tag into account...


Considering that, according to Gibbo, the 4GB RX 480 is discontinued, $199 is nothing but marketing trickery.


----------



## Cybertox

Quote:


> Originally Posted by *iLeakStuff*


lol, are they being serious with this slide?


----------



## andrews2547

That graph doesn't even make sense.

15% more performance based on what?

25% more VR performance based on what?

45% more power efficient based on what?


----------



## NFL

Quote:


> Originally Posted by *Cybertox*
> 
> lol, are they being serious with this slide?


I like WhyCry's graph much better


----------



## maarten12100

But what is the price...
When will it launch...
Will there be stock...

A value card needs to have value


----------



## DaaQ

So 15% faster than the RX 480 would put it at roughly what? 980/390X performance? For $300? Which is how many years ago performance? 2-4, right? Cause the 390X = 290X pretty much.
Am I doing this right?
This is much disappointment; may as well add $60 for a 980 Ti or $80 more for a 1070.


----------



## FLCLimax

all those pics look shopped.


----------



## Derp

I REALLY wanted to go Freesync because I can't stand the G-sync premium but the floppy RX 480 launch pushes me towards this card + a G-Sync panel for a nice fat price premium.

These graphs are hilarious though.


----------



## FLCLimax

Quote:


> Originally Posted by *DaaQ*
> 
> So 15% faster than the RX 480 would put it at roughly what? 980/390X performance? For $300? Which is how many years ago performance? 2-4, right? Cause the 390X = 290X pretty much.
> Am I doing this right?
> This is much disappointment; may as well add $60 for a 980 Ti or $80 more for a 1070.


yea, solid logic that was applied to the 480 after all.


----------



## Cybertox

Quote:


> Originally Posted by *NFL*
> 
> I like WhyCry's graph much better


Yeah that graph makes more sense. But we have yet to see what this card is truly capable of. Nvidia is known for making bold statements which do not necessarily live up to the hype.


----------



## FLCLimax

Quote:


> Originally Posted by *Derp*
> 
> I REALLY wanted to go Freesync because I can't stand the G-sync premium but the floppy RX 480 launch pushes me towards this card + a G-Sync panel for a nice fat price premium.
> 
> These graphs are hilarious though.


nobody saw this coming

Only took one fake graph.


----------



## umeng2002

HA!

Maybe this is why nVidia thinks 3.5 and 4 are the same number


----------



## WorldExclusive

A real Nvidia slide will compare this to a 960 to show 50%+ gains. Wait for that one.


----------



## AuraNova

Why do I get the feeling we might be seeing another train arrive at the station very soon?


----------



## DaaQ

Quote:


> Originally Posted by *WorldExclusive*
> 
> A real Nvidia slide will compare this to a 960 to show 50%+ gains. Wait for that one.


No the wccf will come first.


----------



## seabiscuit68

Quote:


> Originally Posted by *maarten12100*
> 
> But what is the price...
> When will it launch...
> *Will there be stock...*
> 
> A value card needs to have value


No


----------



## Clocknut

48 ROPs @ 1.7GHz is going to give a real nice pixel fill rate.
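Back-of-the-envelope (assuming the leaked 48 ROPs and a roughly 1.7GHz clock are accurate, and the usual rough rule of one pixel per ROP per clock):

```python
# Back-of-the-envelope peak pixel fill rate from the leaked specs.
# Assumes one pixel per ROP per clock, which is the usual rough rule.

rops = 48          # leaked ROP count
clock_ghz = 1.7    # rumored core clock in GHz

fill_rate_gpix = rops * clock_ghz  # Gpixels/s
print(round(fill_rate_gpix, 1))    # 81.6
```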


----------



## umeng2002

Hey guys, nVidia is now claiming it's 3 times as fast


----------



## Derp

Quote:


> Originally Posted by *FLCLimax*
> 
> nobody saw this coming
> 
> Only took one fake graph.


It wasn't the graph, it was the awful RX480 launch. I made it clear that I wanted to avoid supporting Nvidia because of their business choices regarding g-sync.


----------



## BradleyW

If the GTX 1060 provides 10% better performance, better cooling, OC headroom and a lower TDP, this might stomp out the RX 480 entirely, given the issues that GPU currently faces!


----------



## FLCLimax

Quote:


> Originally Posted by *BradleyW*
> 
> If the GTX 1060 provides 10% better performance, better cooling, OC headroom and a lower TDP, this might stomp out the RX 480 entirely, given the issues that GPU currently faces!


Agreed.


----------



## lolfail9001

Quote:


> Originally Posted by *FLCLimax*
> 
> they're always right. the 4850 sucked, the 4870 sucked, the 5870 sucked, the 5770 sucked, on and on. you're right and i feel your pain. AMD brutally sucking and forcing you to run back to good ol Nvidia who strokes you just right. i understand.


Man you look mad.

Either way, the slides are fake, and I'll get a signature if the 1060 beats the RX 480, so I won't get a signature.


----------



## DaaQ

Quote:


> Originally Posted by *Derp*
> 
> It wasn't the graph, it was the awful RX480 launch. I made it clear that I wanted to avoid supporting Nvidia because of their business choices regarding g-sync.


Remember that when you have that g-sync monitor and FE in your cart at checkout. It will make you feel better for AMD making you buy those cursed NV items.


----------



## Nestala

Made a proper chart:


----------



## davidelite10

Quote:


> Originally Posted by *BradleyW*
> 
> If the GTX 1060 provides 10% better performance, better cooling, OC headroom and a lower TDP, this might stomp out the RX 480 entirely, given the issues that GPU currently faces!


This is what matters to me.

Hopefully AIB rx480s fix the issues it's currently having.
I wonder how well this thing will oc!


----------



## Klocek001

AMD fans are getting sulky that the RX 480 is gonna be the king of the low-end for a week.
With performance-per-watt graphs showing Pascal at more than 1.5x over Polaris, what the hell did you expect?


----------



## zealord

Nvidia is probably very happy with how the RX 480 turned out.

Let's see how they price the GTX 1060.

My bet: slightly higher than the RX 480, like $250-$299.

Is it confirmed that there will be 3GB and 6GB cards, or is it just a rumor?


----------



## PlugSeven

Quote:


> Originally Posted by *davidelite10*
> 
> This is what matters to me.
> 
> Hopefully AIB rx480s fix the issues it's currently having.
> *I wonder how well this thing will oc!*


A fair bit I'm guessing, but Pascal doesn't scale as well as Maxwell, does it?


----------



## davidelite10

Quote:


> Originally Posted by *PlugSeven*
> 
> A fair bit I'm guessing, but Pascal doesn't scale as well as Maxwell, does it?


Not as much.

My 1080 reaches 2130MHz (2050MHz depending on GPU utilization) and it's not as significant a performance increase as a Maxwell OC.

It still causes a good performance increase though!


----------



## Slomo4shO

Quote:


> Originally Posted by *zealord*
> 
> Nvidia is probably very happy with how the RX 480 turned out.
> 
> Let's see how they price the GTX 1060.
> 
> My bet: slightly higher than the RX 480, like $250-$299.


Huang did state that the competition sets the price


----------



## PlugSeven

Quote:


> Originally Posted by *Slomo4shO*
> 
> Huang did state that the competition sets the price


He also said something about what the market can bear; it flexes to suit.


----------



## tpi2007

Quote:


> Originally Posted by *NFL*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Cybertox*
> 
> lol, are they being serious with this slide?
> 
> 
> 
> I like WhyCry's graph much better

Much better. Every time a manufacturer comes out with graphs like the one in the OP, at least a dozen sites should recreate them with the bars starting from 0, call them out on it, and say: "this is how it should be done".


----------



## maltamonk

Regardless of the performance, the desired target market simply does not want to spend over $250 for a GPU. That's an artificial line, I know, but not everyone and their brother is a hardware enthusiast, and limits are set price first and foremost.

If they price this with the 480 in the 200-250 bracket and it is indeed on par/10% over, they'll have the best xx60 card in generations. Above 250 and they'll miss the target market.


----------



## Orthello

Quote:


> Originally Posted by *BradleyW*
> 
> If the GTX 1060 provides 10% better performance, better cooling, OC headroom and a lower TDP, this might stomp out the RX 480 entirely, given the issues that GPU currently faces!


It's Pascal; it'll clock maybe 10% better, if that. Meanwhile the AIB 480s are coming, and we know the heavily capped reference cards can already clock 5-10% higher. Once the next 15% opens up we are almost in DX12 1070 territory, long past 1060 territory. The 1060 vs the AIB 480s is going to look quite weak for the price, and in DX12 it will look really poor. If the 1060 is only on par with the 980 in DX12, they are going to have some issues.

Couple that with 6GB of RAM vs 8GB, and less memory bandwidth as well. I just don't see this thing blowing the 480 out of the water at all if the specs are right.

It will beat the reference 480 for sure; the AIB 480s it will have a very hard time beating convincingly.


----------



## Derp

Quote:


> Originally Posted by *maltamonk*
> 
> If they price this with the 480 in the 200-250 bracket and it is indeed on par/10% over, they'll have the best xx60 card in generations.


I'm hoping this is the case after their recent 60 series card. Pretty much everyone agrees the GTX 960 was a very bad card when it came to fps per dollar.


----------



## Cakewalk_S

Wonder how the 1060 will stack up to a 970....


----------



## lolfail9001

Quote:


> Originally Posted by *Cakewalk_S*
> 
> Wonder how the 1060 will stack up to a 970....


Well, if those slides are legit, the 1060 beats the 970.

Very big if, but they could be legit.


----------



## Cakewalk_S

Well, not good news for AMD...

1060... maybe the same level of performance as a 480. Let's forget the slides for a moment... Same performance, maybe slightly more, maybe better overclocking... and I bet for the same price too... or around $300...


----------



## prznar1

Quote:


> Originally Posted by *Sequences*
> 
> Graphs like this should be illegal.


IIRC Nvidia started those graphs.

Anyway, the GTX 1060 will be faster than the RX 480. That was damn obvious from the start. But will it be significantly faster in DX12? Doubtful.


----------



## Artikbot

Quote:


> Originally Posted by *Cakewalk_S*
> 
> Well, not good news for AMD...
> 
> 1060... maybe the same level of performance as a 480. Let's forget the slides for a moment... *Same performance, maybe slightly more, maybe better overclocking... and I bet for the same price too... or around $300...*


Around $300 is hardly RX 480 price.

The bolded part is exactly how the beyond ******ed hype train for the RX 480 started. AMD said '1.5 times as efficient as R9 390 at the same performance level'.

Then people started speculating with performance levels, started speculating with overclockability, clock speeds, and in the end it turned into a 1070 killer that makes it beyond 1.5GHz on the stock vapour chamber cooler and only sets you back $199.

Only the price turned out to be correct, of course.

So please, stop it with the nonsensical hype. nVidia will tell us where the card slots into the market. Let's appreciate those as and when.


----------



## TheBlademaster01

I think it would be in the customers' favour if nVidia failed this time around. It would deflate the market a bit.


----------



## gigafloppy

Quote:


> Originally Posted by *TheBlademaster01*
> 
> I think it would be in the customers' favour if nVidia failed this time around. It would deflate the market a bit.


I can see that happening considering Nvidia's current 1070/1080 supply problems. How can they manufacture enough 1060s when they can't even make enough 1070s? The price will go through the roof and everybody will buy cheap RX480s!


----------



## davidelite10

I also don't get the butthurt about these graphs.

They have the numbers stated clearly; as long as you can tell 1.2 is bigger and what the ratio is compared to 1.0, they're not misleading.
Not like 2x RX 480s beating a 1080 in Ashes.


----------



## scorch062

Speculating about the price, will there be same Founders Edition BS like 1070 and 1080 have?

If yes, then i assume the FE price will be somewhere between 300-380 $

If no, then there might be some uproar from early 1070/1080 adopters.


----------



## xzamples

"faster than a 480"

"costs more than a 480"


----------



## andrews2547

Quote:


> Originally Posted by *davidelite10*
> 
> I also don't get the butthurt about these graphs.
> 
> They have the numbers stated clearly; as long as you can tell 1.2 is bigger and what the ratio is compared to 1.0, they're not misleading.
> Not like 2x RX 480s beating a 1080 in Ashes.


They have the numbers clearly stated, but the slide doesn't give a reference.

What does 1.15x more performance mean? What is that based on?

What does 1.25x more VR performance mean? What is that based on?

What does 1.45x more power efficient mean? What is that based on?

The AMD graphs for the RX 480 were pretty bad, but not as bad as this graph. The RX 480 graphs at least explained how it was better: 62.5fps in Ashes of the Singularity DX12 vs 58.5fps for the GTX 1080, instead of "RX 480 x2 has 1.07x more performance than a GTX 1080."
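For reference, that 1.07x figure is nothing more than the ratio of the two quoted frame rates:

```python
# The "1.07x" figure is just the ratio of the two quoted frame rates
# from AMD's Ashes of the Singularity DX12 demo.

rx480_x2_fps = 62.5  # 2x RX 480
gtx1080_fps = 58.5   # single GTX 1080

print(round(rx480_x2_fps / gtx1080_fps, 2))  # 1.07
```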


----------



## Slomo4shO

GTX 970 was $330
GTX 960 was $200

GTX 1070 is $380
GTX 1060 will likely be $250

Same $130 gap between products.

Also, the Nano now has competition:


----------



## davidelite10

Quote:


> Originally Posted by *andrews2547*
> 
> They have the numbers clearly stated, but the slide doesn't give a reference.
> 
> What does 1.15x more performance mean? What is that based on?
> 
> What does 1.25x more VR performance mean? What is that based on?
> 
> What does 1.45x more power efficient mean? What is that based on?
> 
> The AMD graphs for the RX 480 were pretty bad, but not as bad as this graph. The RX 480 graphs at least explained how it was better: 62.5fps in Ashes of the Singularity DX12 vs 58.5fps for the GTX 1080, instead of "RX 480 x2 has 1.07x more performance than a GTX 1080."


Overall performance. Generally that's how Nvidia has always done their slides, except on the GTX 480 release, which they cherry-picked hard.
Power efficiency makes sense though; that is completely related to performance and power draw.
Look at the GTX 1070 drawing something like 15% less power than the RX 480 yet being significantly faster.
VR performance I wouldn't be able to tell; I don't know their methodology for that.

Nvidia's slides have been pretty damn accurate the past few rounds. I honestly don't doubt this will be 15-20% faster OC-to-OC than an RX 480 and be priced around $275.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Slomo4shO*
> 
> GTX 970 was $330
> GTX 960 was $200
> 
> GTX 1070 is $380
> GTX 1060 will likely be $250
> 
> Same $130 gap between products.
> 
> Also, the Nano now has competition:


This would make the most sense, as the price would be close to the RX 480's. A 15% performance increase over it though... I wonder how poorly they will OC (both reference and custom). If it's anything like the 1070 and 1080, the 1060 will also be a poor OCer, and custom RX 480 cards may be close to what custom 1060s can offer.


----------



## JackCY

How long did it take to photoshop? 30 min?
I have no doubt the 1060 is going to be a decent card considering how many cores and transistors it has, but its pricing may not be as great.


----------



## Tivan

Looking at the number of CUDA cores they pack and the clock speed, I do not think that this card will be consistently faster than a 480. Let alone 15%. lul


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Tivan*
> 
> Looking at the number of CUDA cores they pack and the clock speed, I do not think that this card will be consistently faster than a 480. Let alone 15%. lul


Nvidia math, yo. Testing games with all GW features enabled against the RX 480 to get that 15% improvement.


----------



## junkman

At least this time they didn't gimp it with a 128-bit interface and charge you ~$50 more. Could be a decent card.


----------



## andrews2547

Quote:


> Originally Posted by *davidelite10*
> 
> Overall performance. Generally that's how Nvidia has always done their slides, except on the GTX 480 release, which they cherry-picked hard.
> Power efficiency makes sense though; that is completely related to performance and power draw.
> Look at the GTX 1070 drawing something like 15% less power than the RX 480 yet being significantly faster.
> VR performance I wouldn't be able to tell; I don't know their methodology for that.
> 
> Nvidia's slides have been pretty damn accurate the past few rounds. I honestly don't doubt this will be 15-20% faster OC-to-OC than an RX 480 and be priced around $275.


But overall performance in what? Games? Rendering? How far it can go if you throw it?

Do you see the problem here? If they said how they got those results, then it wouldn't be as much of a problem.


----------



## geox

Quote:


> Originally Posted by *Sequences*
> 
> Graphs like this should be illegal.


Couldn't agree more. Not sure why Nvidia pulls stunts like this.


----------



## KarathKasun

Once RX 480 third-party designs are out, it will be a bloody fight between them and the GTX 1060s. If the aforementioned 480s are cheaper, they will win the battle though.


----------



## davidelite10

Quote:


> Originally Posted by *andrews2547*
> 
> But overall performance in what? Games? Rendering? How far it can go if you throw it?
> 
> Do you see the problem here? If they said how they got those results, then it wouldn't be as much of a problem.


I thought it was implied to be gaming performance.
I digress; I'm excited to see how this pans out.


----------



## junkman

Quote:


> Originally Posted by *KarathKasun*
> 
> Once RX 480 third-party designs are out, it will be a bloody fight between them and the GTX 1060s. If the aforementioned 480s are cheaper, they will win the battle though.


I want to agree with you, but I doubt that AMD can win some people over on price with similar or slightly better performance. Although, I have been hopeful considering all the shelves are out of stock of the RX 480.


----------



## BinaryDemon

If the GTX 1060 were going to be a real performance/price competitor to the RX 480, then Nvidia should have leaked some of this information a few days ago to undercut AMD sales. My guess is that it will outperform the RX 480 but that Nvidia doesn't want to compete on price.


----------



## KarathKasun

Quote:


> Originally Posted by *BinaryDemon*
> 
> If the GTX 1060 were going to be a real performance/price competitor to the RX 480, then Nvidia should have leaked some of this information a few days ago to undercut AMD sales. My guess is that it will outperform the RX 480 but that Nvidia doesn't want to compete on price.


This, a whole lot of this.


----------



## Slomo4shO

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Meh, I went from *2x 970* to a single *290X Lightning* and then to *2x 290X TriX* and had issues with drivers, CF, and stuttering in FC4. Ditched them to go *2x 980 KPEs* and they were great until the 980 Ti dropped like a month later and I sold them. Had *3x 980 Ti* that was worse than 2x 980 Ti in FC4. Ended up getting a *Fury* and FreeSync monitor to play with but same issues I've had with their drivers in the past (FreeSync not working, tearing, downclocking despite being on water, issues setting 144 FPS, etc.). Now I'm on a single *980 Ti* and am tempted to get *two custom RX 480 cards* because I don't want to fund Nvidia's (and Intel's) greed any longer (it also reminds me of the 6600 GT days where people would SLI those mid-range cards for nice performance gains) but I've had more issues on the red side than on the green side. Both sides suck and are terrible but my experience has been that Nvidia sucks a bit less.


You went through 11 GPUs in 2 years? But you don't want to fund Nvidia's greed?


----------



## KarathKasun

Quote:


> Originally Posted by *Glottis*
> 
> RX480 costs so cheap for a reason, you'll need cash you save to keep buying motherboard replacements every month


With how many cards have sold, we should be hearing about motherboard failures en masse in the next few weeks if you are right. I doubt it will be a real issue if they lock the power target in the drivers unless you agree to a warning.

GTX 470/480/570/580 could easily kill a motherboard or PSU.


----------



## FLCLimax

Quote:


> Originally Posted by *BinaryDemon*
> 
> If the GTX 1060 were going to be a real performance/price competitor to the RX 480, then Nvidia should have leaked some of this information a few days ago to undercut AMD sales. My guess is that it will outperform the RX 480 but that Nvidia doesn't want to compete on price.


If it has that awesome FE cooler I'd say we're looking at $329.


----------



## junkman

Quote:


> Originally Posted by *Slomo4shO*
> 
> You went through 11 GPUs in 2 years? But you don't want to fund Nvidia's greed?


If he bought them used, he doesn't contribute to market share.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Slomo4shO*
> 
> You went through 11 GPUs in 2 years? But you don't want to fund Nvidia's greed?


"any longer" And something like that...

Had 2x 280X, then when the 900 series dropped a month later I picked up a reference 980, then sold it and got a 970 G1 Gaming, then a 2nd one and put them both under water, then returned both because of the 3.5 GB thing and got a 290X Lightning which I also put under water, then sold it to get 2x 290X TriX which I put under water, then sold it for 2x 980 KPE which I also put under water, then sold those to get 2x 980 Ti to put under water, then got a third 980 Ti to put under water, then got a Fury to put under water while selling two of the 980 Tis, then sold the Fury and now I'm left with a single 980 Ti. What's that like 14 cards in 2 years?

Edit: Yeah, August 2014 -> January 2016 (when I sold the Fury and have stuck with a single 980 Ti since).


----------



## raghu78

Nvidia is going to continue and maybe even further extend its GPU market domination. The RX 480 is a PR nightmare worse than the ref R9 290X / R9 290 series. This is a debacle of epic proportions: reports of PCI-E slots dying

https://community.amd.com/thread/202410

AMD will not be able to recover from the negative PR around this launch. Nvidia is going to just laugh all the way to the bank with Pascal. The way AMD is going, they might fold up within a year.


----------



## Glottis

Quote:


> Originally Posted by *KarathKasun*
> 
> With how many cards have sold, we should be hearing about motherboard failures en masse in the next few weeks if you are right. I doubt it will be a real issue if they lock the power target in the drivers unless you agree to a warning.
> 
> GTX 470/480/570/580 could easily kill a motherboard or PSU.


and that number is......... ?


----------



## KarathKasun

Quote:


> Originally Posted by *raghu78*
> 
> Nvidia is going to continue and maybe even further extend its GPU market domination. The RX 480 is a PR nightmare worse than the ref R9 290X / R9 290 series. This is a debacle of epic proportions: reports of PCI-E slots dying
> 
> https://community.amd.com/thread/202410
> 
> AMD will not be able to recover from the negative PR around this launch. Nvidia is going to just laugh all the way to the bank with Pascal. The way AMD is going, they might fold up within a year.


That's two reports. I'll call it epic proportions when there is a veritable flood of reports.


----------



## KarathKasun

Quote:


> Originally Posted by *Glottis*
> 
> and that number is......... ?


Number of 480s sold? Places were out of stock in a day, and many reported large inventories beforehand (500-1000). Pallets of them were on display in some shops.

The number of units sold in the last day more than likely eclipses the total units moved for the 1080 and 1070 combined in the last month.


----------



## junkman

Quote:


> Originally Posted by *raghu78*
> 
> Nvidia is going to continue and maybe even further extend its GPU market domination. The RX 480 is a PR nightmare worse than the ref R9 290X / R9 290 series. This is a debacle of epic proportions: reports of PCI-E slots dying
> 
> https://community.amd.com/thread/202410
> 
> AMD will not be able to recover from the negative PR around this launch. Nvidia is going to just laugh all the way to the bank with Pascal. The way AMD is going, they might fold up within a year.


A lot of doom and gloom in you.

We really just need a third and fourth GPU company, so that prices on all GPUs will magically drop and speeds will go up, like what happened with the ISPs.


----------



## lombardsoup

For $300+? They'll be collecting dust in the warehouse.

And lol at the graphs


----------



## maarten12100

Quote:


> Originally Posted by *raghu78*
> 
> Nvidia is going to continue and maybe even further extend its GPU market domination. The RX 480 is a PR nightmare worse than the ref R9 290X / R9 290 series. This is a debacle of epic proportions: reports of PCI-E slots dying
> 
> https://community.amd.com/thread/202410
> 
> AMD will not be able to recover from the negative PR around this launch. Nvidia is going to just laugh all the way to the bank with Pascal. The way AMD is going, they might fold up within a year.


I'm wondering whether these are people fishing for a new motherboard or whether their boards actually died from pulling a little more than 75W from the PCI-e slot.
Some have pretty good quality boards, and overclocked cards pull over spec from the MB all the time. Well, whatever; you have warranty for these kinds of things anyway.


----------



## FLCLimax

Quote:


> Originally Posted by *raghu78*
> 
> Nvidia is going to continue and maybe even further extend its GPU market domination. The RX 480 is a PR nightmare worse than the ref R9 290X / R9 290 series. This is a debacle of epic proportions: reports of PCI-E slots dying
> 
> https://community.amd.com/thread/202410
> 
> AMD will not be able to recover from the negative PR around this launch. Nvidia is going to just laugh all the way to the bank with Pascal. The way AMD is going, they might fold up within a year.


Agreed.


----------



## FLCLimax

Quote:


> Originally Posted by *Glottis*
> 
> and that number is......... ?


they have easily sold through 10,000 day 1, likely more.


----------



## maarten12100

Quote:


> Originally Posted by *FLCLimax*
> 
> they have easily sold through 10,000 day 1, likely more.


Considering OC UK sold 2000 in just one day, the number is likely even far higher. Very worrisome if it indeed breaks cheap boards, because people buying this card will use the cheapest board that fits their i5.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *maarten12100*
> 
> Considering OC UK sold 2000 in just one day, the number is likely even far higher. Very worrisome if it indeed breaks cheap boards, because people buying this card will use the cheapest board that fits their i5.


And then they'll have to buy new boards, most likely from the AMD side, increasing AMDs profits even more. Smart.


----------



## FLCLimax

Quote:


> Originally Posted by *maarten12100*
> 
> Quote:
> 
> 
> 
> Originally Posted by *FLCLimax*
> 
> they have easily sold through 10,000 day 1, likely more.
> 
> 
> 
> Considering OC UK sold 2000 in just one day, the number is likely even far higher. Very worrisome if it indeed breaks cheap boards, because people buying this card will use the cheapest board that fits their i5.

Think of the free bundle boards from Microcenter!!!!


----------



## SoloCamo

Quote:


> Originally Posted by *raghu78*
> 
> Nvidia is going to continue and maybe even further extend its GPU market domination. The RX 480 is a PR nightmare worse than the ref R9 290X / R9 290 series. This is a debacle of epic proportions: reports of PCI-E slots dying
> 
> https://community.amd.com/thread/202410
> 
> AMD will not be able to recover from the negative PR around this launch. Nvidia is going to just laugh all the way to the bank with Pascal. The way AMD is going, they might fold up within a year.


I can guarantee you that there will be tons more of these reports suddenly popping up, though I can also guarantee you that 99% of them will be posted by trolls & shills.


----------



## maarten12100

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> And then they'll have to buy new boards, most likely from the AMD side, increasing AMDs profits even more. Smart.


You can't put an i5 in an AMD board.

They should've launched Zen boards first if this was their plan. How dark it would be.
Quote:


> Originally Posted by *SoloCamo*
> 
> I can guarantee you that there will be tons more of these reports suddenly popping up, though I can also guarantee you that 99% of them will be posted by trolls & shills.


Probably.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *maarten12100*
> 
> You can't put an i5 in an AMD board
> 
> 
> 
> 
> 
> 
> 
> 
> They should've launched Zen board first if this was their plan. How dark it would be.
> Probably.


I think most people buying an RX 480 have AMD CPUs. That's what my guess would be.


----------



## maarten12100

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I think most people buying an RX 480 have AMD CPUs. That's what my guess would be.


People still use AMD cpus?!


----------



## FLCLimax

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I think most people buying an RX 480 have AMD CPUs. That's what my guess would be.


Nope, this would appeal to anyone who won't spend over $300 or so on a GPU. Also first-time PC gamers coming from consoles, and holdouts with HD 6XXX or older NVIDIA GPUs. It's hitting a far bigger crowd than AMD fans, especially AMD _Processor_ fans.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *maarten12100*
> 
> People still use AMD cpus?!


http://www.overclock.net/f/10/amd-cpus

I didn't even know they still existed. They're like bigfoot to me.


----------



## Malinkadink

Quote:


> Originally Posted by *JunkaDK*
> 
> This must be a new low from Nvidia, if the graphs come from them directly.


I disagree. Anyone who looks at a graph without first checking the scale of the values below the bars deserves to be fooled.


----------



## mohit9206

Even if it's 10-20% faster than the RX 480 stock vs stock, it won't matter unless the card is priced competitively, which it won't be. Get ready to pay $250 for 3GB and $300 for 6GB. And if you're outside the US, get ready to pay $500 or more; the RX 480 8GB is already $400 in my country, a 60% markup over MSRP.


----------



## epic1337

Quote:


> Originally Posted by *andrews2547*
> 
> That graph doesn't even make sense.
> 
> 15% more performance based on what?
> 25% more VR performance based on what?
> 45% more power efficient based on what?


why would it not make sense? the baseline itself is the RX480.

it literally says:
"15% faster than RX480 on regular content"
"25% faster than RX480 on VR content"
"45% more power efficient than RX480"

it's such a straightforward comparison, i'm surprised people are still confused by this.


----------



## andrews2547

Quote:


> Originally Posted by *epic1337*
> 
> why would it not make sense? the baseline itself is the RX480.
> 
> it literally says:
> "15% faster than RX480 on regular content"
> "25% faster than RX480 on VR content"
> "45% more power efficient than RX480"
> 
> its such a straightforward comparison i'm surprised that people are still confused by this.


What regular content was used to get the results?

What VR content was used to get the results?

What tests were done to show that it's 45% more power efficient than an RX480?

This is what the issue is.


----------



## KarathKasun

Not a troll, just poking fun at the fact that 4GB is considered by many to be not enough until NV launches a 3GB card in the same performance segment as their old 4GB cards.


----------



## Slomo4shO

Quote:


> Originally Posted by *andrews2547*
> 
> What tests were done to show that it's 45% more power efficient than an RX480?


We already know that Pascal is more power efficient than Polaris. 45% better power efficiency seems in line with the 1080 and 1070 efficiency numbers.

Don't have the slightest clue what the performance numbers would be based on...


----------



## epic1337

Quote:


> Originally Posted by *andrews2547*
> 
> What regular content was used to get the results?
> What VR content was used to get the results?
> What tests were done to show that it's 45% more power efficient than an RX480?
> 
> This is what the issue is.


i'm not even sure if you're serious...


----------



## davidelite10

Quote:


> Originally Posted by *KarathKasun*
> 
> Not a troll, just poking fun at the fact that 4GB is considered by many to be not enough until NV launches a 3gb card in the same performance segment as their old 4gb cards.


But a 4GB Fury X is fine, right guys? Amirite?

3GB is fine for 1080p, which is what this card is for.
This is a cheap low-end card. People need to stop expecting $200-300 cards to be perfect for 4K/1440p.


----------



## SlackerITGuy

This launch should be somewhat exciting.

*But first NVIDIA needs to solve the horrible DPC latency issues, micro-stuttering and low performance that are affecting a lot of Pascal buyers.*


----------



## FLCLimax

Quote:


> Originally Posted by *KarathKasun*
> 
> Not a troll, just poking fun at the fact that 4GB is considered by many to be not enough until NV launches a 3gb card in the same performance segment as their old 4gb cards.


Now now, you know good and well that a 4GB $199 card or an 8GB $239 card is not a very good value. It would be much better to pay 50% or 37% more for 3GB/6GB, ~10% more performance (less in DX12), and the pride of being a Founder!


----------



## Clocknut

Don't really care about power consumption. The price is what I really care about.

I hope it's not above $250.

If it's 120-130W TDP, I might be able to run SLI within a 650W spec (buy one now, buy another 1-2 years later).


----------



## KarathKasun

Quote:


> Originally Posted by *davidelite10*
> 
> But a 4gb furyx is fine right guys? Amiright?
> 
> 3gb is fine for 1080p, which this card is for.
> This is a cheap low end card. People need to stop expecting 200-300 cards be perfect for 4k/1440p


But, but, the RX 480 has 8GB's. It must be good enough for 4k!









3GB is not fine for 1080p at this level of performance. 4GB is good up to 1440p in pretty much all cases. 4GB is good enough for 4K in many situations, though it is right on the limit.

Frame buffer size does not have a huge impact on memory usage. Textures and compute buffers do.


----------



## epic1337

Quote:


> Originally Posted by *Slomo4shO*
> 
> We already know that pascal is more power efficient than polaris. 45% power efficiency seems in line with 1080 and 1070 efficiency numbers.
> 
> Don't have the slightest clue on what the performance numbers would be based off...


you don't need to know which benchmarks they're using.

just take the GTX1070 results and scale them down based on power consumption.
the GTX1070 is 150W at typical gaming load, and the GTX1060 is presumably 120W.
so that's 120/150 = 0.8 * Perf

151% * 0.8 = 121%
that translates to roughly 121% average performance relative to the RX480.
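For what it's worth, here's that napkin math as a quick sketch. All inputs are the post's assumed/rumored figures (the 151% average, the 150W and 120W power draws), not measured data, and linear performance-per-watt scaling is a rough heuristic at best:

```python
# Back-of-the-envelope estimate: scale assumed GTX 1070 performance
# down by the ratio of assumed typical gaming power draws.
gtx1070_perf_vs_rx480 = 1.51   # assumed: 1070 averages 151% of an RX 480
gtx1070_power_w = 150          # assumed typical gaming load
gtx1060_power_w = 120          # presumed 1060 board power

scale = gtx1060_power_w / gtx1070_power_w        # 0.8
gtx1060_est = gtx1070_perf_vs_rx480 * scale      # ~1.21

print(f"Estimated GTX 1060 vs RX 480: {gtx1060_est:.0%}")  # ~121%
```

Treat the result as a rough ballpark; real scaling depends on clocks, voltage, and workload, not just board power.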


----------



## andrews2547

Quote:


> Originally Posted by *epic1337*
> 
> i'm not even sure if you're serious...


I am completely serious.

Who's to say they didn't only test content that uses GameWorks or DX11 to get those results? Or that they didn't use a single game? I'm not denying the GTX 1060 is going to be more powerful than the RX 480. I'm just pointing out that the graph in the source is terrible. It's worse than the graphs AMD used to show that 2x RX 480s are "better" than a single GTX 1080; those graphs at least said how they got the results.

Quote:


> Originally Posted by *Slomo4shO*
> 
> We already know that pascal is more power efficient than polaris. 45% power efficiency seems in line with 1080 and 1070 efficiency numbers.
> 
> Don't have the slightest clue on what the performance numbers would be based off...


I'm not denying the GTX 1060 is more power efficient. I just want to know what tests were done to get those specific results. If they had just said "the GTX 1060 is more power efficient than the RX 480" it wouldn't be an issue, but they gave figures for it, which means tests were done that they aren't revealing.

This is assuming that graph isn't made up by Videocardz to get ad revenue.


----------



## MaxFTW

Well, one thing is for sure: don't expect to be paying £200 or less for this no matter how close it is to 480 performance, and that would be for the 3GB model too. I'm seeing the 480 at £220 for 8GB.


----------



## poii

The 1060 has 66% of the 1070's shaders,
75% of the 1070's memory interface,
and probably 50% of its ROPs.

Let it be 70% as fast as a 1070 and you'll get an R9 390/RX 480 (reference numbers from computerbase.de).
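One crude way to land near that 70% guess from the rumored specs is to blend the shader and memory-interface fractions, e.g. with a geometric mean. This is purely an illustrative heuristic, not how performance actually composes:

```python
import math

# Rumored resource fractions of the 1060 relative to the 1070 (from the post).
shaders = 0.66         # ~66% of the 1070's shader count
mem_interface = 0.75   # 192-bit / 256-bit bus

# Geometric mean of the two fractions as a rough blended estimate.
est = math.sqrt(shaders * mem_interface)
print(f"Estimated 1060 performance vs 1070: {est:.0%}")  # ~70%
```

Real scaling also depends on clocks, ROPs, and how bandwidth-bound each game is, so take the single number with a grain of salt.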


----------



## maltamonk

Quote:


> Originally Posted by *davidelite10*
> 
> Nvidia used wood screws on high end GPUs at one point and time.
> 
> This must have been before your time though


Don't think it was before my time, but most likely before I quit drinking. I didn't really care for much in the way of tech back then.


----------



## FLCLimax

Quote:


> Originally Posted by *maltamonk*
> 
> Quote:
> 
> 
> 
> Originally Posted by *davidelite10*
> 
> Oh my sweet sweet summer child..
> 
> 
> 
> Seriously I don't get the wood screw reference. Plz elaborate
Click to expand...

The GTX 4XX series was a disaster of a launch, and NVIDIA showed a completely faked mockup of a GPU at an event to make it look like they had actual cards ready and working. Upon examination it was basically a fancy box fastened by wood screws. Six months later the GTX 480 and 470 came out, frying food and catching fire the world round.


----------



## EightDee8D

3GB isn't fine at 1080p because it doesn't let you use ultra textures in many new games. Ultra textures + medium IQ is playable at 1080p and looks way better than medium textures + medium IQ on a low-VRAM card. If you don't understand that, you are an idiot. Period.


----------



## Slomo4shO

Quote:


> Originally Posted by *epic1337*
> 
> you don't need to get which benchmarks they're using.
> 
> just get the GTX1070 results and scale it down based on power consumption.
> GTX1070 is 150W at typical gaming load, and GTX1060 is presumably 120W.
> so thats 120/150 = 0.8*Perf


Except that power consumption is not linear; dynamic power scales roughly with the square of voltage...


----------



## epic1337

Quote:


> Originally Posted by *Slomo4shO*
> 
> Except that power consumption is not linear and scales exponentially with voltage...


and does this mean anything when comparing stock vs stock?


----------



## NuclearPeace

Anyone who read the OP should know that the 1060 will be coming with 6GB of VRAM.

Anyway, this is good stuff from both AMD and NVIDIA. It looks like they are both trying to make the mid range a great price range to buy from again rather than just throwing us scraps like the 960 and washed up Hawaii GPUs. Who will win, the 480 or the 1060?


----------



## Slomo4shO

Quote:


> Originally Posted by *epic1337*
> 
> and does this mean anything when comparing stock vs stock?


Stock voltage differs from product to product...


----------



## epic1337

Quote:


> Originally Posted by *Slomo4shO*
> 
> Stock voltage differs from product to product...


then does this make reviewers' power consumption measurements irrelevant?


----------



## FLCLimax

Quote:


> Originally Posted by *NuclearPeace*
> 
> Anyone who read the OP should know that the 1060 will be coming with 6GB of VRAM.


dunno why, that's TOO MUCH VRAM.


----------



## epic1337

Quote:


> Originally Posted by *FLCLimax*
> 
> dunno why, that's TOO MUCH VRAM.


i'm not too sure about that, a lot of people think the GTX980Ti's 6GB of VRAM is too small.









furthermore, the RX480 already has 8GB, so the GTX1060's 6GB does look small in comparison.


----------



## headd

I can already see the price: $300-350.
GTX 1060: $300 AIB and $350 Founders (in real life, $350-360 for AIB).
GTX 1070: $450-470.


----------



## epic1337

Quote:


> Originally Posted by *headd*
> 
> I already see that price..300-350USD
> GTX1060-300USD AIB and 350Founders(In real life 350-360USD for AIB)
> GTX1070-450-470USD


that would pit it directly against the GTX970... well, the GTX1060 would be somewhat faster, so it'll be decent?
why do they keep pushing the price up? the GTX960 came out at $199 MSRP, and now we're looking at $299.


----------



## CasualCat

Quote:


> Originally Posted by *Shiftstealth*
> 
> QFT.
> 
> 10% more performance represented as double lol.


First thing I saw, and I cringed. I swear these companies use *How to Lie with Statistics* as a marketing manual.


----------



## Forceman

Quote:


> Originally Posted by *thedogman*
> 
> Wow a bunch of new accounts that are clearly nividia supporters. Funny no reviewers have had their motherboards fried.


You haven't been here long if you think Raghu78 is an Nvidia supporter.


----------



## iLeakStuff

Quote:


> Originally Posted by *BradleyW*
> 
> If the GTX 1060 provides 10% better performance, better cooling, OC headroom and a lower TDP, this might stomp out the 480 RX entirely, given the issues that GPU currently faces!


It will make the RX 480 look like crap.
The RX 480 already has the same power consumption as the GTX 1070.
The GTX 1060's will be much lower while still beating the RX 480 by 15%.

Wouldn't surprise me if the card costs around $250.


----------



## junkman

I think they would be insane to release a 192-bit-limited card at anything over $300 (FE or not) with all of the other second-hand and new options available. If past launches are any indication, the actual price will be a lot more than MSRP.

I think this is an opportunity for NV to release another GTX 560 Ti: they can win a lot of budget-minded buyers if they price this one right.


----------



## epic1337

Quote:


> Originally Posted by *junkman*
> 
> I think this is an opportunity for NV to release another gtx 560 ti - they can win a lot of very budget mined buyers if they can price this one right.


i wonder what would happen to the RX480 if Nvidia decides to launch the GTX1060 at $199.
i'd expect them to drop prices, but that would be hilarious; the RX480 isn't even a month old yet.


----------



## headd

Quote:


> Originally Posted by *junkman*
> 
> I think they would be insane to release a 192-bit limited card at anything over 300 (FE or not) with all of the other second hand and new options available. If the past launches are any indication, actual price will be a lot more than MSRP.
> 
> I think this is an opportunity for NV to release another gtx 560 ti - they can win a lot of very budget mined buyers if they can price this one right.


If the GTX1070 costs $450-470, why should they price the GTX1060 below $300? The performance gap will not be that big. The 1070 is 30% faster than the GTX980.


----------



## NuclearPeace

Quote:


> Originally Posted by *epic1337*
> 
> i wonder what would happen to RX480 if Nvidia decides to launch GTX1060 at $199.
> i'd expect them to drop prices, but that would be hilarious, RX480 isn't even a month old yet.


Knowing NVIDIA, they won't release at $199. Margins would be too low with the 6GB of VRAM and all.


----------



## Forceman

Quote:


> Originally Posted by *epic1337*
> 
> i wonder what would happen to RX480 if Nvidia decides to launch GTX1060 at $199.
> i'd expect them to drop prices, but that would be hilarious, RX480 isn't even a month old yet.


I suspect we'll never find out, because there is no way Nvidia prices it lower than $250. They have been able to price higher for similar performance for quite a while, and I don't see that changing now. Especially not with AIB 480s expected to be closer to $300.


----------



## iLeakStuff

Quote:


> Originally Posted by *headd*
> 
> If GTX1070 cost 450-470USD why they should price gtx1060 bellow 300USD?Performance gap will not be that high.1070 is 30% faster than GTX980.


It costs $380, and GP106 will be much cheaper to make since it's a tiny chip.
And the GTX 1070 is 40% faster than the GTX 980.

Stop spreading bull.


----------



## headd

If it's under $300 it will kill GTX1070 sales, because the 1070 costs $440-470 and the performance gap will be only 30%.
It should cost $350.


----------



## lolerk52

Quote:


> Originally Posted by *epic1337*
> 
> i wonder what would happen to RX480 if Nvidia decides to launch GTX1060 at $199.
> i'd expect them to drop prices, but that would be hilarious, RX480 isn't even a month old yet.


It would be a 4870 in reverse... Which would be hilarious considering AMD's marketing.


----------



## Peanuts4

Quote:


> Originally Posted by *maltamonk*
> 
> Wood screws? Whatca talkin about Willis?


Just being very deceptive. Showing off a product like it's complete when it's not is as bad as graphs that show massive differences when they're actually small increments. We've seen it done by hardware companies plenty of times, and it always pisses me off.

People are freaking out about AMD's power usage; when in the last 10 years have they been beating Nvidia on this? Give it a rest. I've bought 4:1 Nvidia cards to AMD over the years, and I accept that AMD will produce a comparably performing card that uses more power but costs less. It's been that way for a decade, and now everyone is acting like this is mind-boggling news. I really don't want to see a return to GTX 8800 prices, with people acting like it's the second coming because Nvidia took years to move onto its next-gen cards and threw all us suckers a bone with a bit of a discount.

We're already suckers for paying $200 for G-Sync; it's essentially a contract we sign to keep purchasing their video cards so an optimal feature works on our monitor instead of the competition's. They should be giving something like that away for free, seeing as we keep buying what they put out and are locked in because we don't replace our expensive monitors frequently. Here's my money to guarantee I will continue to only buy video cards from you. We're all just getting wood-screwed, if you ask me.

I would lay off the BS focus on AMD's power usage, as it may pull away people willing to purchase those cards. You know, Nvidia's only competition keeping prices down. I don't want to spend $350 for a midrange card with incremental minor improvements forever, i.e. Intel since the 2500Ks rolled out, due to lack of competition. Hardware purgatory sucks for all of us.

It's bad enough we were stuck on DX9 for 10 years; DX10 was nothing, and DX11 was nothing, just marketing so us suckers would upgrade our OSes. Oh, thank goodness our saviors of graphics performance aren't new graphics or mind-blowing games but 4K displays and VR.....


----------



## epic1337

Quote:


> Originally Posted by *NuclearPeace*
> 
> Knowing NVIDIA they wont release at $199. Margins then would be too low with the 6GB of VRAM and all.


not quite, GDDR5 is cheap nowadays, so it's not that much of an issue.
as for the former: getting 6GB used to require a 384-bit bus with 12 chips, now they only need a 192-bit bus with 6 chips.
Quote:


> Originally Posted by *Forceman*
> 
> I suspect well never find out, because there is no way Nvidia prices it lower than $250. They have been able to price higher for similar performance for quite a while, and I don't see that changing now. Especially not with AIB 480s expected to be closer to $300.


it's actually possible. it's almost always the case that the mid-tier cards have the best perf/$, often by a margin of 50% higher perf/$ than their flagships.

since the GTX1070 was supposed to target the $380 price point later on, GP106 can be cheaper than $250.

there are 3 points to note on this:
the GP106 die is about half the size of GP104, making it directly at least half as expensive.
GP106 requires a less robust VRM design, fewer GDDR5 chips, and less PCB space.
GP106 runs much cooler than GP104, which means it'll need a less robust heatsink design.


----------



## FLCLimax

Quote:


> Originally Posted by *epic1337*
> 
> Quote:
> 
> 
> 
> Originally Posted by *NuclearPeace*
> 
> Knowing NVIDIA they wont release at $199. Margins then would be too low with the 6GB of VRAM and all.
> 
> 
> 
> not quite, GDDR5 is cheap now a days, so its not that much of an issue.
> regarding former, getting 6GB required 384bit bus using 12chips, now they only need 192bit bus using 6chips.
> Quote:
> 
> 
> 
> Originally Posted by *Forceman*
> 
> I suspect well never find out, because there is no way Nvidia prices it lower than $250. They have been able to price higher for similar performance for quite a while, and I don't see that changing now. Especially not with AIB 480s expected to be closer to $300.
> 
> Click to expand...
> 
> its actually possible, its almost always the case where the mid-tier cards have the most superior perf/$, even by a large margin of 50% higher perf/$ than their flagships.
> 
> since GTX1070 was supposed to target the $380 price point later on, GP106 can be cheaper than $250.
> 
> theres 3points to note on this:
> GP106 die is 1/2 smaller than GP104 making it directly at least 1/2 cheaper.
> GP106 requires a less robust VRM design, less GDDR5 chips, and less PCB space.
> GP106 is much cooler than GP104, which means it'll need a less robust heatsink design.
Click to expand...

don't overhype yourself into thinking NVIDIA will release a cheap GPU now...the hypetrain is just waiting to get derailed!


----------



## epic1337

Quote:


> Originally Posted by *FLCLimax*
> 
> don't overhype yourself into thinking NVIDIA will release a cheap GPU now...the hypetrain is just waiting to get derailed!


i can't help myself, whenever a potential bloodbath is right around the corner, i always end up getting excited.


----------



## maarten12100

Quote:


> Originally Posted by *iLeakStuff*
> 
> It will make RX 480 look like crap.
> RX 480 already have the same power consumption as GTX 1070.
> GTX 1060 will be much lower while still beating RX 480 by 15%.
> 
> Wouldnt surprise me that the card will cost around $250


Edit: you already provided it, so almost twice the price for 10-15% more performance. Terrible value.


----------



## junkman

Quote:


> Originally Posted by *FLCLimax*
> 
> don't overhype yourself into thinking NVIDIA will release a cheap GPU now...the hypetrain is just waiting to get derailed!


I think we can afford to hype this card, considering there are probably a ton of people who would buy it at ~$250-$300+ regardless.

Even for all of the hype the RX 480 got, and all of the unrealistic expectations on this forum, it is out-of-stock everywhere.

It seems like hype only ruins it for enthusiasts like us who would be happy to fill our 4th backup PC with 4K 60FPS GPUs at $199.


----------



## Exeed Orbit

Don't the X60 range of products usually launch at $199? I'd be willing to bet 3GB at $199. 6GB at $229-$239?


----------



## epic1337

Quote:


> Originally Posted by *Exeed Orbit*
> 
> Don't the X60 range of products usually launch at $199? I'd be willing to bet 3GB at $199. 6GB at $229-$239?


normally $199~249.


----------



## junkman

Quote:


> Originally Posted by *Exeed Orbit*
> 
> Don't the X60 range of products usually launch at $199? I'd be willing to bet 3GB at $199. 6GB at $229-$239?


Rumors put it at $250 for the 3GB version; your guess is as good as theirs.


----------



## jologskyblues

I'm not going to get on the 1060 hype train on this one, but I see no reason for Nvidia to compete with the RX 480 with kid gloves, performance- and efficiency-wise, considering Pascal's potential.

For me, the big question is pricing. I know almost everybody is expecting the 1060 to be more expensive, but what if Nvidia is willing to undercut its competitor to deal maximum damage to Polaris sales? They already did that when they launched the GTX 680 at $500 while the HD 7970 was still at $550 and slower due to immature drivers.


----------



## dubldwn

1280 cores. I was so hoping for another 560 Ti. Alas, another 960. Oh well. Definitely buying one for the backup rig. What else can I do? I'm trapped.


----------



## Chaython

faster in gameworks games*


----------



## Exeed Orbit

Quote:


> Originally Posted by *dubldwn*
> 
> 1280 cores. Was so hoping for for another 560Ti. Alas, another 960. Oh well. Definitely buying one for the backup rig. What else can I do? I'm trapped.


a TI is still a possibility, given the gap between 1060 and 1070


----------



## Exeed Orbit

I'm expecting 60% of the 1080's performance, at 40% of the price. Give or take. I think that's generally been the trend of X60 vs X80.


----------



## CasualCat

Quote:


> Originally Posted by *epic1337*
> 
> GP106 is much cooler than GP104, which means it'll need a less robust heatsink design.


At the end of the day, how much does the cooling design factor into the price? You can get heatpipe CPU coolers like the Hyper 212 EVO for $30 retail, and the majority of AIB cards are 2-3 fans, heat pipes/vapor chamber, and an aluminum heatsink. So I highly suspect that any non-AIO cooling solution is pretty insignificant cost-wise.


----------



## epic1337

Quote:


> Originally Posted by *Exeed Orbit*
> 
> I'm expecting 60% of the 1080's performance, at 40% of the price. Give or take. I think that's generally been the trend of X60 vs X80.


this one would be a lot closer to 50% perf, as GP106 is literally 1/2 of GP104.


----------



## Diogenes5

Nvidia has something like 80% market share, so they can overcharge and people will pay it because of all the fanboys. Just look at any forum; the vast majority of the fanboys are Nvidia guys.

The RX 480 currently performs at about the level of a 390. The GTX 1070 is about 53% faster at 2K resolutions. The 1060 looks to be about 15% faster than a 480, or 5% faster than a 390X. So a 1070 is about 33% faster than a 1060. They want people to buy a 1070 because that's probably where their revenue is highest. The 1070 is the top of the perf/$ curve for Nvidia, just like with the 970 and 960: the 960 was cheaper, but the 970 was much better in terms of perf/$.

This puts a floor on the price of a 1060. They have to price the 6GB 1060 above $285 to keep the perf/$ of the 1070 superior (at the current MSRP of $379, which IMO is fake since you can't find it anywhere at that price, but that's another story). IMO, that means the 1060 will probably be $249/$299, maybe even something like $279/$329. The lower-end version will be gimped at resolutions above 1080p because of the 3GB of RAM (but will have better perf/$ than the 1070). The 1070 will be superior to the higher-end card in terms of perf/$. The 1060 will also be somewhat supply limited, as Nvidia wants to keep its margins high and clear out its stock of old cards.

So Nvidia will offer an option that will still be beaten by the 480 in terms of perf/$, but still an option for people in that price range, and it will sell well on brand recognition alone. Since many people only consider Nvidia cards, this will give people in the 970/980 market a reason to upgrade, since the perf/$ will be superior to existing 970/980 prices. I'm calling it from my armchair of wisdom, guys, watch out!
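The price-floor argument above can be sketched numerically. Both inputs are the post's own assumptions (the $379 MSRP and the ~33% performance gap), not official figures:

```python
# Napkin math for the perf/$ price floor described in the post.
gtx1070_msrp = 379      # assumed current 1070 MSRP (availability aside)
gtx1070_vs_1060 = 1.33  # 1070 assumed ~33% faster than a 1060

# The cheapest 1060 price at which the 1070 still wins on perf/$:
# below this, the 1060 delivers more performance per dollar.
floor = gtx1070_msrp / gtx1070_vs_1060
print(f"1060 price floor: ${floor:.0f}")  # ~$285
```

That $285 figure matches the floor the post quotes; of course the whole chain depends on the rumored performance numbers holding up.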


----------



## Exeed Orbit

Quote:


> Originally Posted by *epic1337*
> 
> this one would be a lot closer to 50% perf, as GP106 is literally 1/2 of GP104.


So was the 960 compared to the 980. Yet it performed at around 60% of the 980. (Depending on the game, and its appetite for memory bandwidth)


----------



## EightDee8D

Quote:


> Originally Posted by *Exeed Orbit*
> 
> So was the 960 compared to the 980. Yet it performed at around 60% of the 980. (Depending on the game, and its appetite for memory bandwidth)


eh, it mostly performs at about 50%, and also has somewhat worse scaling and perf/W.


----------



## epic1337

Quote:


> Originally Posted by *CasualCat*
> 
> At the end of the day how much does the cooling design factor into the price? You can get heatpipe cpu coolers like the 212 hyper evo for $30 retail. So I highly suspect that any non-aio cooling solutions are pretty insignificant cost wise.


there's obviously gonna be a difference, like these three for example.

GTX950 Windforce cooler


GTX960 Windforce cooler


GTX980 Windforce cooler


the difference between the 980's and 960's coolers isn't just a mere extra heatpipe; the 980's fins are much denser and deeper, and there's a large chunk of copper baseplate.
this won't be just a few dollars of difference. i wouldn't be surprised if the CPU HSF equivalent is something like an NH-U12S vs an NH-D15, roughly a $25 price difference.


----------



## Exeed Orbit

Quote:


> Originally Posted by *Diogenes5*
> 
> Nvidia has like 80% marketshare so they can overcharge and people will pay it because of all the fanboys. Just look at all forums. The vast majority of the fanboys are all Nvidia guys.
> 
> The rx 480 performs at about the level of a 390 atm. The gtx 1070 is about 53% faster at 2k resolutions. The 1060 looks to be about 15% faster than a 480 or 5% faster than a 390x. So a 1070 is about 33% faster than a 1060. They want people to buy a 1070 because that's probably where there revenue are the highest. The 1070 is the top of the perf/$ curve for Nvidia. Just like with the 970 and 960, the 960 was cheaper but the 970 was much better in terms of perf/$.
> 
> This puts a floor on the price of a 1060. They have to price the 6gb 1060 above $285 to keep the $/perf of the 1070 superior (at the current MSRP of $379 which IMO is fake since you can't find it anywhere at that price but that's another story). IMO, that means the 1060 will probably be $249/$299 maybe even something like $279/329. The lower end version will be gimped at resolutions above 1080p because of the 3gb of ram (but have better perf/$ than the 1070). The 1070 will be superior to the higher-end card in terms of perf/$. The 1060 will also be somewhat supply limited as Nvidia wants to keep its margins high and clear out its stock of old cards.
> 
> So Nvidia will offer an option that will be still be beaten by the 480 in terms of perf/$ but still an option for people in that price range which will sell well because of brand recognition alone. Since many people only consider Nvidia cards, this will give people in the 970/980 market right now a reason to upgrade since the perf/$ will be superior to existing 970/980 prices. I'm calling it from my armchair of wisdom guys, watch out!


If they do that (at 15% above RX 480) they will likely lose the mainstream market. Or perhaps they'll pull the same thing. FE at $329, AIB at $279 or something similar. That would keep the Price/Performance similar to the RX 480. (Assuming 15% increase in price compared to $239)


----------



## dubldwn

Quote:


> Originally Posted by *Exeed Orbit*
> 
> a TI is still a possibility, given the gap between 1060 and 1070


Well, I'm assuming this is full GP106. Cutting down GP104 even more seems strange to me.


----------



## EightDee8D

First they damaged AMD with an x80 part (GTX 680), then with an x70 part (GTX 970), and now it's time for the final nail in the coffin in the biggest market (the x60 part). It's safe to say this year will be the end of RTG. I hope Intel or Samsung buys them; they really need a huge R&D budget to beat Nvidia.


----------



## Seven7h

Quote:


> Originally Posted by *davidelite10*
> 
> Nvidia used wood screws on high end GPUs at one point and time.
> 
> This must have been before your time though


Wow, talk about being wrong.

At the Fermi launch at GTC, there was exactly one mock-up GPU that came directly from NVIDIA's labs (instead of an assembly factory)... As such it was just meant for the photo of the CEO holding it up, so they didn't particularly care what screws they used, and someone sloppily used wood screws. They used the mock-up because the cards were still in production. The board had also been modified to fit the final cooler.

That *one* board used wood screws, and the press (if you can even call Charlie Demerjian that) made a big deal out of it on his tabloid site, which was eager to slam NVIDIA.

However, all the demos shown at the event were really running on Fermi GPUs; those boards just weren't pretty because they didn't have the final coolers on them yet.


----------



## CasualCat

Quote:


> Originally Posted by *epic1337*
> 
> theres obviously gonna be a difference, like these three for example.
> 
> GTX950 Windforce cooler
> 
> 
> GTX960 Windforce cooler
> 
> 
> GTX980 Windforce cooler
> 
> 
> the difference between the 980's and 960's isn't just a mere extra heatpipe, 980's fin is much denser and deeper, and theres a large chunk of copper baseplate.
> this won't be just a few $ of difference, i wouldn't be surprised if their CPU HSF equivalent is like an NH-U12S vs NH-D15, or roughly $25 price difference.


My point is I suspect even with those differences the cost difference to the mfg is measured in single digit dollars.


----------



## KarathKasun

Quote:


> Originally Posted by *Seven7h*
> 
> Wow, talk about being wrong.
> 
> At the Fermi launch at GTC, there was exactly one GPU that came direct from NVIDIAs labs (instead of an assembly factory)... As such it was one that had been assembled by hand and likely frequently got disassembled in that lab, so they didn't particularly care what screws they used. That just so happened to be the one card used to show the GPU off on stage at the launch, because the cards were in the process of production.
> 
> That *one* board used woodscrews, and the press (if you can even call Charlie Demerjian that) made a big deal out of it on his tabloid site that was so eager to slam NVIDIA.
> 
> Source: I was there.


Just... no. The card had been sawed off so the reference cooler would fit on it. I remember that day well.


----------



## raghu78

Quote:


> Originally Posted by *EightDee8D*
> 
> First they damaged amd with x80 part (GTX680) then with x70 part (GTX970), and now its time for final nail in the coffin with biggest market (x60 part). it's safe to say this year will be the end of RTG. hope intel or samsung buys them, they really need huge R&D budget to beat nvidia.


Actually Nvidia did the worst damage to AMD during Maxwell generation with GTX 750 Ti, GTX 750, GTX 950, GTX 960 and GTX 970. AMD lost more than half their GPU revenue in a few quarters and that cannot happen from a single GPU. Nvidia thrashed AMD badly pretty much across the stack. But it was the high volume segments where AMD lost the bulk of their revenue.


----------



## epic1337

Quote:


> Originally Posted by *CasualCat*
> 
> My point is I suspect even with those differences the cost difference to the mfg is measured in single digit dollars.


i don't think it's that simple, otherwise they'd be reusing their high-end coolers for their mid-range cards more often, instead of making a weaker version of it.

if it were truly just a few dollars of difference, they could just reuse their high-end cooler and strip off parts during production to make it slightly cheaper.
but it doesn't seem to be the case, they've made an entirely different design instead, and obviously used a lot less aluminum and copper.


----------



## Horsemama1956

Couldn't they just kill AMD by releasing it at 200?


----------



## epic1337

Quote:


> Originally Posted by *Horsemama1956*
> 
> Couldn't they just kill AMD by releasing it at 200?


thats the current topic, or at least a few posts ago.


----------



## qlum

If these graphs are real I expect the real word performance to be about on par with the rx480 probably giving you a slightly more efficient card at a bit of a higher price.


----------



## Zero_

Received pricing for this card. The 3GB version seems to be $240-250 while the 6GB version seems to be $280-300. Final prices are yet to be confirmed.


----------



## epic1337

Quote:


> Originally Posted by *Zero_*
> 
> Received pricing for this card. The 3GB version seems to be $240-250 while the 6GB version seems to be $280-300. Final prices are yet to be confirmed.


$250 is unreasonable for a 3GB card...


----------



## FLCLimax

Quote:


> Originally Posted by *epic1337*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Zero_*
> 
> Received pricing for this card. The 3GB version seems to be $240-250 while the 6GB version seems to be $280-300. Final prices are yet to be confirmed.
> 
> 
> 
> $250 is unreasonable for a 3GB card...
Click to expand...

we've established that 3GB is MORE than enough to max out games at 1080p and play at 1440p.....


----------



## Seven7h

Quote:


> Originally Posted by *KarathKasun*
> 
> Just... no. The card had been sawed off so the reference cooler would fit on it. I remember that day well.


Just... No. Nothing about what you said negates what I said. I clarified my comment to reflect that it was a mock-up GPU because the PCB had been cut. It came from the lab and was hand-assembled, and the backplate was held on with wood screws. All the GPUs in the demo machines had science-project-looking coolers, so they wanted the mock-up to show what the final product looked like, and to have something to hold up, even if it meant there was no GPU on the PCB.

I remember that day well.


----------



## Exeed Orbit

Quote:


> Originally Posted by *FLCLimax*
> 
> we've established that 3GB is MORE than enough to max out games at 1080p and play at 1440p.....


Currently, yes it is. You're not likely to be memory starved given NVidia's memory compression techniques. BUT, I do fear for future games. Though, mainstream cards are not particularly known for their longevity.


----------



## Kand

Quote:


> Originally Posted by *KarathKasun*
> 
> Just... no. The card had been sawed off so the reference cooler would fit on it. I remember that day well.


Could be worse.
http://www.xtremesystems.org/forums/showthread.php?263668-The-true-reason-behind-the-delay-of-AMD-Cayman-6970-6950-you-d-never-wanna-know


----------



## CasualCat

Quote:


> Originally Posted by *epic1337*
> 
> i don't think its that simple, otherwise they'd be reusing their high-end coolers for their mid-end cards more often, instead of making a weaker version of it.
> 
> if it were truly just a few dollars of difference, they could just reuse their high-end cooler and strip off parts during production to make it slightly cheaper.
> but it doesn't seem to be the case, they've made an entirely different design instead, and obviously used a lot less aluminum and copper.


Looks like you're right. Only data I could find was from the GTX 500 series and HD6000 series.

Nvidia had an approximately $30 delta overall from high end to low end

AMD had approximately $20 delta overall


----------



## Diogenes5

Quote:


> Originally Posted by *Zero_*
> 
> Received pricing for this card. The 3GB version seems to be $240-250 while the 6GB version seems to be $280-300. Final prices are yet to be confirmed.


Yep, nailed it. They can't go under $285 without threatening 1070 sales when those finally settle down to $380. The 480 will still be the $/perf king. Nvidia is a company that's there to make money, not fight for scraps of market share.

They'll let AMD regain some market share and still laugh all the way to the bank. It's not like AMD could go much lower; they were under 20% market share at one point. All chip-makers need to sell their chips in volume anyway to maintain revenue. The marginal cost of cards is dirt-cheap; it's the R&D that's expensive. It's extremely difficult for one graphics card maker to go under 35% market share because of this, but somehow AMD did it last year.

Hopefully regaining market share and being inside ALL of the home consoles will make AMD drivers catch up to Nvidia's. Developers won't be cheap and ignore AMD driver support like Microsoft did with Gears of War: Ultimate Edition, or like when Crystal Dynamics took all that Nvidia GameWorks money to make Rise of the Tomb Raider run better on Nvidia cards than on AMD, even though the game was funded by the Xbox One, which has a 7790-class AMD GPU inside.

Screw companies acting like monopolies.


----------



## SpeedyVT

Quote:


> Originally Posted by *andrews2547*
> 
> That graph doesn't even make sense.
> 
> 15% more performance based on what?
> 25% more VR performance based on what?
> 45% more power efficient based on what?


Not only that: if you throw something at an NVidia card that makes it struggle, it struggles a lot more than AMD's, but the trick is making it struggle. A lot of NVidia GPUs are powerful enough to be beyond the point of struggling with current-gen benchmarks and games. However, I see an AMD GPU lasting longer and struggling less than NVidia on heavier benchmarks. Probably why AMD does so well with DX12 performance.


----------



## KarathKasun

Quote:


> Originally Posted by *Seven7h*
> 
> Just... No. Nothing about what you said negates what I said. I clarified my comment to reflect that it was a mock up GPU because the PCB had been cut. It came from the lab and was hand assembled and the backplate was held on with wood screws. All the GPUs in the demo machines had science project looking coolers, so they wanted the mock-up to show what the final product looked like, and have something to hold up, even if it meant there was no GPU on the PCB.
> 
> I remember that day well.


That was not the initial messaging though. It was a stunt to placate investors and partners.


----------



## lolerk52

That graph is probably cherry picked from an NVIDIA favoring title.

NVIDIA's cards scale almost linearly with TFLOPS, and that applies to Maxwell and Pascal too.
If we calculate the TFLOPS, it would need 1950MHz at 1280 SPs to match a 980, while the boost clock is apparently 1700MHz, which is 4.4 TFLOPS.

From that, 1060 is probably going to be on par with 480, at a better power efficiency.
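For anyone who wants to check that math, here's a quick sketch. It assumes the usual 2 FLOPs per shader per clock (one FMA) and a typical ~1216MHz boost clock for the 980; the 1280 SP / 1700MHz figures are the rumored ones, not confirmed:

```python
# Rough peak single-precision TFLOPS: 2 ops/clock (one FMA) per shader.
def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0

gtx980 = tflops(2048, 1.216)    # GTX 980, typical ~1216 MHz boost
gtx1060 = tflops(1280, 1.700)   # rumored 1280 SPs at ~1700 MHz boost
print(f"GTX 980:  {gtx980:.2f} TFLOPS")   # ~4.98
print(f"GTX 1060: {gtx1060:.2f} TFLOPS")  # ~4.35

# Clock the rumored part would need to match the 980 on paper:
needed_mhz = gtx980 / (2 * 1280) * 1e6
print(f"clock needed to match a 980: {needed_mhz:.0f} MHz")  # ~1946
```

So the ~1950MHz figure above is just the clock at which 1280 shaders reach the 980's paper throughput.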


----------



## aDyerSituation

I hope a $300 card would beat a $230 card lmao

And don't get me started on these graphs


----------



## CynicalUnicorn

Guys, I made the graph better!

Keep in mind that this thing has half a 1080's cores and 75% of its memory bus. These numbers are not 100% unreasonable in Nvidia-biased games. Overall I'd expect it to about match the 480, however.


----------



## SuperZan

Quote:


> Originally Posted by *FLCLimax*
> 
> we've established that 3GB is MORE than enough to max out games at 1080p and play at 1440p.....


3Gb is enough for 12k 144fps... as long as it's Nvidia's 3Gb.

AMD already has to have a couple of people on-hand to optimise for the Fury's 4Gb but 3Gb in 2016 is a gift from the leather gods.


----------



## Defoler

Quote:


> Originally Posted by *JunkaDK*
> 
> This must be a new low from Nvidia, if the graphs come from them directly.


Why?
Aren't AMD doing the exact same thing?
Cherry-picking numbers and chart scales? This is not a "new low". Everyone does it: AMD, Intel, Nvidia, even Samsung, Apple, whoever.

This is called PR.


----------



## Cyro999

Quote:


> Keep in mind that this thing has half a 1080's cores and 75% its memory bus.


Rumor says GDDR5 (which makes some sense) rather than GDDR5X, so the bandwidth-to-core ratio would be quite low if they halved the bus width as well.

256-bit GDDR5X = 320GB/s
256-bit GDDR5 = 256GB/s

128-bit GDDR5 = 128GB/s (would need 160GB/s for the same bandwidth ratio on half a 1080)
192-bit GDDR5 = 192GB/s (would be less of a limit for half a 1080 than 256-bit GDDR5X is for a whole 1080)

VRAM isn't free either and 6GB is a really nice number right now.
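For reference, those bandwidth numbers fall straight out of bus width times effective data rate; this sketch assumes 8 GT/s GDDR5 and 10 GT/s GDDR5X (the GTX 1080's launch speed):

```python
# Peak memory bandwidth in GB/s = (bus width / 8 bits-per-byte) * data rate.
# Assumes 8 GT/s GDDR5 and 10 GT/s GDDR5X (the GTX 1080's launch speed).
def bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

print(bandwidth_gbs(256, 10))  # 320.0 GB/s, 256-bit GDDR5X
print(bandwidth_gbs(256, 8))   # 256.0 GB/s, 256-bit GDDR5
print(bandwidth_gbs(192, 8))   # 192.0 GB/s, 192-bit GDDR5
print(bandwidth_gbs(128, 8))   # 128.0 GB/s, 128-bit GDDR5
```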
Quote:


> we've established that 3GB is MORE than enough to max out games at 1080p and play at 1440p.....


No we have not. I got a 980 two years ago and there were multiple new AAA games at the time that I couldn't max textures on at 1080p w/ 4GB of VRAM. There have been more since. 3GB would be too little, and 4GB would be fine if you're willing to turn down some settings in VRAM-heavy games, but 6GB lets you do whatever you want @1080p and probably 1440p too.


----------



## FLCLimax

Quote:


> Originally Posted by *SuperZan*
> 
> Quote:
> 
> 
> 
> Originally Posted by *FLCLimax*
> 
> we've established that 3GB is MORE than enough to max out games at 1080p and play at 1440p.....
> 
> 
> 
> 3Gb is enough for 12k 144fps... as long as it's Nvidia's 3Gb.
> 
> AMD already has to have a couple of people on-hand to optimise for the Fury's 4Gb but 3Gb in 2016 is a gift from the leather gods.
Click to expand...

This is what i've been saying, NVIDIA has magical VRAM. 3/6GB will be sufficient for the next 20 years, until VR can be sorted out and we can move on to 9.5 GB.


----------



## Alex132

Quote:


> Originally Posted by *Cyro999*
> 
> *No we have not, i got a 980 two years ago and there were multiple new AAA games at the time that i couldn't max textures on 1080p w/ 4GB of VRAM.* There have been more since. 3GB would be too little and 4GB would be fine if you're willing to turn down some settings in VRAM-heavy games, but 6GB lets you do whatever you want @1080p and probably 1440p too.


Huh?

I haven't had any issues maxing any game with the 4GB on my R9 295X2 on 1440p too...?


----------



## Cyro999

Quote:


> Originally Posted by *Alex132*
> 
> Huh?
> 
> I haven't had any issues maxing any game with the 4GB on my R9 295X2 on 1440p too...?


It's not all that common, though it will be more so at 1440p, especially w/ MSAA.


----------



## CasualCat

Quote:


> Originally Posted by *CynicalUnicorn*
> 
> 
> 
> Guys, I made the graph better!


You're hired for the ministry of truth marketing department. When can you start?


----------



## FattysGoneWild

Say it aint so! Nvidia beating AMD again?!?!? Shocking!


----------



## Alex132

Quote:


> Originally Posted by *Cyro999*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Alex132*
> 
> Huh?
> 
> I haven't had any issues maxing any game with the 4GB on my R9 295X2 on 1440p too...?
> 
> 
> 
> It's not all that common though it will be more so at 1440p especially w/ MSAA.
Click to expand...

But what game...? I use the maximum MSAA in GTA V and have no issues at all.

Just because it's using 4/4GB of VRAM doesn't mean it needs more; it will use as much as you give it. Performance wouldn't necessarily increase if you had, say, 8GB of VRAM. You'd just use more of it.


----------



## pengs

And the life span of a housefly...

These mid-range GPUs keep getting closer and closer to being disposable: the bus width, the lack of parallel processing, the memory. Even Fermi (yes, the well-renowned flamethrower) had more potential longevity; the only thing that shut Fermi down was its lack of memory. Its architecture was quite well rounded.

When these new APIs start being used in full force, NVIDIA's mid-range is going to crash quicker than a drunken girl in stilettos


----------



## Exeed Orbit

Quote:


> Originally Posted by *pengs*
> 
> And the life span of a housefly...
> 
> These mid-range GPUs keep getting closer and closer to being disposable: the bus width, the lack of parallel processing, the memory. Even Fermi (yes, the well-renowned flamethrower) had more potential longevity; the only thing that shut Fermi down was its lack of memory. Its architecture was quite well rounded.
> 
> When these new APIs start being used in full force, NVIDIA's mid-range is going to crash quicker than a drunken girl in stilettos


Given the pace at which technology advances, this has unfortunately become the norm. Yearly phone refreshes. Yearly CPU refreshes. It's all designed to be disposable so that you can upgrade to the next best thing. Only flagship products are expected to last a couple years. My GTX 670 is due for an upgrade now, as it slides into sub mainstream performance.


----------



## Buris

I would bet that the 10% faster will ring true, but only in select applications.

DX11 games should give a de facto edge to the 1060; it won't be quite 10%, but it will be there.

This card will most likely be the best-value 1080p card upon release (if it is priced under the 8GB 480, otherwise ...eh).

In DX12 or 1440p/VR, the RX480 will most likely edge it out, though the 1060 will likely hold its own.

The RX480 is already leaning towards the lower bracket of 1440p usefulness, and with a 192-bit bus and only 192GB/s of memory bandwidth, even with the color compression on current Pascal cards, the 1060 will be extremely bandwidth-limited. This makes it not only near-useless for SLI, but also useless for 1440p.

Without the hard 192-bit-bus limitation, the 1060 could likely have been massively overclocked and seriously eaten into sales of Nvidia's own 1070. With this block in place, it simply won't be able to compete at a higher resolution than 1080p, or in VR for that matter.


Summation:
When the GTX 1060 comes out, it will be the best mid-range card at 1080p in DX11.

It will lose to the RX 480 in VR, 1440p, and DX12.

Neither card will blow the other away like this fallacious graph is trying to convey.

No word on overclocking performance yet, but AIB 1060s might be a great option depending on memory overclocking


----------



## iLeakStuff

Quote:


> Originally Posted by *FattysGoneWild*
> 
> Say it aint so! Nvidia beating AMD again?!?!? Shocking!


Nvidia got too much money to spew in development to always come on top.
AMD thought they were clever to go for low end and let Nvidia have high end alone.
BAM, here comes GTX 1060 out of the blue to challenge RX 480.
The card got like a week or two alone.

AMD need money badly to be able to survive, meaning being able to develop chips for all segments, low, midrange and high end, at the same time. One can also start to wonder why the HUGE efficiency lead of Pascal. Where does it come from? The architecture or 16nm?


----------



## Slomo4shO

Quote:


> Originally Posted by *iLeakStuff*
> 
> One can also start to wonder why the HUGE efficiency lead of Pascal. Where does it come from? The architecture or 16nm?


Clearly the architecture. Maxwell brought about the efficiency lead, the node change just made it more pronounced.


----------



## SoloCamo

Quote:


> Originally Posted by *iLeakStuff*
> 
> Nvidia got too much money to spew in development to always come on top.
> AMD thought they were clever to go for low end and let Nvidia have high end alone.
> BAM, here comes GTX 1060 out of the blue to challenge RX 480.
> The card got like a week or two alone.
> 
> AMD need money badly to be able to survive, meaning being able to develop chips for all segments, low, midrange and high end, at the same time. One can also start to wonder why the HUGE efficiency lead of Pascal. Where does it come from? The architecture or 16nm?


So... where is the 1060? Or are we now using this well-known clickbait site as 100% proof at this point?

Or do you have some "insider knowledge" guaranteeing it's going to be faster, let alone cheaper? Since we all know the answer there, those with a brain will wait until it's in reviewers' hands. Nvidia hasn't put out a decent mid-range card since the 560 Ti IMO.


----------



## Redeemer

only graph that matters


----------



## sugalumps

Quote:


> Originally Posted by *Redeemer*
> 
> only graph that matters


LOL

Nvidia is never going to live down the fanboy edition.


----------



## bombastinator

The 480 is released at $200, so your most optimistic number is way high


----------



## FattysGoneWild

Quote:


> Originally Posted by *iLeakStuff*
> 
> Nvidia got too much money to spew in development to always come on top.
> AMD thought they were clever to go for low end and let Nvidia have high end alone.
> BAM, here comes GTX 1060 out of the blue to challenge RX 480.
> The card got like a week or two alone.
> 
> AMD need money badly to be able to survive, meaning being able to develop chips for all segments, low, midrange and high end, at the same time. One can also start to wonder why the HUGE efficiency lead of Pascal. Where does it come from? The architecture or 16nm?


It would be pretty naive and stupid if AMD was to think Nvidia would not answer back with the 1060.


----------



## gigafloppy

Quote:


> Originally Posted by *iLeakStuff*
> 
> BAM, here comes GTX 1060 out of the blue to challenge RX 480.


BAM, there falls the GTX 1060 flat on its face because of the tiny 3GB and higher price. RX 480 wins.


----------



## EightDee8D

Quote:


> Originally Posted by *gigafloppy*
> 
> BAM, there falls the GTX 1060 flat on its face because of the tiny 3GB and higher price. RX 480 wins.


pff, don't you know 3db is enough ? git gut man.


----------



## lolfail9001

The ironic part here is that the only reason we assume there will be a 3GB 1060 is because we saw a Zauba shipment with it.

For all we know, it could be 1050.


----------



## Redeemer

seems like the 1060 is a winner at 1080p with no AA enabled


----------



## SpeedyVT

Quote:


> Originally Posted by *Redeemer*
> 
> seems like the 1060 is a winner at 1080p with no AA enabled


AMD does better in stressful loads than NVidia, NVidia cards lose more performance going from low settings to ultra than AMD does.


----------



## Artikbot

Quote:


> Originally Posted by *Redeemer*
> 
> seems like the 1060 is a winner at 1080p with no AA enabled


The GTX 1280 Ti will be able to run Battlefield 2018 at over 144 FPS on a HTC Vive 2, so you're wasting your time buying anything this generation.


----------



## junkman

Quote:


> Originally Posted by *lolfail9001*
> 
> The ironic part here is that the only reason we assume there will be 3gb 1060 is because we saw a Zauba shipment with it.
> 
> For all we know, it could be 1050.


I would think they would gimp the 1050 with a 128-bit bus like previous x50-series cards. 192-bit sounds a bit expensive for such a budget-minded card.


----------



## BulletSponge

Quote:


> Originally Posted by *Artikbot*
> 
> The GTX 1280 Ti will be able to run Battlefield *2143* at over 144 FPS on a HTC Vive 2, so you're wasting your time buying anything this generation.


FTFY


----------



## geoxile

How can the 1060 be faster than RX 480 when it has half as many shaders as the 1070?


----------



## cowie

Quote:


> Originally Posted by *SoloCamo*
> 
> So... where is the 1060? Or are we now using this well known click bait site as 100% proof at this point?
> 
> Or do you have some "insider knowledge" guaranteeing it's going to be faster, let alone cheaper? Well since we all know the answer there those with a brain will wait until it's in the reviewers hands. Nvidia hasn't put out a decent mid range card since the 560ti IMO.


there is nothing but they released a date... no hype train... a tad faster than a 980 for ~$300? we will see
it's to compete with the 480 custom cards maybe?


----------



## Forceman

Quote:


> Originally Posted by *geoxile*
> 
> How can the 1060 be faster than RX 480 when it has half as many shaders as the 1070?


Because it'll actually have 2/3 as many? 1280/1920=66%


----------



## Scotty99

Glad i waited, wanted to give the 480 a chance and its still a decent buy for the money but it didnt knock my socks off like i was hoping (and what AMD needed to do) to get my money.


----------



## zealord

Quote:


> Originally Posted by *geoxile*
> 
> How can the 1060 be faster than RX 480 when it has half as many shaders as the 1070?


It doesn't have half as many.

Also, more shaders do scale well, but they don't scale 1:1.

2000 shaders are not necessarily exactly twice as fast as 1000. It's more like 60% maybe.

Look at previous cards like the GTX 680 -> Titan / Titan Black / 780 Ti. It's like 80% more shaders (CUDA cores) but only 50% more performance.

Or GTX 980 -> 980 Ti: it's like 50% more shaders but only ~30% more performance


----------



## one-shot

Quote:


> Originally Posted by *zealord*
> 
> It doesn't have half as many.
> 
> Also more shaders do scale well but don't scale 1:1
> 
> 2000 shaders are not necessarily exactly twice as fast as 1000. It's more like 60% maybe.
> 
> Look at previous cards like GTX 680 -> Titan / Titan Black / 780 Ti. It's like 80% more shaders (cuda cores) but only 50% more performance.
> 
> or GTX 980 -> 980 Ti. it's like 50% more shaders but only 30~% more performance


You're speculating without any references. You need to add context and show which other components of the GPU can influence scaling. I understand you just wanted to make up a number and "60%" was the lucky number.

You also made up the numbers on the shaders. Your entire argument isn't based on any evidence. A quick search here:

http://www.anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review

That link which took me approximately 5 seconds to find, shows the Cuda Cores for each GPU.

Cuda Cores

GTX 980 Ti - 2816

GTX 980 - 2048

37.5% more, not 50%.

Texture Units

GTX 980 Ti - 176

GTX 980 - 128

37.5% more

ROPs

GTX 980 Ti - 96

GTX 980 - 64

50% more
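Those ratios are trivial to verify from the spec table quoted above:

```python
# Relative deltas from the AnandTech GTX 980 Ti spec table: (980 Ti, 980).
specs = {
    "CUDA cores":    (2816, 2048),
    "Texture units": (176, 128),
    "ROPs":          (96, 64),
}

for name, (ti, vanilla) in specs.items():
    print(f"{name}: +{(ti / vanilla - 1) * 100:.1f}%")
# CUDA cores: +37.5%
# Texture units: +37.5%
# ROPs: +50.0%
```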


----------



## 12Cores

Apparently these cards will not support SLI, can anyone confirm this?


----------



## tashcz

Guys, just have no expectations from this.

Hope you learned enough from the RX480. That's what I've been posting for a month, and I got hated on the forum just for being real, with people not willing to accept my arguments.

Thing is, AMD have had real stability issues lately, and bad, bad power consumption. Let's face it. Their mid-range products were fine; my R9 270X still does a decent job at 1080p@60Hz with almost all sliders maxed (no AA though). But that's for a 180W TDP and 2x 6-pin connectors.

AMD maybe thought "Hey, if we don't put another 6-pin on the card, people will think it won't use as much power" - "GO FOR IT!!! GLORIOUS IDEA!" Just not the way to do it. My posts regarding the RX480 were that a single 6-pin won't provide any OC headroom. And I'm not talking about the power draw issues. Thing is, a card with one 6-pin and a TDP of 150W won't provide any OC room, at least on the standard PCB. Maybe some manufacturers will make a custom PCB with better power delivery and make the RX480 a better card; currently it's not that good, but wait for the non-reference versions.

Currently, in the mid-range segment I'm seeing 3 cards: GTX970, RX480 and R9 390. Same price, different architectures, almost the same performance. Say what you will, but in some games AMD wins, in some NVidia does. That's the current state. And considering DX12, we really don't know whether developers will start using it tomorrow or in 5 years. Again, many gamers play the same games for years and don't care about DX12. Really, currently it's hard to decide between these 3 cards. All 3 can max out almost any game at 1080p/60Hz. End of story.

Another point is that non-reference cards can deliver performance close to or the same as the segment above. Good GTX970s clock to 1500MHz and deliver almost (or exactly) the same perf as a 980. The 390 clocks less but still delivers good performance. For the 480 we still need to see what they're going to make out of it, so it's not the right time to judge.

Judging by the latest 1080 and 1070, NVidia is giving us a good step up from the 9xx series, with at least 20-30% more perf per model. If the same goes for the 1060, we can expect another 970 but without the VRAM issue (though the 3.5GB thing is more "on paper" than it is in reality). From what we've seen with the 1080, their new models clock well. So if it overclocks like the 1080 or 970, we could see it delivering performance near the 1070, or at least 5-10% less than that, which is still great. If it performs near or a bit above the 980 and gives us the OC room I described, we'll see somewhere near GTX980Ti performance in Gigabyte's G1, Asus' Strix, the Nitro series, etc. If the price for those models is around $330 in the US or 350EUR in Europe, I think NVidia will do very well. People will pay the difference between the RX480 and GTX1060 if they need that "little bit extra" to push the FPS.

My predictions are based on the current performance of the 1xxx and 9xx models, and judging by how my RX480 call turned out, they might be correct. The only thing I can't predict is the pricing. It's not going to be as cheap as the RX480, but probably 50$/50EUR more.

The market is really in need of a good-performing new 1080p card without any of the known issues of current models. Maybe the 390 holds the crown right now, but I doubt it because of its power requirement and OC ability. The 970 does better there, so it's hard to tell.

All in all, in the next couple of days, when the RX480 non-reference versions come out, and the 1060 along with them, we'll see a bunch of options, so everyone will be able to get what they need for 1080p.

RX480/GTX970/R9390/GTX1060 - probably will come to personal preference more than anything else.

And note that all my assumptions are for 1080p, because that's the segment I'm monitoring and interested in.

I wish you all low prices and high performance in future.


----------



## SpeedyVT

Quote:


> Originally Posted by *zealord*
> 
> It doesn't have half as many.
> 
> Also more shaders do scale well but don't scale 1:1
> 
> 2000 shaders are not necessarily exactly twice as fast as 1000. It's more like 60% maybe.
> 
> Look at previous cards like GTX 680 -> Titan / Titan Black / 780 Ti. It's like 80% more shaders (cuda cores) but only 50% more performance.
> 
> or GTX 980 -> 980 Ti. it's like 50% more shaders but only 30~% more performance


Only time it's twice as fast is with AMD when it has sufficient rops to handle the core count.


----------



## epic1337

Quote:


> Originally Posted by *SpeedyVT*
> 
> Only time it's twice as fast is with AMD when it has sufficient rops to handle the core count.


ROPs and bandwidth.

ROPs performance is sensitively tied with bandwidth usage, its quite demanding.
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/4
Quote:


> Looking beyond the frontend and shader cores, we've seen a very interesting reorganization of the rest of the GPU as opposed to Cayman. Keeping in mind that AMD's diagrams are logical diagrams rather than physical diagrams, the fact that the ROPs on Tahiti are not located near the L2 cache and memory controllers in the diagram is not an error. The ROPs have in fact been partially decoupled from the L2 cache and memory controllers, which is also why there are 8 ROP partitions but only 6 memory controllers. Traditionally the ROPs, L2 cache, and memory controllers have all been tightly integrated as ROP operations are extremely bandwidth intensive, making this a very unusual design for AMD to use.
> 
> As it turns out, there's a very good reason that AMD went this route. ROP operations are extremely bandwidth intensive, so much so that even when pairing up ROPs with memory controllers, the ROPs are often still starved of memory bandwidth. With Cayman AMD was not able to reach their peak theoretical ROP throughput even in synthetic tests, never mind in real-world usage. With Tahiti AMD would need to improve their ROP throughput one way or another to keep pace with future games, but because of the low efficiency of their existing ROPs they didn't need to add any more ROP hardware, they merely needed to improve the efficiency of what they already had.
> 
> The solution to that was rather counter-intuitive: decouple the ROPs from the memory controllers. By servicing the ROPs through a crossbar AMD can hold the number of ROPs constant at 32 while increasing the width of the memory bus by 50%. The end result is that the same number of ROPs perform better by having access to the additional bandwidth they need.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *iLeakStuff*
> 
> Nvidia got too much money to spew in development to always come on top.
> AMD thought they were clever to go for low end and let Nvidia have high end alone.
> BAM, here comes GTX 1060 out of the blue to challenge RX 480.
> The card got like a week or two alone.
> 
> AMD need money badly to be able to survive, meaning being able to develop chips for all segments, low, midrange and high end, at the same time. One can also start to wonder why the HUGE efficiency lead of Pascal. Where does it come from? The architecture or 16nm?


The 1060 is still not going to touch the 480 in p/p, and the 1600MHz AIB 480s will be faster than the more expensive 1060s anyway. But you hang on tight to that power efficiency argument, as that's all you really have left at the lower end. Don't know if you realize this yet, but the 480 reference is absolutely flying off the shelves...


----------



## PooPipeBoy

Quote:


> Originally Posted by *tashcz*
> 
> Guys, just have no expectations from this.
> 
> Hope you learned enough from the RX480. That's what I've been posting for a month, and I got hated on the forum just for being real, with people not willing to accept my arguments.
> 
> Thing is, AMD have had real stability issues lately, and bad, bad power consumption. Let's face it. Their mid-range products were fine; my R9 270X still does a decent job at 1080p@60Hz with almost all sliders maxed (no AA though). But that's for a 180W TDP and 2x 6-pin connectors.


My same thoughts exactly, and oddly enough I have an R9 270X as well.
I'm just glad I'm not buying a new graphics card right now, because it's actually entertaining to watch everyone hype up these new graphics cards and then freak out when a bunch of problems are discovered later.


----------



## Majin SSJ Eric

The efficiency of Polaris is pretty bad, I won't lie. But I never have cared about power efficiency anyway (*looks at my overvolted SLI OG Titans)...


----------



## tashcz

Quote:


> Originally Posted by *PooPipeBoy*
> 
> My same thoughts exactly, and oddly enough I have an R9 270X as well.
> I'm just glad I'm not buying a new graphics card right now, because it's actually entertaining to watch everyone hype up these new graphics cards and then freak out when a bunch of problems are discovered later.


Thing is, my low-cost Powercolor R9 270X (it wasn't me that chose it, I got it at the time) started dying, probably due to my extreme overclocks and repeated thermal compound applications; the Powercolor version just isn't made for that. I'm in need of a mid-range GPU, which I consider those I mentioned. They're all good, but each of them has a drawback right now, and we'll see if anything changes with the 1060. If it provides decent OC, decent power usage (not that I care, I just want it not to be too much since I'm going to OC) and decent performance, it can outperform all of these cards:

GTX970 - 2 years old, 3.5GB VRAM issue
R9 390 - high power consumption for that kind of performance, lower OC than above, MAYBE not as fast
RX480 - current versions have terrible power delivery, still probably not the best drivers, still has to prove itself, currently bad overclocking performance

If those things were different each one of them would be perfect. I'm looking at those three since here in Serbia they're almost the same price (280EUR (970 G1) -310EUR (RX480 8GB REF)). Still no 4GB versions of this card in my country.

So if the GTX 1060 solves all of this at a reasonably higher price, it might win the segment for those of us aiming at 60FPS on FHD monitors. But time will tell.


----------



## Forceman

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> The 1060 is still not going to touch the 480 in price/performance, and the 1600MHz AIB 480s will be faster than the more expensive 1060s anyway. But you hang on tight to that power efficiency argument, as that's all you really have left at the lower end. Don't know if you realize this yet, but the reference 480 is absolutely flying off the shelves...


Maybe we should wait until we actually see some 1600MHz 480s before we say what they will or won't beat. They may turn out to be as much of a unicorn as the MSRP 1080s.


----------



## Fuell

Quote:


> Originally Posted by *iLeakStuff*
> 
> Nvidia got too much money to spew in development to always come on top.
> AMD thought they were clever to go for low end and let Nvidia have high end alone.
> BAM, here comes GTX 1060 out of the blue to challenge RX 480.
> The card got like a week or two alone.
> 
> AMD need money badly to be able to survive, meaning being able to develop chips for all segments, low, midrange and high end, at the same time. One can also start to wonder why the HUGE efficiency lead of Pascal. Where does it come from? The architecture or 16nm?


So I will be able to buy a GTX 1060 within 2 weeks? Gotcha. I'm just gonna quote this for reference.


----------



## umeng2002

Quote:


> Originally Posted by *raghu78*
> 
> Nvidia is going to continue and maybe even further extend its GPU market domination. Rx 480 is a PR nightmare worse than the ref R9 290X / R9 290 series. This is a debacle of epic proportions. reports of PCI-E slots dying
> 
> https://community.amd.com/thread/202410
> 
> AMD will not be able to recover from the negative PR around this launch. Nvidia is going to just laugh all the way to the bank with Pascal. The way AMD is going they might fold up within a year.


If it were Nvidia, I would be concerned, since they only shipped like 7 Pascal cards to stores...









But from the reports, AMD sold thousands of RX 480s PER STORE.

I would expect yields for the GTX 1060 to be slightly better, so like 10 cards per country.

By the time they fill that order, the GTX 1080 Ti should be here.


----------



## prjindigo

Quote:


> Originally Posted by *Shiftstealth*
> 
> QFT.
> 
> 10% more performance represented as double lol.


and 4.4tflop is more than 5.5! Wow... look at them data!


----------



## umeng2002

Quote:


> Originally Posted by *tashcz*
> 
> RX480 - current versions have terrible power delivery, still probably not the best drivers, still has to prove itself, currently bad overclocking performance


The power delivery, actually, is quite good if you look at the design. The only issue is the power draw from the PCIe slot.


----------



## tashcz

Quote:


> Originally Posted by *prjindigo*
> 
> and 4.4tflop is more than 5.5! Wow... look at them data!


The 970 has 3.5TFlops and it performs almost identically. It's not all in the numbers.


----------



## DaFirnz

Even if the 1060 is the same price and the same performance, give or take a bit, I can still get into a FreeSync monitor for $150 less than a similar (same specs, same manufacturer) GSync monitor.


----------



## tashcz

Quote:


> Originally Posted by *umeng2002*
> 
> The power delivery, actually, is quite good if you look at the design. The only issue is the power draw from the PCIe slot.


Yes, but that is the current issue, and I'm looking at it from the current buyer's perspective. Also, a single six pin connector won't provide enough power for overclocking. If the reference TDP is 150W and that's what it's drawing, I can't see how we can squeeze more out of it without any extra power. I've got no info on power phases etc., but a custom PCB with an 8pin connector or 2x6pin would be great. There's really no need for a single 6pin; most PSUs above 500W have them, and I doubt anyone running anything above integrated graphics has a PSU without at least an 8pin.
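To put rough numbers on the connector argument, here's a quick sketch using the per-connector power budgets from the PCI Express spec (x16 slot 75W, 6-pin 75W, 8-pin 150W) against a 150W TDP card like the reference 480:

```python
# Power budgets per the PCI Express spec: x16 slot 75 W, 6-pin plug 75 W,
# 8-pin plug 150 W. Shows the overclocking headroom each board layout
# leaves a 150 W TDP card like the reference RX 480.
budgets = {"slot": 75, "6-pin": 75, "8-pin": 150}

reference = budgets["slot"] + budgets["6-pin"]      # 150 W total: zero OC headroom
custom_8p = budgets["slot"] + budgets["8-pin"]      # 225 W total
dual_6p   = budgets["slot"] + 2 * budgets["6-pin"]  # 225 W total

print(reference, custom_8p, dual_6p)  # 150 225 225
```

So a slot plus single 6-pin is budgeted for exactly the reference TDP, while either custom layout adds 75W of headroom on paper.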


----------



## epic1337

Quote:


> Originally Posted by *tashcz*
> 
> Yes but that is the current issue, and I'm looking from the current buyers perspective. Also, a single six pin connector won't provide enough power for overclocking. If the reference TDP is 150W and that's what it's drawing, can't see how we can squeeze more out of it without any power. I've got no info on power phases etc. But a custom PCB with a 8pin connector or 2x6pin would be great. Really no need for a single 6pin, most PSUs above 500W have them, and I doubt anyone running anything above integrated graphics has a PSU without at least an 8pin.


A 6pin would actually be fine; the problem is Polaris is much more power hungry than anticipated.

So in this case it's better off with an 8pin. 2x6pin would be unappealing, as some less powerful power supplies only have a single 6+2pin.


----------



## magnek

Quote:


> Originally Posted by *sugalumps*
> 
> LOL
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Nvidia never going to live down the fanboy edition.


Nah Founders Edition would indeed be too pompous for a card in 1060's segment (but I'm sure uninformed gamers will still flock to it like flies on dung). So instead of having a Founders Edition, the *1060 3GB edition will be called the Casual Edition.* Why Casual Edition? Because it's for all your casual gaming needs, perfect for the budget conscious yet demanding gamer. Pre-order yours today and reserve a copy on launch day!

As far as expected performance:

















960 is exactly half of a 980, and had around 55.6% of its performance. Since 1060 is exactly half of a 1080, assuming the same scaling factor, it would end up having 177 * 0.556 = 98.4% relative performance using RX 480 as the baseline.

Obviously nVidia claimed 15% faster performance, but it could be just as cherry picked as CF 480 > 1080 in AotS ie absolute best case scenario. So I think in reality it'll probably end up trading blows with 480 but with better perf/watt.


----------



## Klocek001

Quote:


> Originally Posted by *iLeakStuff*
> 
> Nvidia got too much money to spew in development to always come on top.
> AMD thought they were clever to go for low end and let Nvidia have high end alone.
> BAM, here comes GTX 1060 out of the blue to challenge RX 480.
> The card got like a week or two alone.
> 
> AMD need money badly to be able to survive, meaning being able to develop chips for all segments, low, midrange and high end, at the same time. One can also start to wonder why the HUGE efficiency lead of Pascal. Where does it come from? The architecture or 16nm?


----------



## Vesku

Quote:


> Originally Posted by *Exeed Orbit*
> 
> Currently, yes it is. You're not likely to be memory starved given NVidia's memory compression techniques. BUT, I do fear for future games. Though, mainstream cards are not particularly known for their longevity.


Highest quality textures have generally been targeting 4 GB for at least a year now.


----------



## epic1337

2GB~3GB would be fine for budget <$150 cards, but 4GB should be the norm for the latest mainstream cards.


----------



## magnek

Quote:


> Originally Posted by *Vesku*
> 
> Highest quality textures have generally been targeting 4 GB for at least a year now.


Exactly. Apparently a 970 with its 3.5GB is already struggling with Dying Light: The Following Enhanced Edition
Quote:


> Originally Posted by *HardOCP poster*
> 
> Well, the 970 has only 3.5GB of VRAM, while you can get 4 or 8 for the 480. 8 should really be the minimum in this day and age, unless you're planning to upgrade within the next year. Won't even last two.
> 
> I think games within the next year will start to max out 4GB, and maybe even creep towards 6GB VRAM. *I know my 970 has VRAM troubles with Dying Light The Following Enhanced Edition if I max everything.*


Then DOOM with Ultra Nightmare settings turned on goes over 4GB even at 1080p. I've even seen VRAM usage creep up to 4.5GB in some spots, so 3GB in 2016 is pretty much DoA.


----------



## tajoh111

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Lol, all of your comments are like bad satire I swear! The 1060 is still not going to touch the 480 in price/performance, and the 1600MHz AIB 480s will be faster than the more expensive 1060s anyway. But you hang on tight to that power efficiency argument, as that's all you really have left at the lower end. Don't know if you realize this yet, but the reference 480 is absolutely flying off the shelves...


All the 1060 needs for similar price-to-performance to a 480 is a $250 selling price and about 5% better performance than an RX 480. Very realistic at this point.

Looking at the architecture, it will probably get there (66-70% of the performance of a 1070, with a bit higher clocks). The 192bit bus is huge for the 1060; with a 128bit bus it would have trouble, but at the very least it will be as fast as an RX 480, at least initially. Nvidia is not going to let AMD gain ground in marketshare in such a popular segment. Since AMD has such a foothold in the console arena, Nvidia's marketshare in PCs has been tremendously important for giving developers a reason to program and develop with CUDA-type cores as a basis, or to implement GameWorks. As a result, they have been aggressive so far in obtaining PC marketshare, and I see no reason for them to let up. They are not going to let AMD have a 25% advantage in price-to-performance when their cards will be cheaper to make and they can afford a smaller margin. The Founders Edition nonsense happened because AMD announced they wouldn't be competing at the high end till 2017.

1600mhz AIBs are unlikely, and definitely not factory clocked at that speed. GCN just isn't capable of clocks like that because of its volatile nature. The best GCN 3 was capable of last gen was 1.6ghz on LN2 (vs the 2.3ghz Maxwell was capable of), and that wasn't game stable, just bench stable. The kink for AMD is that every new iteration of GCN clocked worse than the last: Tahiti overclocked better than Hawaii, and Hawaii overclocked better than Tonga/Fiji. Fiji was a good hint that we were not going to see the frequency gains Nvidia got from the node transition. Even with water from the beginning, Fiji only overclocked to 1200mhz, with most stopping in the 1150mhz range. The problem for AMD, and why GCN overclocks worse with every iteration, is that although clocks scale roughly linearly with voltage, the slope of overclock gains versus power consumption gets flatter with every iteration of GCN. Higher power consumption for smaller gains in frequency, essentially.

And this version of GCN is showing the same symptoms. With only 5-6% overclocks, these cards' power consumption jumps 33%, or about 50 watts. As a result, things like 25% overclocks are out of the window unless you want to cool a 400-500+ watt, 232mm2 die that, given the surface area, only water is capable of cooling, and at that point you're much better off with a 1070 or even a GTX 980 Ti.
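A quick sketch of why a small clock bump can blow up power like that, using the usual first-order dynamic power model (P ~ f * V^2). The 12% voltage figure below is illustrative, not a measured RX 480 value; it's just a voltage step that reproduces the ~33% jump described above:

```python
# First-order dynamic power model: P ~ f * V^2. The voltage bump is an
# illustrative assumption, not measured RX 480 data; it shows how a small
# overclock that needs extra voltage produces a disproportionate power jump.
def power_scale(freq_gain, volt_gain):
    """Relative power increase for a given clock and voltage bump."""
    return (1 + freq_gain) * (1 + volt_gain) ** 2

# A ~6% overclock needing ~12% more voltage:
print(f"+{power_scale(0.06, 0.12) - 1:.0%} power")  # +33% power
```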

This is the very reason why AIBs won't clock these cards aggressively. Long-term stability is affected by heat and power. Maxwell was clocked so high this generation because its power consumption stays stable: basically, power doesn't go off the deep end until past 1400mhz, which is why so many partner cards were overclocked into that range. Because the power is controlled, there is less chance of long-term degradation.

As I said before, LN2 clocks give a preview of how well cards will clock on the next node, but we have to take into account the usability of AMD's clocks versus Nvidia's.

Because it is GCN, and a more complex version of it, I'm guessing 1.5ghz might be possible under the right scenario, like a benchmark run on conventional cooling, but the problem with AMD clocks at their upper limit is that they are simply not usable for gaming. The highest clock we have seen so far using a water cooler was 1.48ghz or so, but that was only running a bench, not stable for gaming. For stability, this card will probably clock more in the 1425mhz range with an AIB cooler while gaming.

That 1600mhz rumor might have come from a partner like Sapphire, XFX or PowerColor, which sell only AMD cards and have everything to gain by waiting for their higher-margin cards to be released.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Forceman*
> 
> Maybe we should wait until we actually see some 1600MHz 480s before we say what the will or won't beat. They may turn out to be as much a unicorn as the MSRP 1080s.


Fair point.


----------



## Ashura

Quote:


> Originally Posted by *Redeemer*
> 
> only graph that matters











I was just about to make one too!


----------



## SpeedyVT

Quote:


> Originally Posted by *Ashura*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was just about to make one too!


Graphs are so 2015, you should make a pie chart.


----------



## Defoler

Quote:


> Originally Posted by *Buris*
> 
> I would bet that the 10% faster will ring true, but only in select applications.
> 
> DX11 games should give a defacto edge to the 1060, it won't be quite 10%, but it will there.
> 
> This card will be most likely the best value @1080p card upon release (if it is under the 8GB 480 in price, otherwise ...eh).
> 
> At DX12 or 1440p/VR, the RX480 will most likely edge it out. though it will likely hold its own
> 
> The RX480 is already leaning towards the lower bracket of 1440p usefulness, and with a 192-bit bus and only 192GB/s memory bandwidth, even with the video compression on current pascal cards, this card will be extremely bandwidth limited, this makes the 1060 not only near-useless for SLI, but also useless for 1440p.
> 
> Likely the 1060 could have been massively overclocked and without the hard-192-bit-bus limitation the card could seriously damage the income Nvidia's own 1070.. With this block in place, it simply won't be able to compete at a higher resolution than 1080p, or VR for that matter.
> 
> 
> Summation:
> When the GTX 1060 comes out, it will be the best mid-range card at 1080p in DX11.
> 
> It will lose to the RX 480 in VR, 1440p, and DX12.
> 
> Neither card will blow the other away like this fallacious graph is trying to convey.
> 
> No word on overclocking performance yet, but AIB 1060's might be a great option depending on memory overclocking


Not sure you are correct in assuming it will not be better in DX12.

Don't forget that Pascal is not Maxwell. Nvidia put in a lot of work, and its DX12 support is high and mighty.
The 1080 runs from 20% faster in an AMD-favouring game to a whopping 80% faster than the Fury X at 1440p in DX12 games. Even in AotS it runs 25% faster.

Meanwhile the 480 is still running under the 390X in DX12, and is still 15-30% slower than the 1070 in DX12, depending on the game.
So if the 1060 is going to be 15-20% slower than the 1070 at 1080p (based on specs and the performance/power ratio of Nvidia vs AMD), it is going to match the 480 in DX12 in at least most games, and be a couple of steps faster in DX11.

If that is true (and we have no idea, as we are all speculating atm), and if Nvidia prices the 3GB 1060 at the lower end of the bracket, it is going to rival the 480 and they are going to be neck and neck.

The only upside AMD has is that they are already selling 480s, which should give them a first-seller advantage and might hinder Nvidia's sales a bit.


----------



## Clovertail100

"Leaked"

I'm trying to picture a bunch of NV engineers looking at this graph after extensive testing, or showing it to their trade partners.

I could see this being a "leak" from a secret presentation to drunk ******ed children, though, I guess.


----------



## magnek

Quote:


> Originally Posted by *Defoler*
> 
> Not sure you are correct assuming it will not be better in DX12.
> 
> Don't forget that pascal is not maxwell. nvidia had a lot of work done and support for DX12 is high and mighty.
> The 1080 is running from 20% on an AMD favouring game, to a whooping 80% faster than the fury x at 1440p in DX12 games. Even in AOTS it runs 25% faster.
> 
> With the 480 is running still under the 390x in DX12, and is still 15-30% slower than the 1070 in DX12 depends on the game.
> So if the 1060 is going to be 15-20% slower than the 1070 at 1080p (based on specs and performance/power ratio from nvidia vs amd), it is going to match the 480 in DX12 at least in most games, and be a couple of steps faster in DX11.
> 
> If that is true (and we have no idea, as well are all speculating atm), and if nvidia are going to price the 1060 with 3GB at the lower end of the bracket, it is going to rival the 480 and they are going to be neck in neck.
> 
> The only upside that AMD has, is that they are now selling the 480s, which should mean they have an advantage in selling first, which might hinder a bit nvidia sales.


We can get a rough idea what to expect just by looking at past history.



Spoiler: Warning: Spoiler!







650 Ti = exactly 1/2 of 680
960 = exactly 1/2 of 980

I picked the SSC version of 650 Ti because its GPU clock was closest to the average boost clock of 680.

In 650 Ti's case, it had 48% the performance of a 680, while 960 had 55.6% the performance of a 980. So, if 1060 ends up being *exactly 1/2* of 1080 (1/2 in shaders, TMUs, and ROPs, but especially ROPs), then we can estimate a low and high value for its performance.



Spoiler: Warning: Spoiler!







Using the low value of 48%, we end up with 177 * 48% = 85% performance relative to 480
Using the high value of 55.6%, we end up with 177 * 55.6% = 98.4% performance relative to 480

Now obviously since nVidia is marketing the 1060 as being 15% faster (possibly in the absolute best case scenario), the high value is likely more realistic. So the two cards will likely end up trading blows, with 1060 having the obvious edge in perf/watt.
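The estimate above can be reproduced in a few lines. The 1.77 baseline (GTX 1080 = 177% of an RX 480) and the two half-chip scaling factors are this post's assumptions, not measured data:

```python
# Back-of-envelope "half of a 1080" estimate, following the post's
# assumptions (forum speculation, not benchmark data).
GTX1080_VS_RX480 = 1.77  # 1080 relative performance, RX 480 = 1.00

# Historical half-chip scaling factors cited above:
scaling = {
    "low (650 Ti vs 680)": 0.48,
    "high (960 vs 980)":   0.556,
}

for label, factor in scaling.items():
    print(f"{label}: {GTX1080_VS_RX480 * factor:.1%} of an RX 480")
# low (650 Ti vs 680): 85.0% of an RX 480
# high (960 vs 980): 98.4% of an RX 480
```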


----------



## Defoler

Quote:


> Originally Posted by *magnek*
> 
> We can get a rough idea what to expect just by looking at past history.
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 650 Ti = exactly 1/2 of 680
> 960 = exactly 1/2 of 980
> 
> I picked the SSC version of 650 Ti because its GPU clock was closest to the average boost clock of 680.
> 
> In 650 Ti's case, it had 48% the performance of a 680, while 960 had 55.6% the performance of a 980. So, if 1060 ends up being *exactly 1/2* of 1080 (1/2 in shaders, TMUs, and ROPs, but especially ROPs), then we can estimate a low and high value for its performance.
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Using the low value of 48%, we end up with 177 * 48% = 85% performance relative to 480
> Using the high value of 55.6%, we end up with 177 * 55.6% = 98.4% performance relative to 480
> 
> Now obviously since nVidia is marketing the 1060 as being 15% faster (possibly in the absolute best case scenario), the high value is likely more realistic. So the two cards will likely end up trading blows, with 1060 having the obvious edge in perf/watt.


That of course is possible.
Since I expect nvidia to rush the 1060 as fast as possible because the 480 is already in the market, we should be able to see in the next month or so.


----------



## twitchyzero

Assuming those graphs are right, how much faster is it compared to a 960?


----------



## zealord

Quote:


> Originally Posted by *one-shot*
> 
> You're speculating without any references. You need to add context and show which other components of the GPU can influence scaling. I understand you just wanted to make up a number and "60%" was the lucky number.
> 
> You also made up the numbers on the shaders. Your entire argument isn't based on any evidence. A quick search here:
> 
> http://www.anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review
> 
> That link which took me approximately 5 seconds to find, shows the Cuda Cores for each GPU.
> 
> Cuda Cores
> 
> GTX 980 Ti - 2816
> 
> GTX 980 - 2048
> 
> 37.5% more, not 50%.
> 
> Texture Units
> 
> GTX 980 Ti - 176
> 
> GTX 980 - 128
> 
> 37.5% more
> 
> ROPs
> 
> GTX 980 Ti - 96
> 
> GTX 980 - 64
> 
> 50% more
> 
> Please stop making up numbers, spreading more misinformation because you're too lazy and can't be bothered to research for a few seconds.


I don't like your condescending tone, but I see that I made a mistake there.

I was thinking about the Titan X (3072) which has exactly 50% more cuda cores compared to the GTX 980 (2048)









And even then the Titan X is, like I said, not exactly 50% faster although it has 50% more shaders/CUDA cores, but only ~35%.


----------



## Fb74

Quote:


> Originally Posted by *twitchyzero*
> 
> assuming those graphs are right, how much faster is it compared to 960?


Assuming that the GTX 1060 "could" be equivalent to a GTX 980, you should see nearly double the performance.









So GTX 1060 should be nearly 1.8x to 2x a GTX960.
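Spelling out that chain of assumptions: if the 1060 lands at roughly GTX 980 level, and a 980 is about 1.8x a 960, then a 1060 anywhere from equal to a 980 up to ~10% past it comes out around 1.8-2x a 960. All ratios here are the thread's speculation:

```python
# Fb74's estimate, spelled out. Both ratios are assumptions from the
# thread, not benchmark results.
gtx980_vs_960 = 1.8  # assumed: GTX 980 is ~1.8x a GTX 960

for bonus in (0.00, 0.10):  # 1060 equal to a 980, or 10% past it
    print(f"{gtx980_vs_960 * (1 + bonus):.2f}x a GTX 960")
# 1.80x a GTX 960
# 1.98x a GTX 960
```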


----------



## maltamonk

Quote:


> Originally Posted by *tajoh111*
> 
> This is all the very reason why AIB won't clock these cards aggressively. Long term stability are effected by heat and power. Why maxwell was clocked so high this generation is it simply stable when it comes power consumption. Basically power doesn't go off the deep end until past 1400mhz which is why so many partners cards were overclocked to this range. Because the power is controlled, there is a less likely chance of long term degradation.
> 
> As I said on before, ln2 clocks give a preview of how well cards will clock on the next node, but we have to take into usability of AMD clocks vs Nvidia usability of clocks.
> 
> Because it is GCN and a more complex version of it, I am guessing 1.5ghz might be possible under the right scenario like a benchmark using conventional cooling, but the problem with AMD clocks at their upper limit is they are simply not usable for gaming. The Highest clocked we have seen so far using a water cooler hit 1.48ghz or so, but that was only running a bench, not stable for gaming. For stability, this card is probably going to clock more in the 1425mhz range with an AIB while gaming.
> 
> That 1600mhz rumor might have been from a partner like sapphire or XFX or powercolor who only sells AMD cards who have everything to gain by waiting for their higher margin card to be released.


To be fair, the 1080s are not able to maintain their 2ghz+ clocks while gaming. They throttle back due to power targets and, in the case of the FE, thermal targets.


----------



## Basard

Quote:


> Originally Posted by *andrews2547*
> 
> But overall performance in what? Games? Rendering? How far it can go if you throw it?
> 
> Do you see the problem here? If they said how they got those results, then it wouldn't as much of a problem.


LOL, at least there's some comedy in this thread. "...how far you can throw it?"


----------



## ChevChelios

Nvidia hitting with full force


----------



## rv8000

With 50% fewer shaders, big bandwidth cuts, and ROP/TMU cuts, I'm expecting the 1060 to be roughly 35% slower than a 1080 (the difference between a 1070 and 1080 clock for clock is only ~8%).

So clock for clock (using 1962MHz for this estimate) the Firestrike GPU score would be around 14.7k. It will definitely be faster than a 980 at 1080p, but it isn't going to have much headroom outside of gains from memory OCs, depending on how high they boost at stock.

*I'm also expecting a $299 msrp
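The Firestrike figure above works out like this. The 1080 GPU-score baseline is a hypothetical value chosen to be consistent with the post's arithmetic, not a quoted benchmark result:

```python
# rv8000's Firestrike projection, spelled out. The baseline score is an
# assumption consistent with the post's arithmetic, not a measured result.
GTX1080_FS_GPU = 22600   # assumed 1080 Firestrike GPU score at ~1962 MHz
DEFICIT = 0.35           # post's guess: 1060 roughly 35% slower

gtx1060_fs = GTX1080_FS_GPU * (1 - DEFICIT)
print(round(gtx1060_fs))  # 14690, i.e. the ~14.7k figure above
```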


----------



## Catscratch

As I'm in the targeted "upgrade range", should I be honored to witness such a gimmicky nonsense race from both companies?


----------



## tashcz

You are not considering all perspectives. The point is you need a reason not to buy the previous generation's step-up card. If it's RX480 performance, it's almost the same as 970 performance. So why would you buy the 3GB 1060 instead of the 970, then?

Point is:

1080 > 980ti
1070 > 980
1060 > 970

And each is almost a whole last-gen model up in speed. So the 1060 should not be as fast as a 970, it should be as fast as a 980. Again, it's not all in the numbers, cores etc.


----------



## Chargeit

Holy Hell!!!

It's like a single 1060 has twice the performance of a 480 and is 3x as power efficient. Marketing graphs at their finest.


----------



## sugalumps

Quote:


> Originally Posted by *Chargeit*
> 
> Holy Hell!!!
> 
> It's like a single 1060 has twice the performance of a 480 and is 3x as power efficient. Marketing graphs at their finest.


That graph was actually proven earlier btw, by an nvidia scientist.


----------



## Exeed Orbit

Quote:


> Originally Posted by *tashcz*
> 
> You are not taking all perspectives. The point is you need a reason not to buy a generation before card step-up. If it's RX480 performance it's almost the same as 970 performance. Why would you buy the 3GB 1060 instead of the 970 than?
> 
> Point is:
> 
> 1080 > 980ti
> 1070 > 980
> 1060 > 970
> 
> And it's almost a whole last gen model up faster. So 1060 should not be as fast as a 970, it should be as fast as 980. Again talking, not all in numbers, cores etc.


The comparisons you're making are inaccurate. The 1070 is also at about GTX 980Ti numbers (Usually better, unless the game is particularly memory bandwidth hungry). So it's safe to say the 1060 will be closer to the 980 than it will to the 970.


----------



## SuperZan

Quote:



> Originally Posted by *ChevChelios*
> 
> Nvidia hitting with full force


I'm sure the 1060 will be just fine but I'd never call a *60 'hitting with full force'.


----------



## ChevChelios

By that I meant they not only took over the higher-end market, but are releasing the mainstream card too now (in 1-2 weeks) instead of giving AMD free space in the $200-$250+ market till fall.


----------



## SoloCamo

Quote:


> Originally Posted by *ChevChelios*
> 
> by that I meant they not only took over the higher-end market, but releasing the mainstream too now (in 1-2 weeks) instead of giving AMD free space in the $200-$250+ market till Fall


I'm honestly confused, or clearly missing something that was posted. When did Nvidia confirm the 1060, and when did anyone confirm its release timeframe? Polaris was rumored for quite a while, for example; it didn't exactly show up 2 weeks after the rumors. We all know the 1060 is coming, but everyone seems to be talking as if the release date has already been set in stone.


----------



## tashcz

Quote:


> Originally Posted by *Exeed Orbit*
> 
> The comparisons you're making are inaccurate. The 1070 is also at about GTX 980Ti numbers (Usually better, unless the game is particularly memory bandwidth hungry). So it's safe to say the 1060 will be closer to the 980 than it will to the 970.


That's exactly what I said in my earlier post; this is just something I wanted to point out, that there is no way the GTX 1060 will only match the 970, it will be faster. Maybe I just didn't say it well in this post, sorry for the misunderstanding.


----------



## tajoh111

Quote:


> Originally Posted by *magnek*
> 
> We can get a rough idea what to expect just by looking at past history.
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> 650 Ti = exactly 1/2 of 680
> 960 = exactly 1/2 of 980
> 
> I picked the SSC version of 650 Ti because its GPU clock was closest to the average boost clock of 680.
> 
> In 650 Ti's case, it had 48% the performance of a 680, while 960 had 55.6% the performance of a 980. So, if 1060 ends up being *exactly 1/2* of 1080 (1/2 in shaders, TMUs, and ROPs, but especially ROPs), then we can estimate a low and high value for its performance.
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> 
> Using the low value of 48%, we end up with 177 * 48% = 85% performance relative to 480
> Using the high value of 55.6%, we end up with 177 * 55.6% = 98.4% performance relative to 480
> 
> Now obviously since nVidia is marketing the 1060 as being 15% faster (possibly in the absolute best case scenario), the high value is likely more realistic. So the two cards will likely end up trading blows, with 1060 having the obvious edge in perf/watt.


We have to take into account a few more factors.

At low resolutions, high-end cards, particularly today's high-end cards, are CPU bound, and front-end bottlenecks prevent cards like the 1080 from showing their maximum performance, which shrinks the real power difference between the cards. These cards can only show how strong they are versus low-end cards at high resolutions.

So it would be more accurate to use the 1070 as a baseline if we are going to use Pascal as a basis, both because it experiences that overhead issue less and because of the next point: the GTX 1060 will have a 192bit bus. That means the card is going to get 48 ROPs and more TMUs, and this is what prevents mediocrity.

And this means a great deal. For example, look at the Kepler generation: GK106's full chip was actually the GTX 660, and coincidentally it also had a 192bit bus. That meant the architecture was less starved for bandwidth, and it also got everything else that came with a 192bit bus, including the extra ROPs. This is demonstrated clearly between the GTX 660 and GTX 650 Ti.

There is only a 25% difference in shaders between the two cards, but the GTX 660 performs 50% better.

As a result, I would take your numbers and add about 15% to them to get the performance of the gtx 1060.
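Applying that ~15% adjustment to the quoted low/high estimates gives (all inputs are the thread's speculation, with the RX 480 = 100%):

```python
# tajoh111's ~15% bus/ROP adjustment applied to magnek's low/high
# estimates (1060 performance relative to an RX 480 = 1.00). All
# figures are forum speculation, not benchmark data.
estimates = {"low": 0.85, "high": 0.984}
ADJUSTMENT = 1.15  # the suggested ~15% bump

for label, rel in estimates.items():
    print(f"{label}: {rel * ADJUSTMENT:.1%} of an RX 480")
# low: 97.8% of an RX 480
# high: 113.2% of an RX 480
```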


----------



## L36

A mere 15% performance lead will be matched with a few driver updates pushed out by AMD. Looking at Nvidia's track record of driver optimizations, this card will be the equivalent of a doorstop once Volta drops next year. Once DX12 titles start rolling out, the 480 will stomp this card.


----------



## ChevChelios

Quote:


> Originally Posted by *SoloCamo*
> 
> I'm honestly confused or clearly missing something that was posted. When did Nvidia confirm the 1060 and when did someone even confirm it's release timeframe? Polaris has been rumored quite a while for example.. didn't exactly show up 2 weeks after the rumors. We all know the 1060 is coming, but everyone seems to be talking as if the release date has already been set in stone..


well the 1060 leaks seemed legit to me atm

if they're fake then you're right, yeah


----------



## ChevChelios

Quote:


> Originally Posted by *L36*
> 
> A mere 15% performance lead will be matched with a few driver updates pushed out by AMD. Looking at the track record of driver optimizations from nvidia, this card will be equivalent of a doorstop once Volta drops next year. Once DX12 titles start rolling out, the 480 will stomp this card.


yeah, or not

I mean the 480 is on average just on par with the old "obsolete" 970 right now, how is it ever gonna beat the 1060


Quote:


> Looking at the track record of driver optimizations from nvidia,


seems pretty excellent to me, they take very little time to get everything out of their cards

Quote:


> once Volta drops next year


you think Volta is in 2017 ? do tell ...


----------



## nakano2k1

Has NVidia made announcements about when this card is going to be released? I mean, there are some cards being released into the "wild", but where are the official announcements? It's one thing to flaunt a couple of cards that could be all of 40 or 50 in the world today. It's another thing to have proper stock to do a proper launch.

As for "amazing price to performance and godlike power draw", we'll see when reviewers do proper unbiased reviews. Until then, I have my blinders on.


----------



## Fb74

Quote:


> Originally Posted by *nakano2k1*
> 
> Has NVidia made announcements about when this card is going to be released? I mean, there are some cards being released into the "wild", but where are the official announcements? It's one thing to flaunt a couple of cards that could be all of 40 or 50 in the world today. It's another thing to have proper stock to do a proper launch.
> 
> As for "amazing price to performance and godlike power draw", we'll see when reviewers do proper unbiased reviews. Until then, I have my blinders on.


They said "availability on 14th of July".
But I don't trust them, I am sure you can add a full month to have "some" stock.


----------



## JackCY

Even 1080/1070s are not properly released yet, they can't provide any supply for that 1% that wants to buy them lol
So who the hell knows when they tease 1060 with a paper launch the same way.


----------



## philhalo66

AMD does the same crap; not long ago I saw one nearly identical to this from the red team. They both target people who don't know better.


----------



## mcg75

I would be very surprised if these slides actually came from Nvidia.

Nvidia typically doesn't refer to a competitor's product in slides like these. And claiming it has the power of a GTX 980? They will probably compare it to the GTX 960, since they occupy the same spot in the lineup.


----------



## Master__Shake

Quote:


> Originally Posted by *lombardsoup*
> 
> For $300+? They'll be collecting dust in the warehouse.
> 
> And lol at the graphs


doubtful.

the 960 was a genuine turd and it sold.


----------



## Master__Shake

Quote:


> Originally Posted by *CasualCat*
> 
> First thing I saw and cringed about. I swear these companies use the how to lie with statistics book as a marketing manual.


lies, damn lies, and nvidia graphs


----------



## Rav3n07

I don't think anyone will be surprised if it's faster than the RX 480. The 480 barely knocks off the 970, an architecture from 2 years ago.

I would like to see the 480 perform as well as a 1060 though, because competition is great for us consumers.

Sent from my SM-N920P using Tapatalk


----------



## 12Cores

It's clear that Nvidia is just selling you relative performance. This card will without a doubt equal the 980, which means in their book they are giving you a $200 price cut. GTX 1050 for $229, anyone?


----------



## fatmario

My guess is the GTX 1060 will have GTX 980 performance out of the box, priced at $249.99 for the 3GB version and $299.99 for the 6GB.


----------



## GorillaSceptre

3GB..

Better not be the case in 2016.. Especially for that price.


----------



## BulletSponge

720p has met its match with the 1060 3Gb model!


----------



## tashcz

Where did you guys even get the whole 3GB thing?


----------



## Chargeit

Quote:


> Originally Posted by *fatmario*
> 
> My guess is Gtx 1060 will have gtx 980 performance out of box and price for 3gb version $249.99 and 6gb for $299.99


Have to price them according to the 480's to put the nail in the coffin.


----------



## incog

Speculation is nice and all but until these cards are on sale by distributors, there's not much going on here.

We'll see if it can undercut the RX 480, which would be bad for AMD, and thus bad for the industry.


----------



## Clocknut

Quote:


> Originally Posted by *fatmario*
> 
> My guess is Gtx 1060 will have gtx 980 performance out of box and price for 3gb version $249.99 and 6gb for $299.99


The 1060 won't be above $280.

The 8GB 1070 is priced at $380. Selling a 6GB 1060 for $299 won't make any sense. Who is going to buy an EVGA 1060 SC edition when they can just buy a 1070 base model?


----------



## tkenietz

1080 = 8.85 tflops
1070 = 6.45 tflops 72%tf = 84% actual
1060 = 4.35 tflops 49%tf = ?57% actual?

980ti= 6.06 tflops
980 = 4.95 tflops 81%tf = 84% actual
970 = 3.92 tflops 64%tf = 72% actual
960 = 2.56 tflops 42%tf = 44% actual

Would put it right at RX 480 level.

I would think an OC to 2200MHz is reasonable, and if it overclocks on the reference version, with the value of the Nvidia brand, $300 *MSRP* for the 6GB is a 'fair' price.

I think that an RX 480 @ 1500MHz would likely beat a 1060 @ 2200MHz, or at least be very close.

I also think that it would pull 110-120 watts under typical load.

I also think that if it launches this month, it will be somewhere between low and no stock, possibly causing prices to escalate.

People have talked about how people pay $500-750 for what used to be an x60; now you could very well pay up to $350 for an actual current x60.
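The TFLOPs-to-actual extrapolation above can be sketched in a few lines; the "uplift" factor is just the poster's observed 1070 ratio (72% TFLOPs -> 84% actual), not an official figure:

```python
# Rough sketch of the TFLOPs scaling estimate above (poster's numbers, not benchmarks).
TFLOPS_1080 = 8.85
TFLOPS_1070 = 6.45
ACTUAL_1070 = 0.84  # observed performance vs the 1080

def estimate_actual(tflops):
    """Scale raw TFLOPs by the uplift the 1070 showed over its TFLOPs ratio."""
    uplift = ACTUAL_1070 / (TFLOPS_1070 / TFLOPS_1080)  # ~1.15
    return tflops / TFLOPS_1080 * uplift

print(round(estimate_actual(4.35), 2))  # prints 0.57, in line with the "?57% actual?" guess
```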


----------



## tkenietz

Quote:


> Originally Posted by *Clocknut*
> 
> 1060 wont be above $280.
> 
> 8gb 1070 is price at $380. Selling 6GB 1060 for $299 wont make any sense. Who is going to buy EVGA 1060 SC edition when they can just buy 1070 base model?


People who aren't going to spend $120 more for a GPU, as the cheapest 1070 is $420.


----------



## Tobiman

When cards are actually available for sale at MSRP, then let's talk; otherwise this is just a massive waste of time. I still can't purchase a 1070 for $379 or even $400 without sitting at my PC spamming F5 like I have no life. There's absolutely none on the market atm. Zero. Even EVGA aren't selling a 1070 under $400. Don't even talk about the 1080.

I paid $540 for a $400 R9 290 3 years ago and won't repeat that ever again.


----------



## TrueForm

Quote:


> Originally Posted by *Tobiman*
> 
> When cards are actually available for sale at msrp then let's talk otherwise this is just a massive waste of time. I still can't purchase a 1070 for $379 or even $400 without sitting on my pc spamming F5 like I have no life. There's absolutely none on the market atm. Zero. Even EVGA aren't selling a 1070 under $400. Don't even talk about the 1080.
> 
> I paid $540 for a $400 R9 290 3 years ago and won't repeat that ever again.


You can afford to wait. Your R9 290 can still hold its own.


----------



## Diffident

Didn't any of you learn your lesson in the RX480 thread? All these grand predictions of performance were picked out of the air and thrown around as fact, most of which turned out to be totally false... and it's happening again with the 1060. I guess in a day or two the 1060 will be beating a 980 Ti, just like the RX 480 was.


----------



## ChevChelios

low end prediction - 970 level or a bit above, so ~equal to 480

mid end prediction - right between 970 & 980, maybe a bit closer to 980

high end prediction - solid 980 level or a bit above it


----------



## Lass3

Quote:


> Originally Posted by *TrueForm*
> 
> You can afford to wait. Your R9 290 can still hold its own.


How do you know? I upgraded from a 290 (@1180 MHz) 1 year ago because I lacked performance at 1440p.


----------



## Ha-Nocri

I don't see 1060 with these specs beating a non-throttling 480, especially as new vulkan/dx12 titles come out. Will consume less power tho, for those that care


----------



## Waitng4realGPU

So videocardz.com made some graphs?


----------



## BigTree

almost 500€ in Lithuania

http://www.skytech.lt/search.php?keywords=gtx+1060&x=0&y=0&search_in_description=0

from reddit


----------



## aDyerSituation

Quote:


> Originally Posted by *BulletSponge*
> 
> 720p has met its match with the 1060 3Gb model!


Comment of the year


----------



## rainzor

Quote:


> Originally Posted by *BigTree*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> 
> 
> 
> almost 500€ in Lithuania
> 
> http://www.skytech.lt/search.php?keywords=gtx+1060&x=0&y=0&search_in_description=0
> 
> from reddit


So it's the same price as MSI 1070 that shop is selling. Obviously it won't cost that much.
Some more rumors http://videocardz.com/61780/nvidia-geforce-gtx-1060-rumors-part-1


----------



## BigTree

Quote:


> So it's the same price as MSI 1070 that shop is selling. Obviously it won't cost that much.


ASUS STRIX 1070 543.00€
http://www.skytech.lt/strixgtx1070o8ggaming-asus-strixgtx1070o8ggaming-p-316077.html

ASUS STRIX 1060 472.00€

That's a 70€ difference. Seems logical.


----------



## ChevChelios

lol its really happening

Quote:


> Obviously it won't cost that much.


of course it wont

300-350 EUR in EU


----------



## Waitng4realGPU

I'll believe the 1060 is launching when I see it in stock at stores for longer than half a day


----------



## rainzor

You are comparing two different models; it's either a 44 or a 113 eur difference. There is no logic behind it, it's a placeholder price.


----------



## BigTree

Quote:


> Originally Posted by *ChevChelios*
> 
> lol its really happening
> of course it wont
> 
> 300-350 EUR in EU


In Germany 970 ASUS STRIX costs 279€ and the 960 ASUS STRIX 197€

1070 STRIX costs 529€ 1060 STRIX ?


----------



## Waitng4realGPU

Just found some leaked info on another forum.

GTX 1060 Release date 07/07/2016

Specs

DX11 compatible

1080p Ready

Windows 7/8 compatible.

3GB (2.5GB) DDR5

DPC latency stuttering feature

DVI Pixel clock feature

Non SLI Compatible

Other info

Launch type - Paper

Supplies as of 04/07/2016 - 15 units.

Estimated pricing - Gouged


I'll try to see if they have a source for this valuable information.


----------



## rainzor

I'm sorry, was that supposed to be funny?


----------



## EightDee8D

Quote:


> Originally Posted by *Waitng4realGPU*
> 
> Just found some leaked info on another forum.
> 
> GTX 1060 Release date 07/07/2016
> 
> Specs
> 
> DX11 compatible
> 
> 1080p Ready
> 
> 3GB (2.5GB) DDR5
> 
> DVI Pixel clock feature
> 
> Non SLI Compatible
> 
> Other info
> 
> Launch type - Paper
> 
> Supplies as of 04/07/2016 - 15 units.
> 
> Estimated pricing - Gouged
> 
> I'll try to see if they have a source for this valuable information.


----------



## ChevChelios

Quote:


> Originally Posted by *rainzor*
> 
> I'm sorry, was that supposed to be funny?


it is to *them*


----------



## Waitng4realGPU

Quote:


> Originally Posted by *ChevChelios*
> 
> it is to *them*


So if someone had a similar list regarding the let downs of the 480 you wouldn't find that funny?

I would find both funny because I know it's all exaggerated/hype/FUD/garbage.


----------



## GnarlyCharlie

Quote:


> Originally Posted by *Waitng4realGPU*
> 
> So if someone had a similar list regarding the let downs of the 480 you wouldn't find that funny?
> 
> I would find both funny because I know it's all exaggerated/hype/FUD/garbage.


I figured you'd be happy now that your real GPU has been released?


----------



## pony-tail

All I know is that it had better be - a lot better - than the 960 that I am typing this on, or I will not be buying one.
I have three to upgrade, all mini-ITX in SG05 cases.
The original plan was: replace the 380 with a 480, replace the 960 with a 1060, replace the 760 in the backup PC with the 380 out of the first one, and eBay the 760 and the 960!
Well, that was the plan. Not so sure now: the 480 is said to have issues, the 1060 isn't here yet, and I'm not sure if they are going to be ITX friendly. Much waiting to do!


----------



## DETERMINOLOGY

Quote:


> Originally Posted by *Diffident*
> 
> Didn't any of you learn your lesson in the RX480 thread? All these grand predictions of performance were picked out of the air and thrown around as fact, which most turned out to be totally false....and it's happening again with the 1060. I guess in a day or two the 1060 will be beating a 980ti just like the RX 480.


Pretty much...


----------



## hjacob

Quote:


> Originally Posted by *ChevChelios*
> 
> low end prediction - 970 level or a bit above, so ~equal to 480
> 
> mid end prediction - right between 970 & 980, maybe a bit closer to 980
> 
> high end prediction - solid 980 level or a bit above it


*Chev*, keep them in check... the 192-bit bus has limitations beyond 1080p... we will see how the GTX 1060 compares to AMD AIBs' custom-cooled RX 480... at the end of the day, if they trade blows, the better for us consumers... better prices...


----------



## Bryst

I'm sure most people knew the 1060 would be slightly better than the RX 480. The real question was: would the price beat or match the perf/price of the RX 480? Which we won't know till later.

I'm going to guess that it probably won't. I see Nvidia pricing the 1060 around 270-280ish.


----------



## chir

Fake slide, hopefully? I don't think even NVIDIA often compares themselves directly to AMD, that seems extremely aggressive. That and the bogus scale on the slide. They'll compare the GTX 1060 to their own previous gen. products.

I kinda don't want to support AMD's shaky reference design by buying the RX480, but I kinda don't wanna pay NVIDIA prices. I might seriously go with a RX480 instead, despite the less than stellar launch. Will depend on AIB card performance and price.


----------



## ChevChelios

Quote:


> Originally Posted by *hjacob*
> 
> *Chev* keep them in check... 192bit bus has limitation beyond 1080p ...


the card is not meant for more than 1080p anyway

I will never understand ppl who claim cards like the 970 or 480 are just fine for 1440p (it might be, if Overwatch or Dota are your most demanding games)


----------



## Defoler

Quote:


> Originally Posted by *ChevChelios*
> 
> the card is not meant for more than 1080p anyway
> 
> I will never understand ppl who claim cards like 970 or 480 are just fine for 1440p (it might be if Overwatch or dota are your most demanding games)


Exactly. Which is why I think Nvidia choosing 3GB as the low end of the 1060 (if that is actually going to come out) is a somewhat OK choice for that market.
The 480 might pull close to 50 fps in some benchmarks at 1440p, but that is only the average, which means less than stellar performance for 1440p.

I'm also not convinced someone will purchase a less-than-$300 card to play at 1440p with current performance.


----------



## Klocek001

Quote:


> Originally Posted by *Defoler*
> 
> I'm also not convinced someone will purchase a less-than-$300 card to play at 1440p with current performance.


It's totally possible, I used to have a 1440p display with OC'd R9 290, so about the same amount of power as RX 480. But I'm gonna leave my piece of advice here for ppl that really wanna go for it.
The problem on that setup was fps consistency and CPU overhead. If I was a die-hard AMD fan with a limited budget I'd really go cheap with the GPU and purchase the least expensive 4GB RX480 just to invest that $100 more into a freesync monitor with at least 40Hz min sync range.
With 1060 you'd pay more for a GPU but you'd get a card that's slightly faster than RX 480 and most importantly will give you more consistent framerate, with min fps significantly closer to your avg fps.

I suppose both choices would be equally good, depends on your brand preference.


----------



## Nestala

Quote:


> Originally Posted by *andrews2547*
> 
> Quote:
> 
> 
> 
> Originally Posted by *davidelite10*
> 
> Overall performance. Generally that's how Nvid has always done their slides. Except on the gtx 480 release. which they cherry picked hard.
> 
> Power efficiency makes sense though, that is completely related to performance and powerdraw.
> 
> Look at the GTX 1070 being like 15% less power draw than the RX480 yet significantly faster.
> 
> VR performance I wouldn't be able to tell. I don't know they're methodology for that.
> 
> Nvidia's slides have been pretty damn accurate the past few rounds, I honestly don't doubt this will be 15-20% oc to oc than a rx 480 and be priced around 275.
> 
> 
> 
> But overall performance in what? Games? Rendering? *How far it can go if you throw it?*
> 
> Do you see the problem here? If they said how they got those results, then it wouldn't as much of a problem.

Yes.


----------



## pony-tail

Quote:


> Originally Posted by *ChevChelios*
> 
> the card is not meant for more than 1080p anyway
> 
> I will never understand ppl who claim cards like 970 or 480 are just fine for 1440p (it might be if Overwatch or dota are your most demanding games)


Not "just fine", but adequate in a pinch for casual gamers who are willing to wind their settings back a bit (I use mini-ITX, so I am restricted by size, power, and heat) but still use 1440p.
And soon to be 4K on one of the machines (so there goes gaming on that one; work only, plus maybe solitaire).


----------



## oxidized

Quote:


> Originally Posted by *Waitng4realGPU*
> 
> Just found some leaked info on another forum.
> 
> GTX 1060 Release date 07/07/2016
> 
> Specs
> 
> DX11 compatible
> 
> 1080p Ready
> 
> Windows 7/8 compatible.
> 
> 3GB (2.5GB) DDR5
> 
> DPC latency stuttering feature
> 
> DVI Pixel clock feature
> 
> Non SLI Compatible
> 
> Other info
> 
> Launch type - Paper
> 
> Supplies as of 04/07/2016 - 15 units.
> 
> Estimated pricing - Gouged
> 
> I'll try to see if they have a source for this valuable information.


very funi xdxd


----------



## Arturo.Zise

I will wait for a thorough AIB 480 vs AIB 1060 review then see if either are a worthy upgrade to my current 970. If not then I will start saving pennies for a 1070.


----------



## Waitng4realGPU

Quote:


> Originally Posted by *Arturo.Zise*
> 
> I will wait for a thorough AIB 480 vs AIB 1060 review then see if either are a worthy upgrade to my current 970. If not then I will start saving pennies for a 1070.


I'd start saving if you need a significant increase in performance over the 970.


----------



## Chargeit

Quote:


> Originally Posted by *ChevChelios*
> 
> the card is not meant for more than 1080p anyway
> 
> I will never understand ppl who claim cards like 970 or 480 are just fine for 1440p (it might be if Overwatch or dota are your most demanding games)


The games people tend to put the most time into are usually low-demand. I have some benchmarks using a 950 @ 1440p in my sig. It was able to handle anything I tossed at it at 30+ fps, with the correct settings.


----------



## mirzet1976

Next up, the PCB is shorter than the card itself, and NVIDIA's unique new reference-cooler makes the card about 50% longer than its PCB. NVIDIA listened to feedback about shorter PCBs pushing power connectors towards the middle of the cards; and innovated a unique design, in which the card's sole 6-pin PCIe power connector is located where you want it (towards the end), and internal high-current wires are soldered to the PCB. Neato? Think again. What if you want to change the cooler, or maybe use a water-block? Prepare to deal with six insulated wires sticking out of somewhere in the PCB, and running into that PCIe power receptacle.

https://www.techpowerup.com/223883/nvidia-geforce-gtx-1060-doesnt-support-sli-reference-pcb-difficult-to-mod


----------



## jellybeans69

Just to let you know suppliers already have prices on 1060 gtx

ASUS STRIX-GTX1060-6G-GAMING 8GB DVI 2xHDMI 2xDisplayPort - 338.70e
ASUS STRIX-GTX1060-O6G-GAMING 8GB DVI 2xHDMI 2xDisplayPort - 357.00e

These are prices without VAT. Expect at least +100-125$/e over RX480 wherever you are.
At the same supplier, the reference Asus RX480 went for 230e without VAT, for comparison, if anybody is interested.


----------



## ChevChelios

thats for a custom 1060 though, right ? Strix ?


----------



## jellybeans69

Don't know, really. In reality it is going to cost around 400 euros nonetheless with VAT included.


----------



## ChevChelios

nah 400 would be too much even for Nvidia for reference

I see ~320-330+ EUR for reference as the price after it settles down .. likely more at launch especially if there is shortage

depending on your locale of course


----------



## jellybeans69

Quote:


> Originally Posted by *ChevChelios*
> 
> nah 400 would be too much even for Nvidia for reference
> 
> I see ~320-330+ EUR for reference as the price after it settles down .. likely more at launch especially if there is shortage
> 
> depending on your locale of course


Well, 339 + 21% VAT = 410e. The same supplier had the Asus RX480 for 230e and it goes for ~270e in e-shops.


----------



## GoLDii3

lol who is going to buy 1060 at 400 EUR when you can get a new 980 Ti for 450 EUR or used for 400


----------



## FLCLimax

So much for the 1060's RX480 bloodbath slaughter.


----------



## ChevChelios

a card that has a better ref cooler and doesn't fry your mobo sounds like a much better deal than the 480


----------



## GoLDii3

Quote:


> Originally Posted by *ChevChelios*
> 
> a card that has a better ref cooler and doesnt fry your mobo sounds like a much better deal than the 480


Custom RX 480 for 300 EUR says hi

GTX 1060 = FAIL


----------



## Fb74

Quote:


> Originally Posted by *jellybeans69*
> 
> Just to let you know suppliers already have prices on 1060 gtx
> 
> ASUS STRIX-GTX1060-6G-GAMING 8GB DVI 2xHDMI 2xDisplayPort - 338.70e
> ASUS STRIX-GTX1060-O6G-GAMING 8GB DVI 2xHDMI 2xDisplayPort - 357.00e
> 
> These are prices without VAT. Expect at least +100-125$/e over RX480 wherever you are.
> At the same supplier reference Asus RX480 went for 230e without VAT for comparision if anybody is interested.


Sorry but... GTX1060-*6G* ... *8GB*?

No, *6GB*... unless there is something we do not know (Ti version)?

I also read that Gigabyte was planning to deliver a GTX 1070 at 379 dollars (without VAT), and 379 dollars -> 341 euros.
So it could be an error, and actually the price of a GTX 1070 (without VAT).


----------



## SoloCamo

Quote:


> Originally Posted by *ChevChelios*
> 
> a card that has a better ref cooler and doesnt fry your mobo sounds like a much better deal than the 480


Seriously guy, these comments are simply tainting any respect people have for this forum. Can you not see how childish you sound? I can understand now why it seems the people I used to have a rational discussion with on gpu's are no longer around... I'm starting to feel the same. Every single thread it's the same posters ruining it.


----------



## ChevChelios

Quote:


> Originally Posted by *GoLDii3*
> 
> Custom RX 480 for 300 EUR says hi
> 
> GTX 1060 = FAIL


show me one

what's a fail is the reference 480


Quote:


> Originally Posted by *SoloCamo*
> 
> Seriously guy, these comments are simply tainting any respect people have for this forum. Can you not see how childish you sound? I can understand now why it seems the people I used to have a rational discussion with on gpu's are no longer around... I'm starting to feel the same. Every single thread it's the same posters ruining it.


OC.net is amazing outside of the news section


----------



## FLCLimax

Hey, at least NVIDIA now has a card close to the $389 MSRP of the 1070. Too bad it's not the 1070 though.


----------



## umeng2002

Probably 3 and 6 GB to avoid the whole 3.5 GB thing again.

Possibly shoveling SLI over PCIe for the 3 GB card?

After the Pascal event, and claims, I wouldn't trust a single performance metric nVidia tosses out at the upcoming event... even if they can properly start their graphs at 0 this time.

Should be interesting seeing this and the RX 480 head-to-head.


----------



## ChevChelios

Quote:


> After the Pascal event, and claims, I wouldn't trust a single performance metric nVidia tosses out at the upcoming event


well everything they claimed there was true

even the 2100 67C 1080 is true if you put the FE cooler to 100%


----------



## PlugSeven

Quote:


> Originally Posted by *ChevChelios*
> 
> well everything they claimed there was true
> 
> 
> 
> 
> 
> 
> 
> 
> 
> even the 2100 67C 1080 is true if you put the FE cooler to 100%


You're like a child at that age where they think their dad is Superman!


----------



## ChevChelios

Quote:


> Originally Posted by *PlugSeven*
> 
> You're like a child at that age were they think their dad is superman!


dont be mad just because I said the truth


----------



## PlugSeven

Quote:


> Originally Posted by *ChevChelios*
> 
> dont be mad just because I said the truth


Hard to stay mad at kids, we're good


----------



## Exeed Orbit

Quote:


> Originally Posted by *ChevChelios*
> 
> well everything they claimed there was true
> 
> 
> 
> 
> 
> 
> 
> 
> 
> even the 2100 67C 1080 is true if you put the FE cooler to 100%


Sustained? No. I don't think I've heard of cards that sustain 2+GHz. Even most of the higher-end AIBs have been reported to clock back down to the high 1900s, haven't they?


----------



## jellybeans69

It is 6GB if you look at the product code; the description showing 8GB is just a bad read of the XMLs by suppliers. Swedish suppliers list the same two cards, by the way, if you search for the product codes from the supplier I used. The second one seems to be the overclocked version.


----------



## EightDee8D

Quote:


> Originally Posted by *ChevChelios*
> 
> well everything they claimed there was true
> 
> 
> 
> 
> 
> 
> 
> 
> 
> even the 2100 67C 1080 is true if you put the FE cooler to 100%


Where's the 2x perf/W? Where's 2100MHz for every single card out there? Where's the 3x perf in VR?


----------



## ChevChelios

Quote:


> Originally Posted by *EightDee8D*
> 
> Where's 2x p/w ? where's 2100mhz for every single card out there ? where's 3x perf in vr ?


1 - it's there, or close enough

2 - was never claimed for _every_ card

3 - no one has done VR reviews or benchmarks, so no one actually knows

Quote:


> Sustained? No. I don't think I've heard of cards that sustain 2+GHz. Even most of the higher-end AIBs have been reported to clock back down to the high 1900s, haven't they?


it can go from ~2088-2100 to ~2050+ sustained on an AIB cooler

nothing drops to 19XX that I've seen

on water you either get no clock drops at all or something like ~13 Mhz drop


----------



## EightDee8D

Quote:


> Originally Posted by *ChevChelios*
> 
> 1 - its there or close enough
> 
> 2 - was never claimed for _every_ card
> 
> 3 - no one has done VR reviews or benchmarks, so no one actually knows


1. 60% is not even close to 100%

2. hehe, just like 2x 480 = 1080 wasn't for every game, but the green team implied it anyway. The hypocrisy is high here.

3. so basically a useless thing to say.


----------



## Ha-Nocri

A 1060 @ 400e would be a joke... and I don't think it will be much faster, if at all, than a non-throttling AIB 480. Sadly, it would still sell well.


----------



## GoLDii3

Quote:


> Originally Posted by *ChevChelios*
> 
> show me one
> 
> what's a fail is the reference 480


https://www.overclockers.co.uk/sapphire-radeon-rx-480-nitro-oc-8192mb-gddr5-pci-express-graphics-card-gx-37b-sp.html

Indeed it does,luckily custom models exist.

LOL at the GTX 1060 providing GTX 980 performance for the cost of a GTX 980. At least the RX 480 gives you performance between a GTX 970 and a 980 for 220 EUR, and real async.

192 bit cookie cutter vga


----------



## Iscaria

People are calling this the 480 killer, but I just don't see it. If the slide is to be believed, then the stock 1060 offers 10% more performance than the stock 480. Okay, but we've already seen how poorly the 1080 and 1070 overclock and the minimal gains to be had from doing so. We pretty much know the AIB 480s can make the jump from 1266 to 1500+ clock speeds. So OC vs OC, the 480 is going to win every time. So, not much of a killer :/


----------



## ChevChelios

Quote:


> Okay, but we've already seen how poorly the 1080 and 1070 overclock


from ~1750+ sustained avg boost at stock to ~2050-2100MHz is a *17%* OC

much, much better than the 5% of the 480

Quote:


> *We pretty much know* the AIB 480s can make the jump from 1266 to *1500+ clock speeds*


lol another hype train set to derail hard


----------



## Exeed Orbit

Quote:


> Originally Posted by *ChevChelios*
> 
> from ~1750+ sustained avg boost on stock to ~2050-2100Mhz is a *17%* OC
> 
> much much better than the 5% of 480
> lol another hype train set to derail hard


Yeah, I'm not too convinced about that 1500 MHz either. Maybe on the $300+ cards, at which point price/perf goes out the window.


----------



## Waitng4realGPU

Quote:


> Originally Posted by *Exeed Orbit*
> 
> Yeah, i'm not too convinced about that 1500 Mhz either. Maybe the $300+ cards, at which point price/perf goes out the window.


There's already a 1330mhz XFX reference card.

AIB cards are rumoured to have boost clocks between 1350-1400mhz.

Hitting 1500mhz+ will be reliant on the silicon lottery and/or a heap of voltage.


----------



## jaredismee

i dont really get why they too decided to go with a 6 pin. i know it can be done better than amd has shown us, but what is the point?


----------



## epic1337

Quote:


> Originally Posted by *jaredismee*
> 
> i dont really get why they too decided to go with a 6 pin. i know it can be done better than amd has shown us, but what is the point?


overestimating the benefits of the die shrink + FinFET, and cutting it too close to paper spec.

they thought those could sufficiently decrease power consumption, enough to fit within a 150W envelope.
but you can see what they currently have in their lineup is much more power hungry than estimated, even more so when compared to Nvidia's.

if they had successfully made Polaris as efficient as Nvidia's Pascal, then the RX480 would've run without even exceeding 120W.


----------



## caswow

yea, it will be the same as with my 660 OC and the 7870: at first it's as fast as the RX 480, but the RX 480 will age better and the 1060 will fall behind the RX 470. i guarantee it


----------



## DrFPS

Quote:


> Originally Posted by *Ha-Nocri*
> 
> 1060 @400e would be a joke...


It would be a joke, if Nvidia didn't crush AMD in every performance benchmark out there, every one of them!

This is what happens when there is 0% performance competition at the top tiers.


----------



## FLCLimax

Lmao, reverse NVIDIA bloodbath.


----------



## cowie

Quote:


> Originally Posted by *Iscaria*
> 
> People are calling this the 480 killer, but I just don't see it. If the slide is to be believed then the stock 1060 offers 10% more performance than the stock 480. Okay, but we've already seen how poorly the 1080 and 1070 overclock and the minimal gains to be had from doing so. We pretty much know the AIB 480s can make the jump from 1266 to 1500+ clock speeds. So OC vs OC the 480 is going to win every time. So, not much of a killer :/


how could you or anyone without a 1060 card even guess?

for all we know, the 3GB card beats the ref 4GB and the 6GB card beats the customs?

if this card is soooooo bad at overclocking, just like the 10x0's...

150 watts and up to 1800MHz would really put the 480 custom cards in a hole, and with a crap ref board?

but let's not have the race before both are on the track, so to speak


----------



## ekg84

some more info and photos have surfaced on the 1060

SOURCE

I'm wondering if this is the 3GB SLI-less edition...
It is indeed interesting how they implemented that 6-pin PCI-e power connector hanging off the PCB


----------



## Derp

Why is the six pin not on the PCB? Is Nvidia trying to be edgy or is there a genuine advantage that I cannot comprehend here?


----------



## magnek

From the source:
Quote:


> Next up, the PCB is shorter than the card itself, and NVIDIA's unique new reference-cooler makes the card about 50% longer than its PCB. *NVIDIA listened to feedback about shorter PCBs pushing power connectors towards the middle of the cards; and innovated a unique design, in which the card's sole 6-pin PCIe power connector is located where you want it (towards the end), and internal high-current wires are soldered to the PCB.* Neato? Think again. What if you want to change the cooler, or maybe use a water-block? Prepare to deal with six insulated wires sticking out of somewhere in the PCB, and running into that PCIe power receptacle.


----------



## ekg84

Quote:


> Originally Posted by *magnek*
> 
> From the source:


I'd love to see what that actually looks like with that shroud removed... Not sure how much I like this "solution"... Oh well, few more days...


----------



## Slomo4shO

Quote:


> Originally Posted by *ekg84*
> 
> Not sure how much I like this "solution"... Oh well, few more days...


I doubt it will exist as such on the AIB models. Who is going to buy a reference 1060 and replace the cooler?


----------



## ekg84

Quote:


> Originally Posted by *Slomo4shO*
> 
> I doubt it will exist as such on the AIB models. Who is going to buy a reference 1060 and replace the cooler?


I certainly wouldn't, but some people would likely want to slap a waterblock on that PCB.

Or under another scenario, what if you want to replace TIM and solder from that internal PCI-e wire breaks off the PCB when you take the shroud off. Not trying to say It's a huge issue, but something to be aware of.

One way or another we need to see the final product.


----------



## Forceman

Quote:


> Originally Posted by *ekg84*
> 
> I'd love to see what that actually looks like with that shroud removed... Not sure how much I like this "solution"... Oh well, few more days...


Yeah, I wonder whether the connector on the fan plugs into the PCB or how it is connected. It'd be hard to assemble if it is soldered together. It'll be interesting to see how AIBs put custom fans on a reference PCB.


----------



## Diffident

I would guess that there is still a 6-pin socket on the board hidden under the shroud with an extension plugged into it feeding the connector on the cooler.


----------



## ekg84

Quote:


> Originally Posted by *Diffident*
> 
> I would guess that there is still a 6-pin socket on the board hidden under the shroud with an extension plugged into it feeding the connector on the cooler.


I doubt it; if that were the case we'd likely see the solder points of a regular 6-pin plug on the back of that PCB... It could still be some proprietary plug though...

Edit: hopefully the 6GB version comes with a backplate...


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *magnek*
> 
> From the source:


For the majority of people who would buy a reference 1060, that six-pin location is brilliant, to be honest! The connector sitting in the middle of the freaking card is one of the things I hate most about short-PCB designs.


----------



## Waitng4realGPU

Quote:


> Originally Posted by *ekg84*
> 
> I certainly wouldn't, but some people would likely want to slap a waterblock on that PCB.


Yep, slap a waterblock on a low-end Pascal that will be boosted practically to its limit out of the box and won't respond well to increased voltage. Totally pointless.


----------



## epic1337

Quote:


> Originally Posted by *Waitng4realGPU*
> 
> Yep slap a waterblock on a low range pascal that will be boosted practically to it's limit out of the box and won't respond well to increased voltage. Totally pointless.


Why would it need to be OC'd just because you put it under water?
I've seen CPU+GPU loops on ITX builds that run at stock; they're practical because a single loop saves a lot of space and runs much cooler.


----------



## Waitng4realGPU

Quote:


> Originally Posted by *epic1337*
> 
> Why would it need to be OC'd just because you put it under water?
> I've seen CPU+GPU loops on ITX builds that run at stock; they're practical because a single loop saves a lot of space and runs much cooler.


Fair enough, but that's a pretty small market right there, and I don't really see a problem with cheaping out on a cheap card. They need a model to compete directly with the reference 480.

Maybe the 6GB 1060 will be a bit more premium?


----------



## epic1337

Quote:


> Originally Posted by *Waitng4realGPU*
> 
> Fair enough but that's a pretty small market right there, I don't really see a problem with cheaping out on a cheap card. They need a model to compete directly with the reference 480.
> 
> Maybe the 6GB 1060 will be a bit more premium?


GPU water cooling in general is a small market by itself.

And if the GTX 1060 3GB ends up priced at $250, then it obviously isn't matched against the $250 RX 480 8GB but against the $200 RX 480 4GB. With the GTX 1060 one tier more expensive than the RX 480, it doesn't matter whether it's faster or better; it sits a price tier above, much like the GTX 960 was a higher tier than the GTX 950 despite being only $40 more expensive. As such, only a GTX 1050 could compete directly with the RX 480, since they'd be in the same price bracket.

As for a premium model: it depends. If they can showcase good performance and equip it with proper parts (a decent VRM and no overheating issues), they could make it a premium product.


----------



## tajoh111

Quote:


> Originally Posted by *Iscaria*
> 
> People are calling this the 480 killer, but I just don't see it. If the slide is to be believed then the stock 1060 offers 10% more performance than the stock 480. Okay, but we've already seen how poorly the 1080 and 1070 overclock and the minimal gains to be had from doing so. We pretty much know the AIB 480s can make the jump from 1266 to 1500+ clock speeds. So OC vs OC the 480 is going to win every time. So, not much of a killer :/


https://www.youtube.com/watch?v=Jq47qmwcus8

There's a good chance the RX 480's 24/7 overclocks are in the 1400MHz range even with partner cards.

The card in the video above was heavily overvolted, hard-modded, fitted with a water cooler, and driven by a world-class overclocker, and it still took 260+ watts to reach 1500MHz. Those are not 24/7 clocks. 1500MHz wasn't stable and he had to dial it back to 1480, which is a 17% overclock.

If a 1060 reaches the same clocks as a 1080, which it should since small dies clock better, and it starts at 1700MHz, then hitting 2100MHz would be a 23.5% overclock.

Plus he concludes 1480 is not 24/7, and that's under water. So with partner cards, expect 1400MHz, or 1450 if you're lucky. Do not hype up cards when present evidence shows those expectations won't be met.
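For what it's worth, the percentages quoted above check out; here's a quick back-of-envelope in Python (the clock figures are the ones quoted in this thread, and the 1060 clocks are of course still speculative):

```python
# Overclock headroom as a percentage of the stock/boost clock.
def oc_percent(stock_mhz: float, oc_mhz: float) -> float:
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# RX 480: 1266 MHz reference boost -> 1480 MHz hard-modded under water
rx480_oc = oc_percent(1266, 1480)    # ~16.9%, i.e. the ~17% quoted above
# GTX 1060 (speculative): 1700 MHz boost -> 2100 MHz
gtx1060_oc = oc_percent(1700, 2100)  # ~23.5%
print(f"RX 480: {rx480_oc:.1f}%  GTX 1060: {gtx1060_oc:.1f}%")
```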


----------



## Waitng4realGPU

Quote:


> Originally Posted by *tajoh111*
> 
> https://www.youtube.com/watch?v=Jq47qmwcus8
> 
> There's a good chance the rx 480's 24/7 overclocks are in the 1400 range even with partner cards.
> 
> The card in the above video was super overvolted, his cards had a hard mod applied, put a water cooler on and was aided by a world class overclocker and he reached 1500 and he was pushing 260+ watts. Not 24/7 clocks. 1500 mhz wasn't stable and he had to dial it down to 1480. This is a 17% overclock.
> 
> If a 1060 reaches the same clocks as a 1080, which it should since small dies clock better and it starts at 1700, if 2100mhz is reached, that's a 23.5% overclock.
> 
> Plus he concludes 1480 is not 24/7 and this is with water. So as a result, with partner cards, expect 1400, 1450 if you're lucky. Do not hype up cards if there is evidence in the present that those expectations won't be met.


The graphics score is also only 500 points off a 1500mhz GTX 980.

So expect a 1400mhz+ 480 to match a reference 980.

Overclocking doesn't seem to scale as well on Pascal either, judging by reviews and by that video. So a 23.5% OC'd 1060 (which of course is not guaranteed, given the 1070/1080 only sometimes hit 2000+) could gain less than a 15% OC'd 480.

http://www.pcgameshardware.de/AMD-Radeon-Grafikkarte-255597/Specials/RX-480-Test-1199839/2/


----------



## EightDee8D

Quote:


> Originally Posted by *tajoh111*
> 
> *Do not hype up cards* if there is evidence in the present that those expectations won't be met.


People never learn, especially when it comes to the red team.

Speculation is one thing, but people take someone's speculation and hype it to the moon as if it were established fact. That's why we had so many disappointed people crying at the RX 480's launch, and apparently there's still some fire left...


----------



## BiG StroOnZ

Quote:


> Originally Posted by *tajoh111*
> 
> https://www.youtube.com/watch?v=Jq47qmwcus8
> 
> There's a good chance the rx 480's 24/7 overclocks are in the 1400 range even with partner cards.
> 
> The card in the above video was super overvolted, his cards had a hard mod applied, put a water cooler on and was aided by a world class overclocker and he reached 1500 and he was pushing 260+ watts. Not 24/7 clocks. 1500 mhz wasn't stable and he had to dial it down to 1480. This is a 17% overclock.
> 
> If a 1060 reaches the same clocks as a 1080, which it should since small dies clock better and it starts at 1700, if 2100mhz is reached, that's a 23.5% overclock.
> 
> Plus he concludes 1480 is not 24/7 and this is with water. So as a result, with partner cards, expect 1400, 1450 if you're lucky. Do not hype up cards if there is evidence in the present that those expectations won't be met.


My only problem with these conservative clocks is that they don't line up with the clockspeed gains expected from moving from 28nm to 14nm. If Polaris only clocks to 1400-1450MHz, there is a serious problem with AMD's 14nm card, because the 14nm process explicitly claims a 40-50% clockspeed improvement over 28nm:



We know AMD was able to bin up to 1050MHz on their last-gen cards, so use that as the baseline for where a 14nm card with a 40-50% clockspeed improvement would end up.

That would put us at 1470MHz to 1575MHz.
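That projection is just the claimed foundry uplift applied to the 28nm baseline; as a rough sketch in Python (the 1050MHz baseline and the 40-50% range are the figures quoted in this thread, not measured data):

```python
# Project 14nm clocks from a 28nm baseline using the claimed uplift range.
baseline_mhz = 1050            # best-case 28nm GCN binning, per the post
uplift = (1.40, 1.50)          # foundry's claimed 40-50% frequency gain
projected = tuple(round(baseline_mhz * f) for f in uplift)
print(projected)               # (1470, 1575)
```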

Now, we know these 14nm and 16nm processes can deliver their clockspeed promises, because here is what 16nm claims about clockspeed improvements:



Also a 40% improvement in clockspeed. And as we see with the Pascal 1080, 1070, and now the 1060, NVIDIA has definitely increased clockspeeds by at least 40% over the previous generation. As a matter of fact they ended up going beyond 40%, as the 2200MHz clockspeeds Pascal provides show.

So basically, if the 480 or any future AMD product cannot clock high, it is a problem with the 14nm process they are using, perhaps because it is not as mature as 16nm; 16nm immediately delivered the promised clockspeed advantage, as NVIDIA showed with Pascal, whereas we are still waiting to see where 14nm will end up, because the reference 480 was so power limited.


----------



## tajoh111

Quote:


> Originally Posted by *Waitng4realGPU*
> 
> The graphics score is also only 500 points off a 1500mhz GTX 980.
> 
> So expect a 1400mhz+ 480 to match a reference 980.
> 
> Overclocking doesn't seem to scale as well on Pascal either judging by reviews, and judging by that video. So a 23.5% OC'd (which of course is not guaranteed as we see with the 1070/1080 only hitting 2000+ sometimes) 1060 could gain less than a 15% OC'd 480.
> 
> http://www.pcgameshardware.de/AMD-Radeon-Grafikkarte-255597/Specials/RX-480-Test-1199839/2/


He reduced the tessellation workload, which boosts scores by 10-15%, if you watch the video.
Quote:


> Originally Posted by *BiG StroOnZ*
> 
> My only problem with these conservative clocks is that it doesn't line up with the clockspeed gains that are to be received from going to 14nm from 28nm. If it only clocks to 1400-1450MHz there is a serious problem with AMD's 14nm card here. As 14nm explicitly states 40-50% improvement in clockspeed compared to 28nm:
> 
> 
> 
> We know AMD is able to bin up to 1050MHz with their last gen cards. So you use that as a baseline number to figure out where a 14nm card would end up with 40-50% improvement in clockspeed.
> 
> So that would put us at 1470MHz to 1575MHz.
> 
> Now we know these 14nm and 16nm processes are capable of offering their clockspeed promises because here is what 16nm has to say about clockspeed improvements:
> 
> 
> 
> Also 40% improvement in clockspeed. And as we see with Pascal 1080, 1070, and now 1060 NVIDIA indefinitely has increased their clockspeeds by at least 40% compared to previous gen. As a matter a fact they were able to go over 40% in the end as we can see with 2200MHz clockspeeds provided by Pascal.
> 
> So basically if the 480 or any future AMD products cannot clock high, it is a problem with the 14nm process that they are using and perhaps it is because it is not as mature as 16nm is. In that 16nm was able to immediately provide the clockspeed advantage as promised and shown by NVIDIA with Pascal. Whereas we are still waiting around to see where 14nm will end up, because the 480 reference was so power limited.


It might not necessarily be a node problem.

I posted this well over a month ago, and it's another one of my predictions coming true.

http://www.overclock.net/t/1599440/geforce-nvidia-gtx-1080-1070-unveiled/2100#post_25155828

LN2 is useless for showing what the max usable overclocks on a particular node will be, but it does give some indication of how well an architecture will overclock on a more advanced node. Look at the GTX 280 on LN2 (980MHz) vs an overclocked GTX 480/580 (980MHz on water), for example, and now the GTX 980's LN2 clocks vs what the 1080 does. The GTX 980/980 Ti could get to 2300MHz on a very good LN2 bench run; 2200MHz is a more normal result. That could very well mirror what we get from GTX 1080 overclocks.

What I suspect 14nm FinFET will show is that AMD needs a big redesign to get bigger clocks, because with every new iteration of GCN that adds more functionality, the clocks get worse. GCN 1.0 could get to 1800MHz on LN2; GCN 1.3 (Fiji), even though it was made on a better process, only gets to 1450MHz on LN2. GCN is just not an architecture designed to go as fast as Maxwell: the cores are smaller and the pipelines shorter, but as a result they can pack in more cores.

Basically, GCN as an architecture just doesn't clock high. Even under LN2, Fiji, the variant most similar to the one in Polaris, only overclocks to 1.45GHz. What does Maxwell overclock to on LN2? 2.2GHz. Notice a pattern? This is on the same node, yet look at the clock advantage Maxwell has over GCN 1.3.

GCN was never going to overclock that high, even with TSMC's help. Fanboys are just looking for a scapegoat so they can pin their hopes on the next GCN derivative.

The problem with GCN, and why it will never catch Pascal or Maxwell in performance per watt or per mm², is that deep down it is still just GCN.

What AMD has been doing is ricing out their GCN architecture: adding mods, turbos, air intakes and exhausts, when what they really need is a new car to win the race. GCN has aged well, but we won't see big changes until their truly next-gen architecture.


----------



## Waitng4realGPU

Quote:


> Originally Posted by *tajoh111*
> 
> He reduced the tessellation workload, which boosts scores by 10-15%, if you watch the video.
> It might not necessarily be a nodal one.


I used the first test results as my example because that was before he made those tweaks. There were two different Fire Strike scores displayed.
Quote:


> Originally Posted by *tajoh111*
> 
> Also 40% improvement in clockspeed. And as we see with Pascal 1080, 1070, and now 1060 NVIDIA indefinitely has increased their clockspeeds by at least 40% compared to previous gen. As a matter a fact they were able to go over 40% in the end as we can see with 2200MHz clockspeeds provided by Pascal.


Well, some Pascal chips only get to around 2000MHz, whilst other very lucky chips get to 2150MHz (both on air).

That's a fair range right there, but generally speaking, yes, Pascal is seeing better clockspeed gains from the node shrink.
Quote:


> Originally Posted by *tajoh111*
> 
> The problem with GCN and why it will never catch Polaris or maxwell in performance per watt or mm2 is deep down it is just GCN.


Agreed.

When all is said and done I'd still buy an AIB 480 that will go to 1400mhz and higher if lucky. (If reasonably priced)


----------



## BiG StroOnZ

Quote:


> Originally Posted by *tajoh111*
> 
> I posted this well over a month ago and it's another on of my predictions coming true.
> 
> http://www.overclock.net/t/1599440/geforce-nvidia-gtx-1080-1070-unveiled/2100#post_25155828
> 
> LN2 is useless as far as showing max usable overclocks will be on a paticularly node, but they do gives some indication of how well an architecture will overclock on a more advance node. Look at the gtx 280ln(980mhz) vs overclocked gtx 480/580(980mhz h20) for example and now the gtx 980ln2 clocks vs what the 1080 does. The gtx 980/980 ti could get to 2300 on a very good ln2 bench run. 2200 are a more normal result. This could very well mirror what we get for the overclocks of the gtx 1080.
> 
> What I suspect is what 14nm finfet will show is AMD needs a big redesign to use get bigger clocks because with every new iteration of GCN where they add more functionality, the worse the clocks get. GCN 1.0 could get to 1800mhz on ln2. GCN 1.3 or fiji, even those it is made on a better process only gets to 1450mhz on ln2. GCN is just not an architecture designed to go as fast as maxwell. The cores are smaller and the pipelines are shorter but as a result, they can put more cores.
> 
> Basically GCN as an architecture just doesn't clock high. Even under ln2, fiji which is the variant most similar to the one in Polaris only overclocks to 1.45ghz under LN2. What does Maxwell overclock to with ln2? 2.2 ghz. Notice a pattern here? This is with the same node, but look at the clock advantage maxwell has over GCN 1.3.
> 
> GCN was never going to overclock that high, even with the aid of tsmc. Fanboys are just looking for a scapegoat so they can put their hopes on the next GCN derivative.
> 
> The problem with GCN and why it will never catch Polaris or maxwell in performance per watt or mm2 is deep down it is just GCN.
> 
> What AMD has been doing is they have been ricing their GCN architecture out. Adding mods, turbo's, air intakes and exhausts, but what they really need is a new car to win the race. GCN is an architecture that has aged well, but we won't get big changes until their truly big next gen architecture.


Just because GCN can't clock high on 28nm doesn't mean it couldn't clock high on 14nm. If anything, 14nm should let them clock higher, because the power savings give them more room to play with core voltage. While it is true GCN never really clocked well, that wasn't always the case, as your own GCN 1.0 example demonstrates. Their smaller cards tend to clock pretty well, and Polaris is indeed a smaller card...

I wouldn't just ignore the facts I presented and shrug them off as, "Oh well, GCN doesn't clock well, so even though 14nm promises clockspeed improvements of up to 50% over the previous gen, I'm going to ignore that because GCN doesn't clock well on 28nm."


----------



## ChevChelios

Quote:


> Just because GCN can't clock high on 28nm doesn't mean it wouldn't be able to clock high on 14nm.


but 1400+ on air is still high _for GCN_


----------



## tajoh111

Quote:


> Originally Posted by *BiG StroOnZ*
> 
> Just because GCN can't clock high on 28nm doesn't mean it wouldn't be able to clock high on 14nm. If anything, it should be able to allow them to clock higher because their power savings are giving them more to play with core voltage. While it is true GCN never really clocked well, this wasn't always true as demonstrated with your GCN 1.0 example. Their smaller cards tend to clock pretty well. And Polaris is indeed a smaller card...
> 
> I wouldn't just ignore the facts that I presented you and shrug it off as, "Oh, well GCN doesn't clock well, so even though 14nm promises clockspeed improvements by up to 50% compared to previous gen, I'm going to ignore this fact because GCN doesn't clock that well on 28nm"


It does. The GCN pipeline just isn't that long, which is why the cores are smaller and they can pack more into a given area. What the FinFET improvements allow is drawing out potential that was already in the chips. The problem with GCN, however, is that every time AMD improved the architecture, the IPC, and the performance, it hurt the clocks. That's why GCN 3 overclocks worse than GCN 2, and GCN 2 worse than GCN 1. It's a consistent pattern. It's not the size of the chips, it's the architecture. Tonga overclocked worse than Tahiti and Hawaii, going just a tad over 1100MHz in reviews, much like the Fury X did. Even worse, the Fury X was made on a more advanced, higher-clocking process than GCN 1 Tahiti. It just shows how much these architectural changes damaged GCN's clocks.

LN2 removes an architecture's leakage and thermal constraints and shows how well the core itself can overclock, hence why it's a preview of the next node. Both Polaris and Pascal are clocking very close to their last gen's LN2 clocks. The difference is that Maxwell's LN2 clocks were 2200MHz while Fiji's were 1450MHz. Those clocks are a function of the architecture, since at the time Maxwell and GCN 3 were using the same node technology.

What FinFET afforded AMD was the ability to add further to GCN (which, done on 28nm, would have lowered clocks further) while boosting clocks to 1266MHz and keeping power consumption somewhat in check.

Pascal got a bigger boost because Nvidia already had an efficient architecture, which let them spend the gains on higher clocks rather than trading clocks for additional performance per watt.


----------



## BiG StroOnZ

Quote:


> Originally Posted by *ChevChelios*
> 
> but 1400+ on air is still high _for GCN_


Where did the other 20% go, though? It just vanished into thin air? Are we really supposed to believe there's some huge GCN penalty and that's it?
Quote:


> Originally Posted by *tajoh111*
> 
> It does. GCN pipeline just isn't that long which is why the cores are smaller and they can pack more for a given area. What finfet improvements allow is the drawing out of potential that was already in the chips. The problem with GCN however is that every time AMD improved the architecture and improved the IPC and performance, it hurt the clocks. That's why GCN 3 overclocks worse than GCN 2 and GCN 2 overclocks worse than GCN 1. It's a consistent pattern. It's not the size of the chips, its the architecture. Tonga overclocked worse than tahiti and Hawaii and in reviews, just went a tad over 1100 much like fury X did. Even worse is the fact that fury x was made on a more advance and higher clocking node process than GCN1 tahiti. It just shows you how much these architectural changes damaged GCN's clocks.
> 
> LN2 removes the leakage and thermal constraints of an architecture and shows how well the core can overclock, hence's why it's a preview of the next node. Both Polaris and Pascals are clocking very close to their last gens, ln2 clocks. The difference is maxwell Ln2 clocks were 2200 while fiji's were 1450mhz. These clocks are a function of the architecture since, at the time, maxwell and GCN3 were using the same nodal technology.
> 
> What finfet allowed for AMD was it afforded AMD to add further to GCN(which if done on 28nm would have lowered clocks futher), while allowing them to boost clocks to 1266mhz while keeping power consumption somewhat in check.
> 
> Why pascal got a bigger boost was they already had an efficient architecture, which allowed them to add more clocks rather than save clocks to gain additional performance per watt.


So the guy who runs HardOCP is just a big fat liar with nothing to lose:



So even though we gained 20% over 1050MHz with the RX 480's 1266MHz clockspeed, we magically lost the remaining 20-30% of the uplift promised by the 14nm process to the inevitable, automatic "tajoh GCN clockspeed penalty."


----------



## ChevChelios

Well, _maybe_ 1 in 100 480s, under water, will reach that 1600MHz.

The other 99, on air, will do 1400-1450.

So he's not wrong _per se_.


----------



## BiG StroOnZ

Quote:


> Originally Posted by *ChevChelios*
> 
> well _maybe_ 1 in 100, under water, 480 will reach that 1600Mhz
> 
> 
> 
> 
> 
> 
> 
> 
> 
> the rest 99 on air will be 1400-1450
> 
> so he's not wrong _per se_


I feel like it would make more sense if it ended up like the average 970 overclocks, where cards could do 1600, 1550, 1500, 1450, etc., with most landing around the 1500MHz mark.

It just seems like there is so much contradictory information out there right now: leaks of cards doing 1500MHz+, someone claiming to have similar information from AIB partners, and the 14nm process itself, which basically says all of this is possible. Then you have other people saying it's impossible, not going to happen, 1400MHz will be the max. It makes you wonder what exactly is going on.

The icing on the cake is that AMD built an overclocking tool for their GPUs right into the driver. Why would AMD go to all that trouble if the card had poor overclocking headroom? It doesn't seem to make sense. Is it possible der8auer just didn't win the silicon lottery? As the AIB partners put it, "it's a lottery draw."


----------



## Derp

http://www.xfastest.com/data/attachment/forum/201607/05/232416bsq3lbmloblboslv.png.thumb.jpg

http://www.xfastest.com/data/attachment/forum/201607/05/232414iwbtwww6bc61vzk6.png.thumb.jpg


----------



## fatmario

Quote:


> Originally Posted by *Derp*


source ?


----------



## aDyerSituation

So, slightly faster than a throttling reference 480.

Meh. If this isn't priced at $250 or less, and these benches are real, then pass.


----------



## lolfail9001

Quote:


> Originally Posted by *Derp*
> 
> 
> 
> Spoiler: Warning: Spoiler!


Too good to be true, so it's fake.


----------



## tajoh111

Quote:


> Originally Posted by *BiG StroOnZ*
> 
> Where did the other 20% go though? It just vanished out of thin air? Are we really to believe there is a some huge GCN penalty and that's it?
> So the guy who runs HardOCP is just a big fat liar with nothing to lose:
> 
> 
> 
> So even though we gained 20% from 1050MHz with the RX 480's 1266MHz clockspeed we magically lost that leftover 20-30% clockspeed uplift as promised by the 14nm process because of the inevitable automatic tajoh GCN clockspeed penalty.


My predictions on Polaris have been accurate so far.

It's the partners' word on the matter, and I suspect partners don't want you to buy reference cards; they want you to buy their own versions and derivatives of the RX 480. The margins are better for them.

Cards did get a frequency boost. Cards like Tonga had stock clocks of 1000MHz and usually overclocked to 1150MHz or so with a non-reference cooler. Cards now clocking at 1266MHz and overclocking to 1400MHz is a boost over last gen. You might want to dub it the "tajoh GCN clockspeed penalty," but the GCN penalty is very real, as seen from GCN 1 through GCN 4. On 28nm, GCN's clocks got worse even though each GPU was made on a more advanced, higher-clocking process than the previous version. Add in that Nvidia, using the same technology and process, continued to improve clocks, and that's more evidence for my hypothesis.

It isn't just the process that determines frequency, it's the architecture as well.

A 1450MHz Fiji under LN2 should have been a red flag about GCN's frequency potential, as should the frequencies Maxwell was reaching under LN2. That is real evidence, as are the current clocks and overclocks of Polaris.

This guy is ranked number 4 on HWBot, so he knows what he's talking about. This card was hard-modded and overvolted to levels beyond safe for air cooling, or even 24/7 water cooling. AIB partner cards shouldn't really do any better, considering they are not going to go as balls-out as this guy did.

In reviews, using the stock cooler, overclocks vary only 50MHz between the best and worst samples, about 4%, so binning won't make a crazy difference. The sooner we squash the 1.6GHz rumors, the less hype gets built up, which is better at this point.
Quote:


> Originally Posted by *BiG StroOnZ*
> 
> I feel like it would make more sense if it ended up like the 970 average overclocks, where cards we able to do 1600,1550,1500,1450 etc. With most aiming for the 1500MHz mark.
> 
> It just seems like there is so much contradictory information out there right now. The leaks of cards doing 1500MHz+ and then of course someone claiming to have similar information from AIB partners, then of course the 14nm process itself which basically says all of this is possible. Then you have other people saying it's impossible, not going to happen, 1400MHz is going to be the max, it makes you wonder what exactly is going on.
> 
> The icing on the cake is the fact that AMD designed an overclocking tool for their GPUs incorporated in the driver, why would AMD go through all this trouble of making an overclocking tool if the card had bad overclocking headroom? Doesn't seem to make sense, is it possible der8auer just didn't win the silicon lottery? As stated by the AIB partners, "it's a lottery draw" as they said.


Rumors are what led to the trainwreck of hype that was the RX 480. Actual evidence of how Polaris would perform was already out there, but fanboys believing the most optimistic rumors vastly inflated expectations.

The highest-overclocking RX 480 in reviews was Guru3D's at 1375MHz; the worst managed 1325MHz. That's only about a 4 percent difference, which means the silicon lottery isn't going to play a big role.

What der8auer said was that he expects most AIB samples to overclock to a safe 1400MHz, and better samples to a safe 1450MHz. That agrees with the ~4% variance above.
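The ~4% figure is just the review spread; roughly, in Python (the 1375/1325MHz endpoints are the best and worst reference overclocks from reviews, as cited in this post):

```python
# Sample-to-sample spread between the best and worst reviewed overclocks.
best_mhz, worst_mhz = 1375, 1325
spread_pct = (best_mhz - worst_mhz) / worst_mhz * 100
print(f"{spread_pct:.1f}%")   # ~3.8%, i.e. roughly the "4%" quoted
```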


----------



## Derp

Quote:


> Originally Posted by *fatmario*
> 
> source ?


images are from xfastest.


----------



## ChevChelios

Quote:


> Originally Posted by *Derp*


Guru3D's 480 numbers: http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,26.html

FS = 12195 (graphics); the 1060 is 8%+ faster
FSU = 2682 (graphics); the 1060 is 9% faster
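If those leads hold, the implied 1060 graphics scores work out as follows; a rough sketch only, since the 8%/9% figures come from the leaked slides and the 480 baselines from Guru3D:

```python
# Convert a "percent faster" claim into an implied absolute score.
def implied_score(baseline: float, pct_faster: float) -> float:
    return baseline * (1 + pct_faster / 100)

fs_1060 = implied_score(12195, 8)    # Fire Strike graphics
fsu_1060 = implied_score(2682, 9)    # Fire Strike Ultra graphics
print(round(fs_1060), round(fsu_1060))
```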


----------



## magnek

Quote:


> Originally Posted by *tajoh111*
> 
> My predictions have been accurate so far on Polaris.
> 
> It's the partners' word on the matter, and I suspect partners don't want you to buy reference cards but instead their own versions and derivatives of the RX 480. The margins are better for them.
> 
> Cards did get a frequency boost. The stock clocks of cards like Tonga were 1000MHz, and these usually overclocked to 1150MHz or so with a non-reference cooler. Having cards now clock at 1266MHz and overclock to 1400MHz is a boost over last gen. You might want to dub it the tajoh GCN clockspeed penalty, *but the GCN penalty is very real, as seen from GCN1 to GCN4 today. On 28nm, GCN's clocks got worse even though each GPU was made on a more advanced, higher-clocking node than the previous version of GCN.* Add in that Nvidia was using the same technology and process and continued to improve clocks, and it adds more evidence to my hypothesis.
> 
> It isn't just the process that determines frequency; it's the architecture as well.
> 
> 1450MHz Fiji under LN2 should have been a red flag about the frequency potential of GCN, as should the frequencies Maxwell was reaching under LN2. It's real evidence, as are the current clocks and overclocks of Polaris.
> 
> This guy is ranked number 4 on hwbot, so he knows what he's talking about. This card was hard-modded and overvolted to levels beyond safe for air cooling and even 24/7 water cooling. AIB partner cards shouldn't really get any better, considering they are not going to go as all-out as this guy did.
> 
> In reviews, using the stock cooler, overclocks vary only 50MHz between the best and worst, which is about 4%, which means binning won't make a crazy difference. The sooner we can squash the 1.6GHz rumors, the less hype is built up, which is better at this point.


Could it have anything to do with transistor density? It got progressively worse from Tahiti (12.25 million/mm²) to Hawaii (14.16 million/mm²) to Fiji (14.93 million/mm²). Although Tonga at 13.93 million/mm² was just a smidge below Hawaii in terms of OC headroom so I guess it's there.
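Those densities follow from the commonly reported transistor counts and die areas; the figures below are those widely cited numbers, reproduced here as a check:

```python
# Transistor density in millions of transistors per mm², from the
# commonly reported counts (in millions) and die areas (in mm²).
chips = {
    "Tahiti": (4313, 352),
    "Tonga":  (5000, 359),
    "Hawaii": (6200, 438),
    "Fiji":   (8900, 596),
}

for name, (mtransistors, area_mm2) in chips.items():
    print(f"{name}: {mtransistors / area_mm2:.2f} M/mm²")
```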


----------



## SoloCamo

Argh, so sick of rumors everywhere. Just wish we had some official tests for the 1060 too. Not that I need to replace my OC'ed 290X for 4K with either this or an RX 480, but the midrange is a fairly good indication of what is coming on the high end performance-wise.

If AIB 1060s are both decently faster and use considerably less power than an RX 480, then AMD is in a seriously worse spot financially.


----------



## aDyerSituation

Quote:


> Originally Posted by *SoloCamo*
> 
> Argh, so sick of rumors every where. Just wish we had some official tests for the 1060 too. Not like I need to replace my oc'ed 290x for 4k with either this or a rx-480 but the midrange is a fairly good indication of what is coming on the high end performance wise.
> 
> If AIB 1060's are both decently faster and use considerably less power than a rx-480 then AMD is seriously in a much worse spot financially.


Not really. We know Nvidia will try to pull $300 out of this card.


----------



## SoloCamo

Quote:


> Originally Posted by *aDyerSituation*
> 
> Not really. We know Nvidia will try to pull $300 out of this card.


Yeah, but unfortunately people will justify the higher price. Like I've said before, Nvidia might as well be called Apple as far as GPUs are concerned.


----------



## aDyerSituation

Quote:


> Originally Posted by *SoloCamo*
> 
> Yea but unfortunately people will justify the higher price. Like I've said before Nvidia might as well be called Apple as far as gpu's are concerned.


Good point. It all comes down to AIB prices for the 480. If they can hover around $250 or below they will still sell well IMO.

And that sapphire nitro is too damn sexy to pass up


----------



## GorillaSceptre

If that leak is real, then the 1060 on an Nvidia-favored bench is only 8% faster than an "atrocious" reference 480? And it will probably have most of its OC headroom used up by Boost 3.0, so it shouldn't have nearly the same potential as the AIB 480s.

But I guess this 3GB card will be hailed as a resounding success, even when it ends up costing $250+. Remember guys, it needs to handily beat the 980 to be a success.


----------



## Slomo4shO

Quote:


> Originally Posted by *SoloCamo*
> 
> Like I've said before Nvidia might as well be called Apple as far as gpu's are concerned.


For some reason your statement brought this to mind:


----------



## tajoh111

Quote:


> Originally Posted by *magnek*
> 
> Could it have anything to do with transistor density? It got progressively worse from Tahiti (12.25 million/mm²) to Hawaii (14.16 million/mm²) to Fiji (14.93 million/mm²). Although Tonga at 13.93 million/mm² was just a smidge below Hawaii in terms of OC headroom so I guess it's there.


Tonga having worse overclocking headroom than Hawaii, and the same headroom as Fiji, is a counterargument to this. Tonga being a bad overclocker even with its moderate transistor density shows that it's architecturally related. Considering it was made on a more advanced node, there should have been clock jumps; that is part of the reason Maxwell got a frequency boost. The fact that clocks got worse instead just shows the impact of these additions to the GCN architecture.

Also, once heat is out of the way, the GTX 980 and GTX Titan X are capable of reaching nearly the same clocks, even though the latter has a higher transistor density.


----------



## Orthello

Quote:


> Originally Posted by *tajoh111*
> 
> My predictions have been accurate so far on Polaris.
> 
> It's the partners' word on the matter, and I suspect partners don't want you to buy reference cards but instead their own versions and derivatives of the RX 480. The margins are better for them.
> 
> Cards did get a frequency boost. The stock clocks of cards like Tonga were 1000MHz, and these usually overclocked to 1150MHz or so with a non-reference cooler. Having cards now clock at 1266MHz and overclock to 1400MHz is a boost over last gen. You might want to dub it the tajoh GCN clockspeed penalty, but the GCN penalty is very real, as seen from GCN1 to GCN4 today. On 28nm, GCN's clocks got worse even though each GPU was made on a more advanced, higher-clocking node than the previous version of GCN. Add in that Nvidia was using the same technology and process and continued to improve clocks, and it adds more evidence to my hypothesis.
> 
> It isn't just the process that determines frequency; it's the architecture as well.
> 
> 1450MHz Fiji under LN2 should have been a red flag about the frequency potential of GCN, as should the frequencies Maxwell was reaching under LN2. It's real evidence, as are the current clocks and overclocks of Polaris.
> 
> This guy is ranked number 4 on hwbot, so he knows what he's talking about. This card was hard-modded and overvolted to levels beyond safe for air cooling and even 24/7 water cooling. AIB partner cards shouldn't really get any better, considering they are not going to go as all-out as this guy did.
> 
> In reviews, using the stock cooler, overclocks vary only 50MHz between the best and worst, which is about 4%, which means binning won't make a crazy difference. The sooner we can squash the 1.6GHz rumors, the less hype is built up, which is better at this point.
> 
> Rumors are what led to the trainwreck of hype that was the RX 480. Actual evidence of how Polaris was going to perform was already available, but fanboys believing the most optimistic rumors vastly inflated expectations.
> 
> The highest-overclocking RX 480 in reviews was the one at Guru3d, at 1375MHz. The worst ones managed 1325MHz. That is only about a 4 percent difference, which means the silicon lottery isn't going to play a big role.
> 
> What Dauer said was that he expects most AIB samples to overclock to a safe 1400MHz, and better samples to a safe 1450MHz. This agrees with the ~4% variance above.


In one of the German reviews the reference sample did 1420MHz, the highest I have seen. So there might yet be the odd golden AIB card that does 1500MHz. Still, I think it's a golden card at 1500 rather than 1600.


----------



## tajoh111

Quote:


> Originally Posted by *Orthello*
> 
> One of the german reviews their ref sample did 1420mhz . It was the highest i have seen. So their might be the odd Golden AIB card that does 1500mhz yet. Still i think its golden card at 1500 rather than 1600.


Plus, I think it is a matter of stability testing too. Considering his results are exceptionally higher than everyone else's, I think it is more likely they were not as thorough with their stability testing.
Quote:


> Originally Posted by *GorillaSceptre*
> 
> If that leak is real, then the 1060 on a Nvidia favored bench is only 8% faster than a "atrocious" reference 480? And it will probably have most of it's OC headroom used up by boost 3.0, so it shouldn't have nearly the same potential as the AIB 480's.
> 
> But i guess this 3GB card will be hailed as a resounding success, even when it ends up costing $250+.. Remember guys, it needs to handily beat the 980 to be a success.


Most of the criticism launched at the RX 480 isn't its price-to-performance; it's the engineering letdown versus Nvidia. Polaris is just so much slower, even with the die size taken into account.

If the 1060 is 8 percent faster while being 20% smaller, and achieves this using 20-25% less power, it basically means Nvidia can play around with AMD, because they beat them in every important metric.

It means they control pricing, which means AMD's future is in Nvidia's hands. Being so far behind Nvidia in the engineering department, all AMD can do is value-price their products to compete. However, because of the performance-per-mm² gap, Nvidia is capable of producing a faster product than AMD that is also cheaper to make. This means AMD is forced to accept lower margins, and if Nvidia decides to be aggressive like with the 970 again, it can force AMD to take losses in their GPU department.

People wanted AMD to at the very least show they had closed the engineering gap and had perhaps Fury levels of performance for the full Polaris chip.

I.e., the 1080, being 35% larger than Polaris, would be 42-45% faster.

What we got is a 1080 that, for being 35% larger, is 75% faster, with an 80+% gap at higher resolutions. That performance-per-mm² gap means they are just as far behind as in the Tonga vs. GM204 generation, meaning AMD hasn't closed the gap at all. And if you remember, that was a bad time for AMD's market share and profits.

If this gap isn't closed, it also means Vega has a very good chance of needing to be near Hawaii-sized to compete with a chip much smaller than it. And that isn't good when you add HBM to the overall costs. The current performance of Polaris has lowered people's expectations for Vega.

AMD needs to compete at the engineering level, because that is what it takes to be profitable again, and that is what everyone wants.
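The perf-per-mm² point can be made concrete with the post's own figures (a die 35% larger, and either the hoped-for 42-45% lead or the actual ~75% lead):

```python
# Performance-per-mm² advantage implied by a die that is 35% larger.
size_ratio = 1.35  # GTX 1080 die area relative to Polaris 10

for label, perf_ratio in [("hoped-for (45% faster)", 1.45),
                          ("actual (75% faster)", 1.75)]:
    advantage = perf_ratio / size_ratio - 1
    print(f"{label}: perf/mm² advantage ~ {advantage:.0%}")
```

I.e., roughly a 30% perf/mm² gap at the actual numbers, versus only ~7% in the hoped-for case.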


----------



## Orthello

Quote:


> Originally Posted by *tajoh111*
> 
> Plus I think it is a matter of stability testing to. Considering his results are exceptionally higher than everyone elses, I think it is more likely they were not as thorough with their stability testing.


Yes, could be right.

What we have yet to see, though, is a non-throttling overclocked result. E.g., you mention the 1375MHz boost at Guru3d; did it maintain that boost through the benchmarks? If the AIBs can maintain their boosts, that will also make a difference, as my suspicion is that those boosts are throttling down to lower numbers.


----------



## GorillaSceptre

Quote:


> Originally Posted by *tajoh111*
> 
> Plus I think it is a matter of stability testing to. Considering his results are exceptionally higher than everyone elses, I think it is more likely they were not as thorough with their stability testing.


GN got theirs to 1390MHz after eliminating the reference heat issues, and the AIBs will have vastly superior power delivery too. I think 1400 is low-balling the AIBs, but we'll see; hopefully not much longer to wait.

We've seen the PCIe power draw crash systems with low-quality boards, so it's not hard to believe that some of the stability problems past 1400MHz might have been down to more than the chip itself, even though the higher-quality boards didn't entirely crash.


----------



## Orthello

Quote:


> Originally Posted by *GorillaSceptre*
> 
> GN got theirs to 1390 when eliminating the reference heat issues, the AIB's will have vastly superior power delivery too. I think 1400 is low-balling the AIB's imo, but we'll see, hopefully not much longer to wait.
> 
> We've seen the pcie power draw crash systems with low quality boards, it's not hard to believe that some of the stability problems past 1400 might of been down to more than the chip itself, even though the higher quality boards didn't entirely crash.


Hopefully some leaks of the Nitro will come out soon. I think we will see more percentage gain from an AIB overclock than from a reference card OC matched at the same MHz; I'm sure the reference cards are throttling any overclock below its full potential. PCPer alluded to this in their review.

I don't want to bring out the age-old "drivers will improve things" argument, but... they simply will, going by AMD's past history. In another year, add 10-15% to these numbers and it will be over stock GTX 980 level, well beyond GTX 970 level, and beyond both in DX12. So if you are looking to buy in this price category now, the only card to compare against is the GTX 1060 (if it actually lands in this price bracket; see 1070 MSRP vs. retail pricing). Not liking the lack of SLI either, or 6GB vs. 8GB on the 1060.


----------



## ChevChelios

all new cards get better drivers post release, 1060 is no different

ref 1060 will probably be better than ref 480, AIB vs AIB might be closer


----------



## lolfail9001

Quote:


> Originally Posted by *Orthello*
> 
> Hopefully some leaks of the Nitro will come out soon , i think we will see more % gain from an AIB overclock at same mhz matched to a ref card OC. I'm sure the ref cards are throttling any overclock from its full potential. PCPer alluded to this in their review.
> 
> I don't want to bring out that age old drivers will improve things arguement but .. they simply will going by AMDs past history. In another year add 10-15% to these numbers and it will be over GTX980 stock level and well beyond GTX970 level for sure and even beyond both in DX12. So yeah if you are looking now to buy in this price category then only cards to compare against in this price bracket would be GTX1060 (if its in the price bracket in all reality see 1070 vs retail pricing). Not liking the lack of SLI either .. or 6gb vs 8gb on the 1060.


Well, how much did the 380's lead over the 960 change in a year? Dare to take a guess? Well, according to TPU:

https://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/23.html

https://www.techpowerup.com/reviews/Palit/GeForce_GTX_1080_GameRock/25.html

It (the lead) has dropped from ~17% to ~6%, averaged over a fairly large game sample.


----------



## sugarhell

Quote:


> Originally Posted by *lolfail9001*
> 
> Well, how much did 380's performance improve 960 over a year? Dare to take a guess? Well, according to TPU
> 
> https://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/23.html
> 
> https://www.techpowerup.com/reviews/Palit/GeForce_GTX_1080_GameRock/25.html
> 
> It has dropped from ~17% to ~6%


Because the 380 is not the same card as the 380x


----------



## oxidized

Quote:


> Originally Posted by *Orthello*
> 
> Hopefully some leaks of the Nitro will come out soon , i think we will see more % gain from an AIB overclock at same mhz matched to a ref card OC. I'm sure the ref cards are throttling any overclock from its full potential. PCPer alluded to this in their review.
> 
> I don't want to bring out that age old drivers will improve things arguement but .. they simply will going by AMDs past history. In another year add 10-15% to these numbers and it will be over GTX980 stock level and well beyond GTX970 level for sure and even beyond both in DX12. So yeah if you are looking now to buy in this price category then only cards to compare against in this price bracket would be GTX1060 (if its in the price bracket in all reality see 1070 vs retail pricing). Not liking the lack of SLI either .. or 6gb vs 8gb on the 1060.


~€50/$50 more doesn't put the 1060 in another category. Also, the 8GB that nobody needs won't make any difference over 6GB, especially on midrange cards, which have a limited lifespan.


----------



## lolfail9001

Quote:


> Originally Posted by *sugarhell*
> 
> Because the 380 is not the same card as the 380x


0/10, both linked summaries contain 380.


----------



## ChevChelios

Quote:


> Originally Posted by *sugarhell*
> 
> Because the 380 is not the same card as the 380x


he is comparing 380 to 960 in both of those ....

nothing to do with 380X (its just a link to an older review/summary)


----------



## Orthello

Quote:


> Originally Posted by *ChevChelios*
> 
> all new cards get better drivers post release, 1060 is no different


Absolutely, that's why I'm saying that's the 480's only competition (the GTX 1060). Considering the 480 reference card was particularly bad at overclocking, or even at maintaining default boost (it's not even holding default boost at 1440p or 4K according to some reviews), the AIBs should improve this card a LOT. Enough to compete with the 1060, I would think. It remains to be seen whether AIBs improve the GTX 1060's performance; if the 1070 and 1080 are anything to go by, they won't, though they might improve the noise factor. So let's see how it all pans out, I guess.


----------



## aDyerSituation

Quote:


> Originally Posted by *oxidized*
> 
> ~50€/$ more doesn't make the 1060 another category, also the 8GB that nobody needs, won't make any difference over 6, especially on mid range cards which have limited life span


the nvidia card will have a limited lifespan, AMD not so much


----------



## jellybeans69

Quote:


> Originally Posted by *oxidized*
> 
> ~50€/$ more doesn't make the 1060 another category, also the 8GB that nobody needs, won't make any difference over 6, especially on mid range cards which have limited life span


More like €100 more: €230 for the 480 vs. €339 for the 1060 at the same EU supplier, without VAT.


----------



## ChevChelios

Quote:


> Originally Posted by *aDyerSituation*
> 
> the nvidia card will have a limited lifespan, AMD not so much


960 catching up to 380 demonstrates that perfectly


----------



## Orthello

Quote:


> Originally Posted by *oxidized*
> 
> ~50€/$ more doesn't make the 1060 another category, also the 8GB that nobody needs, won't make any difference over 6, especially on mid range cards which have limited life span


Depends, really. For the 480 it can be necessary if you are going to CrossFire them for 1440p high settings; ROTR can easily use that much RAM. An option the 1060 won't have.


----------



## oxidized

Quote:


> Originally Posted by *aDyerSituation*
> 
> the nvidia card will have a limited lifespan, AMD not so much


We get it, you're a red fan, don't worry.









Quote:


> Originally Posted by *jellybeans69*
> 
> More like 100 eur more. 230e for 480 vs 339e for 1060 at same eu supplier without vat.


It's surely closer to 50 than 100, and still doesn't make it a higher category...


----------



## ChevChelios

rumors say the 6GB 1060 will have SLI

either way I'm sure the majority of those 80%+ of the market, mainstream buyers who get $200-250 cards, never bother with SLI/CF anyway


----------



## aDyerSituation

Quote:


> Originally Posted by *oxidized*
> 
> We got it u're a red fan don't worry
> 
> 
> 
> 
> 
> 
> 
> 
> It's surely closer to 50 than 100, and still doesn't make it a higher category...


Not a red fan, just don't like Ngreedia practices. Nice try.

Meanwhile a 7970 beating a 780 now....ahahaha


----------



## oxidized

Quote:


> Originally Posted by *ChevChelios*
> 
> rumors say the 6GB 1060 will have SLI
> 
> either way Im sure majority of those 84% mainstream buyers who get $200-250 cards never bother with SLI/CF anyway


This, plus you guys are taking rumors too seriously, prices included...
Quote:


> Originally Posted by *aDyerSituation*
> 
> Not a red fan, just don't like Ngreedia practices. Nice try.
> 
> Meanwhile a 7970 beating a 780 TI now....ahahaha


You just confirmed that


----------



## lolfail9001

Quote:


> Originally Posted by *aDyerSituation*
> 
> Not a red fan, just don't like Ngreedia practices. Nice try.
> 
> Meanwhile a 7970 beating a 780 now....ahahaha


S for Source.


----------



## Orthello

Quote:


> Originally Posted by *aDyerSituation*
> 
> Not a red fan, just don't like Ngreedia practices. Nice try.
> 
> Meanwhile a 7970 beating a 780 TI now....ahahaha


Being subjective does not mean you're a red fan; look at my sig, am I a red fan with my purchasing?

And yes, the 7970's driver gains have been astonishing. If you don't like calling what's happened to the 780 Ti gimping, then you have to call it non-optimising; either way, it's not good for longevity. AMD did this with the 290X too, where a mildly overclocked 290X (390X) now beats a 980 in quite a few games, and at release it was losing to a 780 Ti overall.


----------



## oxidized

Quote:


> Originally Posted by *Orthello*
> 
> Been subjective does not mean you a Red Fan , look at my sig am i a red fan with my purchasing ?
> 
> And yes 7970 driver gains have been astonishing and if you don't like looking at whats happened to the 780ti as gimping then you have to refer to it as non-optimising either way its not for longevity. AMD did this with the 290x , where mildly a overclocked 290x (390x) now beats a 980 in quite a few games also and at release it was losing to a 780 TI overall.


Well, I wasn't talking to you, and being SUbjective surely doesn't help.


----------



## Scotty99

3GB 1060 will be $229.
6GB 1060 will be $259.

I don't think Nvidia would sell any cards priced higher than that; it would be getting too close to 1070 territory.


----------



## magnek

Quote:


> Originally Posted by *Derp*
> 
> 
> 
> 
> 
> http://www.xfastest.com/data/attachment/forum/201607/05/232416bsq3lbmloblboslv.png.thumb.jpg
> 
> http://www.xfastest.com/data/attachment/forum/201607/05/232414iwbtwww6bc61vzk6.png.thumb.jpg


Dunno if anybody noticed but 106 TMUs is 26 TMUs too many for 1280 shaders.
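A quick check, assuming the consumer Pascal layout of 128 shaders and 8 TMUs per SM (as in GP104):

```python
# With 128 shaders and 8 TMUs per SM, 1280 shaders imply 10 SMs
# and therefore 80 TMUs -- 26 fewer than the 106 in the leaked spec.
SHADERS_PER_SM = 128
TMUS_PER_SM = 8

sms = 1280 // SHADERS_PER_SM        # 10
expected_tmus = sms * TMUS_PER_SM   # 80
print(106 - expected_tmus)          # -> 26
```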


----------



## Kuivamaa

Quote:


> Originally Posted by *lolfail9001*
> 
> Well, how much did 380's performance improve 960 over a year? Dare to take a guess? Well, according to TPU
> 
> https://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/23.html
> 
> https://www.techpowerup.com/reviews/Palit/GeForce_GTX_1080_GameRock/25.html
> 
> It (the lead) has dropped from ~17% to ~6% averaged over fairly large game sample.


Apples to oranges? Different game selection between reviews.


----------



## Slomo4shO

Quote:


> Originally Posted by *lolfail9001*
> 
> S for Source.


Superior at 4K and within 5% at 1080P currently.


----------



## MikeDuffy

Is this a new chip, or a cut GP104?


----------



## Slomo4shO

Quote:


> Originally Posted by *MikeDuffy*
> 
> Is this a new chip, or a cut GP104?


GP106 from my understanding. May be GP107 considering that it has no SLI support.


----------



## headd

I am impressed with the GTX 1060's performance; the 1070 is only 30% faster.
The GTX 970 was 60% faster than the GTX 960. It looks like the GTX 1070 is the worst card in the Pascal family: it offers the worst performance boost over its predecessor.


----------



## ChevChelios

Quote:


> Originally Posted by *headd*
> 
> i am impressed with GTX1060 performance.1070 is only 30% faster.
> GTX970 was 60% faster than GTX960.It looks like GTX1070 is worst card in pascal family.It offers worst performance boost over predecessor.


hey you're that guy who hates 1070 with a passion

welcome back !

Quote:


> 1070 is only 30% faster.


actually its 40%+ if the 1060 is indeed 8-9% faster than a 480

since 1070 is 50%+ faster than a ref 480
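Those two leads chain as a ratio, not a subtraction; a sketch with the claimed numbers:

```python
# If the 1070 is ~50% faster than a reference 480 and the 1060 is 8-9%
# faster than the same 480, the 1070's lead over the 1060 is the ratio
# of the two leads.
lead_1070 = 1.50  # 1070 vs. reference RX 480
for lead_1060 in (1.08, 1.09):
    over_1060 = lead_1070 / lead_1060 - 1
    print(f"1060 at +{lead_1060 - 1:.0%}: 1070 is {over_1060:.0%} faster")
```

With exactly 50% it comes out at 38-39%; "40%+" needs the 1070's lead to sit a bit above 50%.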


----------



## aDyerSituation

Quote:


> Originally Posted by *headd*
> 
> i am impressed with GTX1060 performance.1070 is only 30% faster.
> GTX970 was 60% faster than GTX960.It looks like GTX1070 is worst card in pascal family.It offers worst performance boost over predecessor.


No, 960 was just a turd


----------



## lolfail9001

Quote:


> Originally Posted by *Kuivamaa*
> 
> Apples to oranges? Different game selection between reviews.


Correct and expected objection.

Yet the 970's advantage over the 290 is 5% in the 380X review and 4% in the Palit 1080 review. Go figure, the 380 degraded in performance compared to the 290!


----------



## headd

Quote:


> Originally Posted by *ChevChelios*
> 
> hey you're that guy who hates 1070 with a passion
> 
> welcome back !
> actually its 40%+ if the 1060 is indeed 8-9% faster than a 480
> 
> since 1070 is 50%+ faster than a ref 480


The Fire Strike score is around 30% higher on the GTX 1070.
Also, to match the GTX 980, the 1060 must be 80-90% faster than the GTX 960.
The 1070 is only about 50-55% faster than the GTX 970.
The 1080 is 60-70% faster than the GTX 980.
So yeah, the GTX 1070 is the worst card in the Pascal family.


----------



## ChevChelios

Quote:


> Originally Posted by *headd*
> 
> Also to match GTX980, 1060 must be 80-90% faster than GTX960.
> 1070 is only like 50-55% faster than GTX970
> 1080 is 60-70% faster than GTX980
> SO yeah GTX1070 is worst card in pascal family.


dis logic

and TPU chart shows 1070 is 50%+ faster in games than 480 .. so compared to 1060 it would be 40%+ faster


----------



## headd

Still, the 1060 must be +80-90% over the GTX 960 to match a stock GTX 980.

(And btw, the GTX 1070 is 35% faster than the GTX 980 in games.)
The 1080 is +60-70% over the GTX 980.
The 1070 is only +50-55% over the GTX 970.

Which card is worst and offers the worst performance gain?


----------



## ChevChelios

Quote:


> What card is worst


only the 960 is turd

all the others are good cards









Quote:


> Still 1060 must be + 80-90% over GTX960 to match stock GTX980


you seem awfully convinced that 1060 has to match 980 for some weird reason


----------



## lolfail9001

Quote:


> Originally Posted by *ChevChelios*
> 
> only the 960 is turd
> 
> all the others are good cards
> 
> 
> 
> 
> 
> 
> 
> 
> you seem awfully convinced that 1060 has to match 980 for some weird reason


Let's be honest, the 1060 will most probably be more expensive than the RX 480 at retail. It had better match the 980, then.


----------



## headd

Quote:


> Originally Posted by *ChevChelios*
> 
> you seem awfully convinced that 1060 has to match 980 for some weird reason


Because GTX980 is 8-10% faster than rx480?


----------



## BiG StroOnZ

Quote:


> Originally Posted by *tajoh111*
> 
> My predictions have been accurate so far on Polaris.
> 
> It's the partners' word on the matter, and I suspect partners don't want you to buy reference cards but instead their own versions and derivatives of the RX 480. The margins are better for them.
> 
> Cards did get a frequency boost. The stock clocks of cards like Tonga were 1000MHz, and these usually overclocked to 1150MHz or so with a non-reference cooler. Having cards now clock at 1266MHz and overclock to 1400MHz is a boost over last gen. You might want to dub it the tajoh GCN clockspeed penalty, but the GCN penalty is very real, as seen from GCN1 to GCN4 today. On 28nm, GCN's clocks got worse even though each GPU was made on a more advanced, higher-clocking node than the previous version of GCN. Add in that Nvidia was using the same technology and process and continued to improve clocks, and it adds more evidence to my hypothesis.
> 
> It isn't just the process that determines frequency; it's the architecture as well.
> 
> 1450MHz Fiji under LN2 should have been a red flag about the frequency potential of GCN, as should the frequencies Maxwell was reaching under LN2. It's real evidence, as are the current clocks and overclocks of Polaris.
> 
> This guy is ranked number 4 on hwbot, so he knows what he's talking about. This card was hard-modded and overvolted to levels beyond safe for air cooling and even 24/7 water cooling. AIB partner cards shouldn't really get any better, considering they are not going to go as all-out as this guy did.
> 
> In reviews, using the stock cooler, overclocks vary only 50MHz between the best and worst, which is about 4%, which means binning won't make a crazy difference. The sooner we can squash the 1.6GHz rumors, the less hype is built up, which is better at this point.
> 
> Rumors are what led to the trainwreck of hype that was the RX 480. Actual evidence of how Polaris was going to perform was already available, but fanboys believing the most optimistic rumors vastly inflated expectations.
> 
> The highest-overclocking RX 480 in reviews was the one at Guru3d, at 1375MHz. The worst ones managed 1325MHz. That is only about a 4 percent difference, which means the silicon lottery isn't going to play a big role.
> 
> What Dauer said was that he expects most AIB samples to overclock to a safe 1400MHz, and better samples to a safe 1450MHz. This agrees with the ~4% variance above.


We know right away your predictions are incorrect, in that you claim the clocks got slower with each revision of GCN, yet here, with the newest iteration of GCN in Polaris, they improved their clockspeeds by roughly 20%. Then even you claim clocks up to 1450MHz are possible. That brings us to a total of around a 38% clockspeed increase over last gen. So not only did the process have something to do with it, they almost reached their baseline target of a 40% increase in clockspeeds from switching to 14nm.

I really doubt AIB partners would flat-out lie to a member of the press so that he in turn flat-out lies to the public. That doesn't make much sense. Sure, companies like to promote their products, but 1480MHz-1600MHz is a flat-out lie according to you. Obviously it wouldn't benefit the AIB partners or the press to be saying this type of stuff if it weren't at least slightly accurate.

So regardless of your supposed clockspeed penalty, we already know with Polaris that it is not true, because again, their baseline target is 40%, and at 1450MHz you would be at 38% over 1050MHz. While most AMD cards can do 1100-1150MHz, we will use 1050MHz because it's what every card is capable of. So while yes, it isn't just the process, and of course the architecture itself has something to do with it, we already see 1425MHz possible on air with the reference PCB.

I'm not doubting that the guy knows what he is doing, but K|NGP|N tends to break all of his records running similarly modded cards, and typically they are not reference cards that he breaks his records with; he uses his own custom cards for that level of overclocking. Even though this card was hard-modded and overvolted, it is still a stock-PCB card. If max overclocks could be reached under LN2 with reference cards, why would there ever be a need for a K|NGP|N Edition, Lightning, Classified, HOF, etc.? Why would K|NGP|N prefer his own cards?

We also see quite a variance in overclocks with the reference card (numbers ranging from 1330MHz to 1480MHz), which indicates that binning is important and that basically every chip made the cut at AMD (probably why there is a power issue with the reference cards). AIB cards will come with factory overclocks, so the binning will be quite different.

So in the end, if it indeed clocks to 1450MHz, that means they increased their clockspeeds by almost 40%, which was the baseline target for 14nm, and again this is with 4th-generation GCN, so there was no regression in clockspeed (as you insist and continually repeat). And of course we know AMD is going to have revisions of these cards:



Which means, again, it is most likely NOT GCN that is limiting the clockspeeds entirely; it is most likely a matter of manufacturing-process maturity. As the process matures, they will be able to hit their full 40-50% clockspeed improvement targets.
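The percentages above all follow from a 1050MHz 28nm baseline:

```python
# Clockspeed gain over a 1050 MHz 28nm baseline, compared with the
# 40-50% improvement attributed to the 14nm process.
baseline_mhz = 1050
for clock_mhz in (1266, 1400, 1450):
    gain = clock_mhz / baseline_mhz - 1
    print(f"{clock_mhz} MHz: +{gain:.0%}")
```

1266MHz stock comes out at ~+21%, and 1450MHz lands at +38%, just under the 40% baseline target.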


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *BiG StroOnZ*
> 
> My only problem with these conservative clocks is that it doesn't line up with the clockspeed gains that are to be received from going to 14nm from 28nm. If it only clocks to 1400-1450MHz there is a serious problem with AMD's 14nm card here. As 14nm explicitly states 40-50% improvement in clockspeed compared to 28nm:
> 
> 
> 
> We know AMD is able to bin up to 1050MHz with their last gen cards. So you use that as a baseline number to figure out where a 14nm card would end up with 40-50% improvement in clockspeed.
> 
> So that would put us at 1470MHz to 1575MHz.
> 
> Now we know these 14nm and 16nm processes are capable of offering their clockspeed promises because here is what 16nm has to say about clockspeed improvements:
> 
> 
> 
> Also a 40% improvement in clockspeed. And as we see with the Pascal 1080, 1070, and now 1060, NVIDIA definitely has increased their clockspeeds by at least 40% compared to the previous gen. As a matter of fact they were able to go over 40% in the end, as we can see with the 2200MHz clockspeeds provided by Pascal.
> 
> So basically, if the 480 or any future AMD products cannot clock high, it is a problem with the 14nm process they are using, perhaps because it is not as mature as 16nm, which was able to immediately provide the clockspeed advantage as promised and shown by NVIDIA with Pascal. Whereas we are still waiting around to see where 14nm will end up, because the 480 reference was so power limited.


Well, to be fair, the GCN architecture has been very lackluster in clock potential since its launch (and clock speeds have steadily declined since the 7970). That said, I do agree that 14nm should allow for higher clocks than we are currently seeing, but we are also limited by the fact that only reference boards have been launched and tested so far, so we really can't say for sure what the absolute clock speed potential of P10 is yet...

The flip side of this lower clock speed potential compared to Nvidia's Maxwell and Pascal architectures is that GCN generally scales much better with higher clocks, so even though the ultimate speed may be lower than Nvidia's, the performance for each MHz is higher...
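As a rough illustration of what "performance for each MHz" means, here is a hypothetical scaling calculation; all the clocks and fps numbers below are invented for the example, not benchmark results:

```python
# Illustrative only: scaling efficiency is the fraction of a relative
# clock increase that actually shows up as a relative fps increase.

def scaling_efficiency(base_clock, oc_clock, base_fps, oc_fps):
    """Return (relative fps gain) / (relative clock gain)."""
    clock_gain = oc_clock / base_clock - 1
    perf_gain = oc_fps / base_fps - 1
    return perf_gain / clock_gain

# hypothetical GCN-like card: +10% clock -> +9% fps
print(scaling_efficiency(1266, 1393, 60.0, 65.4))    # ~0.90
# hypothetical Pascal-like card: +20% clock -> +14% fps
print(scaling_efficiency(1733, 2080, 100.0, 114.0))  # ~0.70
```

A card with an efficiency near 1.0 converts nearly all of its overclock into frame rate, which is the claim being made for GCN here.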


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *lolfail9001*
> 
> Well, how much did the 380's lead over the 960 change over a year? Dare to take a guess? Well, according to TPU
> 
> https://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/23.html
> 
> https://www.techpowerup.com/reviews/Palit/GeForce_GTX_1080_GameRock/25.html
> 
> It (the lead) has dropped from ~17% to ~6%, averaged over a fairly large game sample.


You are comparing the 960 and 380 as percentages of a 380X's performance in the first chart, and as percentages of a 1080's in the second. That will make massively slower cards like the 960 and 380 appear much closer together, because the difference between them shrinks when it is expressed relative to a card that is 3x faster. The wrinkle is that a 1080 is LESS than 3 times faster than a 380 but MORE than 3 times faster than a 960. It's an awkward comparison to begin with because it deals in percentages against vastly different-performing cards rather than just showing average frame rates for the 380 and 960 from the two different time periods. Hell, the first chart was only 8 months ago, not even a full year.
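A quick sketch of why that normalization compresses the gap; the fps numbers here are made up purely for illustration, not taken from TPU:

```python
# Two slow cards keep the same ratio to each other, but the
# percentage-point gap between them shrinks as the reference
# card used for normalization gets faster.

gtx960, r9_380 = 85.0, 100.0  # hypothetical avg fps, 380 ~17.6% ahead

for ref_name, ref_fps in (("380X-class", 110.0), ("1080-class", 300.0)):
    gap = (r9_380 - gtx960) / ref_fps * 100
    print(f"vs {ref_name}: gap = {gap:.1f} percentage points")

# The 380/960 ratio is ~1.18 in both cases, yet the chart gap
# shrinks from ~13.6 to ~5.0 points as the reference gets faster.
```

In other words, a drop from ~17% to ~6% in such charts can happen with no change at all in the two cards' relative performance.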


----------



## tajoh111

Quote:


> Originally Posted by *BiG StroOnZ*
> 
> We know right away your predictions are incorrect in that you claim with each revision of GCN, the clocks got slower. Except here with the newest iteration of GCN with Polaris they improved their clockspeeds by 20%. Then even you claim up to 1450MHz clocks are possible. That brings us to a total of around 38% clockspeed increase from last gen. So, not only did the process have something to do with it, they almost reached their baseline target of 40% increase in clockspeeds from switching to 14nm.
> 
> I really doubt AIB partners would flat out lie to a member of the press so he in turn flat out lies to the public. Doesn't seem to make that much sense. Sure companies like to promote their products but 1480MHz-1600MHz is a flat out lie according to you. So, obviously this wouldn't benefit the AIB partners or the press to be saying this type of stuff if it wasn't at least slightly accurate.
> 
> So regardless of your supposed clockspeed penalty, we automatically know with Polaris that this is not true. Because again, their baseline target is 40%, and @ 1450MHz you would be @ 38% from 1050MHz. While most AMD cards can do 1100-1150, we will use 1050 because it's what every card is capable of. So while yes, it isn't just the process, and of course the architecture itself has something to do with it, we already see 1425MHz possible on air with the reference PCB.
> 
> I'm not doubting that the guy knows what he is doing, but K|NGP|N tends to break all of his records running similarly modded cards, and typically it is not reference cards that he breaks his records with. He uses his own custom cards for that level of overclocking. Which proves again that even though the card was hard modded and overvolted, it is still a stock-PCB card. If max overclocks could be reached under LN2 with reference cards, why would there ever be a need for a K|NGP|N Edition, Lightning, Classified, HoF, etc.? Why would K|NGP|N prefer to use his own cards?
> 
> We also see quite a variance in overclocks with the reference card (numbers ranging from 1330-1480MHz), which indicates that binning is important and that basically all cards made AMD's cut (probably why there is a power issue with the reference cards). This means AIB cards will come with factory overclocks, so the binning will be quite different.
> 
> So in the end, if it indeed clocks to 1450MHz, that means they increased their clockspeeds by almost 40%, which was the baseline target for 14nm; and again this is with 4th Generation GCN, so there was no regression in clockspeed (as you insist and continually repeat). And of course we know AMD is going to have revisions of these cards:
> 
> 
> 
> Which means, again, it is most likely NOT GCN that is limiting the clockspeeds entirely; it is most likely a matter of manufacturing-process maturity. Which means that as the process matures, they will be able to hit their full 40-50% clockspeed improvement targets.


Polaris would have been on target for the same decrease in frequency if it wasn't for 14nm FinFET. I already said FinFET was improving the clocks that could be achieved with GCN; hence why I said GCN hitting 1266MHz and 1450MHz is a product of FinFET. These clocks, when we include the cooling, would never have been reached on 28nm with equivalent cooling. Hence why I mentioned that it might not necessarily be the process that's holding back AMD from reaching higher clocks, but the architecture itself.

What I was trying to get at is that these clocks could have been higher if it wasn't for the frequency regression of GCN. E.g. if GCN 1 was used, aka the frequency of Tahiti on LN2 at 28nm, we could be seeing stock clocks at 1500MHz with the potential to overclock to 1800MHz, much like Maxwell's improvements on FinFET. The problem is that GCN, whether by design or by accident, doesn't clock as high with the more advanced variants. There is no question Tahiti reached the highest clocks, then Hawaii, and lastly Tonga/Fiji. If we look at HWBot, the highest Tahitis reached near 1800MHz, the highest Hawaii 1600MHz, and the highest Fiji reached 1450MHz on LN2.

What 14nm allows is for those Fiji LN2 clocks to be drawn out on air, much like how 2200MHz may be achievable with Pascal on a good enough sample. 1600MHz is beyond the clocks that were last achieved on 28nm with Tonga/Fiji, which means I don't think we will get there even with the improvements of 14nm or 16nm FinFET.

Polaris got an increase because of FinFET, but the increase wasn't as large as Nvidia's. The reason we are seeing clocks limited so far, and not the frequency jumps that Nvidia got, is the limits of the GCN architecture.

Architecture has as much to do with frequency as the process they are made on.

In the end you actually agree with me halfway, and that's what I was insisting: the current frequency limits are a product of the GCN architecture and the improvements of FinFET. I never said GCN4 was going to clock worse when built on FinFET; I said if GCN4 was made on 28nm, it would have shown the frequency regression pattern. FinFET counters this pattern, but that doesn't mean we are going to see 1.6GHz GCN chips.

Also, this 1.48GHz should not be included in the overclockability of an average sample. It is too overvolted for 24/7 usability, as the author admits, and the power consumption and costs make it impractical to apply to a card. E.g. an AIB card or stock card + hard mod + water block = just a bit above a GTX 980, at a cost near a 1070 once you include the water block, with 30% less performance than a GTX 1070 and over twice the power consumption.

Hence real-world overclockability, which preserves the RX 480's value proposition, is more along the lines of 1400MHz to 1450MHz on a good sample.

AMD-exclusive partners have everything to gain by having people wait to buy their AIB models. AMD's reference cooler has the lowest margins because it has the lowest cost and the lowest potential for marketing. Also, partners that are not Sapphire lose part of the margin in the process, because they are buying the cards from Sapphire to resell. In addition, because the RX 480 doesn't have much competition in its price range, they don't have to worry too much about losing a sale to Nvidia (at least prior to the 1060 launch). Because of this, they would rather customers wait so they can sell a higher-margin AIB card than sell a low-margin reference design.

Reference-card overclocks as seen in reviews, and hard-modded cards from professional overclockers, are much better evidence for the overclockability of Polaris than some rumors and a single phrase from a person who, a couple of weeks ago, the very same people clinging to these rumors said couldn't be trusted and was a shill for Nvidia.


----------



## Majin SSJ Eric

One thing is for sure, tajoh111: IF we do indeed see 1500-1600MHz AIB 480's, I think it's safe to say your predictions were not at all correct, and maybe we can get a little break from your twenty-paragraph diatribes slavering over Nvidia's oh-so-dominant technology advantages and perfect architectures?


----------



## tajoh111

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> One thing is for sure, tajoh111: IF we do indeed see 1500-1600MHz AIB 480's, I think it's safe to say your predictions were not at all correct, and maybe we can get a little break from your twenty-paragraph diatribes slavering over Nvidia's oh-so-dominant technology advantages and perfect architectures?


But what if they are correct again? Everything about my performance/clock predictions, performance-per-mm2, and power consumption figures came true, as well as the number of chips we would see this year and the specific segments they would launch in. This frequency prediction has a good chance of proving right as well. You have to ask yourself why I was so accurate for this launch when I don't have sources, while guys like mahigan were so far off the performance mark that a dog's predictions were closer. I was going to take a break anyway until the next architecture launch.

I am not saying Nvidia's architecture is perfect, as they have their flaws as most DirectX 12 titles show, but they have a consistent advantage almost everywhere. Combined with their marketing/brand advantage, it gives them enough leverage to push AMD out of the market, which is the last thing I want. But the reality, looking at Polaris, is that they need something better than GCN. GCN ages well, but this isn't a selling point reviewers or marketers can use to sell cards. They need a big push in performance per watt so that their large cards are faster (and don't need water cooling) and their low-end cards are preferable to partners and OEMs.

I want AMD to stop underpricing their products with respect to how much they cost to make. This is not a good long-term strategy with so little money in the war chest.

I would rather see AMD have better products so they can charge more for them. I would happily pay 300-350 dollars for a Polaris chip if the full die was almost as fast as a GTX 980 Ti, particularly after the Founders Edition stunt and Nvidia abandoning driver improvements for Kepler owners. AMD needs the money, and I would rather have AMD offer worse price-to-performance but generally faster products. I want engineering advances from AMD so they can charge more.


----------



## FLCLimax

Quote:


> Originally Posted by *Derp*
> 
> 
> 
> 
> 
> http://www.xfastest.com/data/attachment/forum/201607/05/232416bsq3lbmloblboslv.png.thumb.jpg
> 
> http://www.xfastest.com/data/attachment/forum/201607/05/232414iwbtwww6bc61vzk6.png.thumb.jpg


Pretty disappointing, since this is being hyped as a bloodbath 480 killer, and it costs so much more money.


----------



## ekg84

Quote:


> Originally Posted by *FLCLimax*
> 
> pretty disappointing since this is being hyped as a bloodbath 480 killer, and costs so much more money.


How much does it cost? Has the price been confirmed yet?


----------



## GorillaSceptre

Quote:


> Originally Posted by *tajoh111*
> 
> But what if they are correct again? Everything about my performance/clock predictions, performance-per-mm2, and power consumption figures came true, as well as the number of chips we would see this year and the specific segments they would launch in. This frequency prediction has a good chance of proving right as well. You have to ask yourself why I was so accurate for this launch when I don't have sources, while guys like mahigan were so far off the performance mark that a dog's predictions were closer. I was going to take a break anyway until the next architecture launch.
> 
> I am not saying Nvidia's architecture is perfect, as they have their flaws as most DirectX 12 titles show, but they have a consistent advantage almost everywhere. Combined with their marketing/brand advantage, it gives them enough leverage to push AMD out of the market, which is the last thing I want. But the reality, looking at Polaris, is that they need something better than GCN. GCN ages well, but this isn't a selling point reviewers or marketers can use to sell cards. They need a big push in performance per watt so that their large cards are faster (and don't need water cooling) and their low-end cards are preferable to partners and OEMs.
> 
> I want AMD to stop underpricing their products with respect to how much they cost to make. This is not a good long-term strategy with so little money in the war chest.
> 
> I would rather see AMD have better products so they can charge more for them. I would happily pay 300-350 dollars for a Polaris chip if the full die was almost as fast as a GTX 980 Ti, particularly after the Founders Edition stunt and Nvidia abandoning driver improvements for Kepler owners. AMD needs the money, and I would rather have AMD offer worse price-to-performance but generally faster products. I want engineering advances from AMD so they can charge more.


You might want to post some evidence of your "predictions" and mahigan's failed ones before calling him out, or anyone else for that matter... Polaris performs pretty close to where he said it would, IIRC. It's the perf/watt that caught a lot of people by surprise.

Everything is speculation; no one knows exactly what is happening besides AMD/Nvidia. Guessing a few things is meaningless, as even a broken clock is right twice a day...

Nvidia having an advantage everywhere except DX12 is a bit of an oxymoron, wouldn't you say? A large part of Nvidia's advantage in perf/watt is that they cut out most of the stuff that makes AMD strong under DX12 in the first place...

How do you know what margins AMD is getting on P10? We also don't know what happened with the 14nm process they used; it's pretty clear something went wrong considering how much people can undervolt P10. Vega will be using TSMC as far as I know, and one chip in the Polaris family on a different process to Nvidia's doesn't give us any insight into the future or show us where Polaris would have stacked up against Pascal if they were both using the same process.


----------



## lolfail9001

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> You are comparing the 960 and 380 as percentages of a 380X's performance in the first chart, and as percentages of a 1080's in the second. That will make massively slower cards like the 960 and 380 appear much closer together, because the difference between them shrinks when it is expressed relative to a card that is 3x faster. The wrinkle is that a 1080 is LESS than 3 times faster than a 380 but MORE than 3 times faster than a 960. It's an awkward comparison to begin with because it deals in percentages against vastly different-performing cards rather than just showing average frame rates for the 380 and 960 from the two different time periods. Hell, the first chart was only 8 months ago, not even a full year.


I hope you realize that the 380X review is used as the data point for 380 performance?

Or did that fly over your head too? Though yes, it is indeed correct that it's not a full year, though the point stands. The point is that you can even compare the 380 to the 290, to make it funnier.


----------



## BiG StroOnZ

Quote:


> Originally Posted by *tajoh111*
> 
> Polaris would have been on target for the same decrease in frequency if it wasn't for 14nm FinFET. I already said FinFET was improving the clocks that could be achieved with GCN; hence why I said GCN hitting 1266MHz and 1450MHz is a product of FinFET. These clocks, when we include the cooling, would never have been reached on 28nm with equivalent cooling. Hence why I mentioned that it might not necessarily be the process that's holding back AMD from reaching higher clocks, but the architecture itself.
> 
> What I was trying to get at is that these clocks could have been higher if it wasn't for the frequency regression of GCN. E.g. if GCN 1 was used, aka the frequency of Tahiti on LN2 at 28nm, we could be seeing stock clocks at 1500MHz with the potential to overclock to 1800MHz, much like Maxwell's improvements on FinFET. The problem is that GCN, whether by design or by accident, doesn't clock as high with the more advanced variants. There is no question Tahiti reached the highest clocks, then Hawaii, and lastly Tonga/Fiji. If we look at HWBot, the highest Tahitis reached near 1800MHz, the highest Hawaii 1600MHz, and the highest Fiji reached 1450MHz on LN2.
> 
> What 14nm allows is for those Fiji LN2 clocks to be drawn out on air, much like how 2200MHz may be achievable with Pascal on a good enough sample. 1600MHz is beyond the clocks that were last achieved on 28nm with Tonga/Fiji, which means I don't think we will get there even with the improvements of 14nm or 16nm FinFET.
> 
> Polaris got an increase because of FinFET, but the increase wasn't as large as Nvidia's. The reason we are seeing clocks limited so far, and not the frequency jumps that Nvidia got, is the limits of the GCN architecture.
> 
> Architecture has as much to do with frequency as the process they are made on.
> 
> In the end you actually agree with me halfway, and that's what I was insisting: the current frequency limits are a product of the GCN architecture and the improvements of FinFET. I never said GCN4 was going to clock worse when built on FinFET; I said if GCN4 was made on 28nm, it would have shown the frequency regression pattern. FinFET counters this pattern, but that doesn't mean we are going to see 1.6GHz GCN chips.
> 
> Also, this 1.48GHz should not be included in the overclockability of an average sample. It is too overvolted for 24/7 usability, as the author admits, and the power consumption and costs make it impractical to apply to a card. E.g. an AIB card or stock card + hard mod + water block = just a bit above a GTX 980, at a cost near a 1070 once you include the water block, with 30% less performance than a GTX 1070 and over twice the power consumption.
> 
> Hence real-world overclockability, which preserves the RX 480's value proposition, is more along the lines of 1400MHz to 1450MHz on a good sample.
> 
> AMD-exclusive partners have everything to gain by having people wait to buy their AIB models. AMD's reference cooler has the lowest margins because it has the lowest cost and the lowest potential for marketing. Also, partners that are not Sapphire lose part of the margin in the process, because they are buying the cards from Sapphire to resell. In addition, because the RX 480 doesn't have much competition in its price range, they don't have to worry too much about losing a sale to Nvidia (at least prior to the 1060 launch). Because of this, they would rather customers wait so they can sell a higher-margin AIB card than sell a low-margin reference design.
> 
> Reference-card overclocks as seen in reviews, and hard-modded cards from professional overclockers, are much better evidence for the overclockability of Polaris than some rumors and a single phrase from a person who, a couple of weeks ago, the very same people clinging to these rumors said couldn't be trusted and was a shill for Nvidia.


While the architecture itself remains the same, here with 14nm it is a similar scenario to GCN 1st Generation: smaller die, new process, high clockspeeds. It has a lot more in common with GCN 1st Gen cards than it does with 2nd Gen or 3rd Gen cards (where the trend toward bigger and more power-hungry begins to take over).

Surely it wouldn't make sense for custom cards to be capable of only 1400-1450MHz when there is a dry ice run on HWBot with an RX 480 hitting only 1460MHz (he even says himself this is achievable with water). Obviously there are some issues with the reference card if, even when cooling is not a limitation, it can still only achieve 1460MHz, or in the case of the example you were using, 1480MHz. I mean, if custom AIB 480's can hit 1450MHz, but a reference card can only do 1460MHz with dry ice? That doesn't add up.

Again seems more like a limitation with the reference design, rather than a direct product of GCN.

My point here is that Polaris is a scenario more similar to the 7970 than to the 290X or the Fury X. The major reason you saw a regression in clockspeeds is that the cards became bigger with more Stream Processors, and as a result more power hungry. These kinds of GPU design directions automatically make it harder for a card to clock higher. You tend to see this with any big-die card. A perfect example is the stock clocks of the 980 and 970 compared to the 980 Ti. And of course the average overall overclocks changed:

http://hwbot.org/hardware/videocard/geforce_gtx_980/

http://hwbot.org/hardware/videocard/geforce_gtx_980_ti/

Quite a variance in average Air and Water results.

So while we can agree that GCN doesn't clock as well as NVIDIA's architectures, the pattern of clockspeed regression you are seeing with GCN is more closely related to the size of the GPUs they were releasing with each GCN revision. As you can see, even NVIDIA loses clockspeed on their big-die cards, and AMD's direction for quite a while has been mainly big-die cards.

Here with Polaris we have a much smaller card, so it has the ability to be more promising in terms of clockspeeds right from the get-go, and already we are seeing that come to fruition, as the clocks they are achieving now with this card are far better than anything we've seen in a while from AMD.

There was another person who got a 480 to 1487MHz with an H100, so it's not limited to extreme hardmodding:



Larger sample sizes could tell us more about what voltage mods can do. Also, @ 1480MHz it will be faster than a 980. This 480 @ 1487MHz with no memory overclock (meaning more gains are available) did 67.5 fps in Valley 1.0, which is right around Fury/Nano performance. Again, without a memory overclock. So it is better than a plain old 980.

I agree with you that 1450MHz is almost 40%, but I believe there is more to be discovered, if not now then with future AIB cards (a couple of months out), and definitely with revisions of the card, because by then 14nm will be more mature. The main reason is that I would like to see more numbers from AIB cards, because I find it peculiar that a dry ice run with a reference 480 only yielded 1460MHz, yet you seem to believe AIB cards will make it to 1450MHz. That points to an inherent problem with the reference design, because those numbers don't add up. It also doesn't make much sense that with a volt mod 1480MHz is the max OC of the card, yet AIB cards can supposedly reach 1450MHz stock. So with HWBot-level modding, you only gain a 2% increase in clockspeed? You have to at least agree that that doesn't make much sense, and der8auer's results might not be the end-all be-all.
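The percentage gaps being argued about here are small either way; a quick check using only the clock figures quoted in this thread (reference air max, volt-mod max, dry-ice max, and the claimed AIB stock clock):

```python
# Relative gains between the clock figures cited in the thread.

air_max, voltmod_max = 1425, 1480   # MHz, reference air vs volt mod
dryice_max, aib_claim = 1460, 1450  # MHz, dry ice run vs claimed AIB stock

print(f"volt mod over air: {(voltmod_max / air_max - 1) * 100:.1f}%")    # ~3.9%
print(f"volt mod over AIB: {(voltmod_max / aib_claim - 1) * 100:.1f}%")  # ~2.1%
print(f"dry ice over air:  {(dryice_max / air_max - 1) * 100:.1f}%")     # ~2.5%
```

That ~2% volt-mod headroom over the claimed AIB stock clock is the figure being called into question above.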

While AIB partners have something to gain, you have to agree that it would be quite counterproductive to tell a member of the press their cards are hitting certain clockspeeds, when in fact it was just some scheme to sell more AIB cards.
Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Well, to be fair, the GCN architecture has been very lackluster in clock potential since its launch (and clock speeds have steadily declined since the 7970). That said, I do agree that 14nm should allow for higher clocks than we are currently seeing, but we are also limited by the fact that only reference boards have been launched and tested so far, so we really can't say for sure what the absolute clock speed potential of P10 is yet...
> 
> The flip side of this lower clock speed potential compared to Nvidia's Maxwell and Pascal architectures is that GCN generally scales much better with higher clocks, so even though the ultimate speed may be lower than Nvidia's, the performance for each MHz is higher...


If you read above, I have a basic explanation for why we've seen a regression in clockspeeds since the 7970; it's a lot more logical than just saying, "It's GCN."

I agree that we only have reference boards showing clock potential right now, so I'm more interested in seeing where AIB cards end up, because from what we see and hear right now it just isn't adding up.

I do believe they will scale much better in the end, but at 1450MHz you are only really at Nano/Fury performance, or overclocked 980 performance. I say "only" because seeing Fury X performance just 100MHz or so away is kind of disappointing.


----------



## jellybeans69

Quote:


> Originally Posted by *ekg84*
> 
> How much does it cost? has price been confirmed yet?


Judging by suppliers, $300 in 'Murica and 300-360€ in Europe. No official confirmation yet, though.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *tajoh111*
> 
> But what if they are correct again? Everything about my performance/clock predictions, performance-per-mm2, and power consumption figures came true, as well as the number of chips we would see this year and the specific segments they would launch in. This frequency prediction has a good chance of proving right as well. You have to ask yourself why I was so accurate for this launch when I don't have sources, while guys like mahigan were so far off the performance mark that a dog's predictions were closer. I was going to take a break anyway until the next architecture launch.
> 
> I am not saying Nvidia's architecture is perfect, as they have their flaws as most DirectX 12 titles show, but they have a consistent advantage almost everywhere. Combined with their marketing/brand advantage, it gives them enough leverage to push AMD out of the market, which is the last thing I want. But the reality, looking at Polaris, is that they need something better than GCN. GCN ages well, but this isn't a selling point reviewers or marketers can use to sell cards. They need a big push in performance per watt so that their large cards are faster (and don't need water cooling) and their low-end cards are preferable to partners and OEMs.
> 
> I want AMD to stop underpricing their products with respect to how much they cost to make. This is not a good long-term strategy with so little money in the war chest.
> 
> I would rather see AMD have better products so they can charge more for them. I would happily pay 300-350 dollars for a Polaris chip if the full die was almost as fast as a GTX 980 Ti, particularly after the Founders Edition stunt and Nvidia abandoning driver improvements for Kepler owners. AMD needs the money, and I would rather have AMD offer worse price-to-performance but generally faster products. I want engineering advances from AMD so they can charge more.


I too would love to see AMD build a card that handily beats a 1080 in performance, OCing, efficiency, or whatever. The problem is they are heavily invested in GCN and need to continue to tweak it rather than going with a new architecture, because that is what their current business model is based on (think consoles, for instance). They laid the framework for this strategy way back in 2011 with the release of the 7970 and are still in the middle of that long-term plan. They can't abandon GCN now. That said, Polaris is still a high-performance GPU that they can sell for peanuts, and we still have no idea how much these cards can improve with OCing and AIB cards. Taking expectations out of it, the card itself is still an amazing value for the performance.


----------



## epic1337

The pricing estimates seem to be getting more absurd.


----------



## sammkv

What's with Nvidia and their prices ***!!!


----------



## prznar1

Quote:


> Originally Posted by *sammkv*
> 
> What's with Nvidia and their prices ***!!!


Maxwell. They are setting prices so high so that older cards sell more easily, as they still have a huge supply.


----------



## Arturo.Zise

Love how people just seem to pluck performance/pricing out of thin air and suddenly it's confirmed fact


----------



## oxidized

Quote:


> Originally Posted by *Arturo.Zise*
> 
> Love how people just seem to pluck performance/pricing out of thin air and suddenly it's confirmed fact


----------



## Sleazybigfoot

Quote:


> Originally Posted by *oxidized*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Arturo.Zise*
> 
> Love how people just seem to pluck performance/pricing out of thin air and suddenly it's confirmed fact

AMD Navi confirmed for Q1 2017.


----------



## tashcz

Are specs due for tomorrow?


----------



## Redwoodz

120W will be the average draw; max will hit 200W. The price will be high because the die size is much greater than the 480's.


----------



## tashcz

I meant more like, are they officially going to announce it tomorrow? I've read that it's due for 7th July.


----------



## Roadkill95

I can't believe what I'm seeing. Are people really okay with paying over $250 for a xx60 model?!!!?? If it's 15% faster than the RX 480 and costs more than $250, it's a failure in my books. People were crying about how the RX 480 was disappointing for having only 970 levels of performance; well, this will be just as much of a disappointment if it comes in at the rumored price.


----------



## gigafloppy

Quote:


> Originally Posted by *jellybeans69*
> 
> Judging by suppliers 300$ in murica and 300-360e in europe. No official conf. yet though


The same old GTX 970 performance at the same old GTX 970 price. Pascal has been such an epic fail price-wise.


----------



## sugalumps

Quote:


> Originally Posted by *Roadkill95*
> 
> I can't believe what I'm seeing. Are people really okay with paying over $250 for a xx60 model?!!!?? If it's 15% faster than the RX 480 and costs more than $250, it's a failure in my books. People were crying about how the RX 480 was disappointing for having only 970 levels of performance; well, this will be just as much of a disappointment if it comes in at the rumored price.


Yeah, but Nvidia is not touting the card as a "revolution", and people are not hyping it up to match a 980 Ti after OC; major difference.

It will have its place, for people that don't want to put up with the awful reference model of the 480 and for people that want G-Sync (a game changer IMO; you don't need it, but once you try it, it's hard to go back).


----------



## epic1337

Quote:


> Originally Posted by *sugalumps*
> 
> Ye but nvidia is not touting the card to be a "revolution" and people are not hyping it up to match a 980ti after oc, major difference.
> 
> It will have its place, for people that dont want to put up with the awful reference model of the 480 and for people that want g-sync(a game changer imo, you dont need it but once you try it it's hard to go back).


"G-Sync" with a GTX 1060? You serious? The monitor is way more expensive than the card; it's not even funny that you could get a decent 144Hz monitor for half the price.

If you can afford a G-Sync monitor, you can obviously afford a GTX 1070, or better yet a GTX 1080, for more consistent FPS at max settings.


----------



## oxidized

The more i follow this, the more nonsense i read, incredible


----------



## Max78

There is a VERY important bar missing from that fake graph! Can we see a support timeline for the 1060 or is it too short to graph?









So much speculation is being thrown around, and then people regurgitate it as fact, because, well, I read it on the internet so it must be true!


----------



## EastCoast

Quote:


> Originally Posted by *SlackerITGuy*
> 
> This launch should be somewhat exciting.
> 
> *But first NVIDIA needs to solve the horrible DPC latency issues, micro-stuttering and low performance that's affecting a lot of Pascal buyers.*


Wow, that's still going on? I recall that as far back as the GTX 480.
I thought that was fixed.









Quote:


> Originally Posted by *Clocknut*
> 
> dont really care about power consumption. The price is really what I care.
> 
> I hope is not above $250.
> 
> If it is 120w-130w TDP, I might be able to run SLI within 650w spec. (buy 1 now, buy another 1-2yr later)


If they release a 1060 for $199 I will be honestly shocked. Good ole competition would actually be...working?








Quote:


> Originally Posted by *andrews2547*
> 
> I am completely serious.
> 
> Who's to say they didn't only test content that only uses Gameworks or DX11 to get those results? Or if they used one game to get those results. I'm not denying the GTX 1060 is going to be more powerful than the RX480. I'm just pointing out the graph that is in the source is terrible. It's worse than the graphs that AMD used to show 2x RX480s are "better" than a single GTX 1080. Those graphs at least said how they got the results.
> I'm not denying the GTX 1060 is more power efficient. I just want to know what tests were done to get that specific results. If they just said "The GTX 1060 is more power efficient than the RX480" then it wouldn't be an issue, but they figures for it, which means tests were done but they aren't revealing what tests were done.
> 
> This is assuming that graph isn't made up by Videocardz to get ad revenue.


By now I simply assume they used games heavy in GameWorks to get any of their performance results. I suspect a Batman game is in there somewhere.


----------



## magnek

Quote:


> Originally Posted by *epic1337*
> 
> "G-Sync" with a GTX1060? you serious? the monitor is way more expensive than the card, its not even funny that you could afford a decent 144Hz monitor for half the price.
> 
> if you could afford a G-Sync monitor, you'd obviously can afford a GTX1070, better yet GTX1080 for a more consistent FPS at max settings.


To be fair there is this one AOC monitor that's only $332.77 (1080p 144Hz).

But generally yeah I agree. Also depends on how much you're bothered by screen tearing.


----------



## Chargeit

Quote:


> Originally Posted by *epic1337*
> 
> "G-Sync" with a GTX1060? you serious? the monitor is way more expensive than the card, its not even funny that you could afford a decent 144Hz monitor for half the price.
> 
> if you could afford a G-Sync monitor, you'd obviously can afford a GTX1070, better yet GTX1080 for a more consistent FPS at max settings.


The funny part being someone's money would be better spent on a G-Sync gaming monitor than a higher-end GPU. G-Sync makes your GPU hit above its weight.


----------



## tajoh111

Quote:


> Originally Posted by *GorillaSceptre*
> 
> You might want to post some evidence of your "predictions" and mahigans failed ones before calling him out, or anyone else for that matter.. Polaris performs pretty close to where he said it would iirc.. It's the perf/watt that caught a lot of people by surprise.
> 
> Everything is speculation, no one knows exactly what is happening besides AMD/Nvidia. Guessing a few things is meaningless as even a broken clock is right twice a day..
> 
> Nvidia having an advantage everywhere except DX12 is a bit of an oxymoron wouldn't you say? A large part of Nvidias advantage in perf/Watt is because they cut out most of the stuff/don't have what makes AMD strong using DX12 in the first place..
> 
> How do you know what margins AMD is getting on P10? We also don't know what happened with the 14nm process they used; it's pretty clear something went wrong considering how much people can under-volt P10. Vega will be using TSMC as far as I know, and one chip in the Polaris family on a different process to Nvidia doesn't give us any insight into the future, or show us where Polaris would have stacked up against Pascal if they were both using the same process.


Some of the quotes.

http://www.overclock.net/t/1591625/guru3d-nvidia-slide-reveals-numbers-on-single-and-double-precision-for-flagship-pascal-gpu/40#post_24906129

"And I ball parked Polaris as well... Worst case is around 16-17 Tflops (without taking into consideration the higher fp32 performance per CU of Polaris over Fiji).

*Polaris will beat Pascal. Not because I want it too but because it will."
*

http://www.overclock.net/t/1592431/anand-ashes-of-the-singularity-revisited-a-beta-look-at-directx-12-asynchronous-shading/580#post_24953038

"Baffin/XT is Polaris 11. Greenland/XT is Vega 11.

BaffintXT is likely only 232mm2 which is about equivalent to a 464mm2 GPU on 28nm. So in terms of performance per watt, it is 2.5x that of an equivalent AMD Fiji GPU on the 28nm process assuming similar clockspeeds.

AMD have stated that Polaris' performance gain are 70% due to the 14LPP process and 30% due to architectural improvements. So it's not a big GPU but it packs a punch. An AMD Hawaii part (290/290x) or Grenada part (390/390x) has a die size of 438mm2. So that should give you an idea of the relative density of Polaris 11.

*So Polaris 11 will likely be faster than Fiji* (Fury/FuryX) but will likely also include less Stream Processors (thus less CUs)."

He recently tried to backtrack and realign these to 390X-level and Fury X-level performance. And told a little fib.


Spoiler: Warning: Spoiler!



"False...

*I was talking about Vega and not Polaris. I speculated that Polaris 10 would output around FuryX performance. The RX 480 appears to be around 390x performance which means that the 480x will probably hit the Fury X performance figures.*"



He also said Polaris would be getting HBM.

http://www.overclock.net/t/1594965/oc3d-amd-polaris-10-engineering-sample-pictured/70#post_24999000

He also predicted that Polaris would have better performance per watt than pascal.

http://www.overclock.net/t/1597918/tweaktown-amds-next-gen-polaris-gpus-to-be-used-in-apple-desktops-and-notebooks/80#post_25101841


Spoiler: Warning: Spoiler!



"I'm laughing

Polaris has the capacity to power gate at the CU-level. Meaning that unused ALUs switch off, used ALUs boost (run at a high frequency).

CUs also adapt to work loads on the fly. Meaning, not all SIMDs will be used or organized the same way. It will entirely depend on the work loads.

*In other words, Polaris will be quite revolutionary in terms of perf/watt. Yes, better than Pascal in this respect.*"



Considering the RX 480 is the full chip, he went from expecting better-than-Fury X performance, and better than Pascal in certain scenarios, down to 390X-level performance for the cut-down variant of Polaris 10 as it became clear his predictions were way off, even after he lowered his expectations. Compared to reality he was quite far off, because, as mentioned in his post, he predicted a massive IPC gain of 40%, e.g. 5 TFLOPS of Polaris equalling 7 TFLOPS of Fiji. Add in the performance-per-watt claims and he was off big time. He fed the hype machine.

Compare this to my own prediction, and the reasoning that made it more accurate.

People need to pull back their expectations big time. A 232mm2 14nm GPU with Hawaii-like performance (a 440mm2 die) is much more realistic, particularly since all the 390X versions are clocked very high thanks to the maturity of the chip and have that huge 512-bit bus aiding them.

http://www.overclock.net/t/1594675/pcper-amds-raja-koduri-talks-moving-past-crossfire-smaller-gpu-dies-hbm2-and-more/70#post_24992080


Spoiler: Warning: Spoiler!



It would be very difficult to make a die that's 25% faster than a Fury X at only 232mm2. You have to remember that performance past a certain point is going to be limited by a 256-bit bus. Getting over GTX 980 Ti and Fury X performance is, I think, impossible with a 256-bit bus, particularly because AMD's color compression isn't as good as Nvidia's. Average performance of the 380X is lower than the 280X according to TechPowerUp; not by much, but it is lower.

In addition, it's a 232mm2 die. That's already a giant limiting factor on performance.

AMD was only able to beat a 398mm2 die made on 40nm with a 212mm2 die made on 28nm by 10% in performance, and that was with a completely new architecture to boot. On top of this, Cayman, the 398mm2 die, still had DP functionality.

The Fury X is a 600mm2 die which uses HBM to save die space thanks to a simpler memory controller (without it, it would be a 650-700mm2 die; it's Tonga x 2, which is a 360mm2 die), runs water cooling to add to the performance, and sacrificed double precision for gaming performance. Polaris 10, a 232mm2 die with GDDR5 restricted to a 256-bit bus, beating the Fury X by 25-30% is a fantasy. Particularly when you add in that this isn't a new architecture; it's an evolution of an existing one.

Add in that the FirePro Duo would be obsolete, considering a Polaris x2 would be cheaper to manufacture and equippable with 8GB of memory.
*
People need to pull back their expectations big time. A 232mm2 14nm GPU with Hawaii like performance(a 440mm2 die) is much more realistic, Particularly since all the 390x versions have been clocked very high because of the maturity of the chip and has that huge 512bit bus aiding it.*

People need to remember that most people don't upgrade every 2 years like overclock.net does. There are a lot of people on Kepler and Pitcairn who upgrade only every few years, and Polaris 10 is for those people.



Here I predicted that AMD was exaggerating their improvements and the possibility of AMD exceeding their 120watt goals

What this ultimately translates into is 390-390X levels of performance for 140-150 watts. This is a realistic expectation given marketing exaggeration, the information they have given us, and the deficits of last generation. What AMD fans generally fail to realize is that Tonga is so far behind GM204 in performance per watt that reaching GTX 980 Ti levels of performance in a 120-140 watt chip would need AMD to basically triple their performance per watt, and a real tripling, not a marketing-derived one.

http://www.overclock.net/t/1598515/game-debate-rumour-amd-polaris-10-reportedly-offers-near-980-ti-performance-for-300-usd/480#post_25139694

I also predicted Samsung's 14nm node wasn't going to have superior performance characteristics for desktop and high-performance use vs TSMC.

http://www.overclock.net/t/1587480/wccftech-nvidia-s-pascal-is-mia-could-be-in-trouble-reports-allege-drive-px2-demoed-with-gtx-980m-instead-of-pascal/10#post_24786826


Spoiler: Warning: Spoiler!



Samsung doesn't have TSMC's experience with high-performance processes. It's all about low-power, low-frequency design built for mobile. In fact, even GlobalFoundries has more experience than Samsung in this regard, and GF has consistently crapped the bed for the most part.

Samsung getting the high-performance node right the first time would be a real surprise considering their lack of experience, and their nodes are specialized for mobile applications rather than desktop and laptop use.



I also predicted that AMD was only going to launch a limited lineup of GPUs on FinFET this year.

http://www.overclock.net/t/1584297/tomshardware-amds-next-gen-products-taped-out-possibly-delayed/40#post_24712015

If you read my posts, I stressed a Pitcairn-class chip with Hawaii-like performance. I also predicted that the largest variant of Polaris was the 232mm2 die when people were still fantasizing about something Tahiti-sized.

http://www.overclock.net/t/1594256/digitimes-amd-expected-to-unveil-polaris-gpu-in-june/20#post_24980545

I predicted the limited IPC gains, which contradicted Mahigan's 40% gains.

What I did get wrong is that I thought AMD was going to keep Fury around a lot longer, because AMD is too poor a company to amortize the Fiji product so quickly. However, this seems to be a result of Pascal.

I didn't expect Nvidia to value-price the 1070 for its level of performance, torpedo the 600mm2 cards, and raise performance per dollar so much. Although it doesn't exist at $379, for marketing purposes it does, and if Nvidia wished, they could have partners release cards at that price. I am pretty sure AMD didn't expect it either, and it's why the price of Polaris is so low: since the 1070 is 50 percent faster, it has to cost at least 50 percent more for Polaris to keep its value proposition.


----------



## magnek

Damn dude, how long did it take you to piece together that post? And you seriously need to get a life.







j/k

Still impressive nonetheless.


----------



## DMatthewStewart

Quote:


> Originally Posted by *Sequences*
> 
> Graphs like this should be illegal.


How did they even come up with what they have labeled as "power efficiency"? Because their graph is not even close to the numbers they report (among other things). It's almost like they were thinking "well, it did better in certain areas, so we should just make the bar huge, don't worry about scale, and people won't actually compare it to our own data, so..."


----------



## tajoh111

Quote:


> Originally Posted by *magnek*
> 
> Damn dude how long did it take for you piece together that post. And you seriously need to get a life.
> 
> 
> 
> 
> 
> 
> 
> j/k
> 
> Still impressive nonetheless.


Not too long, since it's quotes, and I had recently read my previous posts while searching for some of my other predictions. The Mahigan ones were easy to find because his predictions were so bold. I actually didn't want to post this, because it's pretty dickish to quote how wrong a person was about a launch, but since someone asked for it, and this person was also a fellow AMD fan like Mahigan, I felt a little more compelled to. He was one of the people most responsible for feeding the hype machine, because he kept saying he had sources and kept exaggerating the gains that could be made with GCN.

When his claims turned out to be completely off as launch approached, he tried to hide it and say he had put those expectations on Vega, but from the 232mm2 quote I just showed, he was expecting better-than-Fiji performance from Polaris 10.

It's funny, because he was predicting GP104 to have 980 performance.


----------



## EightDee8D

Quote:


> Originally Posted by *tajoh111*
> 
> Not too long since it's quotes and I had read my previous ones recently to search for some of my other predictions. The Mahigan ones were easy to find because his predictions were so bold. I actually didn't want to post this because it pretty dickish to post and quote how wrong a person was about a launch but since someone asked for it and this person was also a fellow AMD fan like Mahigan, I felt a little more compelled to. He was one of the people most responsible for feeding the hype machines because he kept on saying he had sources, he kept on exaggerating the gains that could be made with GCN.
> 
> When his claims, as launch became closer to date were completely off, he tried to hide and say he put those expectations on vega, but from the 232mm2 quote I just showed, he was expecting better than fiji performance from Polaris 10.
> 
> It's funny because he was predicting gp104 to have 980 performance.


So, any predictions on vega (small and big) ?


----------



## magnek

Quote:


> Originally Posted by *tajoh111*
> 
> Not too long since it's quotes and I had read my previous ones recently to search for some of my other predictions. The Mahigan ones were easy to find because his predictions were so bold. I actually didn't want to post this because it pretty dickish to post and quote how wrong a person was about a launch but since someone asked for it and this person was also a fellow AMD fan like Mahigan, I felt a little more compelled to. He was one of the people most responsible for feeding the hype machines because he kept on saying he had sources, he kept on exaggerating the gains that could be made with GCN.
> 
> When his claims, as launch became closer to date were completely off, he tried to hide and say he put those expectations on vega, but from the 232mm2 quote I just showed, he was expecting better than fiji performance from Polaris 10.
> 
> It's funny because he was predicting gp104 to have 980 performance.


I respected Mahigan when he first joined since most of his posts were technical in nature and promoted discussion, but now it seems he's just gone off the deep end.

I really don't get the hyping part either. I mean if AMD fails to deliver on the hype, you're just setting yourself up for disappointment. Yet people just can't seem to learn.
Quote:


> Originally Posted by *EightDee8D*
> 
> So, any predictions on vega (small and big) ?


Within 10% of 1080 performance, assuming 4096 shaders and similar clockspeeds as Polaris 10.


----------



## oxidized

please stop posting such walls, cba reading them all


----------



## C2H5OH

Quote:


> Originally Posted by *magnek*
> 
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> I respected Mahigan when he first joined since most of his posts were technical in nature and promoted discussion, but now it seems he's just gone off the deep end.
> 
> I really don't get the hyping part either. I mean if AMD fails to deliver on the hype, you're just setting yourself up for disappointment. Yet people just can't seem to learn.
> 
> 
> Within 10% of 1080 performance, assuming 4096 shaders and similar clockspeeds as Polaris 10.


Vega is redesigned and is IP9, so I expect it to be within 500-600% of XboxOne







but I could be wrong, though.


----------



## MikeDuffy

@Mahigan was the first to question Nvidia's claim about multi-engine performance in directx12 - the first!

He was completely correct with his analysis of Maxwell's shortcomings - funny how you guys don't seem to give him credit on that one.

*Maxwell is utter garbage in directx12*- don't believe me? Here is a quote from SkyMtl at HardwareCanucks:

_Moving on to DX12 and we see AMD's new architecture really coming into its own against the NVIDIA cards. It absolutely demolishes the GTX 970 across the board (even in NVIDIA-friendly games like Tomb Raider) and even manages to run circles around that once-expensive GTX 980. These tests show Maxwell's performance in current DX12 applications is nothing short of embarrassing and proves this architecture simply wasn't designed with these types of workloads in mind._

@tajoh111- where is Nvidia's magical async compute driver? It's been over a year now and I haven't once seen you giving them a hard time about it. Why do you guys ignore Nvidia lies?


----------



## GoLDii3

Quote:


> Originally Posted by *MikeDuffy*
> 
> @Mahigan was the first to question Nvidia's claim about multi-engine performance in directx12 - the first!
> 
> He was completely correct with his analysis of Maxwell's shortcomings - funny how you guys don't seem to give him credit on that one.
> 
> *Maxwell is utter garbage in directx12*- don't believe me? Here is a quote from SkyMtl at HardwareCanucks:
> 
> _Moving on to DX12 and we see AMD's new architecture really coming into its own against the NVIDIA cards. It absolutely demolishes the GTX 970 across the board (even in NVIDIA-friendly games like Tomb Raider) and even manages to run circles around that once-expensive GTX 980. These tests show Maxwell's performance in current DX12 applications is nothing short of embarrassing and proves this architecture simply wasn't designed with these types of workloads in mind._
> 
> @tajoh111- where is Nvidia's magical async compute driver? It's been over a year now and I haven't once seen you giving them a hard time about it. Why do you guys ignore Nvidia lies?


I suppose you know Async Compute is only a feature of DX12; it's not DX12 itself.


----------



## MikeDuffy

Quote:


> Originally Posted by *GoLDii3*
> 
> I suppose you know Async Compute is only a feature of DX12,it's not DX12 itself.


It's the main performance feature of DirectX 12.

Multi-engine, or async compute, is something Nvidia promised last year. Owners of everything from the 960 to the 980 Ti should be pretty pissed as they get left further and further behind.

Anyways, @Mahigan's work on Maxwell was a real headache for Nvidia PR; they resorted to outright lies to keep from admitting their shortcomings.


----------



## NuclearPeace

But Maxwell performed really well while it was still NVIDIA's premier architecture. It doesn't matter that Maxwell sucks at DX12 unless you are a long-term GPU user, because it's already being replaced by Pascal. It's great for GCN owners like me that GCN is so good at DX12, but who is looking to buy a 380X anymore?

It's clear as day that Maxwell wasn't designed to be used in DX12 games, because it's a very barebones, ad hoc architecture. NVIDIA is an R&D machine and their technology moves very fast because they can afford to be that bleeding edge, hence the more disposable, short-term nature of their architectures. Meanwhile, AMD is still performing upgrades on the same GCN architecture that debuted almost five years ago. Kepler and Maxwell cards, when they were still in production, were superior to GCN cards in terms of perf/watt and perf/mm2.

GCN is too ahead of its time for its own good. The high DX11 overhead on GCN cards is a feature, not a bug.


----------



## tajoh111

Quote:


> Originally Posted by *EightDee8D*
> 
> So, any predictions on vega (small and big) ?


I am not sure there is going to be a big Vega.

Two Vegas in nearly the same span of time and performance class is just cannibalization of sales. Considering how poorly Fiji sold, I don't think they ever want to do a 600mm2 die again. They just don't have the professional market for it to make the investment worth it.

I think all AMD is going to release is the 4096-shader Vega card. This card should have its double precision added back in to address the professional market, which, along with the Vega/Polaris additions, should make it around a 400mm2 die.

Since this card has 77% more shaders, that should lead to a straightforward 77% increase in performance. However, because utilization gets harder as an architecture is scaled up, scaling suffers: Fiji vs full Tonga had 2x the specs but only a 75-80 percent increase in performance with clocks equalized. The same thing will happen to Vega, perhaps a bit less severely, but there will be some scaling loss.

So if clocks are equal to Polaris, I would say performance should be around 55-65 percent better than an RX 480. Depending on how the driver situation changes, along with the DirectX 12 additions, its competitiveness vs Pascal could change. With HBM2 power savings, maturity, and revisions, we might get a slight clock boost vs Polaris; if that happens, add 5% to these estimates.

However, at this price range we won't be getting a water cooler, so don't expect a big jump in clocks. Maybe 1300-1320MHz at around 225 watts of power consumption.
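That scaling argument can be written out as a back-of-envelope calculation. Every input here is speculation: the 4096-shader count is rumored, 2304 is the RX 480's count, and the ~0.78 scaling factor is eyeballed from the Fiji-vs-Tonga behavior described above.

```python
# Back-of-envelope Vega estimate. All inputs are speculation:
# 4096 shaders is the rumored Vega count, 2304 is Polaris 10's,
# and 0.78 is an eyeballed scaling factor (Fiji had 2x Tonga's
# specs but only gained ~75-80% of that with clocks equalized).
polaris_shaders = 2304
vega_shaders = 4096
scaling_factor = 0.78  # fraction of the raw gain actually realized

raw_gain = vega_shaders / polaris_shaders - 1  # ~0.78, i.e. 77% more shaders
est_gain = raw_gain * scaling_factor           # gain after scaling losses
print(f"{est_gain:.0%}")  # ~61%, inside the 55-65% band above
```

Nudge the scaling factor between 0.70 and 0.83 and you cover the whole 55-65% range.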


----------



## Raghar

Quote:


> Originally Posted by *MikeDuffy*
> 
> *Maxwell is utter garbage in directx12*- don't believe me? Here is a quote from SkyMtl at HardwareCanucks:
> 
> _Moving on to DX12 and we see AMD's new architecture really coming into its own against the NVIDIA cards. It absolutely demolishes the GTX 970 across the board (even in NVIDIA-friendly games like Tomb Raider) and even manages to run circles around that once-expensive GTX 980. These tests show Maxwell's performance in current DX12 applications is nothing short of embarrassing and proves this architecture simply wasn't designed with these types of workloads in mind._


It would be nice if someone compared it to Kepler in DX12, SM to SM, clock for clock.

Actually, the current DX12 controversy only shows that AMD's drivers utterly sucked and only NVidia showed some competence. Well, with DX12's thinner interface, the main optimization effort has shifted to the game developers. For AAA games that means just a minor change; for really small developers (including freeware), it's better to ship the OpenGL version and stay away from DX12 for anything that would need extensive optimization. Things might be really variable on different configurations...


----------



## Fb74

Could we stick to talking about the GTX 1060, please?

I know it's difficult, because one assertion leads to another and to the urge to discuss it, but I think we're drifting far from the original topic of the thread.


----------



## tajoh111

Quote:


> Originally Posted by *MikeDuffy*
> 
> @Mahigan was the first to question Nvidia's claim about multi-engine performance in directx12 - the first!
> 
> He was completely correct with his analysis of Maxwell's shortcomings - funny how you guys don't seem to give him credit on that one.
> 
> *Maxwell is utter garbage in directx12*- don't believe me? Here is a quote from SkyMtl at HardwareCanucks:
> 
> _Moving on to DX12 and we see AMD's new architecture really coming into its own against the NVIDIA cards. It absolutely demolishes the GTX 970 across the board (even in NVIDIA-friendly games like Tomb Raider) and even manages to run circles around that once-expensive GTX 980. These tests show Maxwell's performance in current DX12 applications is nothing short of embarrassing and proves this architecture simply wasn't designed with these types of workloads in mind._
> 
> @tajoh111- where is Nvidia's magical async compute driver? It's been over a year now and I haven't once seen you giving them a hard time about it. Why do you guys ignore Nvidia lies?


He definitely knows his stuff, but his performance predictions are way off.

The problem for AMD is that in the best case for the RX 480 vs Pascal so far, Pascal is 30 percent faster, which isn't far off their difference in die size or transistor count. Meaning at equal size they perform about the same in DirectX 12, but everywhere else Nvidia has a 50% performance advantage. If Nvidia adds that DirectX 12 functionality, then AMD is toast.

That's why we need something better and newer than GCN, so that when that day comes, AMD is not screwed.


----------



## Blameless

Quote:


> Originally Posted by *DMatthewStewart*
> 
> How did they even come up with what htey have labeled as "power efficiency"?


Pretty simple really... typical power vs. average performance.

The RX 480 is ~150W, the 1060 is ~120W.

NVIDIA says the 1060 is ~15% faster.

So, (150 / 120) * 1.15 = 1.4375
Quote:


> Originally Posted by *DMatthewStewart*
> 
> Because their graph is not even close to the numbers they report (among other things). Its almost like they were thinking "well, it did better in certain areas so we should just make the bar huge, dont worry about scale, and people wont actually compare it to our own data, so..."


I don't really have a problem with the graph. It's cut off, which may mislead some, but that's really the fault of those misreading things.
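Written out as a quick sanity check (using the rumored figures above, which are assumptions, not confirmed specs):

```python
# Rough perf-per-watt comparison from the rumored figures:
# RX 480 ~150 W, GTX 1060 ~120 W, and NVIDIA's claimed ~15%
# performance lead for the 1060. None of these are confirmed.
rx480_power = 150.0    # watts (rumored)
gtx1060_power = 120.0  # watts (rumored)
perf_ratio = 1.15      # 1060 performance relative to the RX 480

# Efficiency is performance per watt, so the relative efficiency
# is the power ratio multiplied by the performance ratio.
efficiency_ratio = (rx480_power / gtx1060_power) * perf_ratio
print(f"{efficiency_ratio:.4f}")  # 1.4375
```

Which is plausibly where the ~1.4x "power efficiency" bar comes from.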


----------



## variant

Quote:


> Originally Posted by *magnek*
> 
> Within 10% of 1080 performance, assuming 4096 shaders and similar clockspeeds as Polaris 10.


That's impossible to predict. Polaris and Vega don't share the same architecture. Polaris is Graphics IPv8 similar to Tonga/Fiji, while it seems Vega is Graphics IPv9 which was developed for 14nm.


----------



## GorillaSceptre

@tajoh111

Rep+ for effort.









Don't take this the wrong way... but, all of that was/is speculation, no one knew what was Greenland/Vega, Polaris, HBM2, etc. As i said in my earlier post, it was perf per Watt that caught most people off guard, still best not to call anyone out unless they were stating everything they said as fact.

Half the fun of these forums is speculation, for me anyway.


----------



## EightDee8D

Quote:


> Originally Posted by *variant*
> 
> That's impossible to predict. Polaris and Vega don't share the same architecture. Polaris is Graphics IPv8 similar to Tonga/Fiji, while it seems Vega is Graphics IPv9 which was developed for 14nm.


Better hope it's same ipv8 cuz we know what huge expectations and hype does.


----------



## DMatthewStewart

Quote:


> Originally Posted by *Blameless*
> 
> Pretty simple really...typical power vs. average performance.
> 
> RX480 is ~150w 1060 is ~120w.
> 
> NVIDIA says 1060 is ~15% faster.
> 
> So, (150 / 120) * 1.15 = 1.4375
> I don't really have a problem with the graph. It's cut off, which may mislead some, but that's really the fault of those misreading things.


Hold on a minute. That is entirely wrong. We are comparing 120W of usage against a device that uses 150W. Since we are trying to equate the performance, our maximum power used is 150W, and the 1060 uses 120W. *That means it's 20% more power efficient*, because it's only using 80% of the total wattage pool: 120W is 80% of 150W. So you're calculating it incorrectly, just as they have done. That's strictly full TDP, before we even touch on performance. 150W is 100% of the pool of power we are rating; the 1060 uses 120W; it's 20% more power efficient.

And where does 1.15 come from?

Dividing 150/120 gives us no number related to power efficiency. The original graph is incorrect because it was calculated incorrectly. Working it backwards and then typing it out seems right at first; it's not. Unless I'm missing something here. I'm open-minded to being totally wrong, but this seems way too easily identifiable as incorrect.


----------



## Aymanb

Is it confirmed anywhere that the 1060 is going to be better than the 970 in terms of performance, and if so, how much better do you think it will be? I'm in need of a new graphics card and the 970 has fallen in price, but I'm waiting to see what the 1060 offers.

Also, 1070 prices are RIDICULOUS. How long do they take to stabilize? I'm not paying €500 for that thing.


----------



## DMatthewStewart

Quote:


> Originally Posted by *Aymanb*
> 
> Is it confirmed anywhere that the 1060 is going to be better than 970 in terms of performance, and if so, how much do you think will be? I'm in need of a new graphics card and 970 fell in price, but I'm waiting to see what 1060 offers.
> 
> Also 1070 prices are RIDICULOUS, how long does it take to stabilize? I'm not paying €500 for that thing.


I'm with you on the prices, especially for the components and PCBs. When they called the 1080 the "Founders Edition", they meant that the people buying them are creating the financial foundation for the next run. I'm waiting. And I'm waiting a long time before I move on anything.


----------



## tajoh111

Quote:


> Originally Posted by *DMatthewStewart*
> 
> Hold on a minute. That is entirely wrong. We are comparing the 120W usage to a device that uses 150W. Since we are trying to equate the performance, our max amount of power used is 150W, and the 1060 uses 120W. *That means it's 20% more power efficient*, because it's only using 80% of the total wattage calculated, or total wattage pool: 120W is 80% of 150W. So you're calculating it incorrectly, just as they have done. That's strictly full TDP, before we even touch on performance. 150W is 100% of the pool of power that we are rating; the 1060 uses 120W; it's 20% more power efficient.
> 
> And where does 1.15 come from?
> 
> Dividing 150/120 gives us no number related to power efficiency. The original graph is incorrect because it was calculated incorrectly. Working it backwards and then typing it out seems right at first, but it's not. Unless I'm missing something here. I'm open-minded to being totally wrong, but this seems way too easily identifiable as incorrect.


If the cards had the same performance and the same power consumption, the efficiency ratio would be 1.

E.g. 120 watts divided by 120 watts, with equal performance, acts as a modifier of 1.

However, one card uses 30 fewer watts. That means it is 25 percent more efficient on power alone:

150 divided by 120 = 1.25, i.e. 25 percent more efficient.

And since this card also performs 15 percent better while drawing that power, the performance acts as a second modifier. If it performed the same, this modifier would be 1, but since it performs 15 percent better, it becomes 1.15.

Therefore, to get the overall efficiency, you multiply both numbers:

1.25 * 1.15 = 1.4375, the efficiency figure.
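The arithmetic above can be checked with a tiny script. This is just a sketch using the thread's claimed figures (150 W vs 120 W TDP, a 15% performance lead), not measured numbers:

```python
# Perf-per-watt advantage implied by the leaked graph, using the thread's figures.
rx480_tdp = 150.0    # watts, claimed TDP
gtx1060_tdp = 120.0  # watts, claimed TDP
perf_ratio = 1.15    # NVIDIA's claimed 15% performance lead

power_modifier = rx480_tdp / gtx1060_tdp   # 1.25: the 480 draws 25% more power
efficiency = power_modifier * perf_ratio   # combined perf-per-watt modifier

print(round(efficiency, 4))  # 1.4375, i.e. the ~1.44x bar on the graph
```

Both modifiers are ratios against the RX 480 as a baseline of 1, which is why they multiply.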


----------



## magnek

Quote:


> Originally Posted by *variant*
> 
> That's impossible to predict. Polaris and Vega don't share the same architecture. Polaris is Graphics IPv8 similar to Tonga/Fiji, while it seems Vega is Graphics IPv9 which was developed for 14nm.


Quote:


> Originally Posted by *EightDee8D*
> 
> Better hope it's same ipv8 cuz we know what huge expectations and hype does.


^^^FINALLY somebody who gets it.

If you keep your expectations low at worst you won't be disappointed, and at best you'll be very pleasantly surprised.

If you hype your expectations through the roof then the best you could hope for is "it did what it was supposed to do".


----------



## C2H5OH

Quote:


> Originally Posted by *magnek*
> 
> ^^^FINALLY somebody who gets it.
> 
> If you keep your expectations low at worst you won't be disappointed, and at best you'll be very pleasantly surprised.
> 
> If you hype your expectations through the roof then the best you could hope for is "it did what it was supposed to do".


Well, earlier, when I also said that it's IP9, I wasn't implying that it will be better. I was implying that it will be different.
Just to show another interpretation of the IP9 thingy...







Of course it will be a disappointment - and occasionally burn houses - like every other AMD card has...


----------



## magnek

No I'm saying staying conservative, not generating false hope or hype (or worse, actively generating false hype for nefarious purposes) is the key to long term happiness.


----------



## tajoh111

Quote:


> Originally Posted by *magnek*
> 
> No I'm saying staying conservative, not generating false hope or hype (or worse, actively generating false hype for nefarious purposes) is the key to long term happiness.


I think the big thing that's going to happen with Vega is that they are going to bring back double precision and make it a compute architecture again. 4GB of memory and the professional market don't mix; 16GB does, and both Tahiti and particularly Hawaii were solid professional cards. Most of the improvements and changes will be there, I believe.

Considering the 4096-shader leak, I was wondering why there wasn't an increase in shaders vs Fiji, and I think it's because double precision had to be taken out of Fiji due to its effect on power and die area. With 14nm FinFET they can bring it back while keeping reasonable power consumption and performance. What this means is that more of the improvements will be for professional tasks and not gaming, so I could see performance being around Fiji + 20-25%, mostly from clocks. Changes like these rarely improve gaming performance and typically lower frequency.

One notable exception would be if they broke the CUs down into smaller clusters.

If they reduce the number of cores in a CU from 64 to 32, it could push IPC up 10-15%. But other than that, Polaris + 65% or Fiji + 20-25% are pretty safe bets if the card has 4096 shaders, because there wasn't a noticeable IPC increase from Hawaii to Polaris.


----------



## EightDee8D

Quote:


> Originally Posted by *tajoh111*
> 
> I think the big thing that's going to happen with Vega is that they are going to bring back double precision and make it a compute architecture again. 4GB of memory and the professional market don't mix; 16GB does, and both Tahiti and particularly Hawaii were solid professional cards. Most of the improvements and changes will be there, I believe.
> 
> Considering the 4096-shader leak, I was wondering why there wasn't an increase in shaders vs Fiji, and I think it's because double precision had to be taken out of Fiji due to its effect on power and die area. With 14nm FinFET they can bring it back while keeping reasonable power consumption and performance. What this means is that more of the improvements will be for professional tasks and not gaming, so I could see performance being around Fiji + 20-25%, mostly from clocks. Changes like these rarely improve gaming performance and typically lower frequency.
> 
> One notable exception would be if they broke the CUs down into smaller clusters.
> 
> If they reduce the number of cores in a CU from 64 to 32, it could push IPC up 10-15%. But other than that, Polaris + 65% or Fiji + 20-25% are pretty safe bets if the card has 4096 shaders, because there wasn't a noticeable IPC increase from Hawaii to Polaris.


They said 15% IPC increase from the 290 to the RX 480, but it actually regressed. Maybe ROP/RBE bottlenecks? But if you compare Fiji vs Polaris 10's perf/TFLOPS, there's like a 10% increase.

I think a 4096-shader Vega GPU will be within +/-5% of a stock 1080, assuming they fix all the bottlenecks.


----------



## epic1337

Quote:


> Originally Posted by *EightDee8D*
> 
> They said 15% IPC increase from the 290 to the RX 480, but it actually regressed. Maybe ROP/RBE bottlenecks? But if you compare Fiji vs Polaris 10's perf/TFLOPS, there's like a 10% increase.
> 
> I think a 4096-shader Vega GPU will be within +/-5% of a stock 1080, assuming they fix all the bottlenecks.


bandwidth bottleneck - there have been indications that a VRAM OC increases perf, which suggests the bandwidth is insufficient.

if Vega uses HBM, then yes, they'll get more out of the chip; supposedly it'll be enough to satiate Vega's compute units.


----------



## prznar1

Soooo.... where are the reviews? It's 7.7.2016


----------



## WhyCry

AFAIK there are no reviews today, it's just a paper launch.


----------



## Fb74

Prices *seem* confirmed:

http://videocardz.com/61917/nvidia-geforce-gtx-1060-to-cost-249-299-usd

But beware... are we talking about 3GB or 6GB version for 249 US$ ?

Note that the primary link for that info (end of the page) is suspicious...


----------



## rainzor

Doesn't seem like there is a 3GB version. $249 is a good price actually, going to be interesting to see the availability and the amount of AIB/retailer gouging tho lol.


----------



## ChevChelios

Quote:


> Originally Posted by *Fb74*
> 
> Prices *seem* confirmed:
> 
> http://videocardz.com/61917/nvidia-geforce-gtx-1060-to-cost-249-299-usd
> 
> But beware... are we talking about 3GB or 6GB version for 249 US$ ?
> 
> Note that the primary link for that info (end of the page) is suspicious...


well, according to those slides there is no 3GB version at all?

it's just a 6GB 1060 with (apparently) 980-level performance for $250-300

and the Founders Edition makes a return









$300 is too high a price, but $250+ for marginally higher performance, less power draw/heat and a better ref cooler seems ok


----------



## Fb74

Quote:


> Originally Posted by *ChevChelios*
> 
> well according to those slides there is no 3GB version at all ?


Same guess.

But the link given at the end of the page is... weird.
Hope it's not a joke.

Edit: Primary link has disappeared.


----------



## Clocknut

Quote:


> Originally Posted by *rainzor*
> 
> Doesn't seem like there is a 3GB version. $249 is a good price actually, going to be interesting to see the availability and the amount of AIB/retailer gouging tho lol.


suddenly the 8GB RX480 aint that great anymore lol.


----------



## ChevChelios

if it's on 980 level then at least it's not a turd like the 960

lol


----------



## oxidized

IF those prices are confirmed, customs are gonna cost more than 300€, probably 350€ for a decent one


----------



## ChevChelios

Quote:


> Originally Posted by *oxidized*
> 
> IF those prices are confirmed custom are gonna cost more than 300€, probably 350€ for a decent one


same for custom 480s though

right now I see ref 480 for 260+ EUR here locally, good custom will easily be 300+

then it will come down to OC vs OC


----------



## oxidized

Quote:


> Originally Posted by *ChevChelios*
> 
> same for custom 480s though
> 
> right now I see ref 480 for 260+ EUR here locally, good custom will easily be 300+
> 
> then it will come down to OC vs OC


ye ofc, hey, i'm no fanboy!!! was just stating the possible price


----------



## BigTree

Quote:


> Originally Posted by *ChevChelios*
> 
> well, according to those slides there is no 3GB version at all?
> 
> it's just a 6GB 1060 with (apparently) 980-level performance for $250-300
> 
> and the Founders Edition makes a return
> 
> 
> 
> 
> 
> 
> 
> 
> 
> $300 is too high a price, but $250+ for marginally higher performance, less power draw/heat and a better ref cooler seems ok




So this is almost confirmed if you add VAT and taxes.


----------



## ChevChelios

Quote:


> Originally Posted by *BigTree*
> 
> 
> 
> So this is almost confirmed if you add VAT and taxes.


no









it'll be 300-350 EUR here

I can buy a custom 10*7*0 right now for 460+ EUR

but prices obviously differ a lot worldwide


----------



## BigTree

Quote:


> Originally Posted by *ChevChelios*
> 
> no
> 
> 
> 
> 
> 
> 
> 
> 
> 
> it'll be 300-350 EUR here
> 
> I can buy a custom 10*7*0 right now for 460+ EUR
> 
> but prices obviously differ a lot worldwide


Asus Strix 1070 version costs 543€ in the same store.

http://www.skytech.lt/strixgtx1070o8ggaming-asus-geforce-gtx-1070-8gb-gddr5-256-bit-2xhdmi-dvi-2xdp-p-316077.html

And as we know, non-FE cards will be nowhere to be found. So $300 = 400€ in Europe


----------



## ChevChelios

well if a custom 1070 costs 540 EUR then a 1060 obviously won't cost 470-500

either way you just need to look at your own local stores and not care what it costs somewhere else


----------



## BigTree

I said almost. Strix version will be about 430-450€


----------



## TrueForm




----------



## oxidized

Quote:


> Originally Posted by *BigTree*
> 
> I said almost. Strix version will be about 430-450€


What are you talking about? At that price nobody will even think of buying it

Also, cards starting at 350/370€ will drop 20/30 euro in 2/3 weeks, as always when new cards come out


----------



## BigTree

I am talking about the *STRIX* version.
In Germany the 1070 Strix costs about 530€; how much do you think the 1060 Strix will cost?


----------



## oxidized

Quote:


> Originally Posted by *BigTree*
> 
> I am talking about the *STRIX* version.
> In Germany the 1070 Strix costs about 530€; how much do you think the 1060 Strix will cost?


Surely not more than 400, otherwise it makes no sense


----------



## maltamonk

I've got nothing positive to say about these prices. Not only are they 25% too high, but they don't realistically reflect actual retail prices, which are likely to be another 15% on top.


----------



## vinodfrndz

Quote:


> Taipei, Taiwan, July 7, 2016 - GIGABYTE, the world's leading premium gaming hardware manufacturer, will be releasing its GeForce® GTX 1060 G1 GAMING graphics card. Based on the Pascal architecture, the GTX 1060 G1 GAMING steps up further with Super Overclocked GPU, the signature WINDFORCE technology, and RGB Illumination, making G1 GAMING a winning formula for gamers who seek superior cooling and overclocking performances for next-gen gaming.


http://videocardz.com/61934/gigabyte-announces-geforce-gtx-1060-g1-gaming


----------



## sugalumps

Quote:


> Originally Posted by *vinodfrndz*
> 
> http://videocardz.com/61934/gigabyte-announces-geforce-gtx-1060-g1-gaming


If the 1060 gets custom cards out before the 480, it's game over; AMD had better hurry up and stop riding that awful reference cooler.


----------



## jellybeans69

Quote:


> Originally Posted by *BigTree*
> 
> 
> 
> So this is almost confirmed if you add VAT and taxes.


Sounds about right - both versions without VAT are 339 and 359 euros; add 21% VAT and some e-shop % so they make some money, and you get the price. All I can say is I told you so, plenty of other times in these 1060 threads.


----------



## Rahkeesh

Quote:


> Originally Posted by *sugalumps*
> 
> If the 1060 gets custom cards out before the 480 then it's game over, amd better hurry up and stop riding that awful reference cooler.


Don't worry, Nvidia will take their time mugging people with the fool's edition.


----------



## ChevChelios

the reference 1060 is bound to have a better cooler than the ref 480, and that's on top of being less hungry and hot .. plus the performance difference (the leaked FS results we saw were apparently without release drivers, they _may_ improve)

if both the reference and custom 1060s are out at about the same price as custom 480s _and_ there is some semblance of stock - Polaris won't have an easy time


----------



## FLCLimax

the cheapest 1060 is priced in line with the Nitro 480, which is going to be equal in performance at worst, unless OC scales better on the 1060 than on the higher Pascals (it won't). Also the Nitro will be in stock.

EDIT: Actually, since the WCCF graph was fake and leaked benchmarks put the 1060 at 8-10% over the 480, the Nitro could be faster.

BUT the G1 1060 could be faster than the Nitro 480 too.


----------



## C2H5OH

NVIDIA GeForce GTX 1060 Preview: Pascal with GP106
https://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-GTX-1060-Preview-Pascal-GP106
Quote:


> At $249, the GTX 1060 partner cards, available and shipping on July 19th, will compete very well with the 8GB variant of the Radeon RX 480


Quote:


> The Founders Edition card, designed by NVIDIA and the one we were sent for our initial reviews, will cost $299 and will be available ONLY at NVIDIA.com


So Strix, G1 etc. might cost more, something like $330?


----------



## ChevChelios

the $ prices of a bunch of custom 1080s are under $700 in the US (like the FTW etc.)

and I see a similar picture here in Europe (at least where I live) - plenty of custom (not all) 1080s/1070s are cheaper than or on par with Founders prices

don't see why that wouldn't apply to the 1060

custom 1060s will be up to ~$300; the better models (top-tier customs) may be $320-330+


----------



## jologskyblues

Is it confirmed that Nvidia will not be releasing a 3GB SKU?


----------



## Fb74

Quote:


> Originally Posted by *jologskyblues*
> 
> Is it confirmed that Nvidia will not be releasing a 3GB SKU?


No, it could come later.

But the $249-299 range is for the 6GB version.

So, no benches for days?


----------



## jologskyblues

Quote:


> Originally Posted by *Fb74*
> 
> No, it could come later.
> 
> But the $249-299 range is for the 6GB version.
> 
> So, no benches for days?


They shouldn't even bother. That would be another reason for them to get bashed by critics. As if the nonsense that is the FE isn't enough.


----------



## Klocek001

Quote:


> Originally Posted by *FLCLimax*
> 
> the cheapest 1060 is priced in line with the Nitro 480 which is going to be equal in performance at worst, unless OC scales better on the 1060 than on the higher pascal(it won't). Also the Nitro will be in stock.
> 
> EDIT: Actually since the WCCf graph was fake andleak benchmarks put the 1060 at 8-10% over the 480, the nitro could be faster.
> 
> BUT the G1 1060 could be faster than the Nitro 480 too.


you're forgetting that all 1070 and 1080 custom coolers are top-shelf ones, so they cost more than the theoretical $600 MSRP of the 1080.
1060 AIBs will likely be simpler versions of the top-shelf air coolers, so we might see a $250 Windforce 2X, some with the old-gen Twin Frozr taken from the 960, or a single-fan ACX one.
I've seen leaked benches put the 1060 15% over the 480, so if we get an AIB version like a dual-fan Windforce at near the $250 price, I can't see an AIB RX 480 touching it.
we'll see what happens, but with Polaris 10 only matching Maxwell performance/watt while Pascal is around 1.75x of that, I can't really see the RX 480 beating the 1060 unless we compare an OC'd AIB 480 to a stock 1060.


----------



## FLCLimax

Quote:


> Originally Posted by *Klocek001*
> 
> Quote:
> 
> 
> 
> Originally Posted by *FLCLimax*
> 
> the cheapest 1060 is priced in line with the Nitro 480 which is going to be equal in performance at worst, unless OC scales better on the 1060 than on the higher pascal(it won't). Also the Nitro will be in stock.
> 
> EDIT: Actually since the WCCf graph was fake andleak benchmarks put the 1060 at 8-10% over the 480, the nitro could be faster.
> 
> BUT the G1 1060 could be faster than the Nitro 480 too.
> 
> 
> 
> you're forgetting that all 1070 and 1080 custom coolers are top-shelf ones, so they cost more than the theoretical $600 MSRP of the 1080.
> 1060 AIBs will likely be simpler versions of top-shelf air coolers, so we might see $250 Windforce, Twin Frozr or ACX ones.
> I've seen leaked benches put the 1060 15% over the 480, so if we get an AIB version like a dual-fan Windforce at near the $250 price, I can't see an AIB RX 480 touching it.
> we'll see what happens, but with Polaris 10 only matching Maxwell performance/watt while Pascal is around 1.75x of that, I can't really see the RX 480 beating the 1060 unless we compare an OC'd AIB 480 to a stock 1060.

I think (know) that 1060 AIBs will be less than FE. I also think supply issues will see this card costing more than MSRP across the board. What benchmarks put it at 15% above the 480? The leaked 3DMark bench put it 8% faster. I could stand to be corrected if I'm wrong.


----------



## Klocek001

Quote:


> Originally Posted by *FLCLimax*
> 
> I think(know) that 1060 AIB will be less than FE. I also think supply issues will see this card costing more than MSRP across the board. What benchmarks put it at 15% above the 480? The leaked 3dmark bench put it 8% faster. I could stand to be corrected if i'm wrong.


synthetic benches




like they ever reflect actual gaming performance........ the Fury X was 35% faster than the 980 in leaked synthetics, then we saw what happened.

this is the one I was referring to: 15.7% faster than the 480 @ 1080p


----------



## FLCLimax

Quote:


> Originally Posted by *Klocek001*
> 
> Quote:
> 
> 
> 
> Originally Posted by *FLCLimax*
> 
> I think(know) that 1060 AIB will be less than FE. I also think supply issues will see this card costing more than MSRP across the board. What benchmarks put it at 15% above the 480? The leaked 3dmark bench put it 8% faster. I could stand to be corrected if i'm wrong.
> 
> 
> 
> synthetic benches
> 
> 
> 
> 
> 
> 
> 
> like they ever reflect the actual gaming performance........ Fury X was 35% faster than 980 in leaked synthetics, then we saw what happened.
> 
> this is the one I was referring to, 15,7% faster than 480 @1080p

what is this? games, synths? which ones? details?


----------



## EightDee8D

Quote:


> Originally Posted by *FLCLimax*
> 
> what is this? games, synths? which ones? details?


Probably w3.


----------



## Fb74

Quote:


> Originally Posted by *FLCLimax*
> 
> what is this? games, synths? which ones? details?


It was given days ago, but it's pure speculation based on "smart" calculations (taking into account frequency, TFLOPS, ROPs, etc...), so nothing very reliable at this stage.


----------



## FLCLimax

So from somebody's imagination.


----------



## Blameless

Quote:


> Originally Posted by *DMatthewStewart*
> 
> And where does 1.15 come from?


It's the performance bar.
Quote:


> Originally Posted by *DMatthewStewart*
> 
> Dividing 150/120 gives us no number related to power efficiency.


It's the power half of the power/performance ratio.
Quote:


> Originally Posted by *DMatthewStewart*
> 
> The original graph is incorrect because it was calculated incorrectly.


If the performance figure is accurate and the TDP figures are representative of typical power consumption, then the graph is correct.
Quote:


> Originally Posted by *tajoh111*
> 
> If the cards had the same performance and same electricity consumption. Efficiency is 1.
> 
> E.g 120 watts divided by 120 watts divided by the equal performance would act as a modifier of 1.
> 
> However if one card uses 30 less wants. That means one card is 25 percent more efficient.
> 
> Even. 150 divided by 120.
> 
> 1.25 I.e 25 percent more efficient. However of this card also performs 15 percent better while consuming 25 percent better it acts as a modifier.
> 
> I.e 1.15. IF IT performed the same this modifier would be 1 but since it's performs 15 percent better, this becomes 1.15.
> 
> There for to get the efficiency you have to multiply both numbers.
> 
> 1.25 * 1.15 = efficiency.


Yes.


----------



## fatmario

Quote:


> Originally Posted by *Clocknut*
> 
> The 1060 won't be above $280.
> 
> The 8GB 1070 is priced at $380. Selling a 6GB 1060 for $299 won't make any sense. Who is going to buy an EVGA 1060 SC edition when they can just buy a 1070 base model?


i was right about the price


----------



## FLCLimax

$299 MSRP is not above $280? We must be in the twilight zone.


----------



## DMatthewStewart

Quote:


> Originally Posted by *tajoh111*
> 
> If the cards had the same performance and same electricity consumption. Efficiency is 1.
> 
> E.g 120 watts divided by 120 watts divided by the equal performance would act as a modifier of 1.
> 
> However if one card uses 30 less wants. That means one card is 25 percent more efficient.
> 
> Even. 150 divided by 120.
> 
> 1.25 I.e 25 percent more efficient. However of this card also performs 15 percent better while consuming 25 percent better it acts as a modifier.
> 
> I.e 1.15. IF IT performed the same this modifier would be 1 but since it's performs 15 percent better, this becomes 1.15.
> 
> There for to get the efficiency you have to multiply both numbers.
> 
> 1.25 * 1.15 = efficiency.


1) No. If the max amount of usage is 150 watts, that is our constant max. The 1060 is 20% more efficient. Dividing 150 by 120 does NOT give you the percentage!! Everyone is making the same mistake.

I and the calculator can't make this any simpler -> http://www.geteasysolution.com/120-is-what-percent-of-150

1A) Finding 15% of something doesn't happen with 1.15, and especially *can't* be done with an arbitrary multiplier of 1. No wonder this whole thing is so messed up.

1B) So we are saying "IF" it performed the same, we would add this modifier of "1". Yet they don't perform the same, nor do they use the same amount of power in these tests. So this modifier of "1" is being added A) without defined parameters and B) admittedly being used incorrectly, because it is added in "IF" they perform the same. They don't. So why is it being used?

2) Secondly, you have to have a stable, constant, and sensible way to equate power consumption to speed/performance. We don't have that. Not even close. Here's a perfect example: a Lotus Elise consumes less power than a Shelby Mustang. Which one is faster? Power consumption is a totally different factor, with no real way to equate it to performance. None of the tests done make any sense at all.

Btw, since posting this yesterday, I showed it to an electrical engineer/avionics guy from Sikorsky Aircraft. He's familiar with gaming computers. He took one look at the graph and said, "That's totally wrong".


----------



## poii

Quote:


> Originally Posted by *EightDee8D*
> 
> Probably w3.


w3= Witcher 3?

GTX 780 Ti would be around 280X performance


----------



## oxidized

Quote:


> Originally Posted by *poii*
> 
> GTX 780 Ti would be around 280X performance


0/10 meme


----------



## VeritronX

Just seen this from PCPer:


----------



## Waitng4realGPU

Quote:


> Originally Posted by *VeritronX*
> 
> Just seen this from PCPer:


They got a card early, probably the first with a video out.

Must be a reward for bringing the AMD PCIE problem to light


----------



## tajoh111

Quote:


> Originally Posted by *DMatthewStewart*
> 
> 1) No. If the max amount of usage is 150 watts, that is our constant max. The 1060 is 20% more efficient. Dividing 150 by 120 does NOT give you the percentage!! Everyone is making the same mistake.
> 
> I and the calculator can't make this any simpler -> http://www.geteasysolution.com/120-is-what-percent-of-150
> 
> 1A) Finding 15% of something doesn't happen with 1.15, and especially *can't* be done with an arbitrary multiplier of 1. No wonder this whole thing is so messed up.
> 
> 1B) So we are saying "IF" it performed the same, we would add this modifier of "1". Yet they don't perform the same, nor do they use the same amount of power in these tests. So this modifier of "1" is being added A) without defined parameters and B) admittedly being used incorrectly, because it is added in "IF" they perform the same. They don't. So why is it being used?
> 
> 2) Secondly, you have to have a stable, constant, and sensible way to equate power consumption to speed/performance. We don't have that. Not even close. Here's a perfect example: a Lotus Elise consumes less power than a Shelby Mustang. Which one is faster? Power consumption is a totally different factor, with no real way to equate it to performance. None of the tests done make any sense at all.
> 
> Btw, since posting this yesterday, I showed it to an electrical engineer/avionics guy from Sikorsky Aircraft. He's familiar with gaming computers. He took one look at the graph and said, "That's totally wrong".

No, it depends on how you're phrasing it.

You're phrasing it as: what percent of 150 is 120? That is 80 percent, and 100 - 80 = 20%.

However, if you phrase it differently - how much larger is 150 than 120? - you get 25%:

i.e. the RX 480 uses 25 percent more power than a GTX 1060, since 1.25 x 120 = 150 watts,

or the 1060 uses 20% less power than an RX 480, since (100% - 20%) x 150 = 120 watts.

2. Your second example is flawed. You're adding multiple variables that muddle the example, instead of the hard metric numbers, which are all we need for performance per watt: performance and power consumption.

A more equivalent car example is how much better the power-to-weight ratio of the Lotus is compared to the Mustang, which is much more similar to performance per watt.

Performance = HP in this case. So let's say the Lotus has 200hp while the Mustang has 300hp. The weight of the Lotus is 1800 pounds and the Mustang is 3500 pounds (weight is the equivalent of our power consumption).

The Lotus Elise's hp-to-pounds ratio is 0.11111 hp/pound (200hp / 1800 pounds), while the Mustang's is 0.085714 hp/pound (300hp / 3500 pounds).

How much higher is the power-to-weight ratio of the Elise vs the Mustang? Divide the two numbers:

0.111111 / 0.085714 ≈ 1.30, i.e. about 30 percent higher.

So how much higher is the performance per watt of the GTX 1060 vs the RX 480?

If the GTX 1060 has 115% of the performance of the RX 480, and we have both power figures, we have all the information we need to determine both the performance per watt and how much better one is than the other:

115 performance / 120 watts = 0.9583 performance/watt for the 1060

100 performance / 150 watts = 0.6667 performance/watt for the RX 480

0.9583 / 0.6667 = 1.4375, or 43.75 percent higher performance per watt.

The performance metric is arbitrary - e.g. we could switch it to FPS, and as long as it's 15% larger we get the same ratio difference:

345 fps / 120 watts = 2.875

300 fps / 150 watts = 2.000

2.875 / 2.000 = 1.4375
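The power-to-weight analogy and the perf-per-watt numbers above boil down to one ratio-of-ratios. A small sketch using the post's own figures (the car numbers are the post's made-up example, and the GPU numbers are the thread's claims, not measurements):

```python
def ratio_advantage(perf_a, cost_a, perf_b, cost_b):
    """How many times higher A's perf-per-cost ratio is than B's (1.0 = equal)."""
    return (perf_a / cost_a) / (perf_b / cost_b)

# Car analogy: horsepower per pound of weight
elise_vs_mustang = ratio_advantage(200, 1800, 300, 3500)
print(round(elise_vs_mustang, 2))   # 1.3 -> roughly 30% higher power-to-weight

# GPUs: relative performance per watt of claimed TDP
gtx1060_vs_rx480 = ratio_advantage(115, 120, 100, 150)
print(round(gtx1060_vs_rx480, 4))   # 1.4375 -> 43.75% higher perf/watt
```

As the post notes, the performance unit cancels out: `ratio_advantage(345, 120, 300, 150)` (the FPS version) gives the same 1.4375.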


----------



## Blameless

Quote:


> Originally Posted by *DMatthewStewart*
> 
> Dividing 150 by 120 does NOT give you the percentage!!


Yes, it does. It shows that 150 is 25% more than 120.
Quote:


> Originally Posted by *DMatthewStewart*
> 
> 1A) Finding 15% of something doesn't happen with 1.15, and especially *can't* be done with an arbitrary multiplier of 1.


There is no arbitrary multiplier. The RX 480 is the baseline. Everything the RX 480 is = 1 in a graph like this. This is a very common way to compare things.

If the RX 480 is 1 and the 1060 is 1.15 on the performance bar of the graph, that means NVIDIA is saying the 1060 is 15% faster.

If the RX 480 is 1 and the 1060 is ~1.44 on the efficiency portion of the graph, and we know that efficiency is the difference in work done per unit of power, then dividing ~1.44 by 1.15 gives us ~1.25, which is quite clearly derived from the differences in TDP mentioned.
Quote:


> Originally Posted by *DMatthewStewart*
> 
> 1B) So we are saying "IF" it performed the same, we would add this modifier of "1". Yet they don't perform the same, nor do they use the same amount of power in these tests. So this modifier of "1" is being added A) without defined parameters and B) admittedly being used incorrectly, because it is added in "IF" they perform the same. They don't. So why is it being used?


I have no idea where you are getting this "modifier of 1" stuff.
Quote:


> Originally Posted by *DMatthewStewart*
> 
> 2) Secondly, you have to have a stable, constant, and sensible way to equate power consumption to speed/performance. We don't have that. Not even close. Here's a perfect example: a Lotus Elise consumes less power than a Shelby Mustang. Which one is faster? Power consumption is a totally different factor, with no real way to equate it to performance. None of the tests done make any sense at all.


You are making things needlessly complex. The graph, together with the TDP figures mentioned, contains more than enough information to plug in any variable needed to account for anything mentioned.
Quote:


> Originally Posted by *DMatthewStewart*
> 
> Btw, since posting this yesterday, I showed it to an electrical engineer/avionics guy from Sikorsky Aircraft. He's familiar with gaming computers. He took one look at the graph and said, "That's totally wrong".


Your acquaintance is mistaken.

There is nothing wrong with the graph, and it only takes 3rd grade math to verify this.

NVIDIA is saying the RX 480 uses 25% more power and the 1060 is 15% faster. That is where the efficiency (work done for a given unit of energy) figure comes from.


----------



## Scotty99

What's a better buy: waiting for the 1060, or grabbing a used EVGA SSC 970 for $200? It has 1.5 years of warranty left on it.


----------



## BulletSponge

Quote:


> Originally Posted by *Scotty99*
> 
> What's a better buy: waiting for the 1060, or grabbing a used EVGA SSC 970 for $200? It has 1.5 years of warranty left on it.


Buy an open box RX 480 4GB for cheap after people return all the ones they cannot unlock to 8GB.


----------



## Chargeit

Quote:


> Originally Posted by *Scotty99*
> 
> What's a better buy: waiting for the 1060, or grabbing a used EVGA SSC 970 for $200? It has 1.5 years of warranty left on it.


I'd wait for the 1060's to hit.


----------



## ekg84

Quote:


> Originally Posted by *Fb74*
> 
> So, no benches before days ?


Looks like reviews will be published the same day the card goes on sale, on the 19th.
Quote:


> Stay tuned for our full review of the GTX 1060 on July 19.


From the www.Bit-Tech.net announcement article, which can be found HERE

I think the guy from PCPer also mentioned that reviews will go live on the 19th.


----------



## FLCLimax

Quote:


> Originally Posted by *ekg84*
> 
> Quote:
> 
> 
> > Originally Posted by *Fb74*
> > 
> > So, no benches before days ?
> 
> 
> Looks like reviews will be published same day card goes on sale, on the 19th.
> Quote:
> 
> 
> > Stay tuned for our full review of the GTX 1060 on July 19.
> 
> 
> From www.Bit-Tech.net announcement article which can be found HERE
> 
> I think the guy from PCPer also mentioned that reviews will go live on 19th.

Reviews held back just like the 480. Means it's just as disappointing, and costs more.


----------



## JackCY

In shops by the start of autumn







And, as seen with the Flounder/Fanboy Edition 10x0s so far, almost none are sold at the actual non-FE MSRP, especially not 1070s; the cheapest is around +40 USD over what Nvidia set as the MSRP for AIBs.


----------



## NYU87

Quote:


> Originally Posted by *FLCLimax*
> 
> Reviews held back just like the 480. Means it's just as disappointing, and costs more.


Man, every Nvidia thread, you are very negative.


----------



## magnek

Quote:


> Originally Posted by *FLCLimax*
> 
> Reviews held back just like the 480. Means it's just as disappointing, and costs more.


Maybe they want to avoid getting labelled a "paper launch" again.









But yeah we already know the price will be higher, so won't be any surprises there.


----------



## rv8000

Quote:


> Originally Posted by *EightDee8D*
> 
> They said 15% ipc increase from 290 to rx480, *but actually it regressed.* maybe rop/rbe bottlenecks ? but if you compare fiji vs pol10's perf/tflops there's like 10% increase.
> 
> i think 4k shader vega gpu will be -/+ 5% of 1080 stock. considering they fix all the bottlenecks.


I apologize for being OT...

This simply isn't true.

For reference 1308/1700 R9 390 from hwbot vs der8auer RX 480 @ 1480/2250 (both with tess mod)

https://www.youtube.com/watch?v=Jq47qmwcus8&t=11m52s
http://hwbot.org/submission/3035224_otaner_3dmark___fire_strike_radeon_r9_390_15284_marks
_________

RX 480 pts/shader = 16612/2304 = 7.21

R9 390 pts/shader = 16815/2560 = 6.57

RX 480 clock speed advantage = (1480*100)/1308 = 13%

R9 390 Bwidth advantage = (435.2*100)/(288*1.35) = 12%

R9 390 pts/shader w/Bwidth reduc. = 6.57 * 0.88 = 5.78

RX 480 pts/shader w/clk adv. reduc. = 7.21 * 0.87 = 6.27

pts/shader increase RX 480/R9 390 = (6.27*100)/ 5.78 = 8.5%

Not to mention having half the ROPs and fewer TMUs; I'm honestly surprised they got that much out of another GCN revision considering the seriously cut-down hardware specs versus an R9 390.
_________
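The normalization above can be reproduced in a few lines. This sketch follows the post's own method, including its multiply-by-(1 − advantage) shortcut and the assumed 1.35 DCC factor on the RX 480's bandwidth:

```python
# Fire Strike graphics points per shader for each card.
rx480_pts_per_shader = 16612 / 2304   # ~7.21
r390_pts_per_shader = 16815 / 2560    # ~6.57

# Advantages, rounded to whole percents as in the post.
clk_adv = round(1480 / 1308 - 1, 2)          # RX 480 core clock advantage, ~13%
bw_adv = round(435.2 / (288 * 1.35) - 1, 2)  # R9 390 bandwidth advantage, ~12%

# Strip each card's advantage out of its per-shader score.
r390_adj = r390_pts_per_shader * (1 - bw_adv)     # ~5.78
rx480_adj = rx480_pts_per_shader * (1 - clk_adv)  # ~6.27

gain = rx480_adj / r390_adj - 1
print(round(gain * 100, 1))  # -> 8.5 (% per-shader gain for Polaris)
```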

Back on topic... really, Nvidia, another FE for what should be an entry/low-mid GPU? I'm expecting most reference samples to boost above their stated values, upwards of the high 1700s to a little over 1800 MHz; there isn't going to be much left in the tank beyond beating out a 980, if the 1070/1080 are anything to judge by.


----------



## EightDee8D

Quote:


> Originally Posted by *rv8000*
> 
> 
> 
> Spoiler: Warning: OT!
> 
> 
> 
> I apologize for being OT...
> 
> This simply isn't true.
> 
> For reference 1308/1700 R9 390 from hwbot vs der8auer RX 480 @ 1480/2250 (both with tess mod)
> 
> https://www.youtube.com/watch?v=Jq47qmwcus8&t=11m52s
> http://hwbot.org/submission/3035224_otaner_3dmark___fire_strike_radeon_r9_390_15284_marks
> _________
> 
> RX 480 pts/shader = 16612/2304 = 7.21
> 
> R9 390 pts/shader = 16815/2560 = 6.57
> 
> RX 480 clock speed advantage = (1480*100)/1308 = 13%
> 
> R9 390 Bwidth advantage = (435.2*100)/(288*1.35) = 12%
> 
> R9 390 pts/shader w/Bwidth reduc. = 6.57 * 0.88 = 5.78
> 
> RX 480 pts/shader w/clk adv. reduc. = 7.21 * 0.87 = 6.27
> 
> pts/shader increase RX 480/R9 390 = (6.27*100)/ 5.78 = 8.5%
> 
> Not to mention having half the rops and less tmus, I'm honestly surprised they got that much out of another GCN revision considering the seriously cut down hardware specs versus an R9 390.
> _________
> 
> 
> 
> Back on topic... really Nvidia another FE for what should be a entry-low mid GPU! I'm expecting most reference samples to boost above their stated values, upwards of high 1700's a little bit over 1800, there isn't going to be much left in the tank to beat out a 980 if the 1070/1080 are anything to judge by.


A synthetic bench =/= actual game performance. Just look at the TPU benchmark chart; heck, it's 4% behind the 390X at 1080p with 1% fewer TFLOPS (and that's what I was comparing).

The ROP/RBE/bandwidth bottleneck is irrelevant because that's not how you calculate progress. End results are what matter; otherwise we won't ever get more performance if they keep bottlenecking it somewhere. And looking at perf/TFLOPS, it regressed compared to the 290/390, but gained 10% compared to the Fury X.


----------



## rv8000

Quote:


> Originally Posted by *EightDee8D*
> 
> Synthetic bench =/= actual game performance. just look at tpu benchmark chart. heck it's 4% behind 390x on 1080p with 1% less tflops. (and that's what i was comparing)
> 
> rop/rbe/bw bottleneck is irrelevant because that's not how you calculate progress. end results are what that matters otherwise we won't ever get more performance if they keep bottlenecking it somewhere. and looking at perf/tflops it regressed compared to 290/390. but gained 10% compared to furyx.


So you're saying that while having 20% fewer shaders, HALF the ROPs, fewer TMUs, a smaller bus and less bandwidth, an RX 480 performing 4% behind in one gaming benchmark average doesn't have any form of IPC gain?! Please rethink what you wrote. What was that common saying going around lately with all the new GPU releases... "TFLOPS =/= gaming performance"...


----------



## ekg84

Here's another 1060 video preview


----------



## EightDee8D

Quote:


> Originally Posted by *rv8000*
> 
> So you're saying that while having 20% less shaders, HALF the ROPs, less TMU's, a smaller bus and less bandwidth, a RX480 performing 4% behind in *1 gaming benchmark* doesn't have any form of IPC gain?! Please rethink what you wrote. What was that common saying going around lately with all the new GPU releases... "TFlops =/= gaming performance"....










TPU has like 10-14 games, not just 1.

Look at Kepler vs Maxwell: it gained 35% more performance and it shows, but this GCN 3 to 4, 15%-per-CU performance uplift doesn't show up anywhere. Why are you forgetting about the clockspeed bump it got? Regardless, I was strictly comparing perf/TFLOPS, nothing else, and you can't prove me wrong. They can increase CU performance by 100%, but if they pair it up with 8 ROPs we won't see that gain, so it's irrelevant.


----------



## rv8000

Quote:


> Originally Posted by *EightDee8D*
> 
> 
> 
> 
> 
> 
> 
> 
> tpu has like 10-14 games not just 1.
> 
> look at kepler vs maxwell, it gained 35% more performance and it shows, but this gcn 3 to 4, 15%/cu performance uplift doesn't show up anywhere. why are are you forgetting about clockspeed bumbp it got ? regardless i was strictly comparing perf/tflops nothing else. and you can't prove me wrong
> 
> 
> 
> 
> 
> 
> 
> . they can increase 100% cu performance but if they pair it up with 8rop we won't see that gain, so it's irrelevant.


I can't help it if you refuse to see what was so wrong with your previous statement: significantly cut-down hardware performing almost identically to hardware that is superior by over 20% on hardware specs alone. By a simple logical process, one has to realize that there was an IPC increase, when the core frequency increase only accounts for 13% of that 20+%...

And now you're comparing Kepler-to-Maxwell vs GCN-to-GCN; one is an entirely new architecture, the other is slightly tweaked.


----------



## EightDee8D

Quote:


> Originally Posted by *rv8000*
> 
> I can't help it if you refuse to see what was so wrong with your previous statement. Significantly cut down hardware performing almost identically to hardware that is superior by over 20% based on hardware specs alone. By a simple logical process one has to realize that there was IPC increase when core frequency increase only accounts for 13% of that 20+%...
> 
> And now you're comparing Kepler to Maxwell vs GCN to GCN; one is an entirely new architecture, one is slightly tweaked.


You are talking about architecture gains; I'm talking about the RX 480 card, not GCN 4. And looking at its TFLOPS (5.8 for the 480 vs 5.9 for the 390X), it doesn't show any improvement. Now if you still can't see that, I can't explain it any better. It's already way too OT, so I'll leave it here.


----------



## rv8000

Quote:


> Originally Posted by *EightDee8D*
> 
> Your are talking about architecture gains, i'm talking about the rx480 card, not gcn 4. and looking at its tflops ( 5.8 for 480 vs 5.9 for 390x ) it doesn't show any improvements. now if you still can't see that i can't explain any better. it's already way too ot so i'll leave it here.


It's clear you have no idea what you're talking about in regards to IPC gain, simple comparisons of hardware specs, and the difference between new architectures and small revisions.

Sorry for the OT; please take what you like from my earlier post on IPC gains, in response to his statement that GCN has somehow regressed in IPC in relation to gaming performance.

/end & hello egghead (block) list


----------



## EightDee8D

So many little snowflakes here.


----------



## sugarhell

Quote:


> Originally Posted by *EightDee8D*
> 
> Your are talking about architecture gains, i'm talking about the rx480 card, not gcn 4. and looking at its tflops ( 5.8 for 480 vs 5.9 for 390x ) it doesn't show any improvements. now if you still can't see that i can't explain any better. it's already way too ot so i'll leave it here.


Tflops is such a bad metric to compare cards.

You know it calculates the peak shader performance. Nothing else.

What about geometry processing and output, rasterization, bandwidth, ROP performance , texture filtering, texel per second?

Yeah, nothing about all these.

The 480 matches the 390 with much less hardware. If you can't see that, then you are blind.


----------



## Yungbenny911

Wow, I never thought an xx60 GPU would not support SLI. I thought the whole point was for it to be bang for buck. I remember when I had my GTX 660 SLI (faster than a GTX 680); that was one of my best setups.

I'm sure Nvidia is trying to prevent people from jumping on the xx60 SLI train so they can promote their xx80 sales, which is a total shame. If there is really no SLI support, then they just threw the low-end GPU market to AMD.


----------



## Diogenes5

Quote:


> Originally Posted by *Yungbenny911*
> 
> Wow, i never thought an xx60 GPU would not support SLI. I thought the whole deal was for it to be bang for buck. I remember when i had my GTX 660 SLI (faster than GTX 680), that was one of my best setups.
> 
> I'm sure nvidia is trying to prevent people from jumping on the xx60 SLI train, so they can promote their xx80 sales which is a total shame. If there is really no SLI support, then they just threw the low end GPU market to AMD.


AMD threw down the gauntlet and Nvidia responded. An RX 480 is 970/980 performance at around $240. Now the 1060 will be around the performance of a 980/390X at stock +5%. Nvidia always wants people to buy their xx70 line; that's where perf/$ is highest. By basically matching AMD, Nvidia made the GTX 1060 have better perf/$ than the xx70. So first they gimp it with a weird amount of RAM, and then they gimp it with no SLI.

We know that the Pascal line overclocks to over 2000 MHz, meaning about a 12-14% boost in real-world performance, so the 1060 will likely match a Fury when all is said and done. All for $250 (or more accurately $270 to $300 with the AIBs). It's total bull**** that they cut SLI, but that's Nvidia for you.


----------



## epic1337

Well, SLI was hard to come by to begin with: you needed a board with even lanes, e.g. 4+4, 8+8, or 16+16.
Most mainstream boards have 16+4, so it's unlikely you'd run SLI unless you drop it to 4+4, and that's still not guaranteed.


----------



## Scotty99

Quote:


> Originally Posted by *Diogenes5*
> 
> AMD threw down the gauntle and Nvidia responded. A Rx 480 is 970/980 performance at around $240. Now the 1060 will be around the performance of a 980/390x at stock +5%. Nvidia always wants people to buy their xx70 line, that's where perf/$ is highest. By matching AMD basically, Nvidia made the GTX 1060 have better perf/$ than the xx70. So first they gimp it with a weird amount of ram and then they gimp it with no SLI.
> 
> We know that the pascal line overclocks to over 2000 mhz meaning about a 12-14% boost in real world performance performance so the 1060 will likely match a Fury in performance when all is said and done. All for $250 (or more accurately $270 to $300 with the AIB's). It's total bull**** that they cut SLI but that's Nvidia for you.


The 1060 isn't as fast as a 980; go watch PCPer's video, they even state as much (they're in Nvidia's pocket, so if anyone knows, they do).


----------



## tajoh111

Quote:


> Originally Posted by *EightDee8D*
> 
> Your are talking about architecture gains, i'm talking about the rx480 card, not gcn 4. and looking at its tflops ( 5.8 for 480 vs 5.9 for 390x ) it doesn't show any improvements. now if you still can't see that i can't explain any better. it's already way too ot so i'll leave it here.


Yup.

I don't think there was much of an IPC gain either.

We have to use TFLOPS as the basis because it takes into account both core count and frequency. IPC is instructions per cycle, but because GPUs have such varying clusters of cores, you have to fold the core count into the equation, which gives you TFLOPS, i.e. TFLOPS = frequency * cores * 2.

Even Mahigan was using this metric before his prediction went off the rails.

IPC is a term normally reserved for CPUs, as the efficiency of work done per clock. Since GPU core counts vary so much, we have to include the core count, which turns the question into how much a TFLOP translates into gaming performance, or a ratio of real performance vs theoretical.

Because of this, IPC is a great measurement of core occupancy, or how efficiently cores are being put towards actual work.

If we look at the performance of polaris, most of the gains from each CU have come from clocks. And the 15% just seems like the difference in clocks, not improvements in the architecture.

What is strange is that IPC appears to have gone down, because 5.8 TFLOPS of Polaris performs somewhere along the lines of 5.2-5.4 TFLOPS of 390X.

The problem for Vega is that, pure bandwidth aside, if it is mostly the same architecture they are going to run into the same bottleneck issues: we are likely to get 64 ROPs like Fiji, as that seems to be a limit of the architecture. And considering how inefficiently Fiji's TFLOPS turned into real-world performance, this doesn't bode well for Vega.

If Polaris had made some IPC gains where 5.8 TFLOPS was equal to 7 TFLOPS of Hawaii, I would have much higher hopes for it.
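The TFLOPS figures traded back and forth in this thread can be sanity-checked with the formula above; a minimal sketch, using the cards' reference clocks:

```python
# Peak FP32 throughput: each shader retires one FMA (2 FLOPs) per cycle.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000.0

rx_480 = tflops(2304, 1.266)   # RX 480 boost clock -> ~5.83 TFLOPS
r9_390x = tflops(2816, 1.050)  # R9 390X clock      -> ~5.91 TFLOPS
print(round(rx_480, 2), round(r9_390x, 2))
```

Which is where the "5.8 for the 480 vs 5.9 for the 390X" figures come from.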


----------



## gamervivek

Considering 32 ROPs of Ellesmere are matching 64 on Hawaii, this bodes fairly well for Vega, unless of course the bottleneck was elsewhere.


----------



## epic1337

Quote:


> Originally Posted by *gamervivek*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Considering 32ROPs of Ellsmere are matching 64 on Hawaii, this bodes fairly well for Vega unless of course the bottleneck was elsewhere.


I mentioned it before: the bottleneck was the bandwidth,
which means they could afford to reduce ROPs if they could make each individual ROP perform closer to its peak throughput.

http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/4
Quote:


> ROP operations are extremely bandwidth intensive, so much so that even when pairing up ROPs with memory controllers, the ROPs are often still starved of memory bandwidth.
> 
> The solution to that was rather counter-intuitive: decouple the ROPs from the memory controllers. By servicing the ROPs through a crossbar AMD can hold the number of ROPs constant at 32 while increasing the width of the memory bus by 50%. The end result is that the same number of ROPs perform better by having access to the additional bandwidth they need.


So in AMD's case, they had to reduce the ROPs whenever they reduced their bus width and overall bandwidth;
otherwise their ROPs would just end up running in a less efficient state, making more ROPs simply pointless.

also, this indicates one thing, Tonga and Hawaii are mostly bandwidth starved when it comes to their ROPs.
Hawaii = 64ROPs @ 384GB/s = 6GB/s per ROP
Tonga = 32ROPs @ 182.4GB/s = 5.7GB/s per ROP
Polaris = 32ROPs @ 256GB/s = 8GB/s per ROP
* take note that Polaris has a much higher theoretical bandwidth due to more efficient compression.
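The per-ROP figures above fall out directly from the quoted specs:

```python
# (ROP count, memory bandwidth in GB/s) for each chip, as quoted above.
cards = {
    "Hawaii (390)": (64, 384.0),
    "Tonga (380X)": (32, 182.4),
    "Polaris (480)": (32, 256.0),
}

for name, (rops, bandwidth) in cards.items():
    # Bandwidth each ROP can draw on, before compression gains.
    print(f"{name}: {bandwidth / rops:.1f} GB/s per ROP")
```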


----------



## EightDee8D

I was right: there's no IPC increase, just a frequency increase. What they did is compare the 290 vs the 480 in several games and divide their fps by CU count. Read the actual footnote 1* here -


Source - https://forums.overclockers.co.uk/showthread.php?t=18737717

So, as I said, there's no IPC gain per CU at the same frequency.







and it's kind of pathetic imo.

Also, kiss goodbye to the 2.7x perf/W claim for the 470; they are comparing against the wrong TDP for the 270X (180 W). Basically it's a meh architecture aside from tessellation and the 34% effective memory bandwidth increase from DCC, and even that isn't enough.


----------



## epic1337

Well, neither did Pascal; the perf increase is mostly attributed to its vastly higher clocks, and even worse, perf per CU has slightly regressed due to diminishing returns.


----------



## EightDee8D

Quote:


> Originally Posted by *epic1337*
> 
> well, neither did Pascal, the perf increase are mostly attributed to their vastly higher clocks, and even worse perf per CU had slightly regressed due to diminishing returns.


I know that, but they are so far behind frequency-wise, and they can't just increase the CU count, they will hit a die-size wall. They really need an IPC increase this time, or it's goodbye to GCN and time to start a new architecture. It's just not good enough to battle future Nvidia architectures.


----------



## NvNw

:x If you buy the founder's edition, better to like that cooler...

And no water compatibility, unless you leave the cables hanging over...


----------



## rv8000

Quote:


> Originally Posted by *tajoh111*
> 
> Yup.
> 
> I don't think there was much of an IPC gain either.
> 
> Why we have to use tflops as a basis as it takes into account both core count and frequency. IPC is information per cycle but because GPU's have such varying clusters of cores, you have to integrate this into the equation which results as tflops.I.e tflops = frequency * cores * 2
> 
> Even Mahigan was using this metric before his prediction went off the rains.
> 
> IPC is best translated because it is normally a term reserved for CPU's as efficiency of work done per clock. Since GPU cores vary so much, we have to include the core count. This is then how much does a tflop equal into performance for gaming. or a ratio of real performance vs theoretical.
> 
> Because of this IPC is a great measurement of core occupancy or how efficient cores are being used towards actual work.
> 
> If we look at the performance of polaris, most of the gains from each CU have come from clocks. And the 15% just seems like the difference in clocks, not improvements in the architecture.
> 
> What is strange is *IPC appears to have gone down* because 5.8tflops of polaris performs somewhere along the lines of *5.2-5.4* tflops of 390x.
> 
> The problem for vega is besides pure bandwidth, if it is most the same architecture, they are going to run into the same bottleneck issues. This being we are likely to get 64ROP like fiji and it is just a limit on the architecture. And considering how inefficiently fiji tflops turned into real world performance, this doesn't bode well for vega.
> 
> If Polaris made some IPC gains were 5.8tflops was equal to 7tflops of hawaii, I would have much higher hopes for it.


The first bold statement is incorrect, as I've already shown per-shader performance is up vs Hawaii by at least ~8.5%; and this is in a synthetic situation which, since you cling to the idea that TFlops is a good representation of real performance, should best represent IPC gains.

Second, if we take the TPU averaged scores that were referenced earlier: assuming the 4% average difference, by your standards that would put P10 at 5.68 TFLOPS of "Hawaii performance". There is also proof from reviews and users that, depending on resolution and on a per-game basis, reference P10 is throttling. Taking into account the minimum core clock of 1120 MHz, this brings P10 down to ~5.16 TFLOPS. Now, considering this range (5.16-5.83), there are games in which the RX 480 is faster than the 390X in both DX11 and DX12, games where performance is almost identical, and games in which it is behind. How could P10 beat Hawaii with significantly less hardware and lower IPC in ANY game?

You have to be a moron to say CU performance has regressed when P10 is Hawaii, or GCN 1, on a HUGE diet, yet performance is almost identical. P10 is not an enthusiast part. It's not a big chip at a premium price with a crazy cooler. I really can't fathom how people expect so much of this card. For AMD to actively produce a product that has regressed in performance, when they have all their chips in the GCN bag, would be the STUPIDEST thing on the face of the planet. You heard it here: AMD, pack your bags, your engineers can do nothing but go backwards.
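The two effective-TFLOPS figures in this post can be reproduced with the same shaders × clock × 2 arithmetic (the 0.96 factor is the 4% TPU gap mentioned above):

```python
# Stock P10 vs its 1120 MHz throttle floor, and the 390X scaled by the 4% gap.
p10_stock = 2304 * 2 * 1.266 / 1000              # ~5.83 TFLOPS at boost clock
p10_throttled = 2304 * 2 * 1.120 / 1000          # ~5.16 TFLOPS at the throttle floor
hawaii_equiv = (2816 * 2 * 1.050 / 1000) * 0.96  # ~5.68 TFLOPS of "Hawaii performance"
print(round(p10_stock, 2), round(p10_throttled, 2), round(hawaii_equiv, 2))
```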


----------



## epic1337

Quote:


> Originally Posted by *EightDee8D*
> 
> i know that, but they are so behind frequency wise and they can't just increase cu count, they will hit die size wall. they really need ipc increase this time. or goodbye to gcn and start new architecture. it's just not good enough to battle future nvidia architectures.


They don't actually need to abandon GCN; they'll just have to revise it a bit more than they currently do.
They've made multiple changes so far to make each CU more efficient in bandwidth usage, and they've made quite a bit of progress.

The last and most game-changing step would be pushing for 16-bit compute (half-precision) support, the same as what NV is currently doing.



Quote:


> Originally Posted by *NvNw*
> 
> 
> 
> :x If you buy the founder's edition, better to like that cooler...
> 
> And no water compatibility, unless you leave the cables hanging over...


Very simple solution: buy an 8-pin PCI-E extension cable, cut off the male end and solder that onto the board.


----------



## EightDee8D

Quote:


> Originally Posted by *epic1337*
> 
> they don't actually need to abandon GCN, they'll just have to revise it a bit more than it currently is.
> they've made multiple changes so far in making each CU more efficient in bandwidth usage, and they've made quite a bit of progress.
> 
> the last and most game-changing step would be pushing for 16bit compute or half-precision support, the same with what NV is currently doing.


Compare the 280X (a 2011 28nm GPU) vs the 480 (a 2016 14nm GPU): there's barely any IPC increase, and that's with 4 revisions. IMO that's the most pathetic thing about GCN.


----------



## Klocek001

GTX 1060 iChill Gaming OC X2

*base clock: 1784
boost clock: 1860*
mem: 8200

Wow, this thing should break 2 GHz out of the box with a custom cooler.


----------



## epic1337

Quote:


> Originally Posted by *EightDee8D*
> 
> compare 280x ( 2011 28nm gpu) vs 480 (2016 14nm gpu ) there's barely any ipc increase, and that's with 4 revisions. imo that's the most pathetic thing about GCN.


Why don't you compare the bus width and bandwidth between the two uarchs?
280X (Tahiti) = 2048:128:32 | 384bit @ 288GB/s = 100% perf
380X (Tonga) = 2048:128:32 | 256bit @ 182.4GB/s = 100% perf
480 (Polaris) = 2304:144:32 | 256bit @ 256GB/s = ~110% perf

You see the thing here? Tonga has a much narrower bus, yet performs roughly the same as Tahiti.
This is because of multiple design changes: one is the introduction of compression, the other is ROP optimizations.

You're too fixated on raw CU throughput and forget that the front-end also needs optimizations.
With optimizations on the front-end, they can fit more CUs and thus increase card perf without dramatically increasing die size.


----------



## EightDee8D

Quote:


> Originally Posted by *epic1337*
> 
> why don't you compare the buswidth and bandwidth between the two uarch?
> 280X (Tahiti) = 2048:128:32 | 384bit @ 288GB/s = 100% perf
> 380X (Tonga) = 2048:128:32 | 256bit @ 182.4GB/s = 100% perf
> 480 (Polaris) = 2304:144:32 | 256bit @ 256GB/s = ~110% perf
> 
> you see the thing here? Tonga has much less buswidth, yet performs roughly the same as Tahiti.
> this is because of multiple design changes, one is the introduction of compression, the other is ROP optimizations.
> 
> you're being too much biased in raw CU throughput and forgot that the front-end also needs optimizations.


Run Tahiti @ 1266 MHz and I guarantee it will be just 10% behind the 480, because it has 12% fewer CUs. 5 years and that's what they have achieved? Nobody cares if you can achieve the same performance with a 32-bit bus and 4 ROPs, because you still need more CUs; where is the progression in perf/CU/MHz? You know they cannot just keep adding CUs, right?

move to 480 thread if you want to continue this, way ot atm.


----------



## epic1337

Quote:


> Originally Posted by *EightDee8D*
> 
> run tahiti @ 1266mhz and i guarantee it will just 10% behind 480 because 12% less cu. 5 years and that's what they have achieved ?. nobody cares if you can achieve that same performance with 32bit bus and 4rop because you are still requiring more cu, where the hell is progression in perf/cu/mhz ? you know they cannot just keep adding cu right ?
> 
> move to 480 thread if you want to continue this, way ot atm.


The progress is in perf per die cost; are you so short-sighted as to not see this as progress?

Furthermore, as I've mentioned, you're fixated on raw CU throughput: the previous GCN could only handle 32 CUs per 32 ROPs, while Polaris can now handle 36 CUs per 32 ROPs.
Or to put it in a literal sense, *the ROPs gained roughly a 10% increase in IPC*. Oh hey, that's an increase in IPC, imagine that surprise.

So overall, instead of making a die that's ~20% larger using a 2304:144:36 | 320-bit configuration and only gaining ~10% in perf,
they've managed to make a die that's only ~10% larger using a 2304:144:32 | 256-bit configuration while gaining the same ~10% in perf.
That's an improvement to the uarch itself, as the die will cost less for the same increase in perf.


----------



## tajoh111

Quote:


> Originally Posted by *gamervivek*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Considering 32ROPs of Ellsmere are matching 64 on Hawaii, this bodes fairly well for Vega unless of course the bottleneck was elsewhere.


Since they are near doubling shader count again, they are going to run into similar bottlenecks again with Vega, if vega only gets 64 ROPs.

It would have been better if Polaris started with 64 ROPs while Vega got 128.
Quote:


> Originally Posted by *epic1337*
> 
> why don't you compare the buswidth and bandwidth between the two uarch?
> 280X (Tahiti) = 2048:128:32 | 384bit @ 288GB/s = 100% perf
> 380X (Tonga) = 2048:128:32 | 256bit @ 182.4GB/s = 100% perf
> 480 (Polaris) = 2304:144:32 | 256bit @ 256GB/s = ~110% perf
> 
> you see the thing here? Tonga has much less buswidth, yet performs roughly the same as Tahiti.
> this is because of multiple design changes, one is the introduction of compression, the other is ROP optimizations.
> 
> you're being too much biased in raw CU throughput and forgot that the front-end also needs optimizations.
> with optimizations on the front-end, they can fit more CUs and thus increase card perf without dramatically increasing die-size.


If we're going to fawn over those improvements, what are we going to say about Pascal? It has a 50% performance increase over the RX 480 with the same bandwidth as Polaris, and that's with 25% of its shaders disabled.

The problem with marveling at these accomplishments is that Tonga didn't actually shrink the die or decrease the transistor count vs Tahiti/Hawaii. Tonga is a bit bigger and has 20% more transistors than Tahiti, but performance is maybe 5% better overall. I would be more impressed if the changes in Tonga, and similarly Polaris (setting aside changes due to the node), decreased the transistor count for the same performance, or increased the performance per transistor or per mm2 of die. This is what happened in the transition from Kepler to Maxwell, and from Maxwell to Pascal, and it is what AMD needs to do to catch up.

Why AMD has been letting so many people down is that their changes to GCN are not increasing the performance per transistor or per die size; that comes from good architectural changes. If their changes were actually increasing IPC, then for the number of transistors the RX 480 has, it should be 20% faster than it is. Hawaii has double precision, which occupies a lot of transistors, is 28nm, and has better overall performance than the RX 480 for 9% more transistors.

All these changes are just not getting the job done. For a pure gaming part with no double precision, unlike Hawaii, on a superior node, 5.7 billion transistors of Polaris should outperform 6.2 billion Hawaii transistors.
*
Pascal was able to beat the Titan X by 30% using 10% fewer transistors.

The RX 480 loses to the 390X by 10% while giving double precision the boot and using 9% fewer transistors, and they were the ones that made the bigger architectural changes.*

This is the most disappointing thing. AMD should have gained more from the architectural changes.

What we have is AMD shuffling its transistors around, beefing some things up while making other things weaker, which for all their changes = the same performance. That is a waste of R&D. What we need from GCN, if they are to keep using it, are changes that result in overall increases in performance across the board, not weakening in some areas while others improve so the overall performance stays the same.

We have known about the 232mm2 part for a while now, but when you take the lowest performance expectation anyone on this forum had and combine it with worse power consumption than anyone could have predicted, you are left with a mostly disappointing feeling toward Polaris as an architecture. Anyone saying anything else is lowering the bar for their favorite company.


----------



## epic1337

Quote:


> Originally Posted by *tajoh111*
> 
> Since they are near doubling shader count again, they are going to run into similar bottlenecks again with Vega, if vega only gets 64 ROPs.
> 
> It would have been better if Polaris started with 64 ROPs while Vega got 128.


It won't be as drastic as before, as each ROP can now sustain more CUs.
Quote:


> Originally Posted by *tajoh111*
> 
> If we're going to fawn over those improvements, what are we going to say about Pascal? It has a 50% performance increase over the RX480 with the same bandwidth as Polaris. And this is with 25% of its shaders disabled.
> 
> The problem with being marveled by these accomplishments is that Tonga didn't actually shrink the die or decrease the transistor count vs Tahiti/Hawaii. Tonga is actually a bit bigger and has 20% more transistors than Tahiti but performs maybe 5% better overall. I would be more impressed if the changes in Tonga, and similarly Polaris (excluding changes due to the node), decreased the transistor count for the same performance or increased performance per transistor or per mm2 of die. That is what happened in the transition from Kepler to Maxwell, and from Maxwell to Pascal. And that is what AMD needs to do to catch up.
> 
> Why AMD keeps letting so many people down is that their changes to GCN are not increasing performance per transistor or per mm2 of die, which is what good architectural changes deliver. If their changes were actually increasing IPC, then for the number of transistors the RX480 has, it should be 20% faster than it is. Hawaii carries double precision, which occupies a lot of transistors, is on 28nm, and still has better overall performance than the RX480 for about 9% more transistors.
> 
> All these changes are just not getting the job done. For a pure gaming part with no double precision, unlike Hawaii, on a superior node, Polaris's 5.7 billion transistors should outperform Hawaii's 6.2 billion transistors.
> *
> Pascal was able to beat the Titan X by 30% using 10% fewer transistors.
> 
> The RX480 loses to the 390X by 10% while giving double precision the boot and using 9% fewer transistors, and AMD were the ones who made the bigger architectural changes.*
> 
> This is the most disappointing part. AMD should have gained more from the architectural changes.
> 
> What we have is AMD shuffling its transistors around to beef some things up while making other things weaker, which for all their changes = the same performance. That is a waste of R&D. What we need from GCN, if they are going to keep using it, are changes that deliver overall performance increases across the board, not weakening some areas while improving others so the overall performance stays the same.


the "perf gain" is mostly attributable to the difference in clock speed, although it can be argued otherwise, since Pascal also uses slightly fewer transistors.

Pascal GP104 doesn't have much double-precision throughput; it carries only a token number of dedicated FP64 units (a 1/32 rate), so heavy DP workloads are impractical on it.
comparatively GP100 (Tesla P100) has 40% more CUDA cores but 15.3 billion transistors; a large share of that budget goes to DP units (one FP64 unit per two FP32).
GP106 = 1280:80:48 @ 1708MHz | ???mm^2 @ ??? transistors = 0.14 TFLOPs double precision
GP104 = 2560:160:64 @ 1733MHz | 314mm^2 @ 7.2 billion transistors = 0.28 TFLOPs double precision
GP100 = 3584:???:?? @ 1480MHz | 610mm^2 @ 15.3 billion transistors = 5.30 TFLOPs double precision

i can't say much about transistor count, as there's a notable difference between FinFET and planar.
on a side note, the reason Tonga has such a high transistor count is probably that it has a disabled 128-bit portion of its memory bus, for whatever reason.
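The double-precision figures in that list follow directly from cores x 2 FLOPs (fused multiply-add) x clock x FP64 ratio. A quick sketch, assuming the commonly cited FP64 rates (1/32 for GP104/GP106, 1/2 of FP32 via dedicated units for GP100):

```python
def dp_tflops(cuda_cores: int, boost_ghz: float, fp64_ratio: float) -> float:
    # Peak FLOPs = cores * 2 (one FMA = 2 ops) * clock, scaled by the FP64 rate.
    # cores * 2 * GHz yields GFLOPs, so divide by 1000 for TFLOPs.
    return cuda_cores * 2 * boost_ghz * fp64_ratio / 1000.0

print(round(dp_tflops(1280, 1.708, 1 / 32), 2))  # GP106 -> 0.14
print(round(dp_tflops(2560, 1.733, 1 / 32), 2))  # GP104 -> 0.28
print(round(dp_tflops(3584, 1.480, 1 / 2), 1))   # GP100 -> 5.3
```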


----------



## Catscratch

Quote:


> Originally Posted by *tajoh111*
> 
> Since they are nearly doubling the shader count again, they are going to run into similar bottlenecks with Vega if Vega only gets 64 ROPs.
> 
> It would have been better if Polaris started with 64 ROPs while Vega got 128.
> If we're going to fawn over those improvements, what are we going to say about Pascal? It has a 50% performance increase over the RX480 with the same bandwidth as Polaris. And this is with 25% of its shaders disabled.
> 
> The problem with being marveled by these accomplishments is that Tonga didn't actually shrink the die or decrease the transistor count vs Tahiti/Hawaii. Tonga is actually a bit bigger and has 20% more transistors than Tahiti but performs maybe 5% better overall. I would be more impressed if the changes in Tonga, and similarly Polaris (excluding changes due to the node), decreased the transistor count for the same performance or increased performance per transistor or per mm2 of die. That is what happened in the transition from Kepler to Maxwell, and from Maxwell to Pascal. And that is what AMD needs to do to catch up.
> 
> Why AMD keeps letting so many people down is that their changes to GCN are not increasing performance per transistor or per mm2 of die, which is what good architectural changes deliver. If their changes were actually increasing IPC, then for the number of transistors the RX480 has, it should be 20% faster than it is. Hawaii carries double precision, which occupies a lot of transistors, is on 28nm, and still has better overall performance than the RX480 for about 9% more transistors.
> 
> All these changes are just not getting the job done. For a pure gaming part with no double precision, unlike Hawaii, on a superior node, Polaris's 5.7 billion transistors should outperform Hawaii's 6.2 billion transistors.
> *
> Pascal was able to beat the Titan X by 30% using 10% fewer transistors.
> 
> The RX480 loses to the 390X by 10% while giving double precision the boot and using 9% fewer transistors, and AMD were the ones who made the bigger architectural changes.*
> 
> This is the most disappointing part. AMD should have gained more from the architectural changes.
> 
> What we have is AMD shuffling its transistors around to beef some things up while making other things weaker, which for all their changes = the same performance. That is a waste of R&D. What we need from GCN, if they are going to keep using it, are changes that deliver overall performance increases across the board, not weakening some areas while improving others so the overall performance stays the same.
> 
> We have known about the 232mm2 part for a while now, but when you take the lowest performance expectation anyone on this forum had and combine it with worse power consumption than anyone could have predicted, you are left with a mostly disappointed feeling towards Polaris as an architecture. Anyone saying anything else is lowering the bar for their favorite company.


Yeah, 1900MHz Pascal beating 1000MHz Titan X by 30% is clearly an architectural win







Well, it's still an improvement, and I agree AMD needed a bit more gain from Polaris, especially higher clocks.


----------



## tajoh111

Quote:


> Originally Posted by *Catscratch*
> 
> Yeah, 1900MHz Pascal beating 1000MHz Titan X by 30% is clearly an architectural win
> 
> 
> 
> 
> 
> 
> 
> Well, it's still an improvement, and I agree AMD needed a bit more gain from Polaris, especially higher clocks.


If there wasn't an increase in power consumption or die size, i.e. nothing that actually cost Nvidia anything, then it is.

As far as engineering goes, there are two concerns: cost and performance. Power consumption adds to the cost, as does die size.

If the cards were clocked at 10000MHz with no increase in power consumption or die size, but gained 40% more performance, it wouldn't matter from an engineering standpoint. The cards would still cost the same for that level of performance, and that is what matters most to the company. Where Maxwell and Pascal triumph is that they can clock so high without the wattage climbing much. Add in that they have fewer cores than AMD, and that is part of the reason they need high frequency to compete.

In addition, Pascal really runs at about 1750MHz, while the Titan X runs at 1130MHz and has 20% more cores, and that is most of the reason there is only a 30% difference in performance. Also, Pascal at the top end is highly bandwidth-limited, which shows in the less-than-proportional gap between the 1070 and 1080.
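The clock/core arithmetic above is easy to check (a back-of-envelope sketch using the figures as stated in the post, not official boost clocks):

```python
# Raw shader throughput scales roughly with cores x clock (MHz),
# using the numbers quoted in the post above.
gtx1080 = 2560 * 1750   # Pascal GP104, typical sustained boost per the post
titan_x = 3072 * 1130   # Maxwell GM200, typical boost per the post

ratio = gtx1080 / titan_x
print(f"throughput ratio: {ratio:.2f}")  # ~1.29, close to the observed ~30% gap
```

So the observed gap tracks raw throughput almost exactly, which is the point being argued either way in this exchange.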


----------



## C2H5OH

Quote:


> Originally Posted by *epic1337*
> 
> they don't actually need to abandon GCN, they'll just have to revise it a bit more than it currently is.
> they've made multiple changes so far in making each CU more efficient in bandwidth usage, and they've made quite a bit of progress.
> 
> the last and most game-changing step would be pushing for 16bit compute (half-precision) support, the same with what NV is currently doing.
> ......


Actually, there is a bit of a misconception about FP16.

AMD's RX Polaris does support native FP16, while Nvidia's GTX Pascal doesn't, and this has been discussed at great length at Beyond3D. Fast FP16 is reserved for the Tesla cards.

If I'm not mistaken they managed to test it and found that it can execute FP16, but nowhere near fast enough to matter.

I will quote Andrew Lauritzen here:
Quote:


> TBH it doesn't really bother me that they don't have fast fp16 on desktop, but they should have been a lot more upfront about it when they launched consumer Pascal. Lots of folks in the games industry are still working under the incorrect assumption that fp16 is supported and faster on NVIDIA.


Link to the full quote and thread:
https://forum.beyond3d.com/threads/nvidia-pascal-announcement.57763/page-80

But as we know, Nvidia are masters at this PR game.


----------



## C2H5OH

Quote:


> Originally Posted by *tajoh111*
> 
> If there wasn't an increase in power consumption or die size, i.e. nothing that actually cost Nvidia anything, then it is.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> As far as engineering goes, there are two concerns: cost and performance. Power consumption adds to the cost, as does die size.
> 
> If the cards were clocked at 10000MHz with no increase in power consumption or die size, but gained 40% more performance, it wouldn't matter from an engineering standpoint. The cards would still cost the same for that level of performance, and that is what matters most to the company. Where Maxwell and Pascal triumph is that they can clock so high without the wattage climbing much. Add in that they have fewer cores than AMD, and that is part of the reason they need high frequency to compete.
> 
> In addition, Pascal really runs at about 1750MHz, while the Titan X runs at 1130MHz and has 20% more cores, and that is most of the reason there is only a 30% difference in performance. Also, Pascal at the top end is highly bandwidth-limited, which shows in the less-than-proportional gap between the 1070 and 1080.


Maxwell was already capable of reaching 1500MHz, so maybe it's a slight tweak along with TSMC's process gains:
Quote:


> TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology.


In the end, most of the gains in Pascal are due to the process.


----------



## epic1337

Quote:


> Originally Posted by *C2H5OH*
> 
> Actually, there is a bit of a misconception about FP16.
> 
> AMD's RX Polaris does support native FP16, while Nvidia's GTX Pascal doesn't, and this has been discussed at great length at Beyond3D. Fast FP16 is reserved for the Tesla cards.
> 
> If I'm not mistaken they managed to test it and found that it can execute FP16, but nowhere near fast enough to matter.
> 
> I will quote Andrew Lauritzen here:
> Link to the full quote and thread:
> https://forum.beyond3d.com/threads/nvidia-pascal-announcement.57763/page-80
> 
> But as we know, Nvidia are masters at this PR game.


shockingly unexpected, they hadn't clarified this so i wasn't aware, but wow Nvidia you're getting more sly.
though i'm aware of AMD's cards supporting FP16; it's just that they hadn't specifically designed it that way.

it would be better if more code were shifted to FP16, as you can get about 2x the performance out of it.
and not everything needs high precision; rendering grass, for example, can tolerate errors.
as a matter of fact it might even look more realistic with errors, since they introduce more random variation.
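The precision trade-off is easy to demonstrate: Python's struct module can round-trip a value through IEEE-754 binary16 (a sketch of the numeric effect only, independent of any particular GPU):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE-754 half precision (struct format 'e')."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Only ~3 decimal digits survive: fine for shading grass,
# far too coarse for coordinates or long accumulations.
print(to_fp16(1.0001))  # -> 1.0 (the extra digit is below fp16 resolution)
print(to_fp16(0.1))     # -> 0.0999755859375
```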


----------



## Exeed Orbit

It appears that some of the people in here need to go work for Nvidia and AMD, since they know everything there is to know about making GPUs.


----------



## oxidized

Quote:


> Originally Posted by *Exeed Orbit*
> 
> It appears that some of the people in here need to go work for Nvidia and AMD, since they know everything there is to know about making GPUs.


+


----------



## Fb74

GTX1060 Turbo Model by Asus announced at 279 euros in France (20% VAT included).

(It's the blower design I think).

But sometimes in the very first weeks of a launch resellers add 20 euros on top.


----------



## Fb74

If the box is not photoshopped, I read 3 GB on it.


----------



## ChevChelios

Quote:


> Originally Posted by *Fb74*
> 
> GTX1060 Turbo Model by Asus announced at 279 euros in France (20% VAT included).


its scary how many 1060s they will sell


----------



## epic1337

Quote:


> Originally Posted by *ChevChelios*
> 
> its scary how many 1060s they will sell


both GTX960 and GTX970 were popular.
as GTX1060 sits between those two's price point, we can expect millions of units.


----------



## Fb74

Quote:


> Originally Posted by *ChevChelios*
> 
> its scary how many 1060s they will sell


I think I am going to be in, but I am going to wait a little bit.

I just want to replace my GTX670 and a 970/980 level of performance would be good enough for the games I play in 1920x1200.

But I agree that 30 to 50 euros off the price would have been a better deal.









I was interested in the GTX1070, but there is no way I spend 400 euros or more on a graphics card.


----------



## tpi2007

Semiaccurate's article about the first cards being actually GP104 based seems to hold some merit given the empty VRAM spots on the PCB that would allow for 8 GB. It seems that only a board made to also accommodate binned GP104 chips explains that PCB design.

It will be interesting to see what reviewers and first buyers are going to get.


----------



## Aymanb

Quote:


> Originally Posted by *Fb74*
> 
> I think I am going to be in, but I am going to wait a little bit.
> 
> I just want to replace my GTX670 and a 970/980 level of performance would be good enough for the games I play in 1920x1200.
> 
> But I agree that 30 to 50 euros off the price would have been a better deal.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I was interested in the GTX1070, but there is no way I spend 400 euros or more on a graphics card.


I wish I could spend €400 on that thing. The price here is like 500 right now which is insane.


----------



## tpi2007

Quote:


> Originally Posted by *Aymanb*
> 
> I wish I could spend €400 on that thing. The price here is like 500 right now which is insane.


Yep, and as predicted AIBs are taking the opportunity to price their cards _above_ the FE price. So the MSRP effectively doesn't exist.


----------



## JackCY

Quote:


> Originally Posted by *ChevChelios*
> 
> its scary how many 1060s they will sell


It's scary how few 1060s they will be able to ship to shops


----------



## Luciferxy

$299 for the fanboy edition & that disgusting pcie power connection & no sli finger is a big no.


----------



## GoLDii3

Quote:


> Originally Posted by *ChevChelios*
> 
> its scary how many 1060s they will sell


It's scary how many they won't sell


----------



## Exeed Orbit

Quote:


> Originally Posted by *GoLDii3*
> 
> It's scary how many they won't sell


I'm pretty sure they'll sell plenty.


----------



## ChevChelios

they would have sold tons anyway, but at a price of sub-300 EUR, even more


----------



## Cyro999

Quote:


> When the GTX 1060 comes out, it will be the best mid-range card at 1080p in DX11.
> 
> It will lose to the RX 480 in VR


Does AMD have a feature like SMP that i missed?


----------



## ChevChelios

what VR-specific features does Polaris have? and how is its VR performance compared to Pascal's VR?


----------



## Fb74

Quote:


> Originally Posted by *Aymanb*
> 
> I wish I could spend €400 on that thing. The price here is like 500 right now which is insane.


At this time, 479 euros for the cheapest one in France.
http://www.ldlc.com/navigation-p1e48t3o0a1/gtx+1070/

But we agree, it's too expensive.


----------



## Ha-Nocri

yep, around 500-520e for a decent AIB 1070. Hope cut-down Vega will be less than that


----------



## STEvil

5-10% faster and 10-15% more expensive it looks like.


----------



## Derp

Some more leaks including ashes benchmark.

http://videocardz.com/62086/nvidia-geforce-gtx-1060-rumors-part-5-full-specs-2-0-ghz-overclocking


----------



## FLCLimax

Quote:


> Originally Posted by *Derp*
> 
> Some more leaks including ashes benchmark.
> 
> http://videocardz.com/62086/nvidia-geforce-gtx-1060-rumors-part-5-full-specs-2-0-ghz-overclocking


I can see why NDA stays on till the card is on sale.
Quote:


> Next up we have OpenCL performance. The GTX 1060 appears to be faster than GTX 970 and RX 480, but it struggles to perform better than GTX 980.


Does the Videocardz author post here? I want to give him an eye exam, cuz the 480 wins 6 out of 8 OpenCL tests.


----------



## Klocek001

Quote:


> Originally Posted by *FLCLimax*
> 
> I can see why NDA stays on till the card is on sale.
> Does Videocardz author post here? I want to give him an eye exam cuz the 480 wins 6 out of 8 OpenCL tests.


wonder how aots and cl benches translate to games


----------



## ZoePancakes

Quote:


> Originally Posted by *Klocek001*
> 
> wonder how aots and cl benches translate to games


Aots is a game -.-


----------



## FLCLimax

He means World of Tanks and DOTA, lol. They can't afford current games, spent their whole check on the GPU.


----------



## epic1337

Quote:


> Originally Posted by *FLCLimax*
> 
> He means World of Tanks and DOTA, lol. They can't afford current games, spent their whole check on the GPU.


actually, review sites should add those games to their list of benches; using the most popular games as reference points is simple logic.


----------



## KarathKasun

Quote:


> Originally Posted by *epic1337*
> 
> actually, review sites should add those games to their list of benches; using the most popular games as reference points is simple logic.


There is no point in doing that. A midrange card from a few years ago can run those games maxed out fairly easily. I wouldn't be surprised if you could do 4K in them with a 290.


----------



## ZoePancakes

Quote:


> Originally Posted by *epic1337*
> 
> actually, review sites should add those games to their list of benches; using the most popular games as reference points is simple logic.


almost all PCs younger than 7 years with a budget video card can play those games; to be honest it wouldn't stress the CPU or video card at all


----------



## epic1337

Quote:


> Originally Posted by *KarathKasun*
> 
> There is no point in doing that. A midrange card from a few years ago can run those games maxed out fairly easily. I wouldn't be surprised if you could do 4K in them with a 290.


Quote:


> Originally Posted by *ZoePancakes*
> 
> almost all PCs younger than 7 years with a budget video card can play those games; to be honest it wouldn't stress the CPU or video card at all


depends on the end goal; DOOM and DOTA 2 in particular are among the few games that support the Vulkan API, so you can use those for testing a GPU's Vulkan capabilities.
since you can't rely on a single game to judge a card's capabilities, the more games you test the card on, the better the results.


----------



## FLCLimax

Add WoW and Call of Duty too


----------



## epic1337

Quote:


> Originally Posted by *FLCLimax*
> 
> Add WoW and Call of Duty too


they already do, i forgot those.

WoW = https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/21.html
CoD = https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/11.html


----------



## KarathKasun

Quote:


> Originally Posted by *epic1337*
> 
> depends on the end goal; DOOM and DOTA 2 in particular are among the few games that support the Vulkan API, so you can use those for testing a GPU's Vulkan capabilities.
> since you can't rely on a single game to judge a card's capabilities, the more games you test the card on, the better the results.


Doom does not support Vulkan. It was promised "soon" at launch, and it has not happened yet.

DOTA2 would be testing the CPU, not the GPU.


----------



## SpeedyVT

Quote:


> Originally Posted by *Derp*
> 
> Some more leaks including ashes benchmark.
> 
> http://videocardz.com/62086/nvidia-geforce-gtx-1060-rumors-part-5-full-specs-2-0-ghz-overclocking


http://www.ashesofthesingularity.com/metaverse#/personas/ba949b62-ec10-4fea-9263-0ace4939e9aa/match-details/4588fe50-3247-4029-9c2d-0a3274333d20

http://www.ashesofthesingularity.com/metaverse#/personas/ba949b62-ec10-4fea-9263-0ace4939e9aa/match-details/c70580b2-9b83-43a6-b9db-d9a4785c9e6d

Look at an identically specced RX 480 rig with one GPU: the RX 480 beats it by 1-2 frames. And this was before the driver update that gave a 5-7.5% performance boost.


----------



## ZoePancakes

Quote:


> Originally Posted by *epic1337*
> 
> depends on the end goal; DOOM and DOTA 2 in particular are among the few games that support the Vulkan API, so you can use those for testing a GPU's Vulkan capabilities.
> since you can't rely on a single game to judge a card's capabilities, the more games you test the card on, the better the results.


true, i'm neither for AMD nor Nvidia; in the end they're both companies that aren't there to help you but to take your money

but indeed it's a good thing for reviewers to have more games across a wider range of architectures to test

and on the R9 290 running DOTA 2 in 4K: yes, it runs perfectly at above 60 fps, though the health bars are small as hell ^^


----------



## ZoePancakes

Quote:


> Originally Posted by *FLCLimax*
> 
> Add WoW and Call of Duty too


WoW actually gets heavier with each expansion as it gets engine updates; still not a super heavy game, but newer areas need a beefier machine


----------



## ChevChelios

Quote:


> Originally Posted by *SpeedyVT*
> 
> http://www.ashesofthesingularity.com/metaverse#/personas/ba949b62-ec10-4fea-9263-0ace4939e9aa/match-details/4588fe50-3247-4029-9c2d-0a3274333d20
> 
> http://www.ashesofthesingularity.com/metaverse#/personas/ba949b62-ec10-4fea-9263-0ace4939e9aa/match-details/c70580b2-9b83-43a6-b9db-d9a4785c9e6d
> 
> Look at an exactly same specs RX 480 rig with one GPU and the RX 480 beats it by a 1-2 frames. This was done pre-driver update that gave a 5-7.5% performance boost.


if the 1060 only loses by 1-2 frames in Ashes then it's going to smoke the 480 in other games (DX12 included, sans Hitman)


----------



## FLCLimax

Quote:


> Originally Posted by *ChevChelios*
> 
> stuff


http://www.overclock.net/t/1604750/dsog-report-total-war-warhammer-runs-27-slower-in-dx12-on-nvidia-s-hardware/200_50#post_25334065


----------



## ChevChelios

Quote:


> Originally Posted by *FLCLimax*
> 
> http://www.overclock.net/t/1604750/dsog-report-total-war-warhammer-runs-27-slower-in-dx12-on-nvidia-s-hardware/200_50#post_25334065


beta DX12

run the working DX11 and get fps as high/higher than GCN


----------



## STEvil

So if you emulated DX10 will the FPS double?


----------



## pengs

...alright fine, I'll post it!

http://www.3dmark.com/compare/fs/9202637/fs/9188962# taken with salt (though, if reviewers have cards... insert)

_Quick foot steps that lead to a screen door _*SLAMMING*


----------



## prznar1

Quote:


> Originally Posted by *pengs*
> 
> ...alright fine, I'll post it!
> 
> http://www.3dmark.com/compare/fs/9202637/fs/9188962# taken with salt (though, if reviewers have cards... insert)
> 
> _Quick foot steps that lead to a screen door _*SLAMMING*


This is odd, seriously odd. In the past Nvidia knew what price to put on a product. Now it looks like they will offer lower performance at a higher price than the AMD card. What are they playing at?


----------



## aDyerSituation

Quote:


> Originally Posted by *prznar1*
> 
> This is odd, seriously odd. In the past Nvidia knew what price to put on a product. Now it looks like they will offer lower performance at a higher price than the AMD card. What are they playing at?


Hasn't that always been the case?


----------



## GoLDii3

Quote:


> Originally Posted by *prznar1*
> 
> This is odd, seriously odd. In the past Nvidia knew what price to put on a product. Now it looks like they will offer lower performance at a higher price than the AMD card. What are they playing at?


Maybe they don't care about low-end stuff; the R9 380 was better than the 960. History repeats.


----------



## czerro

Quote:


> Originally Posted by *prznar1*
> 
> This is odd, seriously odd. In the past Nvidia knew what price to put on a product. Now it looks like they will offer lower performance at a higher price than the AMD card. What are they playing at?


The differences are pretty negligible in 3dmark. We'd have to know a bit more about real-world performance. It is a tad slower and a tad more expensive.

If the 10xx 'soft' launch continues as is, you won't be able to even buy one until much further down the line.

Nvidia will have some spin for this to be sure. Clocks are really funny though. I'm not sure if that 3dmark is legit....


----------



## aDyerSituation

If the 1060 is already behind the 480 I could imagine how far it'll be behind it in a few months


----------



## prznar1

Quote:


> Originally Posted by *aDyerSituation*
> 
> If the 1060 is already behind the 480 I could imagine how far it'll be behind it in a few months


Don't say hop before you jump.

This would be a great card if it were branded as the 1050. I don't know what happened to Nvidia. They haven't been that stupid in the past.


----------



## czerro

Quote:


> Originally Posted by *prznar1*
> 
> Don't say hop before you jump.
> 
> This would be a great card if it were branded as the 1050. I don't know what happened to Nvidia. They haven't been that stupid in the past.


Oh, yeah, that's a great point. Hrm. I have to think that 3dmark is fake or on some busted drivers. It will be 2-5% better than a 480 and 10% more expensive. Right? That's where Nvidia would land the 1060? I would be interested to know if Nvidia is eating their shoes on the 1060 though. Those clocks are pretty crazy in that 3dmark...

That's pretty good marketing though, if they dropped it as the 1050. That's brilliant.


----------



## aDyerSituation

Quote:


> Originally Posted by *prznar1*
> 
> Don't say hop before you jump.
> 
> This would be a great card if it were branded as the 1050. I don't know what happened to Nvidia. They haven't been that stupid in the past.


Yes they have. The 960 is a perfect example.


----------



## SpeedyVT

Quote:


> Originally Posted by *prznar1*
> 
> Don't say hop before you jump.
> 
> This would be a great card if it were branded as the 1050. I don't know what happened to Nvidia. They haven't been that stupid in the past.


Greed? Every year they get a tad greedier. I know AMD has been in dire straits, but why would anyone buy a card that's already dated?


----------



## ChevChelios

Quote:


> Originally Posted by *prznar1*
> 
> This is odd, seriously odd. In the past Nvidia knew what price to put on a product. Now it looks like they will offer lower performance at a higher price than the AMD card. What are they playing at?


?

1060 is going to be a bit faster than 480, you'll see


----------



## EightDee8D

Quote:


> Originally Posted by *ChevChelios*
> 
> ?
> 
> 1060 is going to be a bit faster than 480, you'll see


Yep, only in select dx11/dx12 games for first 4-6 months. you'll see


----------



## Master__Shake

Quote:


> Originally Posted by *ChevChelios*
> 
> ?
> 
> 1060 is going to be a bit faster than 480, you'll see


it'll be outclassed at release like the 960 was before it.


----------



## Majin SSJ Eric

I have a feeling a lot of die-hard fanboys are going to be in shock when the real 1060 performance numbers start coming in. I don't think it's going to be faster than the 480 at all, especially OC to OC. It's also going to cost significantly more with less memory, and it will be saddled with the same supply issues as the 1080/1070, as well as the ridiculous Fanboy Edition nonsense. That PCB looks less than inspiring to me; the 480 reference looks like a more premium product to my eyes. AIBs will sort out both cards' issues, but the 480 and 1060 will probably end up in a virtual tie performance-wise, with the AMD card offering much better value for money...


----------



## EightDee8D

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I have a feeling a lot of die-hard fanboys are going to be in shock when the real 1060 performance numbers start coming in. I don't think it's going to be faster than the 480 at all, especially OC to OC. It's also going to cost significantly more with less memory, and it will be saddled with the same supply issues as the 1080/1070, as well as the ridiculous Fanboy Edition nonsense. That PCB looks less than inspiring to me; the 480 reference looks like a more premium product to my eyes. AIBs will sort out both cards' issues, but the 480 and 1060 will probably end up in a virtual tie performance-wise, with the AMD card offering much better value for money...


It will be faster than 480 for sure, at least on launch and first few months.


----------



## FLCLimax

BTW, 3GB is more than enough VRAM.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *EightDee8D*
> 
> It will be faster than 480 for sure, at least on launch and first few months.


Stock to stock, possibly. Not OC'd, at least if the OC scaling of the other Pascal cards is any indication. We will see soon enough; it's not like I'm stating facts here, just my opinion.


----------



## variant

Quote:


> Originally Posted by *prznar1*
> 
> Don't say hop before you jump.
> 
> This would be a great card if it were branded as the 1050. I don't know what happened to Nvidia. They haven't been that stupid in the past.


The benchmark says it's an NVIDIA GeForce GTX 1060 6GB under the driver name.


----------



## EightDee8D

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Stock to stock, possibly. Not OC'd, at least if the OC scaling of the other Pascal cards is any indication. We will see soon enough; its not like I'm stating facts here, just my opinion.


Agree


----------



## Scotty99

When are we going to see aftermarket cards? I refuse to buy a reference cooler lol.


----------



## FLCLimax

Quote:


> Originally Posted by *Scotty99*
> 
> When are we going to see aftermarket cards? I refuse to buy a reference cooler lol.



Day 1.


----------



## SpeedyVT

Quote:


> Originally Posted by *FLCLimax*
> 
> 
> Day 1.


1060 reference is the ugliest.


----------



## Klocek001

Quote:


> Originally Posted by *EightDee8D*
> 
> Yep, only in select dx11/dx12 games for first 4-6 months. you'll see


I think it will be the other way round. rx480 will be faster in selected dx12 games or bench-games.


----------



## EightDee8D

Quote:


> Originally Posted by *Klocek001*
> 
> I think it will be the other way round. rx480 will be faster in selected dx12 games or bench-games.


Just like 660/760/960 ? yep.


----------



## infranoia

Quote:


> Originally Posted by *Cyro999*
> 
> Does AMD have a feature like SMP that i missed?


Quote:


> Originally Posted by *ChevChelios*
> 
> what VR specific features does Polaris have ? and how is it VR performance compared to Pascals VR ?


These questions come up and are often unanswered, so I'll just leave this here.

http://www.pcgamer.com/amd-liquidvr-vs-nvidia-vrworks-the-sdk-wars/

Direct-To-Display, Data Latch, Asynchronous Time Warp implemented as an Async Shader (runs parallel to the graphics queue in hardware), and Affinity MultiGPU. Nothing specific to Polaris, this stuff works across GCN.

It would be nice to have a comprehensive head-to-head once the dust settles and we have some decent AIBs and multi-GPU options.


----------



## Klocek001

I figured this belongs in this thread as much as in the RX480 thread, since the author promises to re-do the tests with a 1060 instead of the 970

https://translate.google.pl/translate?sl=pl&tl=en&js=y&prev=_t&hl=pl&ie=UTF-8&u=http%3A%2F%2Fwww.purepc.pl%2Fkarty_graficzne%2Fradeon_rx_480_vs_geforce_gtx_970_test_na_kilku_procesorach&edit-text=&act=url

GTX 970 21% faster than RX480 on i3 4170 3.7GHz in CPU bound scenarios
Quote:


> In the case of the Radeon RX 480, nothing changes: as before, the AMD card needs a much faster processor than the GeForce GTX 970 to provide the same smoothness in critical moments.


Quote:


> If anyone expected that with the launch of the Polaris architecture AMD's programmers would remove the driver-overhead problem or optimize the driver's work, especially on weak processors under DirectX 11, they can delete that from their wish list. That improvement will probably never come. The Reds are investing heavily in promoting the low-level DirectX 12 and Vulkan APIs, whose (simplified) main idea is to squeeze the maximum out of graphics chips while offloading the CPU. Too bad, because most games still use DirectX 11, so Radeon owners may experience drops in performance.


sorry for the google translation, but you get the idea...

the author says he'll repeat the tests on 1060 as soon as NDA is lifted

edit: I remembered yesterday's "discussion" (in the rx480 thread) on how a Skylake i5 with DDR4 is very affordable for every RX480 buyer, so I decided to compare both on a high-end CPU. Turns out that even if we compare the GTX970 and RX480 both on an i7 4790K @ 4.5GHz, the RX480 is slower in CPU-bound parts of games: 3% slower on avg. including dx12 Hitman, 8% on avg. including dx11 games only.
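For anyone curious, the kind of "X% slower on avg." figure above can be sketched like this. The per-game FPS numbers below are hypothetical placeholders, not the purepc.pl article's actual measurements:

```python
# Sketch of averaging per-game relative performance between two cards.
# FPS values are made-up placeholders for illustration only.

def avg_relative(fps_a, fps_b):
    """Average of per-game FPS ratios of card A vs card B, as a percent delta."""
    ratios = [a / b for a, b in zip(fps_a, fps_b)]
    return (sum(ratios) / len(ratios) - 1) * 100

rx480 = [60, 72, 55]   # hypothetical CPU-bound FPS for the RX480
gtx970 = [65, 75, 60]  # hypothetical CPU-bound FPS for the GTX970

# negative = RX480 slower on average in this hypothetical set
print(f"RX480 vs GTX970: {avg_relative(rx480, gtx970):+.1f}%")
```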


----------



## Ha-Nocri

Quote:


> Originally Posted by *Klocek001*
> 
> I think it will be the other way round. rx480 will be faster in selected dx12 games or bench-games.


Selected? It should be faster in *all* DX12 games.


----------



## Klocek001

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Selected? It should be faster in *all* DX12 games.


just like it's faster than 970 in dx12 rotr

http://www.purepc.pl/karty_graficzne/radeon_rx_480_vs_geforce_gtx_970_test_na_kilku_procesorach?page=0,13


----------



## ChevChelios

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Selected? It should be faster in *all* DX12 games.


Hitman and Ashes sure, as per usual

the rest - we will see, I think equal or maybe 1060 can pull a bit ahead here or there

but then again _what other_ DX12 games do they even test in reviews ?







just RotR I think

thats how "widespread" DX12 is


----------



## Klocek001

Quote:


> Originally Posted by *ChevChelios*
> 
> Hitman and Ashes sure, as per usual
> 
> the rest - we will see, I think equal or maybe 1060 can pull a bit ahead here or there
> 
> but then again _what other_ DX12 games do they even test in reviews ?
> 
> 
> 
> 
> 
> 
> 
> just RotR I think
> 
> thats how "widespread" DX12 is


a comprehensive array of tests as of right now:
-Hitman, but only in dx12 mode, where the memory constraint lowers textures on amd cards but shows better fps
-AoTS
-Warhammer dx12 beta
-Quantum Break

the list cannot include rotr, since it's nvidia sponsored, and under no circumstances should the benchmarks mention that nvidia's dx11 results beat amd's dx12 results. AMD's lead should then be multiplied by 2.8x, which is the power efficiency improvement of the Polaris architecture. Now we've got the RX480 matching or beating the 1080, touché nvidia.


----------



## ChevChelios

in all the different reviews of the 1080/1070/480 I've read since early June, I don't believe I've EVER seen QB anywhere, not once









I might have seen WH DX12 a few times

Hitman DX12 and RotR DX12 are the staples (often the only DX12 games tested) and if the review is good/comprehensive they also include DX11 results of those games, at least for RotR

Ashes comes up often, likely in more than half of the tests


----------



## Ha-Nocri

Tomb Raider was the only DX12 hope for nVidia; not anymore since the last updates




http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/72889-radeon-rx480-8gb-performance-review-19.html

Poor 970


----------



## ChevChelios

Quote:


> Tomb Raider was the only DX12 hope for nVidia, not anymore since last updates


source of those charts ?

besides you can always run RotR @ DX11 for nvidia and get superior performance









edit: I see the source link; that's for the old/previous RotR patch (June 25th), not the new one

and here there's a different picture: the 970 still loses, but the 980 beats the 390X, 480 _and_ Fury









https://www.techpowerup.com/reviews/AMD/RX_480/19.html









http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,10.html
^ oh wow, 980 beats Fury X









so according to TPU and guru3D, AMD's RotR DX12 is as bad as ever


----------



## PlugSeven

Quote:


> Originally Posted by *ChevChelios*
> 
> source of those charts ?
> 
> besides you can always run RotR @ DX11 for nvidia and get superior performance
> 
> 
> 
> 
> 
> 
> 
> 
> 
> edit: i see the source link, thats for the old/previous RotR patch (June 25-th), not the new one
> 
> and here theres a different picture, the 970 still loses, but the 980 beats 390X, 480 _and_ Fury
> 
> 
> 
> 
> 
> 
> 
> 
> 
> https://www.techpowerup.com/reviews/AMD/RX_480/19.html
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,10.html
> ^ oh wow, 980 beats Fury X
> 
> 
> 
> 
> 
> 
> 
> 
> 
> so according to TPU and guru3D, AMDs RotR Dx12 is as bad as ever


Didn't the new patch come out only a few days ago? That TPU review is from the 29th of June


----------



## Ha-Nocri

http://wccftech.com/rise-tomb-raider-dx12-async-compute-amd-performance-improved-nvidia-users-better-off-using-dx11/





AMD cards gained performance from that latest DX12 update. The Fury X is now faster than the 980Ti, while it was slower before. And if the Fury X is faster than the 980Ti, you can count on the 970 being destroyed by the 390/480


----------



## Klocek001

Quote:


> Originally Posted by *Ha-Nocri*
> 
> Tomb Raider was the only DX12 hope for nVidia, not anymore since last updates


lol at "only hope for nvidia", enjoy it while it lasts, the 1060 is next week.


----------



## Ha-Nocri

Quote:


> Originally Posted by *Klocek001*
> 
> lol at "only hope for nvidia", enjoy while it lasts, 1060 is next week.


So they implemented async compute on 1060?


----------



## ChevChelios

Quote:


> Originally Posted by *PlugSeven*
> 
> Didn't the new patch come out only a few days ago? That TPU review is from the 29 June


yes, and so are the hardware canucks charts he linked earlier


----------



## Klocek001

Quote:


> Originally Posted by *Ha-Nocri*
> 
> So they implemented async compute on 1060?


lol how embarrassing it will be when the 1060 matches the rx480 without async ..... you guys will have to sit quiet till Vega I suppose, then you'll start pumping up the hype again.


----------



## ChevChelios

Quote:


> Originally Posted by *Ha-Nocri*
> 
> http://wccftech.com/rise-tomb-raider-dx12-async-compute-amd-performance-improved-nvidia-users-better-off-using-dx11/


http://www.overclock3d.net/reviews/gpu_displays/rise_of_the_tomb_raider_directx_12_performance_update/5

and yet even with async implemented, the Fury X struggles to reach the 980Ti's DX11 performance


----------



## PlugSeven

Quote:


> Originally Posted by *ChevChelios*
> 
> yes and so is the hardware canucks charts he linked earlier


Going by the numbers in post #655, it looks like it's going from bad to worse for the 970


----------



## ChevChelios

but its the 1060 that 480 goes up against, not the 970


----------



## TrueForm

"I'm defending this company because I chose to buy one of their products, and you saying that the company isn't doing so well makes me feel insulted"

This thread


----------



## JackCY

Quote:


> Originally Posted by *Cyro999*
> 
> Does AMD have a feature like SMP that i missed?


Is any game engine even supporting SMP?
nGreedia made it sound all nice and dandy but forgot to say that it's not a HW issue but a software issue to begin with; devs can't be bothered to do projections properly, and HW that can process it faster isn't gonna change that. Any HW can do multi-projection.


----------



## Phixit

http://videocardz.com/62086/nvidia-geforce-gtx-1060-rumors-part-5-full-specs-2-0-ghz-overclocking

Performance seems pretty similar to the RX 480.


----------



## ChevChelios

if it's the same in _Ashes_ then it means the 1060 will beat the 480 everywhere else except Hitman

possibly


----------



## maltamonk

I want to see results from 24/7 stable clocks from both camps. I don't care how each card performs for a min or two before it throttles.


----------



## oxidized

so many keyboard warriors in these kinds of threads. talk after, not before; we just have to wait to really compare the 2


----------



## Cakewalk_S

Does anyone know the actual NDA lift on this 1060? Very interesting card. Sounds like it'll match a 980 with about 50w less power draw and potentially better overclocking potential...


----------



## VeritronX

Quote:


> Originally Posted by *Cakewalk_S*
> 
> Does anyone know the actual NDA lift on this 1060? Very interesting card. Sounds like it'll match a 980 with about 50w less power draw and potentially better overclocking potential...


Should be the 19th, according to pcper iirc.

Biggest problem for me is lack of SLI support, I would totally buy a 1060 and then another one later to get high performance at a lower entry cost.


----------



## Cakewalk_S

Quote:


> Originally Posted by *VeritronX*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Cakewalk_S*
> 
> Does anyone know the actual NDA lift on this 1060? Very interesting card. Sounds like it'll match a 980 with about 50w less power draw and potentially better overclocking potential...
> 
> 
> 
> Should be the 19th, according to pcper iirc.
> 
> Biggest problem for me is lack of SLI support, I would totally buy a 1060 and then another one later to get high performance at a lower entry cost.
Click to expand...

Thx, rep++. I'm not able to SLI since I have an ITX board and ITX build...so it'd be perfect for me. I'm definitely interested if the price is right. I can maybe squeeze out $200 for my 970 when I get it back and maybe pick up a 1060 for an extra $50-75????


----------



## Derp

Quote:


> Originally Posted by *Klocek001*
> 
> I figured this belongs to this thread as much as it belongs to the rx480 thread, since the author promises to redo the tests on the 1060 instead of the 970
> 
> https://translate.google.pl/translate?sl=pl&tl=en&js=y&prev=_t&hl=pl&ie=UTF-8&u=http%3A%2F%2Fwww.purepc.pl%2Fkarty_graficzne%2Fradeon_rx_480_vs_geforce_gtx_970_test_na_kilku_procesorach&edit-text=&act=url
> 
> GTX 970 21% faster than RX480 on i3 4170 3.7GHz in CPU bound scenarios
> 
> sorry for the google translation, but you get the idea...
> 
> the author says he'll repeat the tests on 1060 as soon as NDA is lifted
> 
> edit: I remembered yesterday's "discussion" (in the rx480 thread) on how a Skylake i5 with DDR4 is very affordable for every RX480 buyer, so I decided to compare both on a high-end CPU. Turns out that even if we compare the GTX970 and RX480 both on an i7 4790K @ 4.5GHz, the RX480 is slower in CPU-bound parts of games: 3% slower on avg. including dx12 Hitman, 8% on avg. including dx11 games only.


Thanks for sharing the link. I wish more review sites tested this and made a big deal out of it, maybe AMD would actually do something. "Just deal with our terrible DX11 drivers until your favorite games magically become DX12 and Vulkan aware or purchase Nvidia instead" is not a smart attitude to have when your market share is abysmal.


----------



## Scotty99

Quote:


> Originally Posted by *Cakewalk_S*
> 
> Thx, rep++. I'm not able to SLI since I have an ITX board and ITX build...so it'd be perfect for me. I'm definitely interested if the price is right. I can maybe squeeze out $200 for my 970 when I get it back and maybe pick up a 1060 for an extra $50-75????


1060 is not going to be much faster than a 970, you should save up for a 1070 if you want a real upgrade.


----------



## Clocknut

Quote:


> Originally Posted by *VeritronX*
> 
> Should be the 19th, according to pcper iirc.
> 
> Biggest problem for me is lack of SLI support, I would totally buy a 1060 and then another one later to get high performance at a lower entry cost.


yeah, the biggest meh is the no-SLI support. I need to get 2 GPUs for 2 computers & SLI them in one of the computers in future.


----------



## Newbie2009

When are we expecting these cards to be released?


----------



## Fb74

Quote:


> Originally Posted by *Newbie2009*
> 
> When are we expecting these cards to be releaseD?


19th of July.


----------



## Fuell

Quote:


> Originally Posted by *iLeakStuff*
> 
> Nvidia got too much money to spew in development to always come on top.
> *AMD thought they were clever to go for low end and let Nvidia have high end alone.
> BAM, here comes GTX 1060 out of the blue to challenge RX 480.
> The card got like a week or two alone.*
> 
> AMD need money badly to be able to survive, meaning being able to develop chips for all segments, low, midrange and high end, at the same time. One can also start to wonder why the HUGE efficiency lead of Pascal. Where does it come from? The architecture or 16nm?


Now that the 1060 has launched, AMD is in big trouble, right? I mean, the 1060 stomps the 480 for the same price, and AMD had 2 weeks to try to sell 480's. The truckloads of 480's selling about as fast as they ship has now ended; the new king of mainstream and value is here. Poor AMD, just 2 weeks and squashed.

Man, those 1060's are selling by the millions already, AMD is done for...

Oh wait...


----------



## Aymanb

Bit offtopic but as a 1080p gamer, is 1060 enough for maxing everything such as GTA, upcoming Watchdogs 2 etc. or do i need a 1070?


----------



## candy_van

Quote:


> Originally Posted by *Aymanb*
> 
> Bit offtopic but as a 1080p gamer, is 1060 enough for maxing everything such as GTA, upcoming Watchdogs 2 etc. or do i need a 1070?


No, you don't "need" it for that, but if you keep your GPUs around for a few years (or more) it certainly wouldn't hurt.


----------



## Aymanb

Quote:


> Originally Posted by *candy_van*
> 
> No you don't "need' it for that, but if you keep your GPUs around for a few years (or more) it certainly wouldn't hurt.


Yeah, this is the first time I'm upgrading since the 660ti; I keep them for a long time. I was just curious if the 1060 can handle everything easily at 1080p, with the 1070 being more for 1440p etc.

All my years I've always used medium settings etc. I want to finally be able to max everything in singleplayer games (I don't do mods), so I'm not sure which one to get. But if the 1070 doesn't decrease in price from the ridiculous €500 I'll have to go with the 1060.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Aymanb*
> 
> Yeah this is the first time upgrading since 660ti, I keep them for a long time. I was just curious if 1060 can handle everything easily at 1080p and the 1070 being more for 1440p etc.
> 
> All my years I've always used medium settings etc. I want to finally be able to max everything In singleplayer games (i dont do mods) so not sure which one to get. But if the 1070 doesnt decrease in price from the ridiculous €500 ill have to go with 1060.


I think both the 480 and 1060 will be able to handle most games at max settings at 1080p. That's sort of what they were designed to do...


----------



## Aymanb

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think both the 480 and 1060 will be able to handle most games at max settings at 1080p. That's sort of what they were designed to do...


Thanks, ill cross my fingers then


----------



## Fb74

I received a mail from a french shop:
http://infos.materiel.net/a/?F=ghsbehpnhkaxp7dgdc8c5re4hjkxddajt649x7w6gtezbs36h2v8w3z-7107511

It says "Play well with Nvidia"

First, we can find (at the bottom) the GTX 1060 with the mention RDV (rendez-vous) on the 19th of July at 15h00 (3 PM, when the NDA ends in the French timezone).

BUT, what I find interesting is that they grouped the GTX 980 with the GTX 1080 (as high-end graphics cards), and they put the GTX 1060 with the GTX 970 and the GTX 960.

I am sure they did it on purpose.

Might be a hint and a confirmation that the GTX 1060 is not expected to be on par with the GTX 980 (as I strongly believe).


----------



## ChevChelios

the top one is just the "high-end" line

and the lower one - the middle end

that is to say, I'm not sure the 1060 will necessarily fully match a stock 980, but it could come within 3-5% or so


----------



## EightDee8D

It will beat the 980 at stock, same as the 1070 beats the Titan X - meaning it won't beat it once both are OC'd. But after 3-4 months, and in new games, it will beat it even when OC'd.

just like 970/780ti and 960/770.


----------



## ChevChelios

if it beats 980 then it will crush 480 in sales even more


----------



## ChevChelios

Quote:


> Even if it doesn't beat 980, or 480 heck even 970. it will still crush 480 in sales. cuz just like when people bought slower 960 over 380/x.


so is AMD doomed then and its struggle and red revolution all futile ?


----------



## Fb74

Quote:


> Originally Posted by *ChevChelios*
> 
> so is AMD doomed then and its struggle and red revolution all futile ?


You know what happens when reds are coming...


----------



## Fuell

Quote:


> Originally Posted by *ChevChelios*
> 
> if it beats 980 then it will crush 480 in sales even more


They might want to release the card soon then; the 480 is selling by the truckload... Even a bit of faked mobo deaths and overexaggerated PCIe power issues could only fuel fanboy wars and get tech sites a bunch of ad revenue. The majority of customers in this price range still have no clue those "issues" even happened.


----------



## cowie

Quote:


> Originally Posted by *Fuell*
> 
> They might want to release the card soon then, the 480 is selling by the truckload... Even a bit of faked mobo deaths and overexaggerated PCI power issues could only fuell fanboy wars and get tech sites a bunch of ad revenue. The majority of customers in this price range still have no clue those "issues" even happened.


oh yeah sure like you would know


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *ChevChelios*
> 
> if it beats 980 then it will crush 480 in sales even more


The AIB 480's will probably be just as fast as the 1060's once both are OC'd. Certainly Pascal OC's far higher, but the performance gained per MHz of OC on Pascal is nothing like that of GCN. If we really do see some 1500MHz 480's (getting more and more doubtful by the day) there's no question they would kill both the 980 and the 1060...


----------



## Orthello

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> The AIB 480's will probably be just as fast as the 1060's once both are OC'd. Certainly Pascal OC's far better but the performance per OC for Pascal is nothing like that of GCN. If we really do see some 1500MHz 480's (getting more and more doubtful by the day) there's no question they would kill both the 980 and the 1060...


I don't think it needs to get to 1500 tbh. If the AIB cards can hold clocks at 1425-1450 MHz in 4K, vs the ~1100 MHz the reference card boosts to without power-target tweaks, just think of what that does for overall performance at that res - particularly vs reference in the reviews. That's a 29.5% clock-rate improvement, based on a 1425 MHz sustained boost vs ~1100 MHz. Take half of that as the performance increase (it should get more, knowing GCN) and you have a nice minimum 15% improvement to 4K results from an official 12.5% increase in clock rate. Add in a nice memory OC to 9GHz, which may not throttle the GPU further (like the ref card does), and you could see 20% gains at that res.

I guess it depends on the level of throttling occurring at the res being used. 1440p results from some sites were getting lower numbers than default boost; I think it was about 50 MHz off, from memory.

So yeah, I'm just looking forward to more consistently higher results from the AIB cards.
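The back-of-the-envelope math above can be written out like this (only the 1425 and ~1100 MHz figures come from the post; the "half the clock gain" scaling factor is the post's own rough assumption, not a measured number):

```python
# Sketch of the clock-uplift estimate from the post above.
# Assumes perf scales at roughly half the clock-rate gain (conservative guess).

def clock_gain(sustained_mhz, throttled_mhz):
    """Fractional clock-rate improvement over the throttled reference clock."""
    return sustained_mhz / throttled_mhz - 1

def perf_estimate(clock_fraction, scaling=0.5):
    """Estimated perf gain, assuming perf scales at `scaling` x the clock gain."""
    return clock_fraction * scaling

gain = clock_gain(1425, 1100)                      # the ~29.5% figure quoted above
print(f"clock uplift: {gain:.1%}")
print(f"perf estimate: {perf_estimate(gain):.1%}")  # roughly the ~15% minimum
```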


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Orthello*
> 
> I don't think it needs to get to 1500 tbh. If the AIB cards can hold clocks at 1425-1450 mhz in 4K , vs the ~ 1100 mhz boosts the ref card does without tweaks to power target then just think of what that does for performance overall for that res - particularly vs reference in the reviews . 29.5% clock rate improvement for that res based on a 1425 sustained boost clock vs ~ 1100. Take half of that as increase (its should get more knowing GCN) and you have a nice minimum 15% improvement to 4k results from an official 12.5 % increase in clock rate. Add in a nice memory oc to 9ghz which won't throttle the gpu further and you could see 20% gains in that res.
> 
> I guess it depends on the level of throttling that is occuring in the res been used. 1440p results from some sites were getting lower numbers than default boost. I think it was about 50 mhz off from memory.
> 
> So yeah i'm just looking forward to more consistantly higher results from the AIB cards.


I'm just excited that these cards will likely benefit greatly from water cooling, unlike Nvidia's past two architectures.


----------



## Orthello

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'm just excited that these cards will likely benefit greatly from water cooling, unlike Nvidia's past two architectures.


Yeah, these TXs did 1633 MHz benching at -22°C. Yes, sub-zero, but I can bench them at 1550 on ambient liquid, so not so much fun or gain from the colder temps (I guess they are ref cards after all). On air cooling, making a lot of noise, they did 1480.

I've been thinking about building a dual rx480 H2O rig for the fun of it .. I won't, because it won't give me extra performance and it's $$, but man I'd like to, just to tweak it to the extremes. It's a hot little chip, so it could benefit greatly from the extra cooling.

I think you would see 480 chips that could maintain 1500MHz H2O clocks without obscene voltages; I'd wait for an AIB PCB first, however.


----------



## Scotty99

Launch is tomorrow and no more leaks? Whats going on lol.


----------



## ChevChelios

tomorrow the reviews

gonna be a fun day


----------



## Fb74

There were some "benches" posted on a Chinese forum, with different games, but as I do not speak Chinese it's very difficult to understand what they did.

It seems that the card is a serious contender for the RX480 (~ equal) but, as expected, not made for 4K.

That being said, we need to wait a little more than a day to have something official, so let us wait.


----------



## flippin_waffles

Will have to see if reviewers are willing to point out how poor this card is going to be from a relevance perspective. It's clear without a doubt which architecture is best for these new APIs and games. Reviews had better show some impartiality and call NV on all the shortcomings in Pascal, and especially Maxwell, Kepler and Fermi, which are now being obliterated by their competition. GCN is providing tier-level and in some cases generational performance improvements from the two major APIs just releasing. There should be very few recommendations given for this card considering how poor it appears to be, compared to an RX480, for new games hitting the market and releasing soon.


----------



## Fb74

If you want to read about "leaked benchmarks":

http://wccftech.com/nvidia-gtx-1060-leaked-benchmarks/

(See pictures)

But beware: I read that some benches were made with different processors... so it's very difficult to compare final results, as a GPU can benefit from a more powerful processor.


----------



## ChevChelios

Quote:


> Originally Posted by *flippin_waffles*
> 
> Will have to see if reviewers are willing to point out how poor this card is going to be from a relevency perspective. Its clear without a doubt what is the best architecture for these new APIs and games. Reviews better show some impartiallity and call NV on all the shortcommings in Pascal and especially Maxwell, Kepler and Fermi that are now being obliterated by their competion. GCN is providing tier and in some cases generational performance improvements from the two major APIs just releasing. There should be very few recommendations given for this card considering how poor it appears to be for new games hitting the market and releasing soon compared to a RX480.


Pascal is the best architecture to have right now: you don't have to suffer the atrocious DX11/OpenGL performance of GCN and rely on DX12 patches, which are scarce and take a long time to come out post-release,

while still getting more benefit out of DX12/Vulkan & async than Maxwell

I expect reviewers to recommend the 1060 over the ref 480 for the better cooler, less power draw & noise and better performance .. unless your budget tops out at $200 and you can only afford the ref 4GB 480 (in which case you should probably wait for a custom 470 or maybe a 1050 anyway)

vs custom 480 it would depend on the prices of specific models, which also vary locally .. but then again no custom 480 will be available for review or sale tomorrow, so can't compare anyway yet


----------



## daviejams

Interesting to see what its Doom performance is with Vulkan, and some of the DX12 titles

I bet most of the review sites use DX11 games 1-2 years old though


----------



## Glottis

Quote:


> Originally Posted by *daviejams*
> 
> Interesting to see what it's doom performance is with Vulcan and some of the DX12 titles
> 
> I bet most of the review sites use DX11 games 1-2 year old though


most review sites use games that most people play, which is the logical thing to do. most people, especially those buying a 1060, will be playing stuff like CSGO, Dota2, WoW, Overwatch. you want review sites to only use DX12/Vulkan games? that doesn't represent how the card will be used.


----------



## ChevChelios

Quote:


> better longevity and competitive performance where you will get performance boost even 1-2 years later so you won't need to upgrade


more red fud/mantra









also you're insane if you think 1060 reviews won't include Vulkan Doom

what the red shills should be worrying about instead is when the next such Vulkan game will come out, if it does


----------



## Glottis

yeah, all reviewers include 2/3 DX11 games and 1/3 DX12/Vulkan games, because that about represents how the GPU will be used on the day of purchase. i don't know what that guy's problem is, trying to find conspiracies in everything.


----------



## daviejams

Quote:


> Originally Posted by *Glottis*
> 
> most review sites use games that most people play, which is logical thing to do. most people, especially those buying 1060, will be playing stuff like CSGO, Dota2, WoW, Overwatch. you want review sites to only use DX12/Vulcan games? that doesn't represent how the card will be used.


You could play those games on integrated graphics at a reasonable framerate; the 1060 is going to run them at hundreds of frames per second. Utterly pointless to test.

Going forward nearly all AAA games will be DX12 - so those are the games I am interested in seeing tested.

A new GPU should last longer than a year


----------



## Glottis

Quote:


> Originally Posted by *daviejams*
> 
> You could play those games on integrated graphics at a reasonable framerate , the 1060 is going to run them at hundreds off frames per second. Utterly pointless to test
> 
> Going forward nearly all AAA games will be DX12 - so those are the games I am interested in seeing tested.
> 
> A new GPU should last longer than a year


you have no clue what you are talking about, do you? no one who is serious about CSGO, Overwatch, Dota2, WoW etc plays on an iGPU, stop talking utter nonsense.


----------



## EightDee8D

I don't get your mindset either: you say Doom results don't matter cuz people have already finished it, ok. so they have already finished all previous games too? so the only thing that matters is future games, where the 480 will shine, going by how great it performs in dx12/Vulkan.

and what fud are you talking about? 680/770/780ti/970/980 - all those gpus are now slower than their competition. only a green shill can deny that. oh wait


----------



## ChevChelios

Quote:


> Going forward nearly all AAA games will be DX12


BF1, Watch Dogs 2, Mafia 3 = 3 games = all AAA games ?

not to mention some/all of them will still be DX11 games with DX11 engines with a DX12 setting/patch slapped on


----------



## ChevChelios

Quote:


> Going forward nearly all AAA games will be DX12 - so those are the games I am interested in seeing tested.


they do test DX12 games in all reviews .. all 3/4 of them, since that's what DX12 amounts to so far

Quote:


> 680/770/780ti/970/980 all gpus are now slower than their competition.


980 OC > 390X OC > 980 = 390X


----------



## daviejams

Quote:


> Originally Posted by *Glottis*
> 
> you know now clue what you are talking about do you? no one who is serious into CSGO, Overwatch, Dota2, WoW etc plays on iGPU, stop talking utter nonsense.


I said you can run those games on integrated graphics, not that you should.

It's going to be running those games at 100s of frames per second, just like any other GPU can. Pointless to test, and I don't see what you're getting your knickers in a twist for.

I want to see how this performs in new AAA games. All these new AAA games are using DX12, and that will continue


----------



## EightDee8D

Quote:


> Originally Posted by *ChevChelios*
> 
> they do test DX12 games in all reviews .. all 3/4 of them, since thats what DX12 amounts to so far
> 980 OC > 390X OC > 980 = 390X


Ahha, but you said people don't overclock, so it's irrelevant, no? and who was talking about oc?
At least you agree the 680/770/780ti/970 lost now.


----------



## Farih

Quote:


> Originally Posted by *ChevChelios*
> 
> BF1, Watch Dogs 2, Mafia 3 = 3 games = all AAA games ?
> 
> not to mention some/all ofthem will still be DX11 games with DX11 engines with a DX12 setting/patch slapped on


It's good going forward with DX12, and the more games getting it the better.
But you're right, DX11 isn't dead yet, not even close.

For me the DX11 performance of a card is more important than DX12 performance ATM.
My view might change next year or later when there is more DX12, but for now I want DX11 performance before anything else.

Do think it's a shame that my NV cards handle DX12 so poorly though, but then again I don't even own a single DX12 game.


----------



## ChevChelios

Quote:


> Originally Posted by *EightDee8D*
> 
> so it's irrelevant no ?












Quote:


> and who was talking about oc ?


everyone who wants to be objective









Maxwell is not as good as GCN on DX12, but it has better OC

both ACEs/HWS and a plain ol OC give you more fps in all games


----------



## EightDee8D

Quote:


> Originally Posted by *ChevChelios*
> 
> everyone who wants to be hypocrite


well, ok


----------



## ChevChelios

Quote:


> Originally Posted by *Farih*
> 
> Its good going forward with DX12 and the more games getting it the better.
> But your right, DX11 isnt dead yet not even close.
> 
> For me DX11 performance of a card is more important then DX12 performance ATM.
> My view might change next year or after when there is more DX12 but for now i want DX11 performance before anything else.
> 
> Do think its a shame that my NV cards handle DX12 so poorly though but then again i dont even own 1 single DX12 game.


when games like AotS, built from the ground up for DX12 & async, start comprising ~half or more of the popular games tested in reviews, then we can talk about DX11 being "dead" and focus purely on cards' DX12 performance, with DX11 relegated to legacy status

IMHO


----------



## Glottis

So what exactly is the problem with Nvidia and DX12 that so many here keep crying about all the time? I've been reading the MSI GeForce GTX 1080 GAMING Z 8G review today on Guru3D and specifically checked 980Ti vs Fury X (because that's the most relevant to me atm). Calculated a 4-game average: 980Ti 62.5 FPS, Fury X 62.3 FPS. That's with 3 out of 4 games being AMD-partnered! AMD fanboys make it sound like Nvidia is losing 50-100% in performance in DX12 lol


----------



## ChevChelios

Quote:


> So what exactly is the problem with Nvidia and DX12 that so many here keep crying about all the time? I read the MSI GeForce GTX 1080 GAMING Z 8G review today on Guru3D and specifically checked the 980Ti vs the Fury X (because that's the most relevant comparison to me atm). I calculated a 4-game average: 980Ti 62.5 FPS, Fury X 62.3 FPS. That's with 3 out of 4 games being AMD-partnered! AMD fanboys make it sound like Nvidia is losing 50-100% in performance in DX12 lol


Fury X beats a stock 980Ti in Hitman DX12 (AMD Gimp Evolved), Ashes DX12 and Doom Vulkan (though some benchmarks I've seen show them as equal .. it may depend on the level/location)

with the latest async patch, Fury X also beats a stock 980Ti by a little bit in RotR DX12 .. though it's still slightly behind the 980Ti's DX11 level in RotR

I'm not sure about Fury X DX12 vs 980Ti DX11 in WH: TW

the 1070 is overall ahead of both (or equal to Fury X in AMD-favored titles - https://i.imgur.com/d0eoUbm.png) when all cards are at stock

with OC it becomes 980Ti = 1070 > Fury X


----------



## Farih

Quote:


> Originally Posted by *ChevChelios*
> 
> when games like AotS, built from the ground up for DX12 & async, start comprising ~half or more of the popular games tested in reviews, then we can talk about DX11 being "dead" and focus purely on cards' DX12 performance, with DX11 relegated to legacy status
> 
> IMHO


I'm afraid DX11 is here to stay for another year or maybe two.

For my use I wanted the best-performing card in my budget.
That was the 980Ti; I wish it had been the Fury X, but I couldn't deny the 980Ti was just the better card overall.
If DX12 were relevant for me then maybe a Fury X, but sadly DX12 isn't relevant for me.

I still think the 290X is the best card I ever owned, though (price/performance/longevity)


----------



## Xuper

Quote:


> Originally Posted by *ChevChelios*
> 
> Fury X beats a stock 980Ti in Hitman DX12 (*AMD Gimp Evolved title*), Ashes DX12 and Doom Vulkan (though some benchmarks I've seen show them as equal .. it may depend on the level/location)
> 
> with the latest async patch, Fury X also beats a stock 980Ti by a little bit in RotR DX12 .. though it's still slightly behind the 980Ti's DX11 level in RotR
> 
> I'm not sure about Fury X DX12 vs 980Ti DX11 in WH: TW
> 
> the 1070 is overall ahead of both (or equal to Fury X in AMD-favored titles - https://i.imgur.com/d0eoUbm.png) when all cards are at stock
> 
> with OC it becomes 980Ti = 1070 > Fury X


----------



## sugarhell

Quote:


> Originally Posted by *Farih*
> 
> I'm afraid DX11 is here to stay for another year or maybe two.


Because I know this particular market really well: DX11/DX12 will coexist for the next year, but after that DX11 will be dead for all AAA games.

Especially once Unity and Unreal release their stable versions with DX12, we will see smaller games use DX12 too, because it is way better than DX11.


----------



## Farih

Quote:


> Originally Posted by *sugarhell*
> 
> Because I know this particular market really well: DX11/DX12 will coexist for the next year, but after that DX11 will be dead for all AAA games.
> 
> Especially once Unity and Unreal release their stable versions with DX12, we will see smaller games use DX12 too, because it is way better than DX11.


The sooner the better IMO.
I just don't think DX11 will go away that soon, but here's hoping it will

(might have to go back to AMD then)


----------



## NightAntilli

The majority of games will probably support both DX11 and DX12 for the next year or so, like we're seeing now. Slowly but surely they will move to DX12/Vulkan only. DX11 is too limiting in the long run.

nVidia's influence needs to drop. Whether their cards are better or not, I prefer to support AMD's market share, to reduce nVidia's foothold in the industry. They have been holding it back for way too long.


----------



## Scotty99

What time should we expect to see 1060 reviews? I'm hyped, hoping it's at least twice as fast as the 760 it's replacing.


----------



## flippin_waffles

Quote:


> Originally Posted by *ChevChelios*
> 
> Pascal is the best architecture to have right now, where you don't have to suffer from the atrocious DX11/OpenGL performance of GCN and be forced to rely on DX12 patches, which are scarce and take a long time to come out post-release
> 
> while still getting more benefit out of DX12/Vulkan & async than Maxwell
> 
> I expect reviewers to recommend the 1060 over the ref 480 for the better cooler, less power draw & noise and better performance .. unless your budget tops out at $200 and you can only afford the ref 4GB 480 (in which case you should probably wait for a custom 470 or maybe a 1050 anyway)
> 
> vs a custom 480 it would depend on the prices of specific models, which also vary locally .. but then again no custom 480 will be available for review or sale tomorrow, so can't compare yet anyway


Pascal is the worst architecture to have right now (edit: actually no, Maxwell and Kepler are even worse!). They look dated compared to the competition. Not only do NV GPUs historically crater over time, they have to compete with the competition's GPUs being heavily favored by the newest APIs. This is bad news for NV. An RX 480 is more than enough for last-gen games; it's the future ones I'm interested in, with the new possibilities that DX12 and Vulkan bring.


----------



## ChevChelios

Quote:


> Originally Posted by *flippin_waffles*
> 
> Pascal is the worst architecture to have right now (edit: actually no, Maxwell and Kepler are even worse!). They look dated compared to the competition. Not only do NV GPUs historically crater over time, they have to compete with the competition's GPUs being heavily favored by the newest APIs. This is bad news for NV. An RX 480 is more than enough for last-gen games; it's the future ones I'm interested in, with the new possibilities that DX12 and Vulkan bring.


you should get a red avatar


----------



## flippin_waffles

Quote:


> Originally Posted by *ChevChelios*
> 
> you should get a red avatar


You should get a Ford


----------



## Fb74

Hi guys,

Any rumors about a Ti version?


----------



## magnek

Is there any reason this thread is still open?


----------



## Klocek001

Quote:


> Originally Posted by *magnek*
> 
> Is there any reason this thread is still open?


yes.
for me to post this


you may close it now


----------



## rdr09

Quote:


> Originally Posted by *Klocek001*
> 
> yes.
> for me to post this
> 
> you may close it now


Higher fps with stutter 'cause of latency.


----------



## Klocek001

Quote:


> Originally Posted by *rdr09*
> 
> Higher fps with stutter 'cause of latency.


yeah right

you turn to an issue that has already been fixed and that happened only on certain system configurations... maybe instead point me to a review that doesn't look pathetic for the AIB RX 480 everyone was so confident would break 1.5GHz and put Nvidia out of business in the low-end/mid-range. OC'd vs OC'd, the 480 is 9% faster than the 1060 in Hitman DX12, and that's the highest fps advantage you'll ever see the RX 480 get over the 1060 in any one game out of 10


----------



## rdr09

Quote:


> Originally Posted by *Klocek001*
> 
> yeah right
> 
> you turn to an issue that has already been fixed and that happened only on certain system configurations... maybe instead point me to a review that doesn't look pathetic for the AIB RX 480 everyone was so confident would break 1.5GHz and put Nvidia out of business in the low-end/mid-range. OC'd vs OC'd, the 480 is 9% faster than the 1060 in Hitman DX12, and that's the highest fps advantage you'll ever see the RX 480 get over the 1060 in any one game out of 10


Is it really fixed? I bet some members/owners will disagree.


----------



## Klocek001

Quote:


> Originally Posted by *rdr09*
> 
> Is it really fixed? I bet some members/owners will disagree.


I bet you'll disagree if you find even one person who claims it's not fixed.
I'd gladly look at RX 480 DPC latency; if you claim the 1060 is worse then prove it, show me any data.

Meanwhile I'm gonna show you something else. On top of the almost laughable OC potential compared to all the hype the AIB 480s had, it seems the power draw increase for that 3-4% OC is skyrocketing:

https://www.computerbase.de/2016-07/asus-radeon-rx-480-strix-test/4/#diagramm-leistungsaufnahme-des-gesamtsystems-anno-2205

A 4% OC at the cost of 11% higher power draw. Now the card is pulling close to 190W.
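Quick back-of-the-envelope from just those two figures (my own arithmetic, not numbers taken from the ComputerBase tables):

```latex
% Stock draw implied by "+11% puts it near 190 W":
190\,\mathrm{W} / 1.11 \approx 171\,\mathrm{W}

% Efficiency after the OC: +4\% performance for +11\% power
\frac{1.04}{1.11} \approx 0.937
\;\Rightarrow\; \text{roughly a 6\% drop in perf/W}
```

So the overclock doesn't just buy little headroom, it actively costs efficiency.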


----------



## rdr09

Quote:


> Originally Posted by *Klocek001*
> 
> I bet you will disagree if you find one person that claims it's not fixed.
> I'll gladly see rx480 dpc latency. if you claim 1060 is worse then prove it, show me any data.


I can't even find a 480, lol.

What I'm waiting for is PCPer's (forgot the guy's name) review of the 10XX series DPC latency issue and "fix".

I don't think it will happen.

EDIT: We have this . . .

http://www.overclock.net/t/1605618/nv-pascal-latency-issues-hotfix-driver-now-available/650


----------



## mcg75

Quote:


> Originally Posted by *magnek*
> 
> Is there any reason this thread is still open?


Agreed.


----------

