# [WCCFTech] Nvidia Pascal GTX 1080, 1070 & 1060 Benchmarks Leaked – 3DMark 11 Performance Entries Spotted



## Cyber Locc

http://wccftech.com/nvidia-pascal-3dmark-11-entries-spotted/

I think this sums it up. I so called this.

Quote:


> Multiple potential Nvidia Pascal 3DMark 11 GPU entries have been spotted. The entries are varied and show GPUs with performance figures that range from very high - GTX 980 Ti territory - to entries that show performance closer to the GTX 970.

So the GTX 1080 is on par with the 980 Ti, as I thought it would be. Discuss.

By these accounts I think it's slightly faster than a stock 980 Ti, just as the 1070 is slightly faster than the 970 (my 980 Ti gets a 20k score in FS, but it's factory OCed; from what I have seen, 970s get about 13k).

Note that the i3 in the top one could be bottlenecking the card.

Nope, here is a run with an i3 2100 and a 980 Ti, with a similar score: http://www.3dmark.com/3dm11/10877835


----------



## rancor

Sounds like Nvidia and AMD both aimed at the same spot then, going off the Hitman Polaris demo for AMD. Should be interesting competition.









----------



## clao

If AMD is only going for matching and not besting Nvidia, I'll just get the older cards; probably cheaper that way.


----------



## Robenger

Quote:


> Originally Posted by *clao*
> 
> if AMD is only going for beating and not besting intel then ill just get the older cards probably cheaper that way


What? Intel?


----------



## Dargonplay

I stopped reading after "WCCFT"


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *clao*
> 
> if AMD is only going for beating and not besting intel then ill just get the older cards probably cheaper that way


I am hoping this is a language issue; I'm a bit lost as to what your reasoning is.

AMD vs nVidia on GPUs
AMD vs Intel on CPU..

CPU != GPU

This thread is about nVidia Pascal, so we are looking at a GPU comparison... not sure where you got Intel from.


----------



## Yttrium

Quote:


> Originally Posted by *Dargonplay*
> 
> I stopped reading after "WCCFT"


Yeah, this is a rumor in my book.


----------



## Cyber Locc

Quote:


> Originally Posted by *Yttrium*
> 
> Yeah, This is a rumor in my book.


Umm, kinda hard to dismiss it as a rumor unless you know of another NV card with 8 GB of VRAM, LOL.

Don't trust the source; trust the Firestrike facts: there is no 8 GB NV card on the market. Kind of hard to hoax this.

But denial is a nice river if you want to stay there.


----------



## iLeakStuff

"GTX 1070 slightly faster than GTX 970"? LOL.
Not a chance in hell. That 20k GPU score is from a GTX 1070, if it's even real. There is no way that is a GTX 1080.

You are absolutely nuts if you think Nvidia will launch a GTX 1070 that beats the 970 by 10% and a GTX 1080 that beats a GTX 980 Ti by 10%.


----------



## iLeakStuff

I will also point out that it could be the desktop-class GTX 980 for notebooks being tested here.
They come with 1750 MHz VRAM and can be overclocked to 2000 MHz.

They also score about 17500 in 3DMark 11.
Since 3DMark isn't reporting the GPU clock correctly (540 MHz), this could very well be an overclocked GTX 980 for notebooks.


----------



## Cyber Locc

Quote:


> Originally Posted by *iLeakStuff*
> 
> "GTX 1070 slightly faster than GTX 970"? LOL.
> Not a chance in hell. That 20k GPU score is from a GTX 1070, if it's even real. There is no way that is a GTX 1080.
> 
> You are absolutely nuts if you think Nvidia will launch a GTX 1070 that beats the 970 by 10% and a GTX 1080 that beats a GTX 980 Ti by 10%.


Yep, about as nuts as you thinking a 1070 will beat a 980 Ti.

I agree some of these could be mobile cards; not a 980 mobile, but a Pascal mobile.

Only time will tell.


----------



## Assirra

How is it even remotely possible to have information like that already when the cards are still months away?


----------



## iLeakStuff

Quote:


> Originally Posted by *Cyber Locc*
> 
> Yep, about as nuts as you thinking a 1070 will beat a 980 Ti.
> 
> I agree some of these could be mobile cards; not a 980 mobile, but a Pascal mobile.
> 
> Only time will tell.


Learn to read:
I said the GTX 1070 will probably *match* the GTX 980 Ti and maybe beat it.

Scratch what I said about notebook cards being tested. That is clearly a desktop, since it's using this motherboard:
http://www.gigabyte.com/products/product-page.aspx?pid=3771#ov


----------



## Cyber Locc

Quote:


> Originally Posted by *Assirra*
> 
> How is it even remotely possible to have information like that already when the cards are still months away?


Well, according to Benchlife the cards will be here in May. Anyway, the cards are done, and some people most likely already have samples; they just don't have enough stock to launch yet, plus boxing and all that. But there are definitely Pascal GPUs already made.


----------



## clao

Quote:


> Originally Posted by *Robenger*
> 
> What? Intel?


Nvidia. I changed it, oops.


----------



## Cyber Locc

Quote:


> Originally Posted by *iLeakStuff*
> 
> Learn to read:
> I said the GTX 1070 will probably *match* the GTX 980 Ti and maybe beat it.


Ya, I don't think so. That happened last time when? Oh ya, never. The 1080 will probably beat it, but not by much. An overclocked third-party 1070 may match a reference 980 Ti, but that is the most I would expect, if even that.

The node is too new; the chips will be small. This is more focused on learning the node and cutting down power requirements, I think. Plus they are bringing compute back on that small chip? I will be amazed if we get a lot of gaming performance out of Pascal.


----------



## Cakewalk_S

Nvidia ain't gonna sit on pascal if they're ready to launch the card...gotta be ready to make moar moneyyy!!!


----------



## Cyber Locc

Quote:


> Originally Posted by *Cakewalk_S*
> 
> Nvidia ain't gonna sit on pascal if they're ready to launch the card...gotta be ready to make moar moneyyy!!!


Who said they were ready to launch the card? Having a few hundred cards done is not being ready to launch, lol. Review samples come way before launches; board partners need the cards months in advance.

Once they have enough stock to launch, it takes another 3-4 weeks to get boxing done, then the retailers have to receive the cards and prepare for launch. All of that takes time: months.

The cards have already shown up on shipping manifests; there are finished cards around, and there have been for a while.

However, this does mean that the new cards will have GDDR5, not 5X.

Also, to the rumor people: Khalid was talking about this yesterday, and it's not a rumor, it's a leak. He was saying this is solid and that we would not be happy.


----------



## headd

Quote:


> Originally Posted by *Cyber Locc*
> 
> Ya I dont think so, that happened last time when? oh ya never. The 1080 will probably beat but not by much. And overclocked 3d party 1070 may match a reference 980ti but that is the most I would expect if even that.
> 
> The node is too new, the chips will be small. This is more focused on learning the node and cutting done power req I think. Plus they are bringing compute back on that small chip? I will be amazed if we get a lot of gaming performance with pascal.


Last time a die shrink happened, the GTX 670 beat the GTX 580 by 20%. If this is really the GTX 1070, then it is really bad for NV. Let's hope it's because of the 5-year-old crap benchmark and in games the 1070 will beat the 980 Ti by at least 10-15%.

http://www.computerbase.de/2012-05/test-nvidia-geforce-gtx-670/4/


----------



## nyxagamemnon

Anyone notice the 7680 MB of memory in the first screenshot? Incoming 7.5 GB + 0.5 GB again, rofl.


----------



## iLeakStuff

Nobody would buy a GTX 1070 if it beats the GTX 970 by 10%. That would make it slower than the GTX 980. Stop listening to Locc and his silly estimations.

It would be an epic fail of huge proportions if a new node + a new architecture only netted that.


----------



## Malinkadink

I expect a 1070 to match a 980 Ti, and no less than that, at $349.99. The 1080 will be ~20% faster than a 980 Ti at $549.99, or probably $599.99. A 1080 Ti will most likely be 50% faster than a 980 Ti when it launches. I think those are fairly reasonable assumptions.


----------



## Cyber Locc

Quote:


> Originally Posted by *headd*
> 
> Last time a die shrink happened, the GTX 670 beat the GTX 580 by 20%. If this is really the GTX 1070, then it is really bad for NV. Let's hope it's because of the 5-year-old crap benchmark and in games the 1070 will beat the 980 Ti by at least 10-15%.
> 
> http://www.computerbase.de/2012-05/test-nvidia-geforce-gtx-670/4/


Okay, ya, however back then there were fewer tiers. The 580 was a flagship, as was the 680; the 670 was a cut-down flagship. The 1070 is 4 tiers below the flagship, not 1, for both the 900 series and Pascal. So that's why things have changed, and that past comparison does not apply.

If you think the last-gen flagship will drop 4 tiers, well, you are delusional. If they do, I would gladly add 3 more 980 Tis to my roster at 300 dollars a piece.


----------



## hyp36rmax

Quote:


> Originally Posted by *iLeakStuff*
> 
> "GTX 1070 slightly faster than GTX 970"? LOL.
> Not a chance in hell. That 20k GPU score is from a GTX 1070, if it's even real. There is no way that is a GTX 1080.
> 
> You are absolutely nuts if you think Nvidia will launch a GTX 1070 that beats the 970 by 10% and a GTX 1080 that beats a GTX 980 Ti by 10%.


It may not be as bad as you think; this may just be the smaller equivalent of the 980, with the performance of a 980 Ti and the power efficiency to boot. Nvidia seems to have been trending this way since the GTX 680.


----------



## Cyber Locc

Quote:


> Originally Posted by *hyp36rmax*
> 
> It may not be as bad as you think; this may just be the smaller equivalent of the 980, with the performance of a 980 Ti and the power efficiency to boot. Nvidia seems to have been trending this way since the GTX 680.


I agree, it's focusing on power reduction and re-adding compute. Gaming performance is an afterthought; the 1080 will match the 980 Ti, with much better compute and way less power.

Let's take what we know. A die shrink accompanied by a smaller chip (remember the last time they did a big chip with a die shrink? The 480. Ya, they ain't doing that again).

Bringing compute back, and more CUDA cores (that takes up a lot of the chip; add that to the chip size being much smaller, and the smaller node is pretty much cancelled out).

Reduced power consumption.

And you think a 1070 for 300 will do all that and match a 980 Ti in gaming? Those are some steep dreams.

This is all assuming we are not looking at a 1080/1070/1060 right here. One could be a mobile chip; however, a mobile chip with 8 GB of GDDR5 seems unlikely to me.

We also have one result that shows 7.5 GB and the others at 8 GB, so are the two 8s mobile, and the 7.5 a 1070 or 1080?


----------



## iLeakStuff

Quote:


> Originally Posted by *hyp36rmax*
> 
> It may not be as bad as you think; this may just be the smaller equivalent of the 980, with the performance of a 980 Ti and the power efficiency to boot. Nvidia seems to have been trending this way since the GTX 680.


So they launch a GTX 980 for $500 that matches a GTX 980 Ti, which can be bought for $550 today? Or a GTX 1070 for, say, $400 that beats a $300 GTX 970 by 10%?

Does that even make 1% sense, lol?


----------



## headd

Btw, what GPU score does a stock GTX 980 Ti at 1150-1200 MHz get in 3DMark 11?


----------



## GTRtank

Quote:


> Originally Posted by *Cyber Locc*
> 
> Ya, I don't think so. That happened last time when? Oh ya, never. The 1080 will probably beat it, but not by much. An overclocked third-party 1070 may match a reference 980 Ti, but that is the most I would expect, if even that.
> 
> The node is too new; the chips will be small. This is more focused on learning the node and cutting down power requirements, I think. Plus they are bringing compute back on that small chip? I will be amazed if we get a lot of gaming performance out of Pascal.


The question is, when has it not happened? And these cards are reference, so there is no excuse that they are factory OCed.


----------



## headd

Hmm, my GTX 970 at 1500/8000 gets an 18k GPU score in 3DMark 11. If this is really the 1070 and it only gets 19k, then NV **** up badly.


----------



## hyp36rmax

Quote:


> Originally Posted by *iLeakStuff*
> 
> So they launch a GTX 980 for $500 that match a GTX 980Ti which can be bought for $550 today. Or a GTX 1070 for say $400 that beats a $300 GTX 970 by 10%?
> 
> Does that even makes 1% sense lol?


AMD and Nvidia do it all the time. They'll just phase out the older SKUs. Prices will drop to a certain point with limited quantities, and those who pick one up at said prices score a great deal. The new SKUs that are released will probably focus on power efficiency and performance per watt, and even add features that are not available on the older SKUs to make them desirable... rinse and repeat.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> Okay ya however back then, there was less tiers. 580 was a flagship as was 680, the 670 was a cut down flagship. The 1070 is 4 tiers below the flagship not 1, for both the 900 series and pascal. So thats why things have changed, and that past representation does not apply.
> 
> If you think the last gen flagship will drop 4 tiers well you are delusional. If they do I would gladly add 3 more 980tis to my roster at 300 dollars a piece.


The 680 was the small die, just like the 1070/1080 will be. The 970 is almost as fast as a 780 Ti on the same node. With a die shrink and architecture changes, the 1070/1080 will be faster than the 980 Ti (the 1070 may be even with the 980 Ti).

History tells us that a die shrink/architecture change gives about an 80% gain over the previous comparably sized chip. There is no way a 1080 Ti is going to be 80% faster than a 1080, which is what would happen if the 1080 merely matched a 980 Ti. The 1080 will be ~20% faster than a 980 Ti.


----------



## headd

Quote:


> Originally Posted by *Forceman*
> 
> The 680 was the small die, just like the 1070/1080 will be. The 970 is almost as fast as a 780Ti on the same node. With a die shrink and architecture changes the 1070/1080 will be faster than the 980 Ti (1070 may be even with the 980 Ti).
> 
> History tells us that the die shrink/architecture change gives about 80% gain over the previous comparable size chip. There is no way a 1080Ti is going to be 80% faster than a 1080, which is what would happen if the 1080 matches a 980Ti. The 1080 will be ~20% faster than a 980Ti.


Well, right now the GTX 980 Ti is only 35-40% faster than the GTX 970. I don't see how the 1070 can be only 35-40% faster (or even less) than the GTX 970, given that we have a new node.
Not to mention the GTX 970 is only a 224-bit, 56-ROP, 3.5 GB card.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> The 680 was the small die, just like the 1070/1080 will be. The 970 is almost as fast as a 780Ti on the same node. With a die shrink and architecture changes the 1070/1080 will be faster than the 980 Ti (1070 may be even with the 980 Ti).
> 
> History tells us that the die shrink/architecture change gives about 80% gain over the previous comparable size chip. There is no way a 1080Ti is going to be 80% faster than a 1080, which is what would happen if the 1080 matches a 980Ti. The 1080 will be ~20% faster than a 980Ti.


Doubt it; we will see.

However, the 980 was 7% faster than the 780 Ti. That's on the same node, yes; however, this die shrink is accompanied by a massive increase in compute that will make a large difference.

Also, not sure where you are getting that the 970 was close to the 780 Ti. It was, but so was the 980; the 780 Ti fell in between them, and it will again: slightly faster than the 1070 and slightly slower than a 1080.

And as for the 80% increase from Ti to Ti, well, that is just crazy talk.

Also, the 680 was 16% faster than the 580, and that didn't add compute. So ya, where you are getting 80% I have no idea.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> And as for the 80% increase from Ti to Ti, well, that is just crazy talk.
> 
> Also, the 680 was 16% faster than the 580, and that didn't add compute. So ya, where you are getting 80% I have no idea.


The 680 is not the chip to compare to the 580; the 780 Ti is. The 680 is to the 580 as the 1080 will be to the 980 Ti (about 15-20% faster). If you look back, and I have, new nodes gain about 80% for both AMD and Nvidia.


----------



## Cyber Locc

Quote:


> Originally Posted by *headd*
> 
> Well, right now the GTX 980 Ti is only 35-40% faster than the GTX 970. I don't see how the 1070 can be only 35-40% faster (or even less) than the GTX 970, given that we have a new node.
> Not to mention the GTX 970 is only a 224-bit, 56-ROP, 3.5 GB card.


Very easily. Look at the past few releases and show me 35-45% during a node change; they didn't add compute either, which is what you guys seem to keep missing. The 680 was only 16% faster than the 580, so where do you come off thinking we will see 40% this time?


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> The 680 is not the chip to compare to the 580; the 780 Ti is. If you look back, and I have, new nodes gain about 80% for both AMD and Nvidia.


What are you talking about? Now you are skipping generations, lol.

However, let's do that: 580 vs 780 Ti.

I am seeing 46%; where you are coming up with 80% is beyond me.

46/100 = 0.46, or 46%. Can you math, bro?


----------



## iLeakStuff

Quote:


> Originally Posted by *Cyber Locc*
> 
> What are you talking about? Now you are skipping generations, lol.
> 
> However, let's do that: 580 vs 780 Ti.
> 
> I am seeing 46%; where you are coming up with 80% is beyond me.


Can you please stop posting in this forum if you are doing math, percentages, or comparisons?

We had this discussion with you a couple of weeks back, where it was pretty apparent that you don't know math. We even explained to you how to do it, and now you are making the same mistakes again.

If you did it right, you would get +85% over the GTX 580, actually.


----------



## Cyber Locc

Quote:


> Originally Posted by *iLeakStuff*
> 
> Can you please stop posting in this forum if you are doing math, percentages, or comparisons?
> 
> We had this discussion with you a couple of weeks back, where it was pretty apparent that you don't know math. We even explained to you how to do it, and now you are making the same mistakes again.
> 
> If you did it right, you would get +85% over the GTX 580, actually.


No, my friend, that is doing it backwards, and that is a relative performance chart with a 100% baseline. Please take your backwards math elsewhere. It is 46%.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> What are you talking about? Now you are skipping generations, lol.
> 
> However, let's do that: 580 vs 780 Ti.
> 
> I am seeing 46%; where you are coming up with 80% is beyond me.
> 
> 46/100 = 0.46, or 46%. Can you math, bro?


You do know that the 780 Ti is the same architecture/node (Kepler) as the 680, right? The 780 Ti is one node/architecture change from the 580. Pro tip: check the die sizes.

As for your math, I'll let you figure out how wrong you are there. Hint: if I have a dollar and you have 50 cents, I have twice as much money as you.

Edit: but thanks for posting the chart that proves my point for me.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> You do know that the 780Ti is the same architecture/node (Kepler) as the 680 right? The 780 Ti is one node/architecture change from the 580. Pro tip: check the die sizes.
> 
> And as for your math, I'll let you figure out how wrong you are there.


Dude, I asked the guy at TPU; he told me my math is right and yours is wrong... He made the charts.


----------



## andrews2547

Quote:


> Originally Posted by *Cyber Locc*
> 
> Umm, kinda hard to dismiss it as a rumor unless you know of another NV card with 8 GB of VRAM, LOL.
> 
> Don't trust the source; trust the Firestrike facts: there is no 8 GB NV card on the market. Kind of hard to hoax this.
> 
> But denial is a nice river if you want to stay there.


Unless it comes directly from nVidia or a trusted source (wccftech isn't one), it's a rumour.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> Dude, I asked the guy at TPU; he told me my math is right and yours is wrong... He made the charts.


I hope you don't work in finance/banking. Although maybe that would explain a few things.

Edit: I don't know why I'm bothering, but the 580 is 46% *slower* than the 780 Ti. The 780 Ti is 85% (46/54; remember, the baseline changes) *faster*. Here's a thought experiment: if you have 54 rocks and I have 100 rocks, I have almost twice as many rocks as you. Is 46% almost twice as much? No, but 85% is.
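For anyone still unsure, here is the same arithmetic in a few lines of Python; the 54 and 100 are the chart's relative-performance values for the two cards, so the only thing that changes between the two numbers is which card you use as the baseline:

```python
# Relative-performance values read off the chart:
# GTX 780 Ti = 100 (the chart's baseline), GTX 580 = 54.
ti_780 = 100.0
gtx_580 = 54.0

# "How much slower is the 580?" -- the baseline is the 780 Ti's score.
slower = (ti_780 - gtx_580) / ti_780 * 100   # 46.0

# "How much faster is the 780 Ti?" -- the baseline is the 580's score.
faster = (ti_780 - gtx_580) / gtx_580 * 100  # ~85.2

print(f"580 is {slower:.0f}% slower; 780 Ti is {faster:.0f}% faster")
```

Same gap, two different denominators; both 46% and 85% are "correct" statements, they just answer different questions.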


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> I hope you don't work in finance/banking. Although maybe that would explain a few things.


It's fine, I will call w1zzard to the thread, or you could look through the forums where he has told other people who said the same as you that they are wrong.


----------



## Cyber Locc

Quote:


> Originally Posted by *andrews2547*
> 
> Unless it comes directly from nVidia, or a trusted source (wccftech isn't one), it's a rumour.


I get that aspect; however, like I said, there are no other 8 GB cards on the market, these may not be desktop GPUs, and they don't claim which cards they are. It's not possible to fake a Firestrike result on 3DMark's site, so ya, not sure how that's a rumor. These are real results; which card they are from is the rumor.

I guess it could be Photoshopped, though.


----------



## DNMock

Quote:


> Originally Posted by *Cyber Locc*
> 
> It's fine, I will call w1zzard to the thread, or you could look through the forums where he has told other people who said the same as you that they are wrong.


I'm just gonna leave this here:

http://www.skillsyouneed.com/num/percent-change.html


----------



## Cyber Locc

Quote:


> Originally Posted by *DNMock*
> 
> I'm just gonna leave this here:
> 
> http://www.skillsyouneed.com/num/percent-change.html


Yep, they even have a calculator that does it for you. Guess what number it gives me: 46%...

The problem I am seeing here is people trying to manipulate the percentages into more percentages; however, they already are percentages. If we were dealing with raw FPS numbers, then yes, things would be different; however, he has already done the percentage calculation and given you the result. You do not need to manipulate the data further; he has already given you the answer.

I told them that, and showed them him referencing the same answer I get, which was different from theirs, in every single one of his reviews, and yet you guys keep saying the same tired thing. Sorry, I am going with the guy who made the charts.

As an example, from one of his reviews:

"Compared to the GTX 980, the difference is a sizable 22% at 4K and 20% at 2560x1440; higher resolutions are, again, resulting in a higher performance difference. The GTX 970 is 26% behind at 1920x1080, and AMD's fastest single-GPU card, the R9 290X, is almost 30% slower. AMD's R9 295X2 dual-GPU ends up being faster than a single GTX 980 Ti and, while priced similarly, suffers from all sorts of CrossFire issues in recent games."

Now, if we used the same math that gave 85%, we would get a 28% difference at 4K and a 25% difference at 1440p.

Oh, but we can go on: we would get a 35% difference between the 980 Ti and the 970, and a 42% difference to the 290X.

Now, we could say that is because of the variance between slower and faster. So let's test that.

Well, he says this later:

"Averaged over our testing it increases performance by 16% (+37% vs. GTX 560 Ti!), and easily beats AMD's HD 7970. Achieving such performance levels nowadays has to go with improved performance per Watt, as modern high-end graphics cards are limited by power consumption and heat output."

By your math, it would be 19% faster than the 580; however, he says 16%. But wait, there is more: the 680 would be 58% faster than the 560 Ti, yet again he says it's 37% faster. So who is wrong and who doesn't know math? Me and the guy that did the charts, or everyone else?


----------



## andrews2547

Quote:


> Originally Posted by *Cyber Locc*
> 
> I get that aspect, however like I said there is no other 8gb cards on the market, these may not be desktop gpus and they dont claim what cards they are. Its not possible to fake a firestirke result on 3dmarks site so ya not sure how thats a rumor. These are real results what card they are off is a rumor.
> 
> I guess it could be photo shopped though.


Faking those results can easily be done if you know what you are doing. Those $20-$30 "980 Ti" cards you get on eBay fake those results. This is a rumour until nVidia or a reputable source confirms otherwise.

Not to mention the fact that the very first sentence of the article is "Multiple *potential* Nvidia Pascal 3DMark 11 GPU entries have been spotted."


----------



## MonarchX

I was expecting something MUCH MUCH faster than 980 Ti, at least by 30%...


----------



## Matt26LFC

Quote:


> Originally Posted by *MonarchX*
> 
> I was expecting something MUCH MUCH faster than 980 Ti, at least by 30%...


The OP shouldn't really change that expectation, as it's just a rumor from wccft.

The original poster of said FUD only posted it because it fits with what he predicted for Pascal; that's why he's been defending it, because otherwise he's wrong and clearly can't cope with that.


----------



## Cyber Locc

Quote:


> Originally Posted by *MonarchX*
> 
> I was expecting something MUCH MUCH faster than 980 Ti, at least by 30%...


Well, I just checked the stock 970 result and compared it to the middle card. The difference is 28.8% from the 970 to the 1070, assuming the center card is a 1070.

The top card is 2.91% slower than the 980 Ti.

However, it would be 27.81% over the GTX 980, so if it is a 1080, it has gained about 28% over the 980.

That sounds in line with what I have been expecting and what is realistic: a 30% increase, Titan to Titan.

I will be singing my song to the end: no mid-range Pascal card will beat a Ti by more than a few percent, not even double digits. Where this idea that the 1080 will beat it by 30% came from, IDK, but it's unrealistic.
Quote:


> Originally Posted by *Matt26LFC*
> 
> The OP shouldn't really change that expectation, as it's just a rumor from wccft.
> 
> The original poster of said FUD only posted it because it fits with what he predicted for Pascal; that's why he's been defending it, because otherwise he's wrong and clearly can't cope with that.


Okay, lol, we will see who is wrong in the end, won't we? I doubt it will be me, and I will be sure to laugh for days when I am right. However, unlike you guys, if it is faster, I won't be disappointed; that would be awesome. When your outrageous expectations are wrong, you will be disappointed, though.

I bet if it said the 1080 was beating a 980 Ti by 50%, you would believe it then, lol. These results are reasonable and believable; 30% gains, card for card, is reasonable, which is what we are seeing here. Jumping 4 tiers in cards is not, and it has never happened, ever.

I could see a Volta 1170 matching a 980 Ti, but not a 1070.


----------



## jdstock76

I fail to see this as proof of anything.

On that note: I believe these to be fairly accurate. The i3 is definitely bottlenecking these, though. Definitely called this as well. Looks like all those peeps talking about "OMG I'm not buying unless it's 1000000000000000% better than Maxwell" will have to wait a bit longer.


----------



## Cyber Locc

Quote:


> Originally Posted by *jdstock76*
> 
> I fail to see this as proof of anything.
> 
> On that note: I believe these to be fairly accurate. The i3 is definitely bottlenecking these, though. Definitely called this as well. Looks like all those peeps talking about "OMG I'm not buying unless it's 1000000000000000% better than Maxwell" will have to wait a bit longer.


Only one of these results has an i3, and seeing how this is a single-threaded, very old benchmark (it's not Firestrike, it's the OG 3DMark 11), I doubt it's a bottleneck.

The other two results are with Skylake i5s; they are definitely not bottlenecking anything.

Besides that, I linked a similar run with the same i3 and a 980 Ti, and the results for the 980 Ti are the same as with an i7. It isn't bottlenecking anything.


----------



## i7monkey

The full-chip GP100 had better be 2x as fast as GM200, or else what a fail:

half the die size
twice as many transistors
twice as many CUDA cores


----------



## Cyber Locc

Quote:


> Originally Posted by *i7monkey*
> 
> The full-chip GP100 had better be 2x as fast as GM200, or else what a fail:
> 
> half the die size
> twice as many transistors
> twice as many CUDA cores


LOL, so much delusion, I swear. It could be twice as fast given the same chip size and no compute; neither of those variables lines up, so it won't be anywhere near that.

As for "or else it will fail": hahaha, it could be 10% faster and we would all still buy it in flocks. I know I will.


----------



## Matt26LFC

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well, I just checked the stock 970 result and compared it to the middle card. The difference is 28.8% from the 970 to the 1070, assuming the center card is a 1070.
> 
> The top card is 2.91% slower than the 980 Ti.
> 
> However, it would be 27.81% over the GTX 980, so if it is a 1080, it has gained about 28% over the 980.
> 
> That sounds in line with what I have been expecting and what is realistic: a 30% increase, Titan to Titan.
> 
> I will be singing my song to the end: no mid-range Pascal card will beat a Ti by more than a few percent, not even double digits. Where this idea that the 1080 will beat it by 30% came from, IDK, but it's unrealistic.
> Okay, lol, we will see who is wrong in the end, won't we? I doubt it will be me, and I will be sure to laugh for days when I am right. However, unlike you guys, if it is faster, I won't be disappointed; that would be awesome. When your outrageous expectations are wrong, you will be disappointed, though.
> 
> I bet if it said the 1080 was beating a 980 Ti by 50%, you would believe it then, lol. These results are reasonable and believable; 30% gains, card for card, is reasonable, which is what we are seeing here. Jumping 4 tiers in cards is not, and it has never happened, ever.
> 
> I could see a Volta 1170 matching a 980 Ti, but not a 1070.


Pretty sure I haven't listed any expectations; however, if you wrote 50% gain over the 980 Ti, no, I wouldn't believe that.

I think a 10-15% gain over a 980 Ti from the 1080 is reasonable at 1440p, and say upward of 35% when big Pascal comes along.


----------



## G woodlogger

Quote:


> Originally Posted by *jdstock76*
> 
> I fail to see this as any proof of anything.
> 
> On that note: I believe these to be fairly accurate. i3 is definitely bottlenecking these though. Definitely called this as well. Looks like all those peeps talking about "OMG I'm not buying unless it's 1000000000000000% better than Maxwell" will have to wait a bit longer.


Money saved.

I want to really feel the upgrade.

Coming into this thread from the Dark Zone, phew, what a day!


----------



## Cyber Locc

Quote:


> Originally Posted by *Matt26LFC*
> 
> Pretty sure I haven't listed any expectations; however, if you wrote 50% gain over the 980 Ti, no, I wouldn't believe that.
> 
> I think a 10-15% gain over a 980 Ti from the 1080 is reasonable at 1440p, and say upward of 35% when big Pascal comes along.


I don't see how you think that lines up, though. The thing is, the 1080 is not replacing the 980 Ti; it's replacing the 980, so it is most likely ~30% faster than a 980. Well, so is a 980 Ti. So if your calculations are correct, then a Titan P would have to be 45-50% faster than a Titan X.

A 980 Ti is 31% faster than a 980 in this bench, so if the 1080 is 28% faster than a 980, then it's about 3% slower than a 980 Ti. You want it 45% faster than a 980? Well, then, Titan for Titan would have to be around that, and I think 45% is way, way wishful thinking.

And iLeakStuff's claims of 300-dollar 980 Tis and 1070s that match them are just straight-up insanity by the very definition (repeating the same experiment and expecting a different result).

I think these numbers will be on point. We will see the 980 Ti drop to around 500 and the new 1080 come in at the same price, or it will launch at 650 and the price of the Ti will remain.


----------



## jdstock76

Quote:


> Originally Posted by *Cyber Locc*
> 
> Only 1 of these results has an i3, and seeing how this is a very old, single-threaded benchmark (it's not Fire Strike, it's original 3DMark 11), I doubt it's a bottleneck,
> 
> the other 2 results are with Skylake i5s; they are definitely not bottlenecking anything.
> 
> Besides that, I linked a similar result with the same i3 and a 980 Ti, and the results for the 980 Ti are the same as with an i7. It isn't bottlenecking anything.


Well, I'm not going to throw insults at you like the others, but I can most assuredly say that the i3 will offer less performance than the i5, which offers less than the i7. Albeit not by a lot, but enough to notice.

Quote:


> Originally Posted by *G woodlogger*
> 
> Money saved
> 
> 
> 
> 
> 
> 
> 
> I want to really feel the upgrade.
> 
> Coming into this tread from the Dark Zone phew what a day!


welcome to the Light my son.


----------



## davio

Maths & Physics major here: it's ~85%

The maths:
If there is a change from 54% to 100%, then we can represent the increase as: (1.00 - 0.54)/0.54 ≈ 0.85, an 85% increase.

For reference, it is easily shown here:
https://en.wikipedia.org/wiki/Relative_change_and_difference

Wiki is a very good reference in maths FYI.
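To make the arithmetic concrete, here is a minimal sketch in Python (the `pct_change` helper is just for illustration, not anything from TPU's charts):

```python
def pct_change(old, new):
    """Relative change from old to new, as a percentage of the old value."""
    return (new - old) / old * 100

# The 580 sits at 54% on the chart, the 780 Ti at 100%:
print(round(pct_change(54, 100), 1))   # 85.2 -> the 780 Ti is ~85% faster
print(round(pct_change(100, 54), 1))   # -46.0 -> the 580 is 46% slower

# Note the two numbers differ: "46% slower" is NOT the same as "46% faster",
# because each statement uses a different card as its baseline.
```

The point of the helper is only that the divisor is always the baseline value, which is exactly what the Wikipedia article spells out.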


----------



## Cyber Locc

Quote:


> Originally Posted by *jdstock76*
> 
> Well I'm not going to throw insults at you like the others but I can most assuredly say that the i3 will offer less performance than the i5 which offers less than the i7. Albeit not a lot but enough to notice.
> welcome to the Light my son.


I agree the results may be slightly different. However, they are within the margin of error.

I linked the result in the OP. The result is 19385 for graphics score. My 980 Ti at stock gets around 20k, but I have seen it hit as low as 19678, and that is with a heavily overclocked i7 5820K.

So I do not think that CPU is bottlenecking; it may very well drop the score by a few hundred points, but as you have said, not much.

At any rate, after looking at the 980 Ti result, I don't see it hindering performance by more than 3%. So if we want to give some leeway for the i3, say the top result is tied with a 980 Ti.

As for ileakstuff saying a 980 Ti for $550? Where? I just paid $690 three weeks ago lol.


----------



## prjindigo

Fake entries are fake. 3dmark doesn't have the ID info to make certified entries yet.


----------



## Dargonplay

Jesus, I've seen better math in kindergarten. Many people have already explained how the OP is wrong, so I will refrain from doing it.

As many have said, why would I ever upgrade to something that's only 15% faster than what we have when there are 2nd-hand used GPUs? I can get an R9 290 for as low as $140 by hunting a good deal on eBay for a week.

I can get a Fury for as low as $299 (yes, AMD cards sell cheaper on the 2nd-hand market). So why would I EVER choose to buy a $600 card that's basically 15% faster but costs 100% more than most of these offerings?

Nonsense. This is a business; Nvidia has to make it worth my money. A 1070 will at the very least match a 980 Ti, otherwise I see no reason to spend a cent on these new cards.

Also, the many people saying the 980 was only 5% faster are forgetting that when a video card is freshly released, drivers are often very immature; a new architecture is never used to its full potential at release. I'd not be surprised to see a GTX 1080 that's only 30% faster than a 980 Ti at launch jump to 50% faster a couple of months later, just like the 980 was 5% faster than a 780 Ti at launch only to see its lead grow to over 30% later.

To the guy calling everyone who happens to share this opinion "delusional": save it, don't quote me. If I am to be corrected, I'd prefer someone with better math.


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> Jesus, I've seen better math in kindergarten. Many people have already explained how the OP is wrong, so I will refrain from doing it.
> 
> As many have said, why would I ever upgrade to something that's only 15% faster than what we have when there are 2nd-hand used GPUs? I can get an R9 290 for as low as $140 by hunting a good deal on eBay for a week.
> 
> I can get a Fury for as low as $299 (yes, AMD cards sell cheaper on the 2nd-hand market). So why would I EVER choose to buy a $600 card that's basically 15% faster but costs 100% more than most of these offerings?
> 
> Nonsense. This is a business; Nvidia has to make it worth my money. A 1070 will at the very least match a 980 Ti, otherwise I see no reason to spend a cent on these new cards.
> 
> Also, the many people saying the 980 was only 5% faster are forgetting that when a video card is freshly released, drivers are often very immature; a new architecture is never used to its full potential at release. I'd not be surprised to see a GTX 1080 that's only 30% faster than a 980 Ti at launch jump to 50% faster a couple of months later, just like the 980 was 5% faster than a 780 Ti at launch only to see its lead grow to over 30% later.
> 
> To the guy calling everyone who happens to share this opinion "delusional": save it, don't quote me. If I am to be corrected, I'd prefer someone with better math.


Lol, then we'd better stop the presses, because my math is taken from W1zzard himself lol.

Regardless, we will see who is right when the GPUs launch. You think the 1070 will get a 50-60% boost over the 970; that just isn't likely imo. Aside from the fact that it has never happened, and will not this time lol.

The thing is, obviously my logic isn't flawed, as no x70 card has matched the flagship of the prior gen in the generations with Tis. If you cannot understand tiers, then I don't know what to tell you.

Never has one year's flagship been the next year's low-midrange; x70s used to be high-end cards and they are not anymore. If you think 980 Tis will be $300, then show me proof of that ever happening. You can't, because it doesn't happen.

You're right, it is a business, so they will give small incremental upgrades every year and people will buy them, rather than give no updates for 2 years with 1 big one.

You say that no one will buy it; however, tons of people replaced their 780 Tis with 980s that were slower/on par, and those same people then upgraded to Tis a few months later. You must not understand this market.

Again, if you have a problem with my math, then we'd better stop listening to TechPowerUp.

I understand what you guys are saying; however, TPU says in one review that it is a speed increase and in the next that it's a decrease, so which is it? You are making assumptions about what he is doing.

He is the problem, not me, so take it up with him. I am simply parroting him for the 1000th time. I am using his math, since he made the graphs, period!


----------



## davio

Quote:


> Originally Posted by *Cyber Locc*
> 
> Lol, then we'd better stop the presses, because my math is taken from W1zzard himself lol.
> 
> Again, if you have a problem with my math, then we'd better stop listening to TechPowerUp results at all, as he apparently can't do math....
> 
> There is nothing wrong with my math. You guys are doing math for something that doesn't need it. What you are not getting is that the math has already been done for you. His graph is already calculated; you just need to look at it. Instead you try to overcomplicate it and overthink it and come up with padded numbers.
> 
> Take the Fire Strike results and do the calculations, and you will get the same result as he already shows, not the ones you get with your "math".


Your lack of thought is so laughable it's actually disturbing. I have provided you with a link to Wikipedia, which shows how to do percentages of percentages properly. Maybe you should stop being ignorant and have a look? Or do you prefer to live in your own closed world?

W1zzard may not be a math whiz; he is a person, and people make mistakes. You're now carrying over mistakes without even giving them a thought. Don't parrot him; think for yourself.


----------



## Cyber Locc

Quote:


> Originally Posted by *davio*
> 
> Your lack of thought is so laughable it's actually disturbing. I have provided you with a link to Wikipedia, which shows how to do percentages of percentages properly. Maybe you should stop being ignorant and have a look? Or do you prefer to live in your own closed world?
> 
> W1zzard may not be a math whiz; he is a person, and people make mistakes. You're now carrying over mistakes without even giving them a thought.


Dude, I understand what you are saying; however, depending on how he made the graphs, the outcome can be different, and we do not know how he made them.

Did he base them on the decrease or on the increase? He doesn't specify, and he muddies it by making statements that suggest he made the graph differently than what you seem to think.

When a question comes up of interpreting a graph, any logical human being is going to listen to the man that made the graph.

Not to people making assumptions about a graph they didn't make, with no knowledge of how it was made.

I am not saying he is wrong, nor are you. I am saying that I am taking his method, since he made the graph; that's it. Why is that so difficult to understand?

And why this is even continuing is beyond me. Everyone is in agreement with my numbers aside from the 580 to 780 Ti ones, which skip a generation and are not relevant at all.

I have written W1zzard directly, not TPU, here today to get to the bottom of this; until then, drop it. We don't even need to talk about TPU here; we have Fire Strike results.

With the Fire Strike results, I have no issues with you guys' math.

So move on from it, please and thanks.

Furthermore, what is the point of the 580-to-780 Ti comparison? That is flagship to flagship with a shrink and a skipped series in there. So what, we are to expect Pascal to be an 80% upgrade from Ti to Ti? I very, very highly doubt that. We saw that with a gap; Volta will probably be an 80% gap Titan to Titan, but Pascal, yeah, I don't see that happening.

Sorry for the grammar, I'm on my phone; will edit when on a PC in a bit.


----------



## mcg75

I respect TPU and the work they put into doing their numbers very much.

I really never looked at any of his conclusions much before but now I can see why everyone is talking about it.

980 Ti Matrix - 100%
980 Ti stock - 80%

He states the Matrix is 20% faster than stock. This is simply incorrect.

In this case, the stock 980 Ti is the baseline. 20% of 80% is only 16% not 20%.

It would be correct to say the 980 Ti stock is 20% slower.

But the Matrix is 25% faster than 980 Ti stock.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> And why this is even continuing is beyond me. Everyone is in agreement with my numbers aside from the 580 to 780 Ti ones, which skip a generation and are not relevant at all.


How do you still not get that 580 to 780 Ti isn't skipping a generation? The 680 and 780 are the same generation, the 780 is just a larger chip. It's still Fermi to Kepler.
Quote:


> Originally Posted by *mcg75*
> 
> I respect TPU and the work they put into doing their numbers very much.
> 
> I really never looked at any of his conclusions much before but now I can see why everyone is talking about it.
> 
> 980 Ti Matrix - 100%
> 980 Ti stock - 80%
> 
> He states the Matrix is 20% faster than stock. This is simply incorrect.
> 
> In this case, the stock 980 Ti is the baseline. 20% of 80% is only 16% not 20%.
> 
> It would be correct to say the 980 Ti stock is 20% slower.
> 
> But the Matrix is 25% faster than 980 Ti stock.


Yeah, it came up in the other thread as well. Someone was going to tell him to fix his math, not sure if they did or not.


----------



## Dargonplay

Quote:


> Originally Posted by *Cyber Locc*
> 
> Lol then we better stop the presses because my math is taken from w1zard himself lol.
> 
> Regardless we will see who is right when the gpus launch. You think that that 1070 Will get a 50-60% boost from the 970, that just isn't likely imo. Aside from the fact that it has never happened and will not this time lol.
> 
> Thing is obviously my logic isn't flawed as no 70 card has matched the flagsip of the gen prior for th gems with TIs, if you can not understand tiers then I don't know what to tell you.
> 
> Never had a flagship this year been the low mid range of next, x70 use to be high and cards they are not anymore. If you think that 980TIs will be 300 dollars than show me proof of that ever happening. You can't because it doesnt.
> 
> Your right it is a business so they will give small incremental upgrades every year and people will buy them rather than give no updates for 2 years with 1 big one.
> 
> You say that no one will buy it, however tons of people replaced the re 780TIs with 980s that were slower/on par, those same people then upgraded to TIs a few months later, you must not understand this market.
> 
> Again if you have a problem with my math than we better stop listening to techpowerup.
> 
> I understand what you guys are saying, however tpu Says in 1 review that is a speed increase and in the next he says it's a decrease so which is it? You are making assumptions about what he is doing.
> 
> He is the problem not me, so take it up with him. I am simply parroting him for the 1000th time, I am using his mathsince he made the graphs period!


The graph is fine; it's your interpretation that's wrong. I'd bother to explain if you were willing to listen, but since lots of people have explained this, I'm sure you would ignore it anyway.

"Thing is obviously my logic isn't flawed as no 70 card has matched the flagsip of the gen prior for th gems with TIs, if you can not understand tiers then I don't know what to tell you. "

What the hell are you even saying? If you're going to lie so hard to people's faces, at least try to be funny; otherwise it's cringeworthy and disappointing.












A 970 is beyond doubt faster than the flagship card of the previous generation. Please duly note that both cards were made on the same fabrication process, and yet there are such huge gains in performance. A node shrink will take this to a whole different level, especially when both AMD and Nvidia have been perfecting architectures for 16nm and 14nm for so long; this node shrink has been delayed so much that both companies are MORE than prepared for it.

Now please stop. It's embarrassing.


----------



## iluvkfc

Quote:


> Originally Posted by *davio*
> 
> Maths & Physics major here: it's ~85%
> 
> The maths:
> If there is a change from 54% to 100%, then we can represent the increase as: (1.00 - 0.54)/0.54 ≈ 0.85, an 85% increase.
> 
> For reference, it is easily shown here:
> https://en.wikipedia.org/wiki/Relative_change_and_difference
> 
> Wiki is a very good reference in maths FYI.


No need to be a math/physics major to know this... Or maybe it is in the American education system, which is why that guy doesn't get it.


----------



## Cyber Locc

Quote:


> Originally Posted by *mcg75*
> 
> I respect TPU and the work they put into doing their numbers very much.
> 
> I really never looked at any of his conclusions much before but now I can see why everyone is talking about it.
> 
> 980 Ti Matrix - 100%
> 980 Ti stock - 80%
> 
> He states the Matrix is 20% faster than stock. This is simply incorrect.
> 
> In this case, the stock 980 Ti is the baseline. 20% of 80% is only 16% not 20%.
> 
> It would be correct to say the 980 Ti stock is 20% slower.
> 
> But the Matrix is 25% faster than 980 Ti stock.


That is what I am trying to tell them. I am not trying to be rude, troll, or insult anyone.

I am being told 2 different things and am stuck on who to believe, and since he made the graph, he gets the benefit of the doubt. Maybe he is wrong; judging by the number of people saying he is, he most likely is.

However, it is hard not to go with the guy that made the graphs, you know what I mean?

Forceman, fine, skipping a series, is that better? I understand that the 680 and 780 are on the same node. However, there are performance increases in the 600 series and then again in the 700 series.

I am sure the full die shrink will be good for an 80% improvement over Maxwell when both dies are the same size and TDP; however, no Pascal part will have the same die size. They are also trading graphics performance for massive compute performance.

So we will not see 80% until the dies are the same size. That won't happen until Volta, I don't think.


----------



## Cyber Locc

Quote:


> Originally Posted by *iluvkfc*
> 
> No need to be a math/physics major to know this... Or maybe it is in American education system which is why that guy doesn't get it.


I really don't understand why I keep having to repeat myself....

Maybe I am bad at math, but it seems everyone else is bad at reading.

For the last time, I am following TPU'S MATH; HE MADE the GRAPH......

The math you guys are doing may be right, but he made the graph... Who knows how his graph works, or how it's ordered or sorted? Is there a breakdown or description of it? No, there isn't, so all we can go by is the way he interprets it, and he is doing what I AM doing, period.


----------



## mcg75

Quote:


> Originally Posted by *mcg75*
> 
> I respect TPU and the work they put into doing their numbers very much.
> 
> I really never looked at any of his conclusions much before but now I can see why everyone is talking about it.
> 
> 980 Ti Matrix - 100%
> 980 Ti stock - 80%
> 
> He states the Matrix is 20% faster than stock. This is simply incorrect.
> 
> In this case, the stock 980 Ti is the baseline. 20% of 80% is only 16% not 20%.
> 
> It would be correct to say the 980 Ti stock is 20% slower.
> 
> But the Matrix is 25% faster than 980 Ti stock.


Ok, now that I've had a chance to look at this closer, I see what TPU is doing but I'm just not sure why.

Taking the numbers from the 980 Ti Matrix review:

980 Ti stock has a total of 620.3 fps across 15 games for an average of 41.35 fps.

980 Ti Matrix has a total of 743.0 fps across 15 games for an average of 49.53 fps.

49.53 / 41.35 = 1.1978 or 20%.

So his conclusion based on his numbers is correct. The percentage chart is not correct.

If the 980 Ti Matrix is the baseline card at 100%, the 980 Ti Stock should be 83% not 80%.

All these numbers are from the 4K test only.
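As a sanity check, the arithmetic above can be reproduced in a few lines of Python, using the totals quoted from the review:

```python
# Totals across the 15-game 4K test, from the 980 Ti Matrix review
stock_total, matrix_total, games = 620.3, 743.0, 15

stock_avg = stock_total / games    # 41.35 fps
matrix_avg = matrix_total / games  # 49.53 fps

# "The Matrix is X% faster" treats the stock card as the baseline:
print(round((matrix_avg / stock_avg - 1) * 100, 1))  # 19.8 -> the "20% faster" conclusion holds

# The chart pins the Matrix at 100%, so the stock card should sit at:
print(round(stock_avg / matrix_avg * 100, 1))  # 83.5 -> not the 80% the chart shows
```

Both lines use the same data; only the choice of baseline differs, which is exactly why the conclusion and the chart disagree.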


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> The graph is fine; it's your interpretation that's wrong. I'd bother to explain if you were willing to listen, but since lots of people have explained this, I'm sure you would ignore it anyway.
> 
> "Thing is obviously my logic isn't flawed as no 70 card has matched the flagsip of the gen prior for th gems with TIs, if you can not understand tiers then I don't know what to tell you. "
> 
> What the hell are you even saying? If you're going to lie so hard to people's faces, at least try to be funny; otherwise it's cringeworthy and disappointing.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> A 970 is beyond doubt faster than the flagship card of the previous generation. Please duly note that both cards were made on the same fabrication process, and yet there are such huge gains in performance. A node shrink will take this to a whole different level, especially when both AMD and Nvidia have been perfecting architectures for 16nm and 14nm for so long; this node shrink has been delayed so much that both companies are MORE than prepared for it.
> 
> Now please stop. It's embarrassing.


On the graph, see above; I am done saying the same thing over and over.

On the 970, yeah, after how many years of driver releases was that? Look at TPU's launch review; it shows a different story.

Are we saying the 1070 will be faster at launch, or in 2 years?


----------



## davio

Quote:


> Originally Posted by *iluvkfc*
> 
> No need to be a math/physics major to know this... Or maybe it is in American education system which is why that guy doesn't get it.


Haha, I asked myself whether I should say it or not. I'm from Australia









yeah I just thought it gave some more substance to what I was saying lol


----------



## Cyber Locc

Quote:


> Originally Posted by *mcg75*
> 
> Ok, now that I've had a chance to look at this closer, I see what TPU is doing but I'm just not sure why.
> 
> Taking the numbers from the 980 Ti Matrix review.........
> 
> 980 Ti stock has a total of 620.3 fps across 15 games for an average of 41.35 fps.
> 
> 980 Ti Matrix has a total of 743.0 fps across 15 games for an average of 49.53 fps.
> 
> 49.53 / 41.35 = 1.1978 or 20%.
> 
> So his conclusion based on his numbers is correct. The percentage chart is not correct.
> 
> If the 980 Ti Matrix is the baseline card at 100%, the 980 Ti Stock should be 83% not 80%.
> 
> All these numbers are from the 4K test only.


That is the way I have been doing the math, and I am being told that I am wrong.

He is doing his chart on increase, not decrease. So if his chart says a 46% increase, it means a 46% increase, not a 46% decrease.


----------



## ZealotKi11er

The GTX 970 is not faster than the GTX 780 Ti except in very specific games where Maxwell has a big advantage. The only reason the GTX 970 wins in most benchmarks is clock speeds. If you OC both cards, the GTX 780 Ti will beat it.


----------



## Pyrotagonist

The 580 gives 54% of the 780 Ti's performance, the 780 Ti is 85% faster than the 580 - these statements are equivalent. Note that "780 Ti is 46% faster than the 580" is not one of them.

Also, at the 970 launch, it (the reference clocked one) was already 94.2% of the 780 Ti, or equivalently, the 780 Ti was 6.1% faster.


Quote:


> He is doing his chart on increase not decrease. So his chart says a 46% increase it means a 46% increase not a 46% decrease.


Absolutely unreal. This is stuff we learn in 4th grade science class.

The 580 has 54 points, the 780 Ti has 100 points. 54 * 1.46 (a 46% increase) yields 78.8 points. 78.8 != 100.


----------



## Cyber Locc

Quote:


> Originally Posted by *Pyrotagonist*
> 
> The 580 gives 54% of the 780 Ti's performance, the 780 Ti is 85% faster than the 580 - these statements are equivalent. Note that "780 Ti is 46% faster than the 580" is not one of them.
> 
> Also, at the 970 launch, it (the reference clocked one) was already 94.2% of the 780 Ti, or equivalently, the 780 Ti was 6.1% faster.
> 
> 
> Absolutely unreal. This is stuff we learn in 6th grade science class, probably earlier, like 4th grade.
> 
> The 580 has 54 points, the 780 Ti has 100 points. 54 * 1.46 (a 46% increase) yields 78.8 points. 78.8 != 100.


I agree on the 970; a lot of people seem to have selective memory and selective benchmarks.

On the other note, I get what you're saying, but TPU's charts read as increase-based in one place and the opposite in another. So who is right? We need another bench site with a similar system, or someone to take the time to check his results.

Or just wait for his reply to me lol.
Quote:


> Originally Posted by *Pyrotagonist*
> 
> The 580 gives 54% of the 780 Ti's
> Absolutely unreal. This is stuff we learn in 4th grade science class.
> The 580 has 54 points, the 780 Ti has 100 points. 54 * 1.46 (a 46% increase) yields 78.8 points. 78.8 != 100.


MK, I am seriously getting tired of this. By your math, the 980 Ti would be a 23% increase; however, W1zzard says in the darn comments that the increase is 20%, not 23....

Seriously, I don't know how many different ways I can say the same thing. You guys seriously need to stop calling me hard-headed and go look for yourself. How many days and how many people have argued about this, and yet only 2 have actually gone and said, "oh, he's right, W1zzard's calcs are apparently wrong." Seriously, does no one here know how to read? Or do they just like to blindly assume and insult people?

For 5 secs, realize that you didn't make the graph and you do not know how he made it. He could very easily have done all the calculations himself and said, okay, there is a 46% increase for a 780 Ti over a 580, so I will structure the chart to read that increase and work up from 0, not down from 100.

You are all assuming that his chart goes down from 100; however, it could be going up from zero, rendering your math wrong. How none of you are able to grasp this is beyond me.


----------



## davio

Pyrotagonist alludes to the fact that there is a reciprocal relationship between increase and decrease, so it doesn't matter which way.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Pyrotagonist*
> 
> The 580 gives 54% of the 780 Ti's performance, the 780 Ti is 85% faster than the 580 - these statements are equivalent. Note that "780 Ti is 46% faster than the 580" is not one of them.
> 
> Also, at the 970 launch, it (the reference clocked one) was already 94.2% of the 780 Ti, or equivalently, the 780 Ti was 6.1% faster.
> 
> 
> Absolutely unreal. This is stuff we learn in 4th grade science class.
> 
> The 580 has 54 points, the 780 Ti has 100 points. 54 * 1.46 (a 46% increase) yields 78.8 points. 78.8 != 100.


Basically most people pick the % which favors their side. % Difference, % Faster, % Slower.


----------



## davio

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Basically most people pick the % which favors their side. % Difference, % Faster, % Slower.


too true


----------



## Pyrotagonist

The 970 as far as I'm concerned matches the 780 Ti, even at launch. That's the point I was making; a 6% difference is still well within the same performance tier.

http://anandtech.com/bench/product/1072?vs=1350

780 Ti beats the 580 by around 90% in almost all games.


----------



## iluvkfc

Quote:


> Originally Posted by *Cyber Locc*
> 
> I really don't understand why I keep having to repeat myself....
> 
> Maybe I am bad at math, but it seems everyone else is bad at reading.
> 
> For the last time, I am following TPU'S MATH; HE MADE the GRAPH......


Ok, he might have made a mistake in how he made the percentage graphs, although I highly doubt it. Let's go to the review these performance numbers come from, https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780_Ti/1.htm, and compare actual FPS numbers, not percentages.

There is no actual FPS data for the 580, so let's consider the 680 vs the 780 Ti. A typical game to consider is Crysis 3 at 1080p (it should be fairly representative, but if you insist you can add up the FPS numbers for all games and do the average). We find the 680 gets 32.0 FPS and the 780 Ti gets 48.8 FPS.

*% difference using 680 as a baseline: |48.8-32|/32 = 52% (780 Ti faster)
% difference using 780 Ti as a baseline: |32-48.8|/48.8 = 34% (680 slower)*

Now let's go to the performance summary.

*% difference using 680 as a baseline: |100-73|/73 = 37%(780 Ti faster)
% difference using 780 Ti as a baseline: |73-100|/100 = 27% (680 slower)*

*Wait... wut? Is he right? Although there are a LOT of games in that review that are CPU bottlenecked, e.g. SC2, Skyrim... so maybe that is skewing the data.*
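The baseline asymmetry in those numbers is easy to check directly; a short sketch (the `pct_gap` helper is illustrative, not from the review):

```python
def pct_gap(a, b, baseline):
    """Absolute gap between a and b, as a percentage of the chosen baseline."""
    return abs(a - b) / baseline * 100

gtx680, ti = 32.0, 48.8  # Crysis 3 @ 1080p fps quoted above

print(round(pct_gap(gtx680, ti, gtx680)))  # 52 -> "the 780 Ti is 52% faster"
print(round(pct_gap(gtx680, ti, ti)))      # 34 -> "the 680 is 34% slower"

# Same two cards, same data: the quoted percentage depends entirely on
# which card is chosen as the baseline.
```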


----------



## Dargonplay

Quote:


> Originally Posted by *ZealotKi11er*
> 
> GTX970 is not faster then GTX780 Ti unless very specific games where Maxwell has big advantage. The only reason GTX970 wins in most benchmarks is because of clock speeds. If you OC both cards GTX780 Ti will beat it.


You could OC both cards and the 970 would still be faster.

OC headroom should be an important factor, but comparing cards only by their OC is silly; that's why only Linus Tech Tips does it. There are some 780 Tis that couldn't even get to 1100MHz, while there are 970s that can reach up to 1700MHz. There is always some variance, hence it isn't a reliable way to measure one chip's performance capabilities.
Quote:


> Originally Posted by *ZealotKi11er*
> 
> GTX970 is not faster then GTX780 Ti unless very specific games where Maxwell has big advantage.


And what advantage is that exactly? Tessellation? Geometry? Even if what you say is true and Maxwell does have an advantage over Kepler, isn't that the whole point? Of course a newer architecture must have some advantage over the old one, advantages that will end up making it faster.

There is no game released in the last 6 months where a 970 is not faster than a 780 Ti.
Quote:


> Originally Posted by *Cyber Locc*
> 
> On the 970, yeah, after how many years of driver releases was that? Look at TPU's launch review; it shows a different story.
> 
> Are we saying the 1070 will be faster at launch, or in 2 years?


Again, very cringeworthy. The 900 series hasn't existed for "years", and it didn't need "years" of driver maturity for a 970 to trash a 780 Ti on $/performance.

The 970 became a faster card than the 780 Ti just a couple of months after release, and now you want to compare overly mature drivers with overly immature drivers? Suit yourself, although you can't deny that only a few months after launch the 970 was the faster card, and this was achieved on the already perfected, over-exploited fabrication process that both generations shared.

Pascal, on the other hand, is coming with a HUGE node shrink, one of the biggest jumps in the whole history of graphics cards (in terms of percentage of shrinkage), and you're here with delusions of insignificance. It's not even remotely logical.


----------



## criminal

Lol... Not the bad math thread again.

The 1070 will be as fast as or faster than the 980 Ti. No doubt in my mind.


----------



## Pyrotagonist

Quote:


> Originally Posted by *iluvkfc*
> 
> Ok he might have made a mistake in how he made the percentage graphs, although I highly doubt it. Let's go to the review from where these performance numbers are: https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780_Ti/1.htm. And compare actual FPS numbers and not percentage.
> 
> There is no actual FPS data for the 580 so let's consider the 680 vs 780 Ti. A typical game to consider is Crysis 3 at 1080p (should be fairly representative but if you insist you can add FPS numbers for all games and do the average). We find 680 gets 32.0 FPS and 780 Ti gets 48.8 FPS.
> 
> *% difference using 680 as a baseline: |48.8-32|/32 = 52% (780 Ti faster)
> % difference using 780 Ti as a baseline: |32-48.8|/48.8 = 34% (680 slower)*
> 
> Now let's go to the performance summary.
> 
> *% difference using 680 as a baseline: |100-73|/73 = 37%(780 Ti faster)
> % difference using 780 Ti as a baseline: |73-100|/100 = 27% (680 slower)*
> 
> *Wait... wut? Is he right? Although there are a LOT of games in that review that are CPU bottlenecked, e.g. SC2, Skyrim... so maybe that is skewing the data.*


Nah, most games had the 780 Ti being about 37% faster. 780 Ti is just a Crysis 3 beast. It even beats the 980 in that particular game. The charts are probably right.

Anandtech's bench shows the 780 Ti at a huge advantage compared to the 580, consistent with the 85% the TPU chart suggests: http://anandtech.com/bench/product/1072?vs=1350


----------



## iluvkfc

Quote:


> Originally Posted by *Pyrotagonist*
> 
> Nah, most games had the 780 Ti being about 37% faster. 780 Ti is just a Crysis 3 beast. It even beats the 980 in that particular game. The charts are probably right.
> 
> Anandtech's bench shows the 780 Ti at a huge advantage compared to the 580, consistent with the 85% the TPU chart suggests: http://anandtech.com/bench/product/1072?vs=1350


Yeah, and it's also kind of hard to believe a 780 Ti is only 46% faster than a 580. But this exposes a problem with old TPU benchmarks that include games no one would ever consider when debating high-end graphics cards (so in the end the graph looks skewed compared to normal games). Glad they got rid of those games.


----------



## renx

I miss those days when rumours and fakes used to overrate the upcoming product.

High-end Pascal will be way better than what those benchmarks show.



----------



## guttheslayer

It won't be impossible for the X70 to be as fast as the GTX 980 Ti, but the price tag won't be anywhere below $400 (the GTX 670 was around that price).

The X80 will be at most 20-25% faster than both the 980 Ti and the X70.

That benchie, if it's real, is either underclocked, on testing drivers, or simply the X70. The X70 should come with 8GB as well.


----------



## EightDee8D

meth and math aren't the same lol.


----------



## Randomdude

Quote:


> Originally Posted by *Cyber Locc*
> 
> What are you talking about now you are skipping generations lol.
> 
> However lets do that, 580 vs 780ti
> 
> 
> 
> *I am seeing 46% where you are coming up with 80% is beyond me.
> *
> 46/100 = .46 or 46% can you math bro?


Oh my God, please educate yourself. Math is a science that a lot of people claim they have a clue about for _whatever reason_ and that is something I cannot fathom, but when you can't even do simple percentages you should really stop believing in yourself so much and have a reality check when everyone on the forum is wasting time on correcting your blatant ignorance while you keep insisting that you are right. Also PLEASE don't say take it up with wizz1rd, it's not his mistake, it's simply and truly only your lack of knowledge on subjects you believe you have a clue about. Work on that ego a bit, get down to earth - would do you wonders. This post really irked me.

EDIT: Yes, it is beyond you.


----------



## littledonny

Who cares about 3DM11 P scores? No one plays at that resolution.


----------



## Cyber Locc

Quote:


> Originally Posted by *Pyrotagonist*
> 
> The 970 as far as I'm concerned matches the 780 Ti, even at launch. That's the point I was making; a 6% difference is still well within the same performance tier.
> 
> http://anandtech.com/bench/product/1072?vs=1350
> 
> 780 Ti beats the 580 by around 90% in almost all games.


Mk, well, it's not, as you are proving my point: the 980 is only 5% faster than the 780 Ti, which places the Ti in between them both, which is exactly what I have been saying this entire time. The 1070 will be slower than the Ti and the 1080 will be faster than the Ti, not by much in either direction, as both cards will improve on their counterparts by ~30%.


----------



## Cyber Locc

Quote:


> Originally Posted by *Randomdude*
> 
> Oh my God, please educate yourself. Math is a science that a lot of people claim they have a clue about for _whatever reason_ and that is something I cannot fathom, but when you can't even do simple percentages you should really stop believing in yourself so much and have a reality check when everyone on the forum is wasting time on correcting your blatant ignorance while you keep insisting that you are right. Also PLEASE don't say take it up with wizz1rd, it's not his mistake, it's simply and truly only your lack of knowledge on subjects you believe you have a clue about. Work on that ego a bit, get down to earth - would do you wonders. This post really irked me.
> 
> EDIT: Yes, it is beyond you.


Okay, I will learn math when you learn how to read, deal?

I am not believing in myself, I am believing in the guy that made the graph. LEARN HOW TO READ... I have said this over 20 times in this thread alone and seriously you still can't read it.

The guy that made the graph uses the same math as me, so I am going to use his. How is this so hard to comprehend?

And then everyone says this happens every year; I am sure it does, as you are saying that W1zzard's math is wrong.


----------



## zealord

people are too pessimistic about Pascal/1070/1080 and think too highly of a 980 Ti, which isn't even a full-fat card.

I am sure that those cards in the 3DMark benchmarks either aren't the 1070/1080, or something (whatever it is) is restricting those cards. I am convinced that what we see there won't be the 1070/1080 we get to see on launch day.


----------



## Randomdude

Quote:


> Originally Posted by *Cyber Locc*
> 
> *Okay, I will learn math when you learn how to read, deal?*
> 
> I am not believing in myself, I am believing in the guy that made the graph. LEARN HOW TO READ... I have said this over 20 times in this thread alone and seriously you still can't read it.
> 
> The guy that made the graph uses the same math as me, so I am going to use his. How is this so hard to comprehend?


I'm afraid I have a head-start of about 15 years.


----------



## Cyber Locc

Quote:


> Originally Posted by *Randomdude*
> 
> I'm afraid I have a head-start of about 15 years.


Obviously not, as you still can't read.

I would also hope that if you had 15 years on me (which would put you well into your 50s, btw) you would have learned not to write in text walls and to use periods. I didn't even read what you said, as it's gibberish to me.


----------



## Dargonplay

Quote:


> Originally Posted by *Cyber Locc*
> 
> Okay, I will learn math when you learn how to read, deal?
> 
> I am not believing in myself, I am believing in the guy that made the graph. LEARN HOW TO READ... I have said this over 20 times in this thread alone and seriously you still can't read it.
> 
> The guy that made the graph uses the same math as me, so I am going to use his. How is this so hard to comprehend?
> 
> And then everyone says this happens every year; I am sure it does, as you are saying that W1zzard's math is wrong.


It doesn't matter what the guy said; the graph doesn't seem faulty, as it makes sense and is consistent with real-life results. You shouldn't be guided by what "the guy" is saying, but by understanding the process behind the graph and doing the math yourself to either corroborate or debunk it.

And in the event that the graph is faulty, we shouldn't be using it in the first place; either way you're doing it wrong.
Quote:


> Originally Posted by *Cyber Locc*
> 
> I am not believing in myself I am believing in the guy that made the graph


So you're saying you're not at fault for believing in a faulty graph, because the guy who made it told you something. That's assuming the graph is faulty, which I believe it isn't; the way you're interpreting the numbers is.


----------



## Randomdude

Quote:


> Originally Posted by *Cyber Locc*
> 
> Obviously not as you still cant read.


Quote:


> Originally Posted by *Randomdude*
> 
> Oh my God, please educate yourself. Math is a science that a lot of people claim they have a clue about for _whatever reason_ and that is something I cannot fathom, but when you can't even do simple percentages you should really stop believing in yourself so much and have a reality check when everyone on the forum is wasting time on correcting your blatant ignorance while you keep insisting that you are right. *Also PLEASE don't say take it up with wizz1rd, it's not his mistake, it's simply and truly only your lack of knowledge on subjects you believe you have a clue about.* Work on that ego a bit, get down to earth - would do you wonders. This post really irked me.
> 
> EDIT: Yes, it is beyond you.


Quote:


> Originally Posted by *Cyber Locc*
> 
> Okay, I will learn math when you *learn how to read*, deal?
> 
> I am not believing in myself, I am believing in the guy that made the graph. LEARN HOW TO READ... I have said this over 20 times in this thread alone and seriously you still can't read it.
> 
> The guy that made the graph uses the same math as me, so I am going to use his. How is this so hard to comprehend?
> 
> And then everyone says this happens every year; I am sure it does, as you are saying that *W1zzard's math* is wrong.


The irony seems to be lost on some.

EDIT: I'm done with you, waste of time.


----------



## Cyber Locc

Quote:


> Originally Posted by *Randomdude*
> 
> The irony seems to be lost on some.
> 
> EDIT: I'm done with you, waste of time.


See above: I do not read text walls. Learn grammar, then I will read what you say, and that won't happen.
Quote:


> Originally Posted by *Dargonplay*
> 
> It doesn't matter what the guy said; the graph doesn't seem faulty, as it makes sense and is consistent with real-life results. You shouldn't be guided by what "the guy" is saying, but by understanding the process behind the graph and doing the math yourself to either corroborate or debunk it.
> 
> And in the event that the graph is faulty, we shouldn't be using it in the first place; either way you're doing it wrong.


Is it though, is it really? I used the 780 Ti vs 580 bench posted earlier and did the math with the FPS for just the games (the benchmarks would make it worse), using the math everyone is saying to use. My result was 219%, not 85%.

So it's not really true to real life, at least not his results compared to their results.

I will go through his benches and review and see.

Okay, well, that is not going to happen. He uses entirely different games for his tests, so what now? He doesn't have the same results; honestly, we could pick the results apart further in that fashion.

The GTX 580 was not designed for DX11, it was designed for DX10; the 780 Ti was designed for DX11 and optimized for its features. So this is like comparing a Maxwell card to a Volta card that might have async: yes, there are physical card gains, but there are also API gains. It is very hard to compare these cards in this fashion and say "well, it's 85% faster". It is today, but is that because of the die shrink, or is it everything combined?


----------



## KarathKasun

I am pretty sure it's going to be x70 < 980 Ti < x80.

28nm to 16nm is not a full shrink; the structure size does not go down by half. This means density is only going up by 30-40%.

Compute is going up by way of having more transistors per compute core. Maxwell was such a hit in games because they cut the extra compute features out of the compute cores. NV has to address this for their Quadro lines or get curbstomped in the professional sector, WHICH THEY DO NOT WANT.

Larger compute cores + a marginal density increase = not a huge leap forward for games.
For scientific compute, though, performance is going to double or more.
Quote:


> Originally Posted by *Cyber Locc*
> 
> See above: I do not read text walls. Learn grammar, then I will read what you say, and that won't happen.
> Is it though, is it really? I used the 780 Ti vs 580 bench posted earlier and did the math with the FPS for just the games (the benchmarks would make it worse), using the math everyone is saying to use. My result was 219%, not 85%.
> 
> So it's not really true to real life, at least not his results compared to their results.
> 
> I will go through his benches and review and see.


85% faster (100%+85%) = ~185% performance BTW.
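The three phrasings being mixed up in this back-and-forth ("85% faster", "185% relative performance", "119% gain") are just different views of one ratio; a small illustrative sketch:

```python
# "X% faster" and "relative performance" are two views of the same ratio:
# 85% faster means 1.85x the baseline, and a 2.19x ratio is a 119% gain.
def relative_perf(pct_faster):
    """Relative performance ratio (baseline = 1.0) from an 'X% faster' figure."""
    return 1.0 + pct_faster / 100.0

def pct_gain(ratio):
    """Percentage gain implied by a relative-performance ratio."""
    return (ratio - 1.0) * 100.0

print(f"{relative_perf(85):.2f}")   # 1.85 -> 185% of the baseline's performance
print(f"{pct_gain(2.19):.0f}")      # 119 -> a 2.19x result is a 119% gain
```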


----------



## Pyrotagonist

Quote:


> Originally Posted by *Cyber Locc*
> 
> Mk, well, it's not, as you are proving my point: the 980 is only 5% faster than the 780 Ti, which places the Ti in between them both, which is exactly what I have been saying this entire time. The 1070 will be slower than the Ti and the 1080 will be faster than the Ti, not by much in either direction, as both cards will improve on their counterparts by ~30%.


Fair enough. I don't disagree. My qualitative view is just different.


----------



## Forceman

Quote:


> Originally Posted by *zealord*
> 
> people are too pessimistic about Pascal/1070/1080 and think too highly of a 980 Ti, which isn't even a full-fat card.
> 
> I am sure that those cards in the 3DMark benchmarks either aren't the 1070/1080, or something (whatever it is) is restricting those cards. I am convinced that what we see there won't be the 1070/1080 we get to see on launch day.


A lot is going to depend on clock speeds they are able to get out of the new process, and how big the die ends up. 980 Ti is a really big die, and the 1080 may be smaller than expected.
Quote:


> Originally Posted by *KarathKasun*
> 
> I am pretty sure it's going to be x70 < 980 Ti < x80.
> 
> 28nm to 16nm is not a full shrink; the structure size does not go down by half. This means density is only going up by 30-40%.
> 
> Compute is going up by way of having more transistors per compute core. Maxwell was such a hit in games because they cut the extra compute features out of the compute cores. NV has to address this for their Quadro lines or get curbstomped in the professional sector, WHICH THEY DO NOT WANT.
> 
> Larger compute cores + a marginal density increase = not a huge leap forward for games.
> For scientific compute, though, performance is going to double or more.
> ~85% faster = ~200% performance BTW.


TSMC claims double the density for 16FF+ over 28nm. I doubt it'll be that much, but a 30-40% density increase seems pessimistic.
Quote:


> TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology.


http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm


----------



## Cyber Locc

Quote:


> Originally Posted by *KarathKasun*
> 
> I am pretty sure it's going to be x70 < 980 Ti < x80.
> 
> 28nm to 16nm is not a full shrink; the structure size does not go down by half. This means density is only going up by 30-40%.
> 
> Compute is going up by way of having more transistors per compute core. Maxwell was such a hit in games because they cut the extra compute features out of the compute cores. NV has to address this for their Quadro lines or get curbstomped in the professional sector, WHICH THEY DO NOT WANT.
> 
> Larger compute cores + a marginal density increase = not a huge leap forward for games.
> For scientific compute, though, performance is going to double or more.
> 85% faster (100%+85%) = ~185% performance BTW.


Yes, I know that, but I didn't get 185, I got 219, so that would be a 119% performance gain. And that's just in the games; the benchmarks have larger gaps, so that number would go up.


----------



## KarathKasun

Quote:


> Originally Posted by *Forceman*
> 
> A lot is going to depend on clock speeds they are able to get out of the new process, and how big the die ends up. 980 Ti is a really big die, and the 1080 may be smaller than expected.
> TSMC claims double the density for 16FF+ over 28nm. I doubt it'll be that much, but a 30-40% density increase seems pessimistic.
> http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm


Does TSMC 16nm FF+ look like it's half the size of TSMC 28nm?



It would need to be approximately 60x45 to be twice as dense.

TSMC 16nm FF+ is ~70% of the size of TSMC 28nm.


----------



## renx

If the 970 equals a 780 Ti, then I can't imagine an X70 performing worse than a 980 Ti.


----------



## Forceman

Quote:


> Originally Posted by *KarathKasun*
> 
> Does TSMC 16nm FF+ look like it's half the size of TSMC 28nm?
> 
> 
> 
> It would need to be approximately 60x45 to be twice as dense.


I'm quoting TSMC. I'd imagine they know their own node and density ability.


----------



## KarathKasun

That image is using metrics from TSMC's own design guidelines.

16FF is technically not 16nm. It is a hybrid 20nm process which is why 16FF is the same size as 20nm and 16FF+ is only a tiny bit smaller in one dimension.


----------



## Cyber Locc

Quote:


> Originally Posted by *renx*
> 
> If the 970 equals a 780 Ti, then I can't imagine an X70 performing worse than a 980 Ti.


It doesn't do that until almost a year later, which is the problem. So on launch day we won't see that. Also, that is arguably after NV gimped the 780 Ti through drivers and improved the 970.
Quote:


> Originally Posted by *Forceman*
> 
> I'm quoting TSMC. I'd imagine they know their own node and density ability.


I do not think any of the pessimists are arguing that there isn't a massive density difference. The thing is, the die will not be 980 Ti sized, not even close, plus a lot of the density will be used for compute.


----------



## zealord

Quote:


> Originally Posted by *Forceman*
> 
> A lot is going to depend on clock speeds they are able to get out of the new process, and how big the die ends up. 980 Ti is a really big die, and the 1080 may be smaller than expected.


Yes, it's quite possible that it is smaller than we think.

But that would further mutilate the X80 line and alienate customers like me who aren't loyal to Nvidia but rather want the best value for their money. Sure, in the end I want the best performance/money I can get, but I still feel robbed for buying the GTX 680 (294mm² die) 4 years ago.

The GTX 1080 could still be good, even with a small die, but it still needs much better price/performance than the 980 Ti. What sense would a GTX 1080 make that is not much (25-30%) faster than the GTX 980 Ti?

Anything but the following (or better):

GTX 1080
8GB VRAM
25%+ better than 980 Ti
$599

would make the card completely uninteresting for me, especially since the GTX 980 Ti will probably be *over 1 year old* by the time Pascal arrives.


----------



## KarathKasun

Quote:


> Originally Posted by *zealord*
> 
> Yes, it's quite possible that it is smaller than we think.
> 
> But that would further mutilate the X80 line and alienate customers like me who aren't loyal to Nvidia but rather want the best value for their money. Sure, in the end I want the best performance/money I can get, but I still feel robbed for buying the GTX 680 (294mm² die) 4 years ago.
> 
> The GTX 1080 could still be good, even with a small die, but it still needs much better price/performance than the 980 Ti. What sense would a GTX 1080 make that is not much (25-30%) faster than the GTX 980 Ti?
> 
> Anything but the following (or better):
> 
> GTX 1080
> 8GB VRAM
> 25%+ better than 980 Ti
> $599
> 
> would make the card completely uninteresting for me, especially since the GTX 980 Ti will probably be *over 1 year old* by the time Pascal arrives.


Welcome to the slowing advancement of tech.

We are at the end of silicon scaling, and stagnation is going to be quite a thing until something else comes around.


----------



## Forceman

Quote:


> Originally Posted by *KarathKasun*
> 
> That image is using metrics from TSMC's own design guidelines.
> 
> 16FF is technically not 16nm. It is a hybrid 20nm process which is why 16FF is the same size as 20nm and 16FF+ is only a tiny bit smaller in one dimension.


Again, feel free to take it up with TSMC, who are the ones saying it roughly doubles the density, but bear in mind that halving both dimensions would result in 4 times the density. Doubling the density only requires shrinking the linear dimensions to about 71% (a factor of 1/√2); at 2/3 scale you get 2.25 times the density.

Assuming those pictures are to scale (a big assumption), here are 9 16FF+ cells fitting in roughly the same area as 4 28nm cells.
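Since density scales with the inverse square of the linear shrink, the figures being thrown around in this exchange are easy to sanity-check; a quick illustrative sketch:

```python
import math

# Density multiplier from a linear scale factor: area goes as the square
# of the linear dimensions, so density goes as the inverse square.
def density_ratio(linear_scale):
    """How many times denser a layout gets when every linear
    dimension is multiplied by `linear_scale`."""
    return 1.0 / linear_scale ** 2

print(f"{density_ratio(1 / math.sqrt(2)):.2f}")  # 2.00 -> double density at ~71% linear
print(f"{density_ratio(2 / 3):.2f}")             # 2.25 -> the 9-cells-into-4 picture
print(f"{density_ratio(0.5):.2f}")               # 4.00 -> halving both dimensions
```

This is why "roughly 2x density" needs only a ~30% linear shrink, not a halving of the structure size.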


----------



## Cyber Locc

Quote:


> Originally Posted by *zealord*
> 
> Yes, it's quite possible that it is smaller than we think.
> 
> But that would further mutilate the X80 line and alienate customers like me who aren't loyal to Nvidia but rather want the best value for their money. Sure, in the end I want the best performance/money I can get, but I still feel robbed for buying the GTX 680 (294mm² die) 4 years ago.
> 
> The GTX 1080 could still be good, even with a small die, but it still needs much better price/performance than the 980 Ti. What sense would a GTX 1080 make that is not much (25-30%) faster than the GTX 980 Ti?
> 
> Anything but the following (or better):
> 
> GTX 1080
> 8GB VRAM
> 25%+ better than 980 Ti
> $599
> 
> would make the card completely uninteresting for me, especially since the GTX 980 Ti will probably be *over 1 year old* by the time Pascal arrives.


But what will you go to then? Mahigan stated earlier that these benches show about the same gains AMD will have. He also seems to have some inside knowledge or something.









----------



## renx

Quote:


> Originally Posted by *Cyber Locc*
> 
> It doesn't do that until almost a year later, which is the problem. So on launch day we won't see that.


I understand your point, but that may count even more if we were talking about the same node.
I'm not saying that I'm buying all the hype produced around Pascal for the last two years. But according to nvidia, it's supposed to bring a "leap" of some sort.
Huang even used the word "breakthrough" in different conferences.
I choose to remain optimistic, and I believe we may be pleasantly surprised when the final word is pronounced.


----------



## Dargonplay

Quote:


> Originally Posted by *renx*
> 
> If the 970 equals a 780 Ti, then I can't imagine an X70 performing worse than a 980 Ti.


Quote:


> Originally Posted by *Cyber Locc*
> 
> It doesn't do that until almost a year later.


You keep saying that; it doesn't change the fact you're ignoring... well, the facts. If you keep ignoring the facts after this post you will be spouting nonsense and misinformation, and anyone who does that is making a fool of themselves.

The 970 was released around September 19, 2014; Far Cry 4 was released two months later, on November 18, 2014. Just two months was enough to make the 970 the absolute superior card.

Each post you make is a new bottom low. When 99% of someone's argument is based on misinformation and lies, it gets hard not to call that out.


----------



## Cyber Locc

Quote:


> Originally Posted by *renx*
> 
> I understand your point, but that may count even more if we were talking about the same node.
> I'm not saying I'm buying all the hype produced around Pascal for the last two years. But according to nvidia, it's supposed to bring a "leap" of some sort.
> Huang even used the word "breakthrough" in different conferences.
> I choose to remain optimistic, and I believe we may be pleasantly surprised when the final word is pronounced.


You are correct, he has mentioned leaps and breakthroughs, but all of them have been regarding compute; not one has revolved around gaming at all. Pascal will be very strong compute cards, where Maxwell is the weakest in years; they will not be a huge jump for gaming.

Also, they should in theory see a large gain with the same-size chip, but the chips won't be the same size till Volta.


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> You keep saying that; it doesn't change the fact you're ignoring... well, the facts. If you keep ignoring the facts after this post you will be spouting nonsense and misinformation, basically making a fool of yourself, assuming you aren't already.
> 
> The 970 was released around September 19, 2014; Far Cry 4 was released two months later, on November 18, 2014. Just two months was enough to make the 970 the absolute superior card.
> 
> Each post you make is a new bottom low.


Mk, well, I see your one game and raise you overall data.





That was from December, by the way; 970s didn't match Tis until after the Titan X was released.

That is 3% @ 1440p and 5% @ 4K.


----------



## GorillaSceptre

Unfortunately, even if Pascal/Polaris reach the (very high) expectations people have for them, they still won't be 4k/60 cards... In Vega and Volta we trust.


----------



## KarathKasun

Quote:


> Originally Posted by *Forceman*
> 
> Again, feel free to take it up with TSMC, who are the ones saying it roughly doubles the density, but bear in mind that halving both dimensions would result in 4 times the density. Doubling the density only requires a 2/3 reduction in the linear dimensions.
> 
> Assuming those pictures are to scale (big assumption), here's 9 16 FF+ fitting in roughly the same area as 4 28nm.


You are about 10% oversized, bringing it to 8 for 4. Though that also does not account for other design characteristics. AFAIK, you have to use the spacing from 20nm in most instances, landing you at something closer to 6 or 7 for 4, with 8 for 4 being the absolute best possible outcome, which won't happen often due to leakage and/or "crosstalk" (can't remember the specific term for it in silicon transistors, sorry).


----------



## Forceman

Quote:


> Originally Posted by *KarathKasun*
> 
> You are about 10% oversized, bringing it to 8 for 4. Though that also does not account for other design characteristics. AFAIK, you have to use the spacing from 20nm in most instances, landing you at something closer to 6 or 7 for 4, with 8 for 4 being the absolute best possible outcome, which won't happen often due to leakage and/or "crosstalk" (can't remember the specific term for it in silicon transistors, sorry).


I'd say 7 is "roughly" double. And again, to be clear, I'm not the one making the claim that it is double. It is TSMC that says that, I'm just assuming they know what they are talking about.
Quote:


> TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, *around 2 times the density*, or 70 percent less power than its 28HPM technology. Comparing with 20SoC technology, 16FF+ provides extra 40% higher speed and 60% power saving. By leveraging the experience of 20SoC technology, TSMC 16FF+ shares the same metal backend process in order to quickly improve yield and demonstrate process maturity for time-to-market value.


http://www.tsmc.com/english/dedicatedFoundry/technology/16nm.htm


----------



## KarathKasun

The placement of the "or" in there seems very odd. Like they are saying better performance/density "or" better power characteristics. Hmm.


----------



## zealord

Quote:


> Originally Posted by *KarathKasun*
> 
> Welcome to the slowing advancement of tech.
> 
> We are at the end of silicon scaling, and stagnation is going to be quite a thing until something else comes around.


Yeah, maybe, but there is no way we won't see gains. You are talking about 28nm -> 14nm, right? I am no expert, but we are not quite at the end yet, right?

I am still cautiously optimistic, though.

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Unfortunately, even if Pascal/Polaris reach the (very high) expectations people have for them, they still won't be 4k/60 cards... In Vega and Volta we trust.


What specifically do you consider "high expectations"?

My expectation is the GTX 1080 being 25% faster than the 980 Ti, and I personally consider that an extremely low, disappointing expectation. Eight months ago I expected a possible GTX 1080 to be 50% better than a 980 Ti.

AMD is targeting 16K 240Hz HDR or something. How would we ever achieve that?

Also, what exactly would stop Nvidia from doing a 16nm 500+mm² Pascal card with HBM2 (in 2017 or whenever it is available; I am not talking about right now!)? Can we even be pessimistic about the Pascal architecture itself? That card should, in theory, totally destroy the GTX 980 Ti, shouldn't it?


----------



## KarathKasun

No, Pascal is like one of Intel's ticks.

It is merely a refinement of Maxwell from what I have seen/heard, which makes sense as it is on a new node, and node transitions have been notoriously "meh" for Nvidia.

I'm fairly certain the GPU you are waiting for will be Volta.


----------



## zealord

Quote:


> Originally Posted by *KarathKasun*
> 
> No, Pascal is like one of Intel's ticks.
> 
> It is merely a refinement of Maxwell from what I have seen/heard, which makes sense as it is on a new node, and node transitions have been notoriously "meh" for Nvidia.


So the best case scenario would be:

A 300mm² Pascal card is about as good as a 980 Ti,

and a 600mm² Pascal card would be twice as good as a 980 Ti?

Or am I looking at it wrong?


----------



## Juub

Quote:


> Originally Posted by *Dargonplay*
> 
> You keep saying that; it doesn't change the fact you're ignoring... well, the facts. If you keep ignoring the facts after this post you will be spouting nonsense and misinformation, and anyone who does that is making a fool of themselves.
> 
> The 970 was released around September 19, 2014; Far Cry 4 was released two months later, on November 18, 2014. Just two months was enough to make the 970 the absolute superior card.
> 
> Each post you make is a new bottom low. When 99% of someone's argument is based on misinformation and lies, it gets hard not to call that out.


Which is why people say Kepler has been crippled. The 780 Ti should be on par with or slightly faster than a 290X, but it has been getting beaten by it like a rented mule recently.


----------



## KarathKasun

Something like that. IDK when yields will be good enough for a 600mm² chip, though.


----------



## zealord

Quote:


> Originally Posted by *KarathKasun*
> 
> Something like that. IDK when yields will be good enough for a 600mm² chip, though.


oh no that sounds horrible.

つ ◕_◕ ༽つ AMD take my energy and help us out of this misery つ ◕_◕ ༽つ


----------



## KarathKasun

I could be totally wrong, but that's how I see it on the gaming front.

The X80 should be 350-400mm², will game like a 980 Ti, and will compute like the older Titans or better.
Most importantly, a "full fat" version should fit in a laptop.

I see this generation as mainly bringing top-end performance (980 Ti/Fury class) to laptops while mildly boosting desktop performance, mostly because gaming laptops are expanding rather quickly as a market.


----------



## GorillaSceptre

Quote:


> Originally Posted by *zealord*
> 
> What specifically do you consider "high expectations"?
> 
> My expectation is the GTX 1080 being 25% faster than the 980 Ti, and I personally consider that an extremely low, disappointing expectation. Eight months ago I expected a possible GTX 1080 to be 50% better than a 980 Ti.
> 
> AMD is targeting 16K 240Hz HDR or something. How would we ever achieve that?
> 
> Also, what exactly would stop Nvidia from doing a 16nm 500+mm² Pascal card with HBM2 (in 2017 or whenever it is available; I am not talking about right now!)? Can we even be pessimistic about the Pascal architecture itself? That card should, in theory, totally destroy the GTX 980 Ti, shouldn't it?


I think expecting the 1080 to be 20% faster than a 980 Ti is high, honestly. These are baby chips, and they're not just targeting gaming like the Maxwell cards did; Pascal, from what we've been hearing, is more or less a slightly different Maxwell. Despite the whole async debacle, the 980 Ti is an impressive card, the best 28nm could offer us.

Polaris also looks like it will be the same story, although Polaris, being quite different from Fiji, might be quite a bit ahead of the Fury X, but still not a big jump. Yeah, in 2017, why not? But by then I hope Vega is out; I'm not sure if Volta is scheduled for next year or not.


----------



## renx

Quote:


> Originally Posted by *GorillaSceptre*
> 
> Unfortunately, even if Pascal/Polaris reach the (very high) expectations people have for them, they still won't be 4k/60 cards... In Vega and Volta we trust.


Even if it was, the question is 4K/60 for how long.
I remember the GTX 280 being the definitive 1080p/60fps GPU, then the 480, the 580... and so on.
As long as new engines and effects are added to game programming, games will ask for more in no time.


----------



## Forceman

Quote:


> Originally Posted by *KarathKasun*
> 
> The placement of the "or" in there seems very odd. Like they are saying better performance/density "or" better power characteristics. Hmm.


I thought it was odd also. I'm guessing that means they can "tune" the layout a little bit, and be able to offer trade-offs between density and power, probably by reducing leakage with lower density.
Quote:


> Originally Posted by *KarathKasun*
> 
> I could be totally wrong, but that's how I see it on the gaming front.
> 
> The X80 should be 350-400mm², will game like a 980 Ti, and will compute like the older Titans or better.


I'd guess the 1080 to be a little better than the 980 Ti (10-15%) but that sounds pretty reasonable. I suspect it may be a little smaller chip than that though, closer to GTX 680 size.


----------



## zealord

Quote:


> Originally Posted by *GorillaSceptre*
> 
> I think expecting the 1080 to be 20% faster than a 980 Ti is high, honestly. These are baby chips, and they're not just targeting gaming like the Maxwell cards did; Pascal, from what we've been hearing, is more or less a slightly different Maxwell. Despite the whole async debacle, the 980 Ti is an impressive card, the best 28nm could offer us.
> 
> Polaris also looks like it will be the same story, although Polaris, being quite different from Fiji, might be quite a bit ahead of the Fury X, but still not a big jump. Yeah, in 2017, why not? But by then I hope Vega is out; I'm not sure if Volta is scheduled for next year or not.


well yeah, but the GTX 680 was a super small baby chip too.

GTX 580 (520mm²) to GTX 680 (294mm²)

but still 30% performance increase.
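For what it's worth, the arithmetic behind that comparison can be sketched out. A minimal sketch, using the commonly cited die sizes; the ~30% launch gain is the poster's estimate, not a measurement:

```python
# Rough perf-per-area arithmetic for the GTX 580 -> GTX 680 shrink.
# Die sizes are the commonly cited figures; the 30% gain is the poster's estimate.
die_580 = 520.0   # mm^2, GF110 (40nm)
die_680 = 294.0   # mm^2, GK104 (28nm)
perf_gain = 0.30  # ~30% faster at launch, per the post above

area_ratio = die_680 / die_580                 # ~0.57: the 680 die is ~43% smaller
perf_per_mm2 = (1.0 + perf_gain) / area_ratio  # ~2.3x performance per mm^2

print(f"area ratio: {area_ratio:.2f}, perf/mm^2 gain: {perf_per_mm2:.2f}x")
```

So even a "baby chip" can post a healthy generational gain when the node shrink buys that much perf per mm².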


----------



## KarathKasun

That was because they stripped all compute additions that were non essential at the time.

GF100/110 were huge and hot because of all the excess focus on CUDA/HPC markets.


----------



## Cyber Locc

Quote:


> Originally Posted by *zealord*
> 
> well yeah, but the GTX 680 was a super small baby chip too.
> 
> GTX 580 (520mm²) to GTX 680 (294mm²)
> 
> but still 30% performance increase.


Right, like Karath said, they stripped a lot of compute out of the gaming cards around that time, and they slowly stripped more and more. With Pascal they are putting it all back and then some, so gaming performance will barely rise, honestly, if at all.

The market wants compute, DX12 wants compute; Maxwell was about squeezing out the most gaming performance possible. Pascal will be about catching compute up to where it should have been if they had carried the trend since the 580. It will show amazing gains in compute; in gaming it will not.

Has no one noticed that everything NV has said about Pascal has been about compute? Gaming is the afterthought this time.

If Pascal were to carry on the trend of stripped compute, then I would agree a 1070 would beat a 980 Ti; however, this gen is about compute, and that will take up a lot of the chip.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *zealord*
> 
> What sense would a GTX 1080 make that is not much (25-30%) faster than the GTX 980 Ti ?


Just look to 2014 for that answer. They released a 980 that wasn't much faster than the 780Ti at launch...


----------



## Dargonplay

Quote:


> Originally Posted by *Cyber Locc*
> 
> Mk well I see your one game and raise you overall data.
> 
> 
> 
> 
> 
> that was from December by the way, 970s didn't match TIs until after the Titan X was released.
> 
> That is 3% @1440p and 5% @4k.


You're quoting, as the ultimate proof, a chart you admitted is made by a man who gave you some extremely wrong math.

I would like to use real games, if you don't mind me being down to earth.

Alien Isolation launched on October 7, 2014, less than a month after the 970. Even with such immature drivers the 970 was still the better card. This is basically a game that launched the same month the 970 did; that isn't the 2 years you're talking about, it's actually 2 weeks.



Far Cry 4 launched exactly 2 months after the 970, and the 970 was the better card.



The Crew, which launched on December 2, 2014.



Call of Duty: Advanced Warfare, which launched on November 4, 2014.



Even in the worst case scenario the 970 would still match the 780 Ti, in games like Dragon Age: Inquisition.



ALL of these games launched almost at the same time as the 970, and most of them had the 970 as the better card. This is the norm, the rule; the exception was actually the 780 Ti matching the 970. I don't care what the chart says if I can debunk it this easily.

There were also Shadow of Mordor benchmarks, but I couldn't find one with the 970 and the 780 Ti in the same graph, although you can check the results separately and will see that the 970 wins here too.

From September to December these were the highest profile games launched, and in all of them but one the 970 was the BEST card, by a lot. From there on the 970 just trashed the 780 Ti.

Let me repeat, this was ALL DONE within the same fabrication process; to think Pascal on 16nm won't even see the gains from Kepler to Maxwell is utterly foolish.


----------



## zealord

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Just look to 2014 for that answer. They released a 980 that wasn't much faster than the 780Ti at launch...


That is very different, because both are 28nm!

GTX 580 > GTX 680 was from 40nm to 28nm.

@ KarathKasun yeah that could be why.


----------



## renx

If I remember correctly, the 970 matched the 780 Ti basically from the very beginning.
Maybe it was 5% behind overall, but it didn't take more than a month or two to match and even improve on the 780 Ti's performance.


----------



## Pyrotagonist

Even if W1zzard doesn't know math, I'm sure the charts are generated automatically, presumably with an algorithm written by someone who does know math.


----------



## Master__Shake

2 things.

I don't trust 3DMark screenies anymore, since this happened to me.










yes, that's a 2 terahertz... petahertz? 2600K.

also, it is very possible that these are real and they are using A LOT less power than the previous generation.


----------



## Dargonplay

Quote:


> Originally Posted by *Master__Shake*
> 
> 2 things.
> 
> i don't trust 3dmark screenies anymore since this happened to me.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> yes that's a 2 terahertz petahertz? 2600k.
> 
> also it is very possible that these are real and they are using A LOT less power than the previous generation.


Maybe it detected the Nuclear Reactor you have connected to your RIG.


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> Let me repeat, this was ALL DONE within the same fabrication process; to think Pascal on 16nm won't even see the gains from Kepler to Maxwell is utterly foolish.

Again, you are ignoring compute, so let's just see who is the fool at release.







.


----------



## Forceman

Quote:


> Originally Posted by *Master__Shake*
> 
> also it is very possible that these are real and they are using A LOT less power than the previous generation.


Not for nothing, but the first Maxwell that showed up was the 750 Ti.


----------



## Cyber Locc

Quote:


> Originally Posted by *Master__Shake*
> 
> 2 things.
> 
> i don't trust 3dmark screenies anymore since this happened to me.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> yes that's a 2 terahertz petahertz? 2600k.
> 
> also it is very possible that these are real and they are using A LOT less power than the previous generation.


That is where my money is: perf-per-watt and compute. Gaming will be on the back burner this round. They have 5 years' worth of removing and gimping compute to catch up on all at once, and that is going to take a lot of the chip.

Have none of you noticed they have said zilch about Pascal gaming? Just compute and performance per watt. There is a reason for that, mark my words.

Since the 600 series they have removed compute, by a lot, every time, up till the 900 series where it is virtually trash. Now they have to add all that back and improve on what it should have been. I wouldn't be shocked if the 1080 performs worse than the 980 Ti, honestly.


----------



## Dargonplay

Quote:


> Originally Posted by *Cyber Locc*
> 
> Again you are ignoring compute so lets just see who is the fool at release
> 
> 
> 
> 
> 
> 
> 
> .


And you're ignoring that Nvidia is competing against AMD, who is focusing on gaming this generation, a generation where AMD actually has quite a few advantages.

1-) AMD is using a 14nm process, which means they have superior density, capable of getting more performance out of smaller dies compared to Nvidia (up to 20% more)
2-) AMD's Polaris architecture is very parallel, DirectX 12 is very parallel, and GCN is just made for this API because it's so similar to Mantle
3-) AMD has ACEs built into the hardware, which means that with heavy ACE usage AMD can gain up to 45% extra performance; ACE usage will be mainstream next year, seeing how many games are starting to use them just now.

If Nvidia doesn't provide the jump we're expecting, then with all these points above AMD surely will. So you're right, we'll see who's the fool at release, except that if you were right the fool would be Nvidia, and we know very well that, unlike some people here, they don't have a record of making fools of themselves.









PD: You said yourself and I'll quote
Quote:


> Originally Posted by *Cyber Locc*
> 
> Thing is, obviously my logic isn't flawed, as no 70 card has matched the flagship of the gen prior for the gens with Tis. If you cannot understand tiers then I don't know what to tell you.


I have proved you very wrong; a 70 card has matched a flagship just this gen, in fact. Now you don't get to throw "compute" in to change the argument. My work is done here.


----------



## KarathKasun

Quote:


> Originally Posted by *Dargonplay*
> 
> If Nvidia doesn't provide the jump we're expecting then with all these points above AMD Surely will, so you're right we'll see who's the fool at release, except that if you're right the fool would be Nvidia, and we know very well that unlike some people here they don't have a record of making a fool of themselves.


LOL, NV has no record of making fools of themselves... In what reality do you live?

Geforce 4 = Missing important features, sure it was great at its launch... not so much 6-12 months later.
Geforce 5 = Late train wreck
Geforce 480/470/465 = hot, slow, not really gaming GPUs. Redeemed only in HPC market for the most part.
Tegra = Train wreck

All companies do stupid things at one point or another.


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> And you're ignoring that Nvidia is competing against AMD who is focusing on gaming this generation, a generation where AMD actually have too many advantages.
> 
> 1-) AMD is using 14mm process, which means they have superior density, capable of getting more performance out of smaller dies compared to Nvidia (Up to 20% more)
> 2-) AMD Polaris Architecture is very parallel, DirectX 12 is very parallel, GCN is just made for this API because it's too similar to mantle
> 3-) AMD Have ACEs built into the hardware, this means that with heavy ACEs usage AMD can gain up to 45% extra performance, the use of ACEs will be mainstream next year, seeing how many games are starting to use them just now.
> 
> If Nvidia doesn't provide the jump we're expecting then with all these points above AMD Surely will, so you're right we'll see who's the fool at release, except that if you're right the fool would be Nvidia, and we know very well that unlike some people here they don't have a record of making a fool of themselves.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> PD: You said yourself and I'll quote
> I have proved you very wrong, a 70 card has matched a flagship just this gen in fact, now you don't get to throw "Compute" to change the argument. My work is done here.


Umm, okay, again you proved it did months later... in games that were new, so they most likely didn't have the old cards tuned in the drivers yet. You are also disregarding other sites and benches and just cherry picking yours; that, my friend, is called a red herring.

Also, Mahigan is saying AMD will be showing the same gains, and he seems to have insider info, so that in fact means you are wrong.

Like I said, I hope you are right; I will gladly sell my Ti and buy 2 1070s.


----------



## hrockh

what this rumor really proves is that the new cards should be out real soon


----------



## Cyber Locc

Quote:


> Originally Posted by *hrockh*
> 
> what this rumor really proves, the new cards should be out real soon


That, and that there will be serious amounts of tears from all the waiters.


----------



## Dargonplay

Quote:


> Originally Posted by *Cyber Locc*
> 
> Umm okay again you proved it did months later..... in games that were new so most likely didnt have the old cards in the drivers yet.
> 
> Also from Mahigan is saying AMD will be showing the same gains and he seems to have insider info so that in fact means you are wrong.


Mahigan is a very smart developer who's got zero inside knowledge from AMD or Nvidia. You're again resorting to spouting misinformation to support your weak claims; he has cleared this up many times.

Also, months later? Funny how you went from "2 years later" to "1 year later" to "6 months later" and now "2 months later". I just posted games that launched a week and a few days after the 900 series release; this further proves how much you ignore other people's posts.

Also, you don't seem to understand how games are developed. If anything, those games would have had the old cards supported at their full potential while the newer cards would have been crippled. The development process takes months and years, and the games launched less than 2 weeks after the 900 series; how can you possibly believe they would have been 100% ready for those cards? Maybe you should accept reality and see that you were proved wrong.
Quote:


> Originally Posted by *KarathKasun*
> 
> LOL, NV has no record of making fools of themselves... In what reality do you live?
> 
> Geforce 4 = Missing important features, sure it was great at its launch... not so much 6-12 months later.
> Geforce 5 = Late train wreck
> Geforce 480/470/465 = hot, slow, not really gaming GPUs. Redeemed only in HPC market for the most part.
> Tegra = Train wreck
> 
> All companies do stupid things at one point or another.


You are right, I should have rephrased that better as "Nvidia can make fools of themselves, they're just not as hardcore about it as some people here." Lol.

You can't deny, though, that they can play the market; I can't believe they will allow AMD to regain so much marketshare just like that.


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> Mahigan is a very smart developer who's got 0 inside knowledge from AMD or Nvidia, you're again resorting to spouting misinformation to support your weak claims, he have cleared this up many times.
> 
> Also, months later? I just posted games that were launched 1 week and a few days after the 900 Series release, this further proves how much you ignore other people posts.
> 
> Also you don't seem to understand how games are developed, if anything those game would have the old cards supported at their full potential while the newer cards would have been crippled, the development process takes months and years, the game launched less than 2 weeks after the 900 series, how can you possibly believe they would have been 100% ready for those cards? Maybe you should accept reality and see that you were proved wrong.
> You are right, I should have rephrased that better to "Nvidia can make a fool of themselves, they're just not that hardcore as some people here" Lol.
> 
> You can't deny though that they can play the market, I can't believe they will allow AMD to regain so much marketshare just like that.


Mk, keep cherry picking.

He cleared that up, huh? Was that when he said he has sources but will not divulge them, but that he does not work for AMD? Because that makes it pretty clear he has inside sources.


----------



## Dargonplay

Quote:


> Originally Posted by *Cyber Locc*
> 
> Mk keep cherry picking.
> 
> He cleared that up huh, When was that when he said he has sources but he will not divulge them, but that he does not work for AMD? As that is pretty clear he has Inside sources.


All the benchmarks posted are from Anandtech and Guru3D. Is that cherry picking, or is that just what you call people when they prove you wrong?
Quote:


> Originally Posted by *Cyber Locc*
> 
> in games that were new so most likely didnt have the old cards in the drivers


That's hilarious, because it's actually the other way around.


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> All the benchmarks posted are from Anandtech, is that Cherry Picking or that's how you call people when they prove you wrong?
> That's hilarious, because it's actually the other way around.


As to his sources, my impression was gathered from here:

Zealord: "Could you go into more detail where you get your information from?"

Mahigan: "I can't :( But I'm sure some folks at AMD are wondering by now hhhh ;)"

Now, if you don't get "sources" from that, IDK what to tell you. That wasn't his only hint at this, either, just the latest one.


----------



## Dargonplay

Quote:


> Originally Posted by *Cyber Locc*
> 
> As to his sources and my impression was gather from here
> 
> Zealord, "Could you go into more detail where you get your information from?"
> 
> Mahigan, "I can't frown.gif But I'm sure some folks at AMD are wondering by now hhhh wink.gif"
> 
> Now if you don't get sources from that IDK what to tell you, that wasn't his only implication into this either just the latest one.


I'd rather avoid discussing Mahigan and stick to the hard facts; anything we come up with about him is ultimately inaccurate and unconfirmable.


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> I'd rather avoid discussing about Mahigan and stick to the hard facts, anything we come up about him is ultimately inaccurate and unconfirmable.


Okay, no problem, moving on. I can cherry pick too.

So what you are telling me is that the 23 games tested by TPU where the 780 Ti is faster are irrelevant, because it's slower in a couple of games, at a resolution that no one even plays. Your entire argument is cherry picking a few games to say "see, I told you." The world isn't in a bubble, and those few games you linked do not change anything; majority rules. Even with those handful of wins the Ti still wins, maybe not by much, but it still wins. It never won by much anyway, TBH, and I never said it did; it also didn't lose to the 980 by much either, which is the entire point of the discussion.

Nice logic, though: 5 games, at a resolution no one even plays at, make a rule and dictate a card the winner. That, my friend, is called cherry picking.

Facts are, I have 23 games saying you are wrong and you have 5 saying you are right; that does not mean you are right, lol.


----------



## Threx

Going from 54 to 100 is an 85% increase, not a 46% increase.

It's baffling how this can be so hard to understand.
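Threx's point is just the asymmetry of percentage change: the 46% figure measures the drop from 100 down to 54, not the climb from 54 up to 100. A minimal sketch:

```python
# Percent change is relative to the starting value, so it is not symmetric.
def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100.0

def pct_decrease(old: float, new: float) -> float:
    return (old - new) / old * 100.0

print(round(pct_increase(54, 100), 1))  # 85.2 -> going 54 -> 100 is an ~85% increase
print(round(pct_decrease(100, 54), 1))  # 46.0 -> going 100 -> 54 is a 46% decrease
```

Both numbers describe the same gap; they just divide by different baselines, which is how "46%" and "85%" end up describing the same two framerates.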


----------



## Dargonplay

Quote:


> Originally Posted by *Cyber Locc*
> 
> Okay no problem, moving on. I can cherry pick too.
> 
> So what you are telling me is that, the 23 games tested by TPU where the 780ti is faster is irrelevant because its faster in a couple of games, at a resolution that no one even plays. Your entire argument is cherry picking a few gems to say see I told you. the world isnt in a bubble and those few games you linked do not change anything majority rules. even with those handful of wins the TI still wins, maybe not by much but it still wins. It never won by much anyway TBH and I never said it did, it also didn't lose to the 980 by much either, which is the entire point of the discussion.
> 
> Nice logic though 5 games at a resolution no one even plays at, make a rule and dictates a card a winner. That my friend is called cherry picking.
> 
> Facts are I have 23 games saying you are wrong you have 5 saying you are right that does not mean you are right lol.


A couple of games? Those I posted are all the games launched from September to December; they also happen to be the only high profile games launched in that time. Sorry, but I'm pretty sure that if Anandtech and Guru3D made those charts they would look very different.

A resolution that no one even plays? Have you seen how much money Asus has made with their ROG 1440p monitors? I myself have three 144Hz 1440p monitors; 1440p is becoming the new 1080p nowadays.

Again, I'm not cherry picking; I happen to trust Guru3D and Anandtech more, and those two sites differ from your one site. If anything, the cherry picking seems to be on your side of this argument, with a single chart against real benchmarks from two very reliable sites.

What majority? Please, those games where the 970 beats the 780 Ti launched at the same time the 900 series did. I'm limiting myself to that timeline, September to December; if I started posting high profile games that launched after December, then in 99.9% of them the 970 would be the winner.
Quote:


> Originally Posted by *Cyber Locc*
> 
> even with those handful of wins the TI still wins, maybe not by much but it still wins. It never won by much anyway TBH and I never said it did.


Right, you just said
Quote:


> Originally Posted by *Cyber Locc*
> 
> Thing is, obviously my logic isn't flawed, as no 70 card has matched the flagship of the gen prior for the gens with Tis. If you cannot understand tiers then I don't know what to tell you.


You said no 70 card has matched the previous flagship; here I am proving that it not only matched it but surpassed it.


----------



## EightDee8D

after reading those arguments and looking at profile pics, i can say it suits you both very well. LOL


----------



## KarathKasun

I'll just say that x70 cards tend to match previous gen flagships, but only just, for the most part.

The x80 tends to be a little ahead of the x70, so if the x70 is within 5% of the prior Ti then the x80 will be slightly faster, though not by more than 15-20%. This is speaking of raw performance, excluding frame buffer limitations.

Also, statistically the 1440p user base is something like 5%, where 1080p makes up 80% or more.


----------



## Dargonplay

Quote:


> Originally Posted by *KarathKasun*
> 
> Ill just say that x70 cards tend to match previous gen flagships, but only just for the most part.
> 
> X80 tends to be a little ahead of X70, so if X70 is within 5% of the prior Ti then x80 will be slightly faster. Though it would not be more than 15-20%. This is speaking of raw performance, excluding frame buffer limitations.
> 
> Also, statistically the 1440P user base is something like 5% where 1080P makes up 80% or more.


I'd like to see statistics on how many people buying the new Polaris or Pascal GPUs, from the 380X or 1070 upwards, have a 1440p monitor at that point. I would bet my left manhood that the number must be around 35 to 60%.

If we include people using Intel graphics or Nvidia GT 720 type cards, then yeah, 1080p should be king, but they aren't "gamers" anyway, as they surely use their PCs for office work, social media or multimedia.

Nowadays any gamer can get a capable $100 to $150 card and not suffer through Intel HD 4000.


----------



## KarathKasun

And you would probably be left with just enough dangling spherical components to use in a trackball.

Using large TVs as gaming monitors is a very real and very popular thing.


----------



## orlfman

I think a lot of people forget that in the last die shrink there wasn't a "Ti" card in the mix. The 580 was the fastest single-GPU solution on the market for Nvidia; it was Nvidia's single-GPU flagship, top dog. Yes, they had the 590, but it was NOT a single-GPU card; it was a dual-GPU card on a single PCB.

The 600 series? Same exact thing. The 680 was the top dog for a single-GPU solution. Not only that, but the 570, 580, 670, and 680 were the "big chips"; they were the full Fermi and Kepler GPUs of their series.

Starting with the 700 series this changed. The 780 was not Nvidia's top dog; a new tier was added, the 780 Ti, followed by the Titan. Maxwell 2 continued this trend but took it one step further, with the 970 and 980 (GM204) not being full Maxwell 2 chips. GM200 is the full Maxwell 2 "big chip" and was exclusive to the 980 Ti and Titan X. The 970 is a cut-down GM204 chip and the 980 Ti is a cut-down GM200 chip.

We went from "570 and 580" to "970, 980, 980 Ti, and Titan."

The "1080" is aimed at taking over the 980's spot, NOT the 980 Ti's spot. That's the "1080 Ti's" job.

You cannot compare the 1080 to the 980 Ti, since the two cards are on different tiers.

Another thing to note is what Nvidia said: "Pascal built on the 16nm FinFET process will offer 2x the performance per watt compared to Maxwell 2." The 980 Ti is 250 watts; half of 250 is 125. So that means, *with no gimping*, a *full big Pascal chip* at 125 watts should offer the same performance as a 980 Ti. The full Pascal chip at 250 watts? That should offer the performance of two 980 Tis in SLI. That's incredible.

Seeing as the "1070" and "1080" are not the full Pascal chips, which are apparently being reserved for the Ti and Titan, similar to Maxwell 2's setup, the 1080 having performance near a 980 Ti is perfectly reasonable. That puts it at roughly a 30% increase minimum over the 980, since the 980 Ti on average is 30% faster than a 980 with a similar TDP. Also, toss in the faster-clocked GDDR5 and increased memory capacity and it will probably be even faster, near beefy 980 Ti overclock levels (an extra ~7 - ~10%). Roughly a 30 - 40% increase over the previous 980 that occupied its tier level; that's a big increase.
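The perf-per-watt arithmetic above can be written out explicitly. Note this is Nvidia's marketing figure applied naively, not a measured result:

```python
# Naive reading of the "2x perf/watt" claim (marketing figure, not a measurement).
PERF_PER_WATT_GAIN = 2.0   # claimed Pascal vs Maxwell 2
TDP_980TI = 250.0          # watts

# Same performance as a 980 Ti at half the power...
watts_for_980ti_perf = TDP_980TI / PERF_PER_WATT_GAIN
print(watts_for_980ti_perf)   # 125.0 W

# ...or, at the full 250 W budget, roughly 2x a 980 Ti (980 Ti SLI territory),
# assuming performance scales linearly with power, which it never quite does.
relative_perf_at_250w = PERF_PER_WATT_GAIN
print(relative_perf_at_250w)  # 2.0x a 980 Ti
```

The linear-scaling assumption is the weak link: real chips trade clocks for voltage non-linearly, so the 250 W projection is an upper bound at best.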


----------



## Klocek001

if they perform the same then how are they on different tiers?


----------



## Olivon

Quote:


> Originally Posted by *Dargonplay*
> 
> 1-) AMD is using 14mm process, which means they have superior density, capable of getting more performance out of smaller dies compared to Nvidia (Up to 20% more)
> 2-) AMD Polaris Architecture is very parallel, DirectX 12 is very parallel, GCN is just made for this API because it's too similar to mantle
> 3-) AMD Have ACEs built into the hardware, this means that with heavy ACEs usage AMD can gain up to 45% extra performance, the use of ACEs will be mainstream next year, seeing how many games are starting to use them just now..


1) Zero proof, total bullcrap; nobody here really knows the difference between 14nm GloFo and 16nm TSMC with big dies. Hawaii/Fiji had better density but ran/overclocked like crap and got stomped by Maxwell (lesser density but better clocks).

2) Zero proof. Nobody here knows how the Polaris arch is different from past GCN.

3) You mix things up and clearly don't understand what you're talking about.


----------



## Dargonplay

Quote:


> Originally Posted by *Olivon*
> 
> 1) 0 proof, total bullcrap, nobody here really know difference between 14nm GloFo and 16nm TSMC with big dies. Hawaii/Fiji got better density but run/overclock like crap and got stomped by Maxwell (lesser density but better clocks)
> 
> 2) 0 proof. Nobody here know how Polaris arch is different from the past GCN
> 
> 3) You mix up things and clearly don't understand what you're talking about


1-) It's been proven that AMD is using Samsung's process for their 14nm chips; this is known. 14nm will give you better density, no way around it.

2-) I'm not saying I know what changes were made in Polaris, beyond the fact they've added a new command processor that will improve DX11 performance greatly, but it is, as you said, GCN, and GCN has already shown its prowess. Even if AMD doesn't improve the architecture and just limits itself to adding more shaders with faster clock speeds on the newer process, it'll still have ACEs; even Hawaii has them, so I don't know what you're even saying here.

It's a fact that GCN is a parallel architecture; it can do graphics and compute concurrently.

3-) Please enlighten me.


----------



## orlfman

Quote:


> Originally Posted by *Klocek001*
> 
> if they perform the same then how are they on different tiers?


The 1080 isn't the 980 Ti's successor; the 1080 Ti is. The 1080 is the successor of the 980. When gauging performance increases you have to compare the new generation card to the previous card that occupied its tier.

The Ti and Titans are meant as the flagship series of the generation, while the '70 and '80 series cards are meant for the higher mainstream bracket, contrary to before. Since the 1080 isn't the 980 Ti's successor, it has lower performance expectations; it's designed to go against and replace the previous card that occupied its space, which is the 980. The 1080 being near 980 Ti performance, which would be a 30 - 40% increase over the previous generation card (the 980), is a huge upgrade in performance.


----------



## Klocek001

Quote:


> Originally Posted by *orlfman*
> 
> since the 1080 isn't the 980 ti successor it has lower expectation levels of performance. its designed to go against and replace the previous card that occupied its space, which is the 980


No. Just no.
780 Ti -> 980 was a ~15% improvement. If Nvidia doesn't make the 980 Ti -> X80 improvement even bigger with the 28nm to 16nm jump, then it will be a fail.
You're talking about replacing the 980 when we've already had a card that is roughly 1.4-1.5x the performance of the 980, priced at $100 more, for nearly a year.


----------



## orlfman

Quote:


> Originally Posted by *Klocek001*
> 
> no. just no.


why no?

1070 - 1080 - 1080 ti - pascal titan
(970) - (980) - (980 ti) - (maxwell 2 titan)

1070 replaces the 970
1080 replaces the 980
1080 ti replaces the 980 ti
pascal titan replaces the maxwell 2 titan
Quote:


> Originally Posted by *Klocek001*
> 
> no. just no.
> 780Ti - > 980 was a ~15% improvement. If nvidia doesn't make 980Ti -> X80 improvement even bigger with 28nm to 16nm jump then it will be a fail.


That actually proves my point. ~10% faster than a 780 Ti is equivalent to a beefy 780 Ti overclock.

A 10 - 15% increase over the 780 Ti is similar to what the 1080 will bring over the 980 Ti.

The 980 wasn't the replacement for the 780 Ti; the 980 Ti was. What did it bring over the 980? A 30% increase, which would equate to a ~40% increase over the 780 Ti.


----------



## bonami2

Quote:


> Originally Posted by *iLeakStuff*
> 
> So they launch a GTX 980 for $500 that match a GTX 980Ti which can be bought for $550 today. Or a GTX 1070 for say $400 that beats a $300 GTX 970 by 10%?
> 
> Does that even makes 1% sense lol?


Intel does that with their CPUs.


----------



## happyrichie

I don't think we are gonna see the 1080 Ti till 2017; I think those chips are gonna replace the Tesla lineup first. So we will see a 1080 with the performance of a 980 Ti and better efficiency as the top Nvidia card, until big Pascal drops with HBM2 in 2017.


----------



## happyrichie

The 1070 should be slightly better than the 980 and cost slightly less; that's usually how it goes. I'm not sure in dollars... 980 at $450? 1070 at $400ish.


----------



## Klocek001

Quote:


> Originally Posted by *orlfman*
> 
> that actually proves my point. ~10% faster than a 780 ti is the equivalent to a beefy 780 ti overclock.


maybe you skipped those two lines:
Quote:


> Originally Posted by *Klocek001*
> 
> If nvidia doesn't make 980Ti -> X80 improvement even bigger *with 28nm to 16nm jump* then it will be a fail.
> you're talking about replacing the 980 when *we've already had a card that is rougly 1.4-1.5x the performance of 980 priced at $100 more for nearly a year.*


And according to my experience with Nvidia cards (980 G1 -> 980 Ti 6G), the 980 Ti is not 30% faster like you said; comparing a 1450MHz 980 to a 1380MHz (stock) 980 Ti, it was a 40% jump in framerate in The Witcher 3, and the 980 Ti wasn't even overclocked at that time. Can't compare now, I lost the Fraps min/max/avg benches of my 980 after a system reinstall.


----------



## headd

Quote:


> Originally Posted by *zealord*
> 
> people are too pessimistic about Pascal/1070/1080 and think too highly of a 980 Ti, which isn't even a full fat card.


Yep, the GTX 980 Ti is just the GTX 570 of the 40nm era. The 1070 should be compared to the Titan X, the only card that uses the full GM200.


----------



## magnek

Quote:


> Originally Posted by *Dargonplay*
> 
> You are right, I should have rephrased that better to "Nvidia can make a fool of themselves, they're just not that hardcore as some people here" Lol.
> 
> You can't deny though that they can play the market, I can't believe they will allow AMD to regain so much marketshare just like that.


Well, if AMD pulls a rabbit out of the hat they may have no choice. But yeah, slim chance of that happening, although we could always hope for another HD 4870 level of upset. I mean, I'm pretty sure Jen-Hsun went blue in the face after he found out the 4870 had 90% of the 280's performance at only 46% of the price. ATi really redeemed themselves with that card, especially after G80 hit them really hard and the 2900 XT got laughed out of the room.


----------



## Mack42

So judging from the initial quote, these cards perform the same as current generation, no performance increase?


----------



## headd

Quote:


> Originally Posted by *orlfman*
> 
> why no?
> 
> 1070 - 1080 - 1080 ti - pascal titan
> (970) - (980) - (980 ti) - (maxwell 2 titan)
> 
> 1070 replaces the 970
> 1080 replaces the 980
> 1080 ti replaces the 980 ti
> pascal titan replaces the maxwell 2 titan
> that actually proves my point. ~10% faster than a 780 ti is the equivalent to a beefy 780 ti overclock.
> 
> 10 - 15% increase over the 780ti is similar to what the 1080 will bring over the 980 ti.
> 
> the 980 wasn't the replacement for the 780 ti. the 980 ti was. what did it bring over the 980? a 30% increase which would equate to a ~40% increase over the 780 ti.


You can't compare GPUs on the same node vs. a die shrink. Last time a die shrink happened, the *28nm* GTX 670 wrecked the *40nm* GTX 580 by 20%.
The *16nm* 1070 *MUST* be faster than the *28nm* GTX 980 Ti by 10-15% or it's a fail.


----------



## KeepWalkinG

Quote:


> Originally Posted by *headd*
> 
> You can't compare GPUs on the same node vs. a die shrink. Last time a die shrink happened, the *28nm* GTX 670 wrecked the *40nm* GTX 580 by 20%.
> The *16nm* 1070 *MUST* be faster than the *28nm* GTX 980 Ti by 10-15% or it's a fail.


The new Nvidia card will have almost the same performance but 2x the perf per watt,
not 2x the performance of the old card.


----------



## EightDee8D

Quote:


> Originally Posted by *KeepWalkinG*
> 
> The new nvidia card will be with almost the same performance but 2x Perf Per Watt.
> Not 2x performance over the old card


unless they keep die size and tdp similar to gm200.

which they can't


----------



## renx

AMD is promising 2x performance per watt, give or take.
Now they won't just start releasing low TDP flagship videocards, so a 1.8X or even 2X performance over the last generation is expected from AMD.
Nvidia won't stay behind, let's be real. They usually do the same or better than AMD.
I don't trust the benchmarks shown in this thread. They're either fake or GP106.



----------



## gamervivek

Quote:


> Originally Posted by *mcg75*
> 
> Ok, now that I've had a chance to look at this closer, I see what TPU is doing but I'm just not sure why.
> 
> Taking the numbers from the 980 Ti Matrix review.........
> 
> 980 Ti stock has a total of 620.3 fps across 15 games for an average of 41.35 fps.
> 
> 980 Ti Matrix has a total of 743.0 fps across 15 games for an average of 49.53 fps.
> 
> 49.53 / 41.35 = 1.1978 or 20%.
> 
> So his conclusion based on his numbers is correct. The percentage chart is not correct.
> 
> If the 980 Ti Matrix is the baseline card at 100%, the 980 Ti Stock should be 83% not 80%.
> 
> All these numbers are from the 4K test only.


I ran the numbers and yours are off from mine for the Matrix card: a 775 sum for an average of 51.67, and the Matrix ends up 24.96% faster at 4K.
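(To sanity-check both sets of numbers, here's the arithmetic in a few lines of Python. The 620.3 and 775 totals are the ones quoted above, so treat them as the posters' figures, not independently verified data:)

```python
# Recompute the 980 Ti stock vs. Matrix comparison from the quoted totals.
# Totals are summed 4K fps across the same 15 games.
GAMES = 15

stock_avg = 620.3 / GAMES    # ~41.35 fps (mcg75's total)
matrix_avg = 775.0 / GAMES   # ~51.67 fps (my re-tallied total)

speedup = matrix_avg / stock_avg - 1      # how much faster the Matrix is
baseline = stock_avg / matrix_avg         # stock as a fraction of the Matrix

print(f"Matrix is {speedup:.2%} faster")                       # ~24.94%
print(f"Stock sits at {baseline:.0%} of the Matrix baseline")  # ~80%
```

With TPU's 743.0 total instead of 775, the same two lines give roughly 19.8% and 83%, which is exactly the 80% vs. 83% discrepancy being argued about above.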


----------



## maarten12100

540MHz








If correct either they have a really nice product with some headroom or it doesn't clock high.
Most likely just not reported correctly though.


----------



## Pantsu

540 MHz is the clock my 970 defaults to after the driver crashes. I'm thinking these are just some altered Maxwell results.


----------



## airfathaaaaa

Quote:


> Originally Posted by *renx*
> 
> AMD is promising 2x performance per watt, give or take.
> Now they won't just start releasing low TDP flagship videocards, so a 1.8X or even 2X performance over the last generation is expected from AMD.
> Nvidia won't stay behind, lets be real. They usually do the same or better than AMD.
> I don't trust the benchmarks shown in this thread. They're either fake or GP106.
> 
> .


the problem lies with two things:
Nvidia needs to have a hardware scheduler AND ACE-like engines, while at the same time hitting a TDP that will actually showcase the perf/watt ratio of 16nm, AND without giving off any false flags about the Maxwell/2.0 hardware shortcomings that made that mess.


----------



## looniam

am i supposed to be "worried?"
@1403Mhz: http://www.3dmark.com/3dm11/10940012











though that driver # in those SS is interesting . .


----------



## iLeakStuff

Quote:


> Originally Posted by *KarathKasun*
> 
> I am pretty sure its going to be x70 < 980Ti < x80.
> 
> 28nm to 16nm is not a full shrink, the structure size does not go down by half. This means density is only going up by 30-40%.


Incorrect. The cells actually decrease by 50%. Did you even look at the picture you are referring to? I swear the math on this forum is just absolutely horrible. Teachers would have a heart attack reading the equations here. Psychologists would have a field day with those who still defend their horrible math and think they are right and everyone else is wrong.









Try (90*64)/(118*90). That will give you the answer on the density of 16nm vs 28nm
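(For anyone following along, that expression is just cell area over cell area. A quick Python check of the numbers as given, assuming they really are the two cell dimensions from the slide being argued over:)

```python
# Area ratio of a 16nm cell vs. a 28nm cell, using the dimensions
# quoted above (units cancel out, so they don't matter here).
cell_16nm = 90 * 64    # 5760
cell_28nm = 118 * 90   # 10620

ratio = cell_16nm / cell_28nm    # ~0.54 -> 16nm cell is ~54% of the 28nm area
density = cell_28nm / cell_16nm  # ~1.84x the density

print(f"16nm cell is {ratio:.0%} the area, so ~{density:.2f}x density")
```

So strictly it works out to about 1.84x rather than a clean 2x, i.e. roughly a 46% area reduction; close to the "50%" claim, but not exactly it.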


----------



## zetoor85

Fake. No doubt Nvidia has some Pascal ready at some point, but this is 100% fake.


----------



## Klocek001

Probably fake. But 9K -> 11.7K is 30% faster, twice as good as the 780 Ti -> 980 jump, right? If they came up with a 1080 priced at $550, 30% faster than the 980 Ti with considerably lower power draw and temps, I guess it'd be acceptable as a moderately decent release. Maybe I'd even get two; the thing that stopped me from getting a second 980 Ti is the temps, which are already high on a single air-cooled 980 Ti, plus I switch cards too often to pump extra money into watercooling.
A lot will depend on clocking too, I dare say it might be make or break in some cases. If the 980 Ti -> 1080 difference is indeed 30% but the 1080 can't overclock as well as the 980 Ti, bringing the advantage down to 20-25% against a 1.5GHz 980 Ti, it will be a bust. If however the 1080 can OC as well as the 980 Ti, the difference will grow even bigger and might come up to 35-40% in some cases, which would be good. Nevertheless I want to see at least a 40-50% improvement over the 980 Ti, not 30%.


----------



## airfathaaaaa

Quote:


> Originally Posted by *iLeakStuff*
> 
> Incorrect. The cells actually decrease by 50%. Did you even look at the picture you are referring to? I swear the math on this forum is just absolutely horrible. Teachers would get a heart attack by reading the equations here. Psychologist would have a field day on those who still defend their horrible math and think they are right and everyone else is wrong
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Try (90*64)/(118*90). That will give you the answer on the density of 16nm vs 28nm


Yeah, but it's not truly a 16nm node... it's a 20nm node with low-power FinFETs. There is a very big difference between a native 16nm process and the one Nvidia has at the moment.
http://www.soitec.com/pdf/WP_handel-jones.pdf


----------



## Yop

I will be waiting for whichever company releases an HBM2 card first for a nice sizeable jump in power.


----------



## guttheslayer

Have you guys not realised it's been almost 9 months since the 980 Ti was released? There have been no cards in between, and if NV fails to deliver a GTX 1080 above the speed of the 980 Ti, it's truly a failed card, especially after 12 months of no releases.


----------



## Piraal

Pretty sure I remember 780Ti trading blows with 980 on release, so "truly a failed card" is a little bit of sensationalizing.


----------



## mcg75

Quote:


> Originally Posted by *Piraal*
> 
> Pretty sure I remember 780Ti trading blows with 980 on release, so "truly a failed card" is a little bit of sensationalizing.


Perhaps not failed but certainly not what we want.

The 980 was a mid-sized 28nm die designed on the same node as the fully enabled full sized 28nm die 780 Ti.

The fact that it consistently beat the 780 Ti was very impressive considering there was no die shrink.

This time around, Nvidia has the advantage of a die shrink to 16nm.

The last time this same scenario happened, the gtx 680 ran circles around the gtx 580.

The 980 Ti isn't even a fully enabled die like the 580 or 780 Ti.

If this new card doesn't solidly beat the 980 Ti, it's disappointing for us all.


----------



## iLeakStuff

Quote:


> Originally Posted by *airfathaaaaa*
> 
> yeah but its not a trully 16nm node....its a 20nm node with low power finfets there is a very big difference between a native 16nm and the one nvidia has atm
> http://www.soitec.com/pdf/WP_handel-jones.pdf


No.

The one I'm referring to in my post is 20nm. TSMC's 20nm (16nm) cell is 50% the size of TSMC's 28nm cell.
That's why TSMC states 2x the density over 28nm, and that is why these benchmarks are most likely fake and not Pascal. If they are real, it's a GTX 1070 at most, and that's with a crappy i3 processor.


----------



## iLeakStuff

Here is a proper comparison of the benchmark in the OP against a GTX 980Ti.
Both with an i3 2100.

Launching a GTX 1080 with the exact same performance as GTX 980Ti makes ZERO sense looking at previous generations. GTX 1070? Maybe.

Look at the Graphic scores. That is what matters

GTX 980Ti


Pascal?


----------



## iLeakStuff

Another tidbit is:
GTX 980Ti Driver version: 10.18.13.6175
Unknown Driver version: 10.18.13.6744

Not sure what these mean exactly.


----------



## rluker5

Quote:


> Originally Posted by *iLeakStuff*
> 
> Here is a proper comparison against the benchmark in OP against GTX 980Ti.
> Both with an i3 2100.
> 
> Launching a GTX 1080 with the exact same performance as GTX 980Ti makes ZERO sense looking at previous generations. GTX 1070? Maybe.
> 
> Look at the Graphic scores. That is what matters
> 
> GTX 980Ti
> 
> 
> Pascal?


It only makes zero sense in terms of isolated logic. If they can only make an OC'd Pascal Titan perform 40% better than an OC'd Titan X (due to FP64/FP16 and clock speed losses from the new node and FinFET), then the stepping is appropriate. Without knowing how these things are actually working out in real life, the logic has some of its framework based in conjecture. We will find out more later.


----------



## ZealotKi11er

Quote:


> Originally Posted by *mcg75*
> 
> Perhaps not failed but certainly not what we want.
> 
> The 980 was a mid-sized 28nm die designed on the same node as the fully enabled full sized 28nm die 780 Ti.
> 
> The fact that it consistantly beat the 780 Ti was very impressive considering no die shrink.
> 
> This time around, Nvidia has the advantage of a die shrink to 16nm.
> 
> The last time this same scenario happened, the gtx 680 ran circles around the gtx 580.
> 
> The 980 Ti isn't even a fully enabled die like the 580 or 780 Ti.
> 
> If this new card doesn't solidly beat the 980 Ti, it's disappointing for us all.


Yes, but at the same time the GTX 680 was the GTX 580's replacement even though technically it was not. The GTX 780 came much, much later, too late not to call the GTX 680 the true replacement of the GTX 580. The price also speaks for itself. If you expect the GTX 1080 to run circles around the GTX 980 Ti then it will be at least $550.


----------



## iLeakStuff

Remember the GTX 970 and its controversy?
3.5GB was high-priority memory. The remaining 512MB was in a different "bank".

The Pascal GPU that scored like a GTX 980Ti reports 7680MB of VRAM. Total memory on an 8GB card is 8192MB; deduct 512MB and we get 7680MB.
Could this be a GTX 1070 with the same memory setup as the Maxwell GM204 cards, while the GTX 1080 gets the full 8192MB?

Or is the 3DMark 11 application even capable of detecting this and displaying 7680 instead of 8192MB?
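(The deduction above is just this, assuming the 7680MB readout really is the total minus a 970-style segmented bank, which is pure speculation at this point:)

```python
# If the card is physically 8GB but the benchmark only reports the
# fast partition, the numbers line up with a 970-style split.
total_mb = 8 * 1024      # 8192 MB on a full 8GB card
reported_mb = 7680       # what the 3DMark entry shows

slow_bank_mb = total_mb - reported_mb
print(f"Implied slow bank: {slow_bank_mb} MB")  # 512 MB, same size as the 970's

# The GTX 970's actual split, for comparison:
fast_970_mb = int(3.5 * 1024)  # 3584 MB high-priority
slow_970_mb = 512              # segmented bank
```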


----------



## looniam

Quote:


> Originally Posted by *iLeakStuff*
> 
> Here is a proper comparison against the benchmark in OP against GTX 980Ti.
> Both with an i3 2100.
> 
> Launching a GTX 1080 with the exact same performance as GTX 980Ti makes ZERO sense looking at previous generations. GTX 1070? Maybe.
> 
> Look at the Graphic scores. That is what matters
> 
> GTX 980Ti
> 
> 
> Pascal?


Quote:


> Originally Posted by *iLeakStuff*
> 
> Another tidbit is:
> GTX 980Ti Driver version: 10.18.13.6175
> Unknown Driver version: 10.18.13.6744
> 
> Not sure what these mean exactly.


pssst iirc win 8.1 does give better scores in 3dmark11.


----------



## iLeakStuff

Quote:


> Originally Posted by *looniam*
> 
> pssst iirc win 8.1 does give better scores in 3dmark11.


Not an over-300 graphics score boost, though


----------



## mcg75

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Yes but at the same time GTX680 was a GTX580 replacement even though technically it was not. GTX780 came much much later to not call GTX680 the true replacement of GTX580. The price also speaks for itself. If you expect GTX1080 to run circles around GTX980 Ti then it will be at least $550.


The object of the discussion wasn't to argue about what a "true replacement" really is.

We're talking about the performance of Nvidia's mid-size die flagship vs the previous generation full die flagship performance.

The 680 and 980 fall into this discussion because they aren't the full die and they were the flagship.

780 belongs nowhere in the conversation as it was the full die albeit with parts disabled.


----------



## Klocek001

28nm to 16nm is a 43% linear shrink; 40nm to 28nm was only 30%, yet we still saw shader count more than double on the 290mm² 670 compared to the 520mm² 580.
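(Those percentages are linear, feature-size shrinks; the implied area shrink is bigger, which is where the room for doubled shader counts comes from. A quick check in Python, using the nominal node names, which are marketing labels more than physical measurements:)

```python
# Linear and implied area reduction between nodes, by nominal feature size.
def shrink(old_nm, new_nm):
    linear = 1 - new_nm / old_nm        # 1D feature-size reduction
    area = 1 - (new_nm / old_nm) ** 2   # implied 2D area reduction
    return linear, area

lin_16, area_16 = shrink(28, 16)
lin_28, area_28 = shrink(40, 28)

print(f"28nm -> 16nm: {lin_16:.0%} linear, {area_16:.0%} area")  # 43% / 67%
print(f"40nm -> 28nm: {lin_28:.0%} linear, {area_28:.0%} area")  # 30% / 51%
```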


----------



## looniam

Quote:


> Originally Posted by *iLeakStuff*
> 
> Quote:
> 
> 
> 
> Originally Posted by *looniam*
> 
> pssst iirc win 8.1 does give better scores in 3dmark11.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Not over 300 graphic score boost
Click to expand...

sorry, i just checked and it's in FS that win 8.1 is better.
but a driver can add ~300 graphics points. hell, *even between bench runs w/ the same driver* . .









common? no. but it can happen.


----------



## Redwoodz

Quote:


> Originally Posted by *renx*
> 
> If I remember correctly, the 970 matched the 780ti basically from the very beginning.
> Maybe it was 5% behind overall, but it didn't take more than a month or two to match and even improve the 780ti's performance.


You had stripping of compute in Maxwell. It allowed higher relative performance.


----------



## Rob27shred

This is all BS IMHO; just the fact that this is from WCCFTech is enough to make me a disbeliever TBH...


----------



## VeritronX

Quote:


> Originally Posted by *Dargonplay*
> 
> You could OC both cards and the 970 would still be faster.
> 
> OC Headroom should be an important factor, but only comparing cards by their OC is silly, that's why only Linustechtips do it. There are some 780Ti that couldn't even get to 1100MHz while there are 970s that can reach up to 1700MHz, there is always some variance hence it isn't a reliable way to measure one chip's performance capabilities.
> And what advantage is that exactly? Tessellation? Geometry? even if what you say it's true and Maxwell does have an advantage over Kepler, ins't that the whole point? Of Course that a newer architecture must have some advantage over the old one, advantages that will end up making it faster.
> 
> There is no game that has been released in the last 6 months where a 970 is not faster than a 780Ti.
> Again, very cringe worthy. The 900 series doesn't have "Years" of existing, it doesn't needed "Years" for its drivers to mature and have a 970 trashing the floor $/Performance with a 780Ti.
> 
> A 970 became a faster card than a 780Ti just a couple months after release, now you want to compare overly mature drivers with overly immature drivers? suit yourself, although you can't deny that only a few months after launching the 970 was the faster card and this was achieved under the already perfected, over exploited fabrication process that both generations were sharing.
> 
> Pascal on the other hand is coming with a HUGE node shrink, one of the biggest jumps in the whole history of Graphics card (When referring to percentage of shrinkage), and you're here with delusions of insignificance, it's not even remotely logic.


Most 780 Ti's can do 1202MHz pretty easily; some really unlucky people could only get around 1150MHz or so, and custom PCBs can get to 1300MHz and beyond. Considering the card would boost to 1GHz at stock, that's a decent OC.

If you want to do a speed comparison, here's mine at 1202MHz / 1.15v with an i7 4790K at 4.6GHz running Heaven 4.0 on a recent driver:


Spoiler: Warning: Spoiler!







If that's too old for your liking, the newest game I have is Rainbow Six: Siege; I'll have to reinstall it to run the test (need to get a bigger SSD).


----------



## Mhill2029

Quote:


> Originally Posted by *Rob27shred*
> 
> This all BS IMHO, just the fact that this is from WCCFTech is enough to make me a disbeliever TBH...


Most likely...

But me, I've got to the point where I'm sick to death of hearing about Pascal. It's been rumoured and discussed for so long with no real facts on the table. Last I heard the big chips aren't coming until Q4 2016 - Q1 2017, and I couldn't care less about the peasant cards









Nobody cares about the lesser cards anyway...


----------



## VeritronX

Quote:


> Originally Posted by *Mhill2029*
> 
> Nobody cares about the lesser cards anyway...


Eh, I'm looking at upgrading from my 780ti if they have a card that's less than 200W and can match the 980ti or Titan X for performance. I like quiet reference-design cards, and the 680 and 980 have both been that so far. Beats having to buy a waterblock to get it to my desired level of quiet.


----------



## Mhill2029

Quote:


> Originally Posted by *VeritronX*
> 
> Eh, I'm looking at upgrading from my 780ti if they have a card that's less than 200W and can match the 980ti or titan X for performance.. I like quiet reference design cards and 680, 980 both have been that so far. Beats having to buy a waterblock for it to get it to my desired level of quiet.


Don't you run a 1080p display though? That 780Ti will last you a few more years yet my friend.


----------



## evoll88

So stay with two titan x's or upgrade on 4k monitor?


----------



## VeritronX

Quote:


> Originally Posted by *Mhill2029*
> 
> Don't you run a 1080p display though? That 780Ti will last you a few more years yet my friend.


I have a 1080p 120Hz at the moment, but I've been looking to get a 1440p G-SYNC monitor for a while... I'm probably going to hold off and see what happens with DisplayPort 1.3 and HDR displays now that cards with that ability are on the horizon.

No kidding about the 780ti being enough for 1080p though. Just finished running the R6 Siege benchmark on ultra at 1080p:


Spoiler: Warning: Spoiler!




Ignore the glare from my light, only the camera on my phone shows it


----------



## iLeakStuff

If one is to assume these benchmarks are real, I'm calling it now. This looks like a very reasonable scenario to me. WCCFTech was looking at it wrong. They didn't find a GTX 1080, but one entry of a GTX 1070 and two entries of a GTX 1060.

As always, look at the "Graphic Score" for GPU performance. The total score depends on RAM, CPU etc., all baked into the mix.

*GTX 980Ti*

http://www.3dmark.com/3dm11/10877835
*
GTX 1070*

http://www.3dmark.com/3dm11/11061015

Both equal in performance. The GTX 1070 comes with a 256-bit bus and 2GB more GDDR5 than the GTX 980Ti. The GTX 1070 comes in at ~140W TDP while the GTX 980Ti is 250W.
The GTX 1070 is about 45% faster than the GTX 970.
Although we have no numbers yet, the GTX 1080 is probably 20-30% above this.

*
GTX 960*

http://www.3dmark.com/3dm11/10832689

*GTX 1060*

http://www.3dmark.com/3dm11/10852916

The GTX 1060 is about 40% faster than the GTX 960. The GTX 1060 features 3GB GDDR5 and comes with a 192-bit bus.
The GTX 1060 is about 10% faster than a GTX 970.


----------



## Dargonplay

Quote:


> Originally Posted by *iLeakStuff*
> 
> If one is to assume these benchmarks is real, I`m calling this now. This look like a very reasonable scenario to me. WCCFTech was looking at it wrong. They didnt find GTX 1080, but one entry of GTX 1070 and two entries of GTX 1060
> 
> GTX 980Ti
> 
> 
> GTX 1070
> 
> 
> Both equal in performance. GTX 1070 comes with 256bit bus and 2GB more GDDR5 than GTX 980Ti. GTX 1070 comes with ~140W TDP while GTX 980Ti is 250W.
> GTX 1070 is about 45% faster than GTX 970.
> Although we have no numbers yet, GTX 1080 is probably 20-30% above this.
> 
> GTX 1060
> 
> 
> GTX 960
> 
> 
> GTX 1060 is about 40% faster than GTX 960. GTX 1060 features 3GB GDDR5 and comes with 192bit bus.
> GTX 1060 is about 10% faster than a GTX 970,


https://www.youtube.com/watch?v=uelHwf8o7_U

I love the way you lie.

I don't like the way it'd hurt for you to be wrong though, I think I would become a GPU beater.

Pascal better be delivering.


----------



## looniam

^ links please (iLeakStuff)

on a side note: IF TRUE, it's interesting looking at the links already that Pascal isn't beating Maxwell in all the graphics tests. seems Pascal does much better w/ tess and volumetric lighting.

what was that latest GW library?


----------



## iLeakStuff

Quote:


> Originally Posted by *looniam*
> 
> ^ links please (iLeakStuff)
> 
> on a side note: IF TRUE, it's interesting looking at the links already that Pascal isn't beating Maxwell in all the graphics tests. seems Pascal does much better w/ tess and volumetric lighting.
> 
> what was that latest GW library?


Added the links to the entries I compared with.
Wccftech thinks the GTX 1060 was a mobile GP104 GPU, but it's clearly a desktop since it's using this motherboard:
http://www.asrock.com/mb/intel/Z170%20Extreme6/


----------



## looniam

Quote:


> Originally Posted by *iLeakStuff*
> 
> Added the links to the entries I compared with.
> Wccftech thinks the GTX 1060 was a mobile GP104 GPU but thats clearly a desktop since its using this motherboard
> http://www.asrock.com/mb/intel/Z170%20Extreme6/


thanks for that.








i'm looking at the FPS for each test. the 980ti is ahead in the first two, while the "unknown" pulls ahead with bigger wins in tests 3, 4 and combined.

then using the description of each test on the Futuremark site:
https://www.futuremark.com/benchmarks/3dmark11


----------



## solt

Can someone please explain to me what the release schedule is like for Nvidia desktop gpus over the next year? Are the gtx1070 and gtx1080 coming before June 2016? I saw this post with X80 that got me confused.


----------



## iLeakStuff

Quote:


> Originally Posted by *looniam*
> 
> thanks for that.
> 
> 
> 
> 
> 
> 
> 
> 
> i'm looking at the FPS for each test. the 980ti is ahead in the first two while the "unknown" pulls ahead with bigger wins for test 3, 4 and combined.
> 
> then using the description of each test on the futermark site.
> https://www.futuremark.com/benchmarks/3dmark11


Very interesting find. Is that typical for Pascal?
Tessellation and lighting were what Maxwell did very well: http://images.anandtech.com/graphs/graph9659/All-4K-render.png. Fiji was good in the other parts. Perhaps they are moving towards async and the same features as AMD this time?


3DMark11


----------



## Forceman

Quote:


> Originally Posted by *solt*
> 
> Can someone please explain to me what the release schedule is like for Nvidia desktop gpus over the next year? Are the gtx1070 and gtx1080 coming before June 2016? I saw this post with X80 that got me confused.


No one knows. The assumption is that they will come around the June time frame, with the big chips coming early 2017.


----------



## looniam

Quote:


> Originally Posted by *iLeakStuff*
> 
> Very interesting find. Is that typical for Pascal?
> Tessellation and lighting was what Maxwell did very well in: http://images.anandtech.com/graphs/graph9659/All-4K-render.png. While Fiji was good in the other parts. Perhaps they are moving towards async and the same features as AMD this time?
> 
> 
> 3DMark11


well, right now i am trying to find the differences between volumetric illumination (test 1, which maxwell wins) and volumetric lighting (test 3 and combined, which "pascal" wins substantially)

the *Futuremark site* seems to be more descriptive of the tests than the guru3d download page btw.


----------



## i7monkey

Are we ever going to see a return to big die chips for new generation releases like the GTX 480 (GF100) or is Nvidia content giving us midrange GX104 for the rest of time?

GK104 (GTX 680)
GM204 (GTX 980)
GP104 (GTX 1080)

They're not that much faster than the 580, 780Ti, and 980Ti.

It's really disappointing.


----------



## Forceman

Quote:


> Originally Posted by *looniam*
> 
> well, right now i am trying to find the differences between volumetric illumination (test 1, which maxwell wins) and volumetric lighting (test 3 and combined, which "pascal" wins substantially)
> 
> the *futuremark site* seems to be more descriptive of the tests than guru3d download page btw.


Do we think 3DMark11 uses enough advanced features to make it a valid way to compare Maxwell to Pascal?
Quote:


> Originally Posted by *i7monkey*
> 
> Are we ever going to see a return to big die chips for new generation releases like the GTX 480 (GF100) or is Nvidia content giving us midrange GX104 for the rest of time?
> 
> GK104 (GTX 680)
> GM204 (GTX 980)
> GP104 (GTX 1080)
> 
> They're not that much faster than the 580, 780Ti, and 980Ti.
> 
> It's really disappointing.


Probably not; the small-chip-first strategy is a better business plan, since you can get people to upgrade twice.


----------



## iLeakStuff

I'd get two of those GTX 1070s if they cost $350. Anything above that while merely matching a GTX 980Ti is too much, I think.
With plain GDDR5 they are not worth more. The only VRAM difference for the first cards seems to be 2GHz vs 1.75GHz.


----------



## Klocek001

And they'll probably pack more than 3.5GB of VRAM. If the 1080 is supposed to be 8GB then the 1070 might be as well; even if it's 7+1 I wouldn't care. I'd just like a 1070 that with an OC matches a stock 980Ti with custom cooling, which means a 1.35GHz one.


----------



## iLeakStuff

Quote:


> Originally Posted by *Klocek001*
> 
> and they'll probably pack more than 3.5GB vram. if 1080 is supposed to be 8GB then 1070 might as well, even if it's 7+1 I wouldn't care. Just would like a 1070 with OC to match a stock 980Ti with custom cooling, that means a 1.35GHz one.


Didnt you see the 3DMark entry for GTX 1070?
http://www.3dmark.com/3dm11/11061015

7680MB. Where are the remaining 512MB? Sounds familiar? lol








Still not sure if 3DMark would read 8GB or 7680MB though. Is it unable to read that 512MB? Sounds weird if it can't, so I don't know.
Quote:


> GTX 970 is a 4GB card. However, the upper 512MB of the additional 1GB is segmented and has reduced bandwidth. This is a good design because we were able to add an additional 1GB for GTX 970 and our software engineers can keep less frequently used data in the 512MB segment.


----------



## i7monkey

Quote:


> Originally Posted by *Forceman*
> 
> Probably not, the small first strategy is a better business plan since you can get people to upgrade twice.


The only viable upgrade imo are the full GX100 chips.

GX104 is barely any faster than the previous gen top performer.
GX100 Titan is a blatant rip off.

Which leaves us with the Ti version. Same performance as Titan at two thirds the cost.

Problem is they tend to come out every year to year and a half after new gen releases.

GTX 680 (April 2012) ---> Titan (Feb 2013) ---> GTX 780 (May 2013) ---> GTX 780Ti (November 2013) = *1 year and 7 months between GK104 and full chip GK110*.

GTX 980 (September 2014) ---> Titan X (Mar 2015) ---> GTX 980Ti (June 2015) = 9 months, which is actually pretty quick but it would be better if they came out with full GX100 chips on new generation launches instead of making us wait a year for the real stuff.

Pretty disappointing stuff from Nvidia if you ask me. The 780 was an insult to original Titan owners. The 780Ti was an insult to Titan/780 owners. 980Ti was offensive to Titan X owners.

They're intent on screwing with their customers yet people still continue to buy it and at outrageous prices. We've argued Titan on this board for 3+ years now, lol, but it's still offensive.

No midrange stuff, no overpriced Titan nonsense. Give us a slightly cut down big die GX100 chip at launch and then a year later give us the full chip. Everything else is worthless imo.


----------



## iLeakStuff

AMD wont be any better.
Their Polaris 10 card wasn't much faster than the Fury X. I think it was 10-15% faster than the Fury X, and that's their high-end card, which they showed just recently.


----------



## Klocek001

An offensive graphics card? People are getting frustrated with all kinds of stupid stuff these days...
Did the introduction of the 980Ti make people's Titan Xs run worse? Let's ask around.


----------



## i7monkey

Quote:


> Originally Posted by *iLeakStuff*
> 
> AMD wont be any better.
> Their Polaris 10 card wasnt much faster than Fury X. I think it was 10-15% faster than Fury X and thats their high end card which they showed just recently


We've got a big die shrink, can't they do any better? Are the dies this much smaller than Fury X/980Ti or are they intentionally gimping these cards?


----------



## Pyrotagonist

Polaris 10 is apparently a small die at 232mm², more of a Pitcairn replacement/GP106 competitor than anything. If GP104 is closer to 300mm² like previous 104s, it's not impressive.


----------



## sinholueiro

Quote:


> Originally Posted by *iLeakStuff*
> 
> AMD wont be any better.
> Their Polaris 10 card wasnt much faster than Fury X. I think it was 10-15% faster than Fury X and thats their high end card which they showed just recently


Was the Hitman demo on Ultra? If not, Polaris 10 is even less powerful than the Fury X. If Polaris 10 is like a Fury X at 150W with HBM, like some say, I would pay up to $449; $499 if it comes with 8GB of HBM.


----------



## iLeakStuff

Quote:


> Originally Posted by *i7monkey*
> 
> We've got a big die shrink, can't they do any better? Are the dies this much smaller than Fury X/980Ti or are they intentionally gimping these cards?


I think both Nvidia and AMD will play the "FinFET card" this round, meaning they will present and market these cards as the "world's most efficient" etc. and run watt comparisons against previous cards. It's a nice feature, but it isn't exactly compelling if the performance gain over previous cards isn't that great.
Quote:


> Originally Posted by *Pyrotagonist*
> 
> Polaris 10 is apparently a small die - 232mm^2, more of Pitcairn replacement/GP106 competitor than anything. If GP104 is closer to 300mm^2 like previous 104s, it's not impressive.


That is not confirmed. All we know is that one chip will be 232mm², and that AMD is planning to launch two types of chips this year: Polaris 10 and Polaris 11.
I'm betting that the 232mm² card is Polaris 11, the low-end card. I really doubt they are able to go down to 232mm² from the 350-380mm² that has been typical for high end for many years.
Quote:


> Originally Posted by *sinholueiro*
> 
> Was the Hitman demo in Ultra? If not, Polaris 10 is even less powerful than FuryX. If Polaris 10 is like FuryX at 150W with HBM like some say, I would pay up to 449. 499 if it comes with 8GB HBM.


It was Polaris 10 with Ultra settings at 1440p with DX12 in Hitman, getting 60FPS on average,
vs a Fury X with Ultra settings at 1440p with *DX11*, getting 50FPS on average.

I also think Polaris 10 will come with HBM1, although it's not confirmed. But the Polaris 10 GPU was really small.


----------



## Forceman

Quote:


> Originally Posted by *iLeakStuff*
> 
> Didnt you see the 3DMark entry for GTX 1070?
> http://www.3dmark.com/3dm11/11061015
> 
> 7680MB. Where are the remaining 512MB? Sounds familiar? lol
> 
> 
> 
> 
> 
> 
> 
> 
> Still not sure if 3DMark would read 8GB or 7680MB though. If its unable to read that 512MB? Sounds weird if it cant so I dont know


3DMark reads the 970 as a 4GB card, since that is what it is. If they use double density VRAM to get 8GB, like AMD does on the 390s, then it would have a 1GB partition, not 512MB.


----------



## looniam

Quote:


> Originally Posted by *Forceman*
> 
> Do we think 3DMark11 uses enough advanced features to make it a valid way to compare Maxwell to Pascal?


i don't know, but it does seem coincidental that the "pascal" card does much better in a DX11 benchmark with volumetric lighting just as NV launched the same in a GameWorks library.

this is nvidia's stated mission: to promote their newest (or most advanced) hardware tech through their software development.


----------



## Cyber Locc

Quote:


> Originally Posted by *sinholueiro*
> 
> Was the Hitman demo in Ultra? If not, Polaris 10 is even less powerful than FuryX. If Polaris 10 is like FuryX at 150W with HBM like some say, I would pay up to 449. 499 if it comes with 8GB HBM.


I remember reading somewhere that they can't do 8GB of HBM1, hence why NV is going to use HBM2. Not sure if there is any truth to that, just something I read somewhere.

I don't know very much about HBM memory, so take that with a grain of salt unless someone who knows more can confirm it.


----------



## ZealotKi11er

Personally, knowing that HBM2 is going to be 1024GB/s makes me hold off on a true upgrade. Prices have gone up, so when I upgrade I want something that lasts. The first wave will probably get a lot of people because of price/performance and performance/watt, but all I care about is pure performance.


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Personally knowing that HBM2 is going to be 1024GB/s make me hold off the a true upgrade. Prices have gone up so when I upgrade I want something that lasts. The first wave will probably get a lot of people because of price/performance and performance/watt but all I care is pure performance.


I am with you, which is why I went with a Ti recently; I will use it till the next Ti has non-reference cards. If I get a year or a year and a half out of it and can sell it for $300-400 with a block, that's not a bad loss for a year's use IMO.

I am not buying into paying $700+ for a 1080 only for it to drop by $200 a few months later when the real flagships come.


----------



## magnek

Quote:


> Originally Posted by *looniam*
> 
> ^ links please (iLeakStuff)
> 
> on a side note; IF TRUE its interesting looking at the links already, that pascal isn't beating maxwell in all the graphics tests. seems pascel does much better w/tess and volumetric lightning.
> 
> what was that latest GW library?


Not Tess again!







I swear she's like that annoying ex you just can't get rid of.
Quote:


> Originally Posted by *looniam*
> 
> i don't know but it does seem coincidental that the "pascal" card does much better in a DX11 benchmark with volumetric lightning as NV just launched the same in a GW library.
> 
> this is nvidia's stated mission; to promote their newest (or most advanced) hardware tech through their software development.


Quoted for posterity and probable truth.
Quote:


> Originally Posted by *ZealotKi11er*
> 
> Personally knowing that HBM2 is going to be 1024GB/s make me hold off the a true upgrade. Prices have gone up so when I upgrade I want something that lasts. The first wave will probably get a lot of people because of price/performance and performance/watt but all I care is pure performance.


I'll be damned if I buy another GDDR5(X) card I'll tell ya that much.


----------



## Cyber Locc

Quote:


> Originally Posted by *magnek*
> 
> Not Tess again!
> 
> 
> 
> 
> 
> 
> 
> I swear she's like that annoying ex you just can't get rid of.


Hey, I like Tess, I even chose her in my latest playthrough







. She is way nicer than Yen.


----------



## iLeakStuff

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Personally knowing that HBM2 is going to be 1024GB/s make me hold off the a true upgrade. Prices have gone up so when I upgrade I want something that lasts. The first wave will probably get a lot of people because of price/performance and performance/watt but all I care is pure performance.


HBM2 won't make the card last any longer than GDDR5.
I understand wanting to buy the biggest chips though, since that's where you get the most performance. But on the other hand, GP104 will probably offer better performance per dollar than GP100...


----------



## ZealotKi11er

Quote:


> Originally Posted by *iLeakStuff*
> 
> HBM2 wont make the card last any longer than GDDR5..


What do you mean? GDDR5 is maxed out already. Nvidia could maybe get another 35% if they went 512-bit, but that's it. HBM2 cards will have 3x the memory bandwidth of the GTX 980 Ti. That is hugeeeee.


----------



## iLeakStuff

Quote:


> Originally Posted by *ZealotKi11er*
> 
> What do you mean? GDDR5 is maxed out already. Nvidia can maybe get another 35% if they got 512-Bit but thats it. HBM2 cards will have 3x memory bandwidth of GTX980 Ti. That is hugeeeee.


An overclocked GTX 980 Ti certainly doesn't have any issues with GDDR5.
That's probably the GTX 1080 for you. This time they are using 2GHz VRAM, while the 980 Ti runs 1.75GHz.

GP100 would probably show signs of bottlenecking if it used GDDR5; that I agree with. But although HBM2 has huge memory bandwidth, it doesn't suddenly boost the GPU up into the clouds. It's like NVMe SSDs vs SATA3 SSDs.

The GTX 970 was a vastly better buy than the GTX 980 Ti value-wise. Going SLI on the GTX 1070 may actually be a viable route as well, instead of waiting for GP100.
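For concreteness, the bandwidth figures being argued over here are straight arithmetic: peak bandwidth = (bus width in bits / 8) × effective per-pin data rate, and GDDR5's effective rate is 4× the memory clock. A minimal sketch using the 980 Ti's known 384-bit bus; the 2GHz card's bus width and the 1024GB/s HBM2 figure are just the rumored numbers from this thread, not confirmed specs:

```python
def gddr5_bandwidth_gbps(bus_bits: int, mem_clock_ghz: float) -> float:
    """Peak GDDR5 bandwidth in GB/s.

    GDDR5 is quad-pumped, so the effective per-pin rate is
    4 x the memory clock (in Gbps per pin).
    """
    return bus_bits / 8 * (mem_clock_ghz * 4)

gtx_980_ti = gddr5_bandwidth_gbps(384, 1.75)        # 336 GB/s (known spec)
rumored_2ghz = gddr5_bandwidth_gbps(384, 2.0)       # 384 GB/s, if the bus stays 384-bit (assumption)
hypothetical_512 = gddr5_bandwidth_gbps(512, 1.75)  # 448 GB/s, ~33% over the 980 Ti

hbm2 = 1024  # GB/s, the figure quoted in this thread
print(hbm2 / gtx_980_ti)  # ~3.05x, the "3x" claim above
```

Which lines up with the posts above: a 512-bit GDDR5 bus only buys roughly a third more bandwidth, while the quoted HBM2 figure is about triple the 980 Ti's.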


----------



## DNMock

Quote:


> Originally Posted by *Randomdude*
> 
> Oh my God, please educate yourself. Math is a science that...


Stopped reading there.... Math is most definitely not a science. Math is a tool (an extremely important and refined one, but a tool nonetheless) that is used by science.

I'll let someone who is way smarter than everyone who posted in this thread combined explain it a bit better:




Pro-Tip: Never go out of your way to try and make yourself look more intelligent than you really are. The end result is you look like a moron. Speaking from experience, lots and lots of experience.


----------



## sinholueiro

Quote:


> Originally Posted by *iLeakStuff*
> 
> I think both Nvidia and AMD will play with the "FinFET card" this round. Meaning they will present and market these card as "worlds most efficient" cards etc and have these watt comparisons against previous cards. Its a nice feature but it isnt exactly perfect if performance isnt that great over previous cards.
> That is not confirmed. All we know is that one chip will be 232mm. And that AMD is planning to launch two types of chips this year: Polaris 10 and Polaris 11.
> I`m betting that the 232mm2 card is Polaris 11, the low end card. Really doubt they are able to go down from 350-380mm2 which is typical for high end for many years and go down to 232mm2.
> It was Polaris 10 with Ultra settings in 1440p with DX12 in Hitman getting 60FPS in average
> vs Fury X with Ultra settings in 1440p with *DX11* getting 50FPS in average
> 
> I think too that Polaris 10 will come with HBM1 although its not confirmed. But the Polaris10 GPU was really small


When did they say that the game was running in Ultra?

Quote:


> Originally Posted by *Cyber Locc*
> 
> I remember reading somewhere that they cant do 8gb of HBM1, hints why NV is going to use HBM 2. Not sure if there is any truth to that, just something I read somewhere.
> 
> I dont know very much about HBM memory, so take that with a grain of salt unless someone that knows more can confirm that.


4GB is the maximum, but there are hints that AMD has a workaround to up that to 8GB.


----------



## Noufel

Quote:


> Originally Posted by *iLeakStuff*
> 
> AMD wont be any better.
> Their Polaris 10 card wasnt much faster than Fury X. I think it was 10-15% faster than Fury X and thats their high end card which they showed just recently


where did you see that? can you please link me a source


----------



## iLeakStuff

Quote:


> Originally Posted by *Noufel*
> 
> where did you see that? can you please link me a source


http://www.overclock3d.net/articles/gpu_displays/amd_polaris_10_engineering_sample_pictured/1


----------



## _Killswitch_

I know these are all "rumors" and such, but I think a GTX 1070 or 1080 will be a nice upgrade from my aging GTX 680. In any case, I'm excited. If I had bought a 900 series card, probably not as much.


----------



## guttheslayer

I don't see why GP104 won't be using a 512-bit bus.


----------



## zealord

Quote:


> Originally Posted by *guttheslayer*
> 
> I dont see why GP104 wont be using 512 bits.


It could be, but it's highly unlikely.

The last 512-bit card from Nvidia was the GTX 280 or thereabouts.

All cards after that have been either 256- or 384-bit. Even the Titans are only 384-bit.

It still could be, but I heavily doubt it.


----------



## Noufel

Quote:


> Originally Posted by *iLeakStuff*
> 
> http://www.overclock3d.net/articles/gpu_displays/amd_polaris_10_engineering_sample_pictured/1


----------



## Cyber Locc

Quote:


> Originally Posted by *iLeakStuff*
> 
> I think both Nvidia and AMD will play with the "FinFET card" this round. Meaning they will present and market these card as "worlds most efficient" cards etc and have these watt comparisons against previous cards. Its a nice feature but it isnt exactly perfect if performance isnt that great over previous cards.


Yep, that is what I have thought all along







. I know you wanted better performance, and I completely get why; I know we have bumped heads over this. I just don't think we will see that performance. I think we will see compute and efficiency.

It is a sad day to see a small upgrade in what everyone has hyped up to be 100,000 times better, but I honestly feel that's what's coming







. I even bought a 980 Ti instead of waiting for Pascal once I realized this myself, so I am 700 dollars confident that we won't see huge gains, lol.

I am still hopeful that we may see huge gains; I am just trying to rein in everyone's expectations so they are not disappointed if it turns out a dud. Expect nothing, and everything is satisfaction







.


----------



## kuruptx

Same here, so ready for summer. I think we will see a HUGE improvement. Maybe even running games like The Witcher 3 and The Division on ultra at 60 FPS with a 1080, compared to my old 680.


----------



## Cyber Locc

Quote:


> Originally Posted by *kuruptx*
> 
> Same here, so ready for summer. I think we will see a HUGE improvement, Maybe even run games on ultra and get 60 FPS, like the Witcher 3 and the Division.


Errm, I play The Witcher maxed out @1440p and get 60 frames without dropping in my testing (it probably does drop, but I don't notice it, and when I watch the FPS it never drops).

Well, I say maxed, but I do turn Hairworks off. I can still get 60 with it on, it just drops to 55ish sometimes, so I leave it off.

The Division is new, so give it a few weeks for drivers to improve. Anyway, there isn't a game I have played yet that a Titan X/980 Ti cannot completely max out at 1440p with 60 fps. Not at stock clocks, mind you: 1500MHz core, which from what I have seen is fairly easy to hit (I hit it on air).

Did you mean 4K by ultra? If so, that definitely isn't going to happen.


----------



## kuruptx

Quote:


> Originally Posted by *Cyber Locc*
> 
> Errm, I play Witcher on Maxed @1440p and get 60 frames without dropping in my testing (it probably does drop but I dont notice it, and when I watch the FPS it never drops).
> 
> Well I say max but I do turn Hairworks off, I can still get 60 with it on, it just drops to 55ish sometimes so I leave it off.
> 
> The Division is new, so give it a few weeks for drivers to improve, anyway there isnt a game that a Titan X/ 980ti can not max out completely at 1440p with 60 fps that I have played yet. Not stock clocks mind you 1500mhz core, which from what I have seen is fairly easy (I hit it on Air).
> 
> Did you mean 4k by Ultra? If so that defiantly isn't going to happen.


Oh no, sorry, I just meant ultra in-game settings at 1440p! Something my 680 can't come close to doing.


----------



## Cyber Locc

Quote:


> Originally Posted by *kuruptx*
> 
> Oh no sorry, I just meant ultra in game settings, at 1440p! Something my 680 can't come close to doing .


Oh yeah, okay. Yeah, an overclocked 980 Ti can do it at 1440p, and a stock one can at 1080p (I think even a 970 can at 1080p).


----------



## dustinr26

I just hope Pascal can run most games on a single card at ultra settings in 4K, since all these newer monitors are 4K with higher refresh rates.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *i7monkey*
> 
> The only viable upgrade imo are the full GX100 chips.
> 
> GX104 is barely any faster than the previous gen top performer.
> GX100 Titan is a blatant rip off.
> 
> Which leaves us with the Ti version. Same performance as Titan at two thirds the cost.
> 
> Problem is they tend to come out every year to year and a half after new gen releases.
> 
> GTX 680 (April 2012) ---> Titan (Feb 2013) ---> GTX 780 (May 2013) ---> GTX 780Ti (November 2013) = *1 year and 7 months between GK104 and full chip GK110*.
> 
> GTX 980 (September 2014) ---> Titan X (Mar 2015) ---> GTX 980Ti (June 2015) = 9 months, which is actually pretty quick but it would be better if they came out with full GX100 chips on new generation launches instead of making us wait a year for the real stuff.
> 
> Pretty disappointing stuff from Nvidia if you ask me. The 780 was an insult to original Titan owners. The 780Ti was an insult to Titan/780 owners. 980Ti was offensive to Titan X owners.
> 
> They're intent on screwing with their customers yet people still continue to buy it and at outrageous prices. We've argued Titan on this board for 3+ years now, lol, but it's still offensive.
> 
> No midrange stuff, no overpriced Titan nonsense. *Give us a slightly cut down big die GX100 chip at launch and then a year later give us the full chip.* Everything else is worthless imo.


Even if Nvidia were so inclined (which they are not), they wouldn't be able to release Pascal like that anyway. There's no way they will have the yields on a new process to start out with big-die chips, which is why I have been saying all along that anybody expecting a GP100 card this summer was delusional. They will release a smaller GP104 GTX 1080 first and use the time that buys them to keep improving yields on 16nm before even thinking about bringing out the big chip. I still say mid-2017 or beyond is the most likely release window for GP100 to drop as the next Titan, with the 1080 Ti name probably skipped until the next generation, a la Kepler's release schedule (680 > Titan > 780/780Ti).


----------



## ZealotKi11er

Quote:


> Originally Posted by *Cyber Locc*
> 
> Errm, I play Witcher on Maxed @1440p and get 60 frames without dropping in my testing (it probably does drop but I dont notice it, and when I watch the FPS it never drops).
> 
> Well I say max but I do turn Hairworks off, I can still get 60 with it on, it just drops to 55ish sometimes so I leave it off.
> 
> The Division is new, so give it a few weeks for drivers to improve, anyway there isnt a game that a Titan X/ 980ti can not max out completely at 1440p with 60 fps that I have played yet. Not stock clocks mind you 1500mhz core, which from what I have seen is fairly easy (I hit it on Air).
> 
> Did you mean 4k by Ultra? If so that defiantly isn't going to happen.


My 290X CFX can do 3440x1440 @ 60 fps 95% of the time. It just does not want to work at 4K for some reason, so I have to use a single GPU.


----------



## Cyber Locc

Quote:


> Originally Posted by *dustinr26*
> 
> I just hope Pascal can run single card and ultra settings(in 4K) for most games since all these newer monitors are 4K with higher refresh rates.


Yeah, that isn't going to happen, lol. The only card in the lineup that would come close would be a Titan P, and it would require a 100% performance increase; we won't see that until chip sizes are equal again, so Volta at least.
Quote:


> Originally Posted by *ZealotKi11er*
> 
> My 290X CFX can do 3440x1440p @ 60 fps 95% of the time. It just does not want to work at 4K for some reason so I have to use Single GPU.


I remember you saying that before. Honestly, if I were you I'd drop the 290s for a 1080 or a 980 Ti.

That's what I did, lol. When I downsized the board I could only run two 290s, so I got the 980 Ti. It's a downgrade from three and a sidegrade from two, but it still feels like an upgrade, lol.


----------



## Majin SSJ Eric

I'm just gonna keep hanging in there with my original Titans. They still do above 60FPS most of the time in all the games I play at 1440p, and I prefer SLI to single cards anyway, just for the aesthetics of my rig.


----------



## gamervivek

Quote:


> Originally Posted by *DNMock*
> 
> Stopped reading there.... Math is most definitely not a Science. Math is a tool (an extremely important and refined tool, but a tool none-the-less) that is used by science.
> 
> I'll let someone who is way smarter than everyone who posted in this thread combined explain it a bit better:
> 
> 
> 
> 
> Pro-Tip: Never go out of your way to try and make yourself look more intelligent than you really are. The end result is you look like a moron. Speaking from experience, lots and lots of experience.


Quote:


> Many philosophers believe that mathematics is not experimentally falsifiable, and thus not a science according to the definition of Karl Popper.[35] However, in the 1930s Gödel's incompleteness theorems convinced many mathematicians[who?] that mathematics cannot be reduced to logic alone, and Karl Popper concluded that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently."[36] Other thinkers, notably Imre Lakatos, have applied a version of falsificationism to mathematics itself.


https://en.wikipedia.org/wiki/Mathematics#Mathematics_as_science
Quote:


> Calude: We seem to have concluded that mathematics depends on physics, haven't we? But mathematics is the main tool to understand physics. Don't we have some kind of circularity?
> 
> Chaitin: Yeah, that sounds very bad! But if math is actually, as Imre Lakatos termed it, quasi-empirical, then that's exactly what you'd expect. And as you know Cris, for years I've been arguing that information-theoretic incompleteness results inevitably push us in the direction of a quasi-empirical view of math, one in which math and physics are different, but maybe not as different as most people think. As Vladimir Arnold provocatively puts it, math and physics are the same, except that in math the experiments are a lot cheaper!


http://www.rutherfordjournal.org/article020103.html
Quote:


> Insofar as mathematical theorems refer to reality, they are not sure, and insofar as they are sure, they do not refer to reality.


- and that man's name?


----------



## Cyber Locc

Quote:


> Originally Posted by *gamervivek*
> 
> Quote:
> 
> 
> 
> Many philosophers believe that mathematics is not experimentally falsifiable, and thus not a science according to the definition of Karl Popper.[35] However, in the 1930s Gödel's incompleteness theorems convinced many mathematicians[who?] that mathematics cannot be reduced to logic alone, and Karl Popper concluded that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently."[36] Other thinkers, notably Imre Lakatos, have applied a version of falsificationism to mathematics itself.
> 
> 
> 
> https://en.wikipedia.org/wiki/Mathematics#Mathematics_as_science

You missed a spot lol,
Quote:


> The opinions of mathematicians on this matter are varied. Many mathematicians[who?] feel that to call their area a science is to downplay the importance of its aesthetic side, and its history in the traditional seven liberal arts; others[who?] feel that to ignore its connection to the sciences is to turn a blind eye to the fact that the interface between mathematics and its applications in science and engineering has driven much development in mathematics. One way this difference of viewpoint plays out is in the philosophical debate as to whether mathematics is created (as in art) or discovered (as in science). It is common to see universities divided into sections that include a division of Science and Mathematics, indicating that the fields are seen as being allied but that they do not coincide. In practice, mathematicians are typically grouped with scientists at the gross level but separated at finer levels. This is one of many issues considered in the philosophy of mathematics.[citation needed]
> 
> Source - https://en.wikipedia.org/wiki/Mathematics#Mathematics_as_science


----------



## guttheslayer

Quote:


> Originally Posted by *zealord*
> 
> it could be. But its highly unlikely.
> 
> Last 512 bits card for Nvidia was the GTX 280 or something.
> 
> All cards after that have been either 256 or 384. Even the Titans are only 384.
> 
> It still could be, but I heavily doubt it.


It could be that a 512-bit bus gets used as a short-term substitute before GDDR5X arrives. It wouldn't be a long-term thing.


----------



## ozlay

Those 3DMark scores are close to Quadro M5000 results, and that is an 8GB card. They seem fake to me.


----------



## Cyber Locc

Quote:


> Originally Posted by *ozlay*
> 
> Those 3dmark scores are close to Quadro M5000 resuolts which is an 8gb card. They seem fake to me?


I agree they could be fake, but saying they're in line with an M5000? Umm, no.



I realize that is Firestrike, but 3DMark 11 and FS numbers are not that far off. Plus, I gave a link in the OP to a 980 Ti getting the same result as the leaked card, and it gets way more than 13k in Firestrike.

Do you have a source for an M5000 getting those numbers in 3DMark 11? Also, can that card be overclocked, more specifically its memory? The card in the pic is running its memory at 8000MHz, which is what everyone has assumed for Pascal.


----------



## Mhill2029

Don't modern cards get bizarre results in 3DMark 07 anyway? Last time I ran 07, my results varied so dramatically between runs that I could barely distinguish anything.


----------



## Cyber Locc

Quote:


> Originally Posted by *Mhill2029*
> 
> Don't modern cards get bizarre results in 3DMark 07 anyway? Last time I ran 07 my results varied so dramatically between runs, that I could barely distinguish anything.


idk if they do, but this is 3DMark 11, not 07, lol.

Also, as for the M5000 thing, that would apply to the second two cards' results. However, the second two are said to be mobile GPUs; the first one is supposedly a 1070 or 1080. Seeing how it has 7.5GB of VRAM, and I don't know of any NV cards like that, I would say they are continuing the trend and that is a 1070, lol. The bench would put it slightly slower than or tied with a 980 Ti, so that seems about right to me.

If that is the case, at least they are being honest about the .5 this time around, hahahaha.


----------



## Mhill2029

Quote:


> Originally Posted by *Cyber Locc*
> 
> idk if they do but this is 3dMarrk 11, not 7 lol.
> 
> Also as to the M5000 thing, that would I guess be applicable to the second 2 cards results. However the second 2 are said to be mobile GPUs. the first one is supposedly a 1070 or 1080, seeing how it has 7.5gbs of Vram I dont know of any NV cards like that, I would say they are continuing the trend and that is a 1070 lol. The bench would put it slightly slower than a 980ti or tied, so that seems about right to me.
> 
> If that is the case at least they are being honest about the .5 this time around hahahaha.


Sorry I meant 3DMark 11, dunno why I said 07 lol


----------



## Cyber Locc

Quote:


> Originally Posted by *Mhill2029*
> 
> Sorry I meant 3DMark 11, dunno why I said 07 lol










NP. Umm, not that I know of, but I am honestly not sure.

In other news Khalid was asked this on the comments,

"So Khalid, on a scale of 1 to 10. How "genuine" are these results compared to the specs article from yesterday? =)" The article from yesterday he is referencing is the Pascal specs that were "leaked".

His Reply,

"~9. Yesterday's specs being 0."

https://disqus.com/home/discussion/wccftech/wip_nvidia_pascal_3dmark_11_entries_spotted/#comment-2577444660

I know he doesn't have a lot of credibility with most people, and I agree he posts a lot of "wrong" rumors, but he always states them as such and doesn't stand behind them. Here he is very much defending his source as legit.

That said, I will state again: he copied the header from the article that predated his. He is saying that these numbers are for the 1070 and that the other two are mobile GPUs, not what the title would have you believe.

Take all that with a grain of salt; just filling in some info.


----------



## gamervivek

Quote:


> Originally Posted by *Cyber Locc*
> 
> You missed a spot lol,


You missed the lack of citations in that spot. As for the [who?] in my quote, there's Chaitin.


----------



## BoredErica

Quote:



> Originally Posted by *Pyrotagonist*
> 
> Even if W1zzard doesn't know math, I'm sure the charts are generated automatically, presumably with an algorithm written by someone who does know math.


I'm just glad most people on OCN know how to do basic math correctly.









If Pascal is released in the same fashion as the 980 and 980 Ti, it might be problematic if I decide to get the 1080 and then the 1080 Ti. I'm considering doing a custom loop, but I don't think a 1080 block would fit a 1080 Ti. That would be a real hassle, on top of having to sell a GPU yet again. I really don't like having to sell physical stuff to people.


----------



## Cyber Locc

Quote:


> Originally Posted by *gamervivek*


Snipped all that, as this argument about math needs to be taken elsewhere; sorry for assisting in its continuance.


----------



## Cyber Locc

Quote:


> Originally Posted by *Darkwizzie*
> 
> I'm just glad most people on OCN know how to do basic math correctly.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> If Pascal will be released in this 980, 980ti kind of fashion, then it might be problematic if I decide to get the 1080, and then the 1080ti. I'm considering doing a custom loop, but I don't think a 1080 block would work for a 1080ti? That would be a real hassle, on top of having to sell a GPU yet again. I really don't like having to sell physical stuff to people.


And see, before I can even finish posting we have a prime example, lol. I surely do love it, though.

Edit: just to see if I was even semi-correct, I checked Dark's occupation: "Student". Ah, point proven, lol.

Tell you what, Dark: hit me up 10 years after you finish your bachelor's degree and let's see how many elementary lessons you remember









Though there again we come full circle: I was following the logic of the man who made the chart, and his logic is honestly the only one that matters, since, as you know, he made the chart.


----------



## BoredErica

Quote:


> Originally Posted by *Cyber Locc*
> 
> And see that before I can even finish posting we have a prime example lol. I surely do love it though.
> 
> Edit: just to see if I was even semi correct, I checked Darks Occupation "Student" ahh to points proven lol.
> 
> Tell you what Dark hit me up 10 years after you finish your bachelors degree and lets see how many elementary lessons you remember
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Even though there again we come back full circle to I was following the logic of the man that made the chart, who's logic is the only that matters honestly as you know he made the chart.


You accuse me of having an inflated ego. Yet you were so sure of your ability to do basic math that you fought a line of people, some of whom I know to be generally well-educated. The person with the overbearing ego is you, which is why you got ridiculed. It's also fun to poke at the person suffering from a serious case of Dunning-Kruger. If I were you, I would be A LOT more scared that I got something wrong. Despite having two well-off overclocking threads, I lived in constant fear of getting things wrong. What I didn't do was to resort to this weird tactic you've got going on. Had you accepted that you were wrong from the start and exhibited a bit more doubt than you did, I would not have commented at all. If you understand what percentages are exactly, you should've been able to reason it out without needing anybody else's help.

If in 10 year's time I have forgotten how to do elementary school level math, then I ought to hang my head in shame. Elementary level math is not the same as high school level. High school level can stretch from algebra 2 to well into calculus. Not knowing 1+1=2 is inexcusable even if you have a job that has nothing to do with math. The same thing applies to not understanding what percentages are. I use some math for the things I do from day to day. I have to compare performance numbers using percentages, for example. Elementary level math is useful in many places. (I am surprised that you have these many posts in OCN yet only today ran into this problem with math. We deal with percentages all the time when comparing cards.) I also hope that my English doesn't degrade as I get older, as it seems to have with you. I had a very difficult time understanding what you were even saying. Run-on sentences do that to me. Finally, I believe understanding everything at an elementary school level is a very low bar for the informed citizen regardless of trade.

Regardless, your incredible confidence, inability to do math, followed by your eventual downfall and expected inability to be humble in the face of being wrong was a fun distraction. But then we got into whether math is a science or not, and that is just too many levels removed from the original topic (and a bit boring to boot).

And now is the part where you try to make yet another post defending yourself and trying to ridicule me despite knowing nothing about me. And despite the fact that you're blocked, but you can try again to save face.


----------



## Cyber Locc

Quote:


> Originally Posted by *Darkwizzie*
> 
> You accuse me of having an inflated ego, yet you were so sure of your ability to do basic math, you fought a line of people, some of which I know to be generally well-educated. The person with the overbearing ego is you, which is why you got ridiculed. It's also fun to poke at the person suffering from a serious case of Dunning-Kruger. If I were you, I would be A LOT more scared that I got something wrong. Despite having two well-off overclocking threads, I lived in constant fear of getting things wrong. And when I did, I always corrected myself. What I didn't do was to resort to this weird tactic you've got going on. Had you accepted that you were wrong from the start of exhibited a bit more doubt that you did, I would have not commented at all. If you understand what percentages are exactly, you should've been able to reason it out without needing anybody else's help.


LOL, okay. I pointed out in my second post on the subject that I was following W1zzard's logic, so what are you even talking about?
Quote:


> Originally Posted by *Darkwizzie*
> 
> If, in 10 year's time, I have forgotten how to do elementary school level math, then I ought to hang my head in shame. Elementary level math is not the same as high school level. High school level can stretch from algebra 2 to well into calculus. Here we're talking about basic percentages. I run the Skylake and Haswell threads. I have been known to use percentages/basc math when I teach people things. I also hope that my English doesn't degrade as I get older, as it seems to have with you. I had a very difficult time understanding what you were even saying. Run-on sentences do that to me.


Funny, you missed an "i"; seems it's already degrading, lol. Also, no worries, it will: unless you work in an environment where those skills are used daily, such is life. Skills that are not used are lost.

What should also be pointed out is that my first and second comments (which included my references to W1zzard's logic) were not met with "hey, you are not doing the math correctly." They were greeted with childish insults and bashing.
Quote:


> Originally Posted by *Darkwizzie*
> 
> You accuse me of having an inflated ego.
> Regardless, your incredible confidence, inability to do math, followed by your eventual downfall and expected inability to be humble in the face of being wrong was a fun distraction. But then we got into whether math is a science or not, and that is just too many levels removed from the original topic (and a bit boring to boot).
> Blocked.


I did not bring about this conversation, nor frankly do I care about it; I was simply playing devil's advocate for DN.

As to the block, I am more than accepting of that: if your first instinct is to tell someone they are wrong and to insult them without even understanding the reason for their claims, then discussions with you hold no appeal for me. Again, I will state that I did not say, now or then, that I was good at math; where that keeps coming from I will never know. I did, however, state on multiple occasions that I was using the math set forth by the creator of the chart. I also stated that, in the context to which you feel it applies, your math is correct; but since we have no idea how that chart was made, we would be wrong not to abide by his logic.

However, I did concede that his math may be wrong, and when he says so to me here (him directly, not TPU) or posts a statement/correction on TPU, then I will gladly follow your math. Until then I will continue to follow the math of the person who made the charts, as we do not know how he made them. It is really that simple.

The math being presented by everyone assumes too much.

We assume:
that his graphs show the speed decrease from the 100% baseline;
that his graphs have not already been calculated in the way you are calculating them.

If I were to take all the numbers and total them, then take the change and derive a percentage, let us say that percentage is a 65% increase in speed. I then declare that my card is the baseline.

I then mark that on the graph as 100%. Now, according to the logic at hand, I would need to reverse the problem, decide what the decrease in performance is, and mark that on the graph: Card A would be at 100% and Card B at roughly 60. This is what we assume when we read TPU's charts.

However, if you read TPU's conclusions you will see that he interchanges the two, so all assumptions are off at this point.

So now let's take the same graph and flip it. We have a 65% increase and Card A at 100%; if the chart is meant to show the positive change instead of the negative, then Card B would be at 35%.

Now, seeing how the numbers for Card A were taken after the numbers for Card B, in my mind it would make sense to do the graph in the second manner, as the results are framed around the increase seen later rather than the decrease. However, we do not know how he made the chart, short of his interpretations of it, and this is complicated by the fact that he does it both ways across the reviews.

My issue, and our argument, is not with the math; it is with the assumptions about how the graph was made, in spite of the way it is interpreted by the person who made it.
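To make the two conventions concrete, here is a minimal sketch; the scores are made-up placeholders, not real benchmark numbers:

```python
# Two ways to express the same performance gap, using placeholder scores.
a, b = 100.0, 46.0  # hypothetical scores for Card A and Card B

# Convention 1: each card shown as a fraction of the baseline (Card A).
b_vs_a = b / a * 100            # Card B sits at 46% of Card A

# Convention 2: how much faster Card A is, relative to Card B.
a_over_b = (a - b) / b * 100    # Card A is ~117% faster than Card B

# Note the asymmetry: a ~54% deficit seen from A's side...
deficit = (a - b) / a * 100     # Card B is ~54% slower than Card A
# ...is a ~117% gain seen from B's side, so the two readings are not
# interchangeable, which is why the baseline has to be stated.
```

Whichever convention a chart's author used, the percentage only means something once the baseline card is named.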


----------



## Klocek001

my point about the 1080 is as follows: it had better deliver at least 30% over the 980 Ti (OC'd vs OC'd), because if it doesn't and Polaris does, Nvidia won't have my money, not only this time but probably in 2017 as well, when I'll be upgrading to an HBM2 monster.


----------



## BoredErica

Quote:


> Originally Posted by *Klocek001*
> 
> my point about 1080 is as follows - it better deliver at least 30% over 980Ti (OC'd vs OC'd) cause if they don't and Polaris does nvidia won't have my money, not only this time but probably in 2017 when I'll be upgrading to a HBM2 monster as well.


I was hoping for 6GB of HBM2 for the 1080 and 8GB of HBM2 for the 1080 Ti, but it seems clear it won't happen.


----------



## Kriant

While perf-per-watt efficiency is impressive and all, I am mostly looking for a raw increase in performance over the previous gen, and the only thing that can pull me to upgrade from a Titan X right now is +35-50% in DX11 AND DX12. Otherwise, I'm waiting for the HBM2 showdown in 2017.


----------



## Fuzzywinks

My 980 Ti Classy is sitting at 1485MHz and still isn't giving me a consistent 100fps at 3440x1440 like I would want. I'm not really interested in picking up a second card for SLI; I just need about 50% more oomph in a single card and I'll be happy. I'll try to hold out for the new flagship, but hopefully it hits the market sooner rather than later.


----------



## atomhard

Quote:


> Originally Posted by *Cyber Locc*
> 
> [wall-of-text]
> 
> Which I find very funny, as I never claimed to be a mathematician or any good in the subject at all. I have never had the need in my everyday life for that particular math skill.


Which is hilarious considering you and that other guy are making a fit about _"Math most *definitely*"_ not being a science.
As if that's somehow relevant or should even be discussed by those who think that 100 is 46% bigger than 46.
Quote:


> Originally Posted by *Forceman*
> 
> new nodes gain about 80% for both AMD and Nvidia.


Quote:


> Originally Posted by *Cyber Locc*
> 
> lets do that, 580 vs 780ti
> 
> http://www.overclock.net/content/type/61/id/2736846/width/350/height/700
> 
> I am seeing 46%; where you are coming up with 80% is beyond me.
> 
> 46/100 = .46 or 46% can you math bro?


----------



## Matt26LFC

Quote:


> Originally Posted by *Darkwizzie*
> 
> I was hoping for 6gb HBM 2 for 1080 and 8gb HBM 2 for 1080ti, but it seems clear it won't happen.


Not sure we'll see 6GB of HBM. I thought each stack was 1024-bit: HBM1 was 4 x 1GB stacks, each stack 1024-bit, for a 4096-bit bus. So with HBM2 being 2GB stacks, I'm guessing 4 x 2GB = 8GB total at 1024-bit per stack, keeping the 4096-bit bus, or 4 x 4GB stacks for 16GB cards. Not sure how it all works; I should really read up on HBM memory.
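The stack arithmetic can be sketched like this (the helper name `hbm_config` is made up for illustration; the per-stack figures are the commonly quoted ones):

```python
# Capacity and bus width both scale with the number of stacks;
# each HBM stack exposes a 1024-bit interface regardless of generation.
def hbm_config(stacks, gb_per_stack, bits_per_stack=1024):
    return {"capacity_gb": stacks * gb_per_stack,
            "bus_width_bits": stacks * bits_per_stack}

hbm1    = hbm_config(4, 1)  # 4GB total, 4096-bit bus (Fury X style)
hbm2_8  = hbm_config(4, 2)  # 2GB stacks: 8GB total, still 4096-bit
hbm2_16 = hbm_config(4, 4)  # 4GB stacks: 16GB total, still 4096-bit
```

Which is why 6GB would be an odd fit: with four 1024-bit stacks the natural totals are 4, 8, or 16GB.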
Quote:


> Originally Posted by *Fuzzywinks*
> 
> My 980ti Classy is sitting at 1485MHz and still isn't giving me a consistent 100fps at 3440x1440 like I would want. I'm not really interested in picking up a second card for SLI, I just need about 50% more oomf in a single card and I'll be happy. I'll try to hold out for the new flagship but hopefully it hits the market sooner rather than later.


If you want a 50% boost over your current 980 Ti, then you'll want a 1080 Ti minimum tbh, and even then I don't think you'll get 50%.

Personally I feel we'll be on this new process node for as long as we were on 28nm, so I don't think either AMD or Nvidia will dole out everything the new process has to offer first time round; perhaps they can't, as it's so new.

just my two cents lol


----------



## Klocek001

Quote:


> Originally Posted by *Fuzzywinks*
> 
> My 980ti Classy is sitting at 1485MHz and still isn't giving me a consistent 100fps at 3440x1440 like I would want. I'm not really interested in picking up a second card for SLI, I just need about 50% more oomf in a single card and I'll be happy. I'll try to hold out for the new flagship but hopefully it hits the market sooner rather than later.


you might need "1070" SLI at least, "1080" SLI preferably if you wanna max out games. I hope I'll get away with one "1080" since I'm shooting for 100 fps but 256x1440 and I don't need to max out every setting, mix of high/ultra is fine. 3440x1440 is considerably tougher on your GPU (values in %,but if the fps target is 100 then you might consider it equivalent to fps comparin 1440p 16:9 vs 21:9). This is quite correct, my avg. in Witcher 3 is about 80, if I use 4K DSR it drops to about 38 fps.


----------



## iLeakStuff

Quote:


> Originally Posted by *Kriant*
> 
> While PPW efficiency is impressive and all, I am mostly looking for raw increase of performance over the previous gen. And the only thing that can pull me to upgrade from Titan X right now is +35-50% in DX11 AND DX12. Otherwise, waiting for HBM2 2017 showdown


+35% over the GTX Titan X seems like a stretch for the GTX 1080. It's possible, but I think it will be somewhere in the 10-25% range.
For anything over that, I think you need to wait for GP100, for which I guess Nvidia will release a Titan in early 2017 and then a 1080 Ti later in 2017 (benchlife wrote that, I think).

Quote:


> Originally Posted by *ozlay*
> 
> Those 3DMark scores are close to Quadro M5000 results, which is an 8GB card. They seem fake to me.


The Quadro M5000 comes with 8GB of VRAM, but it's a mobile card, so it's running the VRAM at 1200MHz, not 2002MHz. That's not possible on low-voltage memory.
Further, the M5000 scores a lot less than 20k in graphics score (12k).


----------



## Cyber Locc

Quote:


> Originally Posted by *atomhard*
> 
> Which is hilarious considering you and that other guy are making a fit about _"Math most *definitely*"_ not being a science.
> As if that's somehow relevant or should even be discussed by those who think that 100 is 46% bigger than 46.


46% bigger than 46? K, ya, no. Anyway, moving on... As for _"Math most *definitely*"_ not being a science: only I didn't say that; as a matter of fact, I said that entire argument was petty and pointless.
Quote:


> Originally Posted by *iLeakStuff*
> 
> Quadro M5000 comes with 8GB VRAM but its a mobile card so its running 1200MHz on the VRAM, not 2002MHz. Thats not possible on low voltage memory.
> Further M5000 scores a lot less than 20k in graphic score (12k).


that is an interesting point that I did not think of. So do you think the second two results in this list could be mobile GPUs, even though they have 2002MHz memory?

Also, the Quadro he is talking about and the one you are talking about are different. He is referring to the desktop Quadro: http://renderositymagazine.com/nvidia-gpu-quadro-m5000-in-review-cms-264

Your results are for the Quadro M5000M.


----------



## Burke888

Quote:


> Originally Posted by *i7monkey*
> 
> Are we ever going to see a return to big die chips for new generation releases like the GTX 480 (GF100) or is Nvidia content giving us midrange GX104 for the rest of time?
> 
> GK104 (GTX 680)
> GM204 (GTX 980)
> GP104 (GTX 1080)
> 
> They're not that much faster than the 580, 780Ti, and 980Ti.
> 
> It's really disappointing.


I agree completely and liked how you summed up the last few generations so well.
I made the mistake of upgrading my 780Ti's in SLI to 980s. It really was just a marginal improvement.

If AMD can somehow put pressure on Nvidia I believe we may see the end of this cycle of Nvidia passing off the midrange chips as the flagship launch cards of each generation.

As much as I hate to say it, I believe the benchmarks are likely to be correct.
The 680 at launch didn't best the GTX 580 by much, the 980 didn't blow the 780Ti out of the water, and unfortunately it doesn't look like the GTX 1080 is going to be that much better than the 980Ti.

To me it's all the more disappointing, as I have been anxiously awaiting the arrival of the new Pascal cards to replace the GTX 980s that I had the misfortune of purchasing.


----------



## zealord

Quote:


> Originally Posted by *Burke888*
> 
> If AMD can somehow put pressure on Nvidia I believe we may see the end of this cycle of Nvidia passing off the midrange chips as the flagship launch cards of each generation.


I'd love that so much.


----------



## ZealotKi11er

If Nvidia releases the GTX 1080 for $500 and it is 10-15% faster than the GTX 980 Ti, people will go for it, even GTX 980 Ti owners. Nvidia knows very well that this has worked in the past and will work again and again.


----------



## zealord

Quote:


> Originally Posted by *ZealotKi11er*
> 
> If Nvidia released GTX1080 for $500 and is 10-15% faster than GTX980 Ti people will go for it. Even GTX980 Ti owners. Nvidia know very well that this has worked in the past and will work again and again.


Only if the 1080 overclocks as well as or better than the 980 Ti (for 980 Ti users to upgrade).


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *zealord*
> 
> Only if the 1080 overclocks as good/or better than the 980 Ti. (for 980 Ti users to upgrade)


I'm assuming he means 10-15% faster at max OC.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'm assuming he means 10-15% faster at max OC.


Hard to say, really. I'm mostly looking at stock vs stock with similar OC potential, with the GTX 1080 having the OC advantage. The reference GTX 980 Ti is really not that fast; it's like the old HD 7970. It only runs ~1150-1200MHz depending on the game, but most can OC to 1450-1550MHz, giving a good 20% increase in performance. What usually happens is review sites will put the reference GTX 980 Ti against the reference GTX 1080 for a "fair" comparison. The same thing happened with the GTX 780 Ti. I think stock performance is important, but OC models of the card are also very important: non-reference GTX 980 Tis are 15-20% faster than reference, and the user does not have to do any overclocking. I have been overclocking for ages now, and I do not consider an OC that I do not keep 24/7 a representation of a card's potential. For example, my 290X can do 1225MHz stable with high enough voltage, but I would never run that 24/7; I run 1150MHz for 24/7 use, and it is 100% stable. Same with my HD 7970: it could do 1250MHz, but I found 1125MHz to be the optimal speed. I think Nvidia overclocking is a bit easier, from what I see in reports here on OCN.
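The headroom quoted above works out roughly like this (`gain_pct` is just an illustrative helper, and the clock pairs are the ballpark figures from the post, not measurements):

```python
# Percentage gain from a stock boost clock to a typical manual OC.
def gain_pct(oc_mhz, stock_mhz):
    return (oc_mhz - stock_mhz) / stock_mhz * 100

low  = gain_pct(1450, 1200)  # ~20.8% (faster stock card, modest OC)
high = gain_pct(1550, 1150)  # ~34.8% (slow reference card, strong OC)
```

So "a good 20%" is the conservative end of the range; a poor reference sample plus a strong overclock can land well above it.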


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> If Nvidia released GTX1080 for $500 and is 10-15% faster than GTX980 Ti people will go for it. Even GTX980 Ti owners. Nvidia know very well that this has worked in the past and will work again and again.


I think the 1080 will be $650 just like the last two, and then be reduced to make room for the Ti. People will still buy it; they always do.


----------



## i7monkey

GF100/GF110 used to cost $499.

GX104 chips used to cost $199/$229/$239 at launch, like the GTX 460, 560, and 560 Ti. Now they cost $499/$549 and are rebranded as 680s and 980s.

Titan X is a GF110 580 in disguise and should cost $499, not $999.

*Never forget.*


----------



## criminal

Quote:


> Originally Posted by *i7monkey*
> 
> GF100/GF110 used to cost $499.
> 
> GX104 chips used to cost $199/229/$239 on launch like the GTX 460, 560, 560, and 560Ti. Now they cost $499/$549 and are rebranded as 680s and 980s.
> 
> Titan X is a GF110 580 in disguise and should cost $499, not $999.
> 
> *Never forget.*


And yet you bought a 980Ti anyway...


----------



## iLeakStuff

Quote:


> Originally Posted by *i7monkey*
> 
> GF100/GF110 used to cost $499.
> 
> GX104 chips used to cost $199/229/$239 on launch like the GTX 460, 560, 560, and 560Ti. Now they cost $499/$549 and are rebranded as 680s and 980s.
> 
> Titan X is a GF110 580 in disguise and should cost $499, not $999.
> 
> *Never forget.*


Good luck convincing them to come down to $500 for GP100 and $250 for GP104.


----------



## Majin SSJ Eric

I used to think the same thing but let's remember that Fermi was actually the exception to the rule, not the rule itself. Nvidia likely would have priced the 480 higher (like the 8800 Ultra) had its launch not been a fiasco. And there was solid competition from AMD at the time as well with the 5870.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I used to think the same thing but let's remember that Fermi was actually the exception to the rule, not the rule itself. Nvidia likely would have priced the 480 higher (like the 8800 Ultra) had its launch not been a fiasco. And there was solid competition from AMD at the time as well with the 5870.


Fermi was 6 months late, not much faster than the $400 HD 5870, and ran hot and loud. Nvidia still did not lose market share, which shows their fans are very loyal.


----------



## looniam

Quote:


> Originally Posted by *criminal*
> 
> And yet you bought a 980Ti anyway...


well then that would be $650 for the $330 card (570)


----------



## i7monkey

Quote:


> Originally Posted by *looniam*
> 
> well then that would be $650 for the $330 card (570)


Yup, ripoffs across all segments from low to high end.

Titan X = 580 = $499 instead of $999

980Ti = 570 = $330 instead of $659

980 = 460 = $229 instead of $549


----------



## iLeakStuff

What adds insult to injury is that wafer prices for new nodes have always decreased compared to the node before (16nm FF being the exception, costing more than 28nm).


----------



## rainzor

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Fermi was late by 6 months, not much faster than $400 HD 5870, ran Hot and loud. Nvidia still did not lose Market Share which shows their fans are very loyal.


You do realize that AMD had supply issues with 40nm due to bad yields at TSMC? When supply stabilized, they actually gained quite a bit of market share.
But yeah, selective memory is a common problem; don't feel bad about it.


----------



## ZealotKi11er

Quote:


> Originally Posted by *i7monkey*
> 
> Yup, ripoffs across all segments from low to high end.
> 
> Titan X = 580 = $499 instead of $999
> 
> 980Ti = 570 = $330 instead of $659
> 
> 980 = 460 = $229 instead of $549


It's more like this:

GTX Titan X is the GTX 580 3GB: $1K vs $600
GTX 980 Ti is more like the GTX 480
GTX 980 is more like the GTX 560 Ti
GTX 970 is more like the GTX 460


----------



## Klocek001

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Its more like this
> 
> GTX Titan X is GTX 580 3GB $1K vs $600
> GTX 980 Ti is more like GTX480
> GTX 980 is more like GTX560 Ti
> GTX970 is more like GTX460


are you talking relative performance against other cards, or performance at the resolution that was more or less standard when the cards launched?

a crazy idea: there were two variations of cards in the past, like the GTX 260 and GTX 560 Ti, where the change was in core count. What if Nvidia releases a more power-efficient 1080 with 256-bit GDDR5 and a faster but more power-hungry 1080 with 512-bit memory?
Is that even possible?


----------



## ZealotKi11er

Quote:


> Originally Posted by *Klocek001*
> 
> are you talking relative performance to other cards or are you talking performance on resolution that was more or less standard at the time the cards were launched?


Relative performance.


----------



## Klocek001

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Relative performance.


so option no. 1? comparing to other cards, right?


----------



## ZealotKi11er

Quote:


> Originally Posted by *Klocek001*
> 
> so option no. 1? comparing to other cards, right?


You have to take a lot more things into consideration. The GTX Titan, for example, is a much more premium-built card than the GTX 580 3GB. Also, if you remove the Titan from the current lineup, you have the GTX 980 Ti: $650 vs $500, and that difference is not that bad. The overpriced cards this generation were the GTX 980 and GTX 780 Ti. The GTX 780 too, but that did not last.


----------



## Klocek001

eh, the times of x60 Ti cards... those won't come back, will they .....
the GTX 560 Ti 448 was ridiculous compared to the 580.


----------



## i7monkey

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Also if you remove Titan from the current lineup you have GTX980 Ti for example. $650 vs $500. The difference is not that bad.


980Ti is a 570 because it's a cut-down version of the high end chip.

Titan X = full chip = 580 equivalent

980Ti = cut down version = 570 equivalent

We're paying $659 for a card that used to cost $330.


----------



## Cyber Locc

Quote:


> Originally Posted by *i7monkey*
> 
> 980Ti is a 570 because it's a cut-down version of the high end chip.
> 
> Titan X = full chip = 580 equivalent
> 
> 980Ti = cut down version = 570 equivalent
> 
> We're paying $659 for a card that used to cost $330.


And buying them anyway







The only way things will change is if we speak with our wallets. We won't.


----------



## ZealotKi11er

Quote:


> Originally Posted by *i7monkey*
> 
> 980Ti is a 570 because it's a cut-down version of the high end chip.
> 
> Titan X = full chip = 580 equivalent
> 
> 980Ti = cut down version = 570 equivalent
> 
> We're paying $659 for a card that used to cost $330.


Please, you are very wrong...... The GTX 580/570 were refreshes only, hence the price was similar to the 480/470. GTX 980 Ti vs GTX 570... lol. GTX 570 = GTX 970, in price and relative performance. You also have to remember the GTX 980 Ti is a high-quality card with an amazing cooler, while the 570 was a card that would blow up.

Even the 980 and 970 are no small-die GPUs, even though their names say so. The HD 7970 has a smaller die while having 384-bit memory.


----------



## Klocek001

Quote:


> Originally Posted by *ZealotKi11er*
> 
> GTX570 = GTX970. In price and relative performance. You also have to remember GTX980 Ti is a high-quality card with an amazing cooler while 570 was a card that would blow up.


that's accurate.


----------



## magnek

Probably more accurate to say the 570 had a subpar, dog-poo Flextronics PCB that Nvidia cut every corner on until it could just barely handle stock usage, so in comparison the 980 Ti reference board looks much better. But really, the reference boards for GM200 are nothing special at all; even that blower, which was a highlight for Kepler, has become woefully inadequate for GM200.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *i7monkey*
> 
> 980Ti is a 570 because it's a cut-down version of the high end chip.
> 
> Titan X = full chip = 580 equivalent
> 
> 980Ti = cut down version = 570 equivalent
> 
> We're paying $659 for a card that used to cost $330.


You apparently skipped over my post about Fermi. Why do you think Fermi is the only "correct" pricing scheme when no other generation of Nvidia cards has used that price structure? Fermi was the exception. Remember the 8800 Ultra? Or even the $650 GTX 280?


----------



## zealord

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Please you are very wrong...... GTX580/570 where refresh only hence price was similar to 480/470. GTX980 Ti vs GTX570 .... lol. GTX570 = GTX970. In price and relative performance. You also have to remember GTX980 Ti is a high-quality card with an amazing cooler while 570 was a card that would blow up.
> 
> Even 980 and 970 are no small die GPUs even though their name says so. HD 7970 has a smaller die while having 384-Bit memory.


Nah it's not comparable in relative performance. Only comparable in price.

The gap between the GTX 970 and Titan X (best Maxwell) is much bigger than the gap between GTX 570 and GTX 580 (best Fermi).

There is no black and white with comparing these two generations though.

I think a 600mm² Maxwell card should be more expensive than a 520mm² Fermi card, but I don't think that it should be twice the price like we currently have.

A more reasonable pricing would look like:

970 - $330
980 - $399
980 Ti - $550
Titan X - $650


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *zealord*
> 
> Nah it's not comparable in relative performance. Only comparable in price.
> 
> The gap between the GTX 970 and Titan X (best Maxwell) is much bigger than the gap between GTX 570 and GTX 580 (best Fermi).
> 
> There is no black and white with comparing these two generations though.
> 
> I think a 600mm² Maxwell card should be more expensive than a 520mm² Fermi card, but I don't think that it should be twice the price like we currently have.
> 
> A more reasonable pricing would look like :
> 
> 970 - 330$
> 980 - 399$
> 980 Ti 550$
> Titan X 650$


I would love to see pricing like that but there just is no reason for Nvidia to do so, especially now with market share the way it is and a legion of Team Green fanboys at their back...


----------



## zealord

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I would love to see pricing like that but there just is no reason for Nvidia to do so, especially now with market share the way it is and a legion of Team Green fanboys at their back...


Yeah AMD could've handled the 7000 series, R9 290 series and Fury cards better. Nvidia had the upper hand in 28nm cards and that allowed them to price those cards high.

AMD cards 290(X)/390(X) are finally starting to look good, but now it's way too late.

They better have something banging with Polaris.


----------



## ZealotKi11er

The game will be a lot different with 14/16nm cards. Now they know how to space out performance, and considering how long 28nm stayed with us, they are going to stretch this node even longer.


----------



## Xuvial

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> and a legion of Team Green fanboys at their back...


Said Team Green fanboys didn't really have a choice, considering AMD kept falling short in some way or another.

It would be like blaming Intel's pricing on Team Intel fanboys when their only other alternative was Faildozer.


----------



## Tobiman

AMD even admitted not too long ago that they were slacking. It seems management didn't have their stuff together, but something feels different this time around. Hope they can keep it up; I'm in for the ride.


----------



## PostalTwinkie

GP100 /w HBMv2 or bust!


----------



## ZealotKi11er

Quote:


> Originally Posted by *PostalTwinkie*
> 
> GP100 /w HBMv2 or bust!


You think they will care, though? They will just get the card that is as fast as a GTX 980 Ti and call it a day.


----------



## error-id10t

Quote:


> Originally Posted by *ZealotKi11er*
> 
> The overpriced card this generation was GTX980 and GTX780 Ti. GTX780 too but that did not last.


Every x80 is overpriced. The 580 was a fine card but expensive. Then came the 680, which barely beat a 670 but cost a lot more: a total waste of time. The 780 was a waste considering the 780 Ti, which was the true card (I'm ignoring the 770 because I still don't understand why that card existed). The 980 again barely beat a 970, a waste of money, but in this case even the 980 Ti had a poor showing.

I don't really understand the current-gen cards; they just didn't deliver, and I, as an example, was forced to sit on the sidelines with my iGPU.

Normal people, and I mean that in a nice way, don't even entertain Titan cards; there is simply no reason. If SLI worked, x70 SLI would always be the best buy, and after that an x80 Ti (IMO). You always ignore the x80 and whatever Titan.


----------



## tajoh111

Quote:


> Originally Posted by *zealord*
> 
> Nah it's not comparable in relative performance. Only comparable in price.
> 
> The gap between the GTX 970 and Titan X (best Maxwell) is much bigger than the gap between GTX 570 and GTX 580 (best Fermi).
> 
> There is no black and white with comparing these two generations though.
> 
> I think a 600mm² Maxwell card should be more expensive than a 520mm² Fermi card, but I don't think that it should be twice the price like we currently have.
> 
> A more reasonable pricing would look like :
> 
> 970 - 330$
> 980 - 399$
> 980 Ti 550$
> Titan X 650$


Even AMD doesn't want to price their cards like this, or at least this low. AMD's current pricing model, which is what you listed (how you want Nvidia to price their cards), is in the long run an unprofitable one. They don't make any money. Hawaii has to sell at the price it sells at because it is old, and because of the marketing disadvantage AMD cards carry.

It doesn't mean AMD doesn't want to sell their cards for more money. If the GTX 980 Ti had never come out, the Fury X would have launched at more than $650. The Fury X's price just doesn't make sense compared to the Nano and the original Fury, given that it includes an uncut chip and a CLC. Let's not forget: when the 7870 first came out, AMD tried to sell a 212mm2 card for $350. AMD will price their cards high with no competition.

Fermi was a much cheaper card to make than Maxwell or Kepler. Nvidia had a special deal in place where they only paid for working chips until a certain yield was reached, hence they could price their cards lower than normal for a big die.


----------



## ZealotKi11er

Quote:


> Originally Posted by *error-id10t*
> 
> Every x80 is over priced, 580 was a fine card but expensive. Then came 680 which barely beat a 670 but cost a lot more, total waste of time. 780 was a waste considering 780TI which was the true card (I'm ignoring 770 because I still don't understand why this card existed). 980 again barely beat a 970, waste of money but in this case even the 980TI had a poor showing.
> 
> I don't really understand the current gen cards, they just didn't deliver and I as an example was forced to sit on the sidelines with my iGPU
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Normal people and I mean that in a nice way don't even entertain Titan cards, there is simply no reason. If SLI worked a x70 SLI would always be the best buy and then after that a x80TI (IMO). You always ignore x80 and Titan whatever.


Yes, the GTX 970 was and is a good deal, but there are people out there who want a GTX 980 Ti.


----------



## Majin SSJ Eric

The only way to get back on topic is to start talking on topic. So, with that said, I would like to reiterate that I honestly do not expect that much from these new Pascal cards. It looks like they will be very small chips, will not have HBM, will be saddled with compute (or so I have read), and on and on. If AMD's aim is to also release the mid-range stuff this year, then 2016 is shaping up to be pretty lame indeed on the video card front. GP100 will undoubtedly be incredible, but don't look for that beast anytime in the next 365 days...


----------



## Forceman

A lot of people saying they are adding compute back and that'll take up die space, but if they are making GP104 with a GDDR memory controller and GP100 with a HBM controller, why can't they make GP104 without compute and GP100 with? They are going to be pretty significantly different dies anyway (unlike previous generations) because of the memory, so why not go all-in?

They aren't selling Gx104 chips in Tesla cards, are they?


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Only way to get back on topic is to start talking on-topic. So with that said I would like to reiterate that I really do not expect that much from these new Pascal cards honestly. Looks like they will be very small chips, will not have HBM, will be saddled with compute (or so I have read) and on and on. If AMD's aim is to also release the mid-range stuff this year then 2016 is shaping up to be pretty lame indeed on the video card front. GP100 will undoubtedly be incredible but don't look for that beast anytime over the next 365 days...


I am with you 100%

I know there is alot of people that think otherwise and I will be sad when they are a let down.

I think there is simply too many factors at play that will hinder performance for this new node. You summed it up pretty well, with what I have been saying.

I would actually go on to quote someone that I read something from earlier, although I do not remember where. anyway he stated (if he reads this or I find it please take credit). "Welcome to the world where we are hitting the limits of Silicon and until other means are found this is the new reality" it was something to that effect.

I think he is truly on point, if we look at Intels past few years and AMD/NVs, the last shrink that showed real gains for Intel was 28nm. I know popular opinion is that is due to a lack of AMD rivalry, but what if it isn't? What if we have truly hit a focal point where we need to explore options outside of Silicon and have hit a wall with our current tech?

If there is any truth to that we may in fact see the same moves by NV/AMD this round, they may give us 5-10% performance gains for a few years and much better power efficiency, until this wall can be breached. I know this could 100% wrong its just a guess really, but if we think about it. Does the AMDs lack of competition really make a diffrence in Intels gains? The Sandy Bridge Line was a major upgrade each after has been horrid, but AMD wasn't very much competition back then either so why have we not see massive gains still.

Also, if there were still gains left in 22nm, why move to Skylake so fast? If they were just milking it, wouldn't it make sense to milk 22nm longer before moving to 14nm?
Quote:


> Originally Posted by *Forceman*
> 
> A lot of people saying they are adding compute back and that'll take up die space, but if they are making GP104 with a GDDR memory controller and GP100 with a HBM controller, why can't they make GP104 without compute and GP100 with? They are going to be pretty significantly different dies anyway (unlike previous generations) because of the memory, so why not go all-in?
> 
> They aren't selling Gx104 chips in Tesla cards, are they?


Quadro cards are based on GP104, I am sure, as the current Quadros are based on GM204. Tesla is the compute flagship but Quadro is the mid-range compute line, and they have been neglecting Quadro for a while; they need some serious boosts in that market.

I also think Teslas are more server-aimed and Quadros more workstation-aimed. I could be wrong about that, but the current Quadros being GM204-based I know for sure.

EDIT: Some Teslas are also based on GM204, so yes, they are making Teslas on the x04 chip.









Also there is 1 Quadro card based on GM200 and that is the M6000.

Also, while attempting to get this information for you, I came across something interesting. It was on a forum where a lot of different people were talking about this (take it with a grain of salt); anyway, someone said they worked for NV and that this is how it goes.

When Nvidia makes GPUs, they do not create gaming GPUs on purpose. They create Tesla GPUs; if a GPU is flawed in a way that puts it below Tesla standards, it becomes a Quadro, and if it isn't up to Quadro standards (lacks ECC, multi-display support (I assume this means more than 4?), or other features) it becomes a GeForce card.

This would in fact make sense in the grand scheme, if we recognize this is how Intel does its CPUs: every CPU is designed to be a Xeon, but if it cannot make the cut, features are cut off and it becomes a Core i7; if it cannot use Hyper-Threading it becomes an i5; if it cannot use 4 cores it becomes an i3, and the cycle continues. This is a way to recycle chips instead of throwing them out, and it is why our gaming cards and processors can be sold so cheap.

If you stop to think about this, we do not matter; gamers are the ones that buy the chips the professional market does not want. We also make up a very small portion of the industry compared to the industrial market, which buys tens of thousands of $10k GPUs every year. The professional market also needs graphics performance; if it didn't, we might not even have GPUs, honestly.
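The binning cascade described above can be sketched as a simple decision chain. This is purely illustrative: the thresholds, unit counts, and feature gates below are made-up assumptions, not Nvidia's actual criteria:

```python
# Illustrative die-binning sketch: one die design, several SKUs.
# Thresholds and feature gates are invented for illustration only.

def bin_gpu(working_sms: int, ecc_ok: bool, multi_display_ok: bool) -> str:
    """Classify a die by which quality/feature gates it passes."""
    if working_sms >= 24 and ecc_ok and multi_display_ok:
        return "Tesla"    # fully working part, all pro features intact
    if working_sms >= 20 and multi_display_ok:
        return "Quadro"   # a few units fused off, pro display features intact
    return "GeForce"      # remaining dies, pro features fused off

print(bin_gpu(24, True, True))    # Tesla
print(bin_gpu(22, False, True))   # Quadro
print(bin_gpu(18, False, False))  # GeForce
```

The same cascade applies to the Xeon/i7/i5/i3 example: each failed gate drops the chip to the next SKU down instead of the scrap bin.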

Found the guy that works for NV (supposedly, though what he says does make sense):

"From my experience at Nvidia, engineering wise there is no difference. Nvidia makes one chip with all the features. It will then blow out a couple of fuses and sell that as a GeForce.

For example, the chip might support ECC on main memory which is necessary for workstation or enterprise level customers. But, if you want that feature, be ready to pay a lot more. Similarly, a GeForce card might have architectural support for large amounts of video memory. But no GeForce will ever ship with that much memory. Yet another example is the display resolutions or number of displays a single graphics card can support. All these are easy to turn off in a chip either through BIOS or more permanently by blowing fuses so that a customer does not figure out a hack for turning them back on.

Note that this is pretty much how every company would work. So Nvidia is not doing anything wrong or unethical here."

https://www.quora.com/What-are-the-differences-between-Nvidias-GeForce-Quadro-and-the-Tesla-range

His references (from his profile): Data Scientist @Kabbage, ex-performance architect @Oracle, Nvidia


----------



## Majin SSJ Eric

I've seen others also mentioning that both AMD and Nvidia are going to be stuck on 14/16nm for a long time (at least as long as we were on 28nm), and I have to say that I absolutely agree. There is just little reason for Nvidia (or AMD) to go for every bit of performance they can get out of this new process when it's likely they will also be using it (and having to find more performance in it) for the next couple of generations. As long as the 1080 (or whatever they call it) is comfortably ahead of the 980 Ti (like 10-15%) with comparable overclocks, I think they will consider that a win and call it a day for GP104. Honestly, considering how small the chip is, I would also consider that an impressive level of performance, but it still does NOT feel like a flagship to me. That'll be GP100 for sure...


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I've seen others also mentioning that both AMD and Nvidia are going to be stuck on 14/16nm for a long time (at least as long as we were on 28nm), and I have to say that I absolutely agree. There is just little reason for Nvidia (or AMD) to go for every bit of performance they can get out of this new process when it's likely they will also be using it (and having to find more performance in it) for the next couple of generations. As long as the 1080 (or whatever they call it) is comfortably ahead of the 980 Ti (like 10-15%) with comparable overclocks, I think they will consider that a win and call it a day for GP104. Honestly, considering how small the chip is, I would also consider that an impressive level of performance, but it still does NOT feel like a flagship to me. That'll be GP100 for sure...


Ya, I have as well, and Mahigan also stated that we should be expecting the same gains from AMD this time around.

I also agree; I think we will see ~10% on the 1080, with the 1080 Ti coming in at 30% faster than the 980 Ti. Worst part is everyone will cry and moan for a month and then buy it anyway.

I think we may be stuck on 14/16nm even longer than 28nm, as even Intel is having issues shrinking to 10nm from the latest I have heard, and they may even put CL on 14nm as well. If Intel can't do it, then GF/TSMC are definitely going to be stuck for a while.


----------



## renx

I do not agree with the pessimistic predictions, but I respect them of course.

But suppose you're right, and GP100 turns out to improve on Maxwell by 30%. Then what about DX12?

We will mostly be playing DX12 games by 2017, and we have seen the way Maxwell sucks in DX12.

So I'm wondering if you'd still predict such a low gain under DX12 as well. Because if so, then Pascal would be competing with Fiji. LOL


----------



## Cyber Locc

Quote:


> Originally Posted by *renx*
> 
> I do not agree with the pessimistic predictions, but I respect them of course.
> 
> But suppose you're right, and GP100 turns out to improve on Maxwell by 30%. Then what about DX12?
> 
> We will mostly be playing DX12 games by 2017, and we have seen the way Maxwell sucks in DX12.
> 
> So I'm wondering if you'd still predict such a low gain under DX12 as well. Because if so, then Pascal would be competing with Fiji. LOL


Well, that would highly depend, wouldn't it? Does Pascal have async compute? We do not know the answer to that yet. Does Maxwell truly lack async, or can it be modified to pseudo-support it?

As to largely playing DX12 by 2017, that has also yet to be seen; we only have a few games scheduled to get DX12, with a lot of them still being DX11. I would lean more toward the start of 2018 for full-out adoption of DX12.

Also, like I said, it has already been stated that AMD will have similar gains. Furthermore, Pascal competing with Fiji has yet to be seen. We do not have anywhere close to enough data on DX12 to make that assumption; we have results for 2 games, both of which are AMD-sponsored titles and very new. We need more time and more games to make actual conclusions about DX12; as of right now that's as up in the air as Pascal.

Anyway, DX12 hasn't added much; it has Fiji matching or beating Titan Xs/980 Tis, which it should have done in DX11 at its release. It is only just now starting to be where it should have been. So if the x80 is 30% faster than a 980 Ti, and Fiji is only matching the 980 Ti like it should have a year ago, then Pascal is still faster than Fiji.

I think people blur the lines due to pricing; you can't do that.

A Fury X was made to compete with the Titan X, period. Pricing is irrelevant.
Fury - 980 Ti
390X - 980
390 - 970

That is how those cards should be compared.


http://techbuyersguru.com/first-look-dx12-performance-rise-tomb-raider?page=1

Looks like NV is still winning in DX12 in this game. There is no Fury of course, but the 390X isn't beating the 980, not by a long shot.

I will try to find more recent benches.

Here is a newer bench, I think. This site also has DX12 Hitman results, yet I can't find them; I only see the DX11 results.



Anyway, in ROTR under DX12 the 980 Ti is still top dog. So if Fiji is losing now, how will it win against Pascal, or even match it, when Pascal is faster than a 980 Ti?
http://sivertimes.com/hitman-and-rise-of-the-tomb-raider-bring-directx-12-to-the-test/16831

People are taking those Ashes benches way too far.

Edit again: now, however, we see a trend.



http://wccftech.com/hitman-pc-directx-12-benchmarks/

Thats 3 games with a win for either side and a tie.

The Fury X matches the 980 Ti in Hitman, beats it in Ashes, and loses in ROTR. That is what should be happening, as it's the competing card; Fury was made to compete with the Titan X, then later with the 980 Ti. So if the Pascal x80 is 10% faster than a 980 Ti, and AMD's rival is also 10% faster than a Fury X, how is the x80 competing with Fiji?


----------



## DrFPS

Quote:


> Originally Posted by *Cyber Locc*
> 
> I am with you 100%
> 
> Tesla is the compute flagship but Quadro is the mid-range compute line, and they have been neglecting Quadro for a while; they need some serious boosts in that market.
> 
> I also think Teslas are more server-aimed and Quadros more workstation-aimed. I could be wrong about that, but the current Quadros being GM204-based I know for sure.
> 
> EDIT: Some Teslas are also based on GM204, so yes, they are making Teslas on the x04 chip.
> 
> 
> 
> Also there is 1 Quadro card based on GM200 and that is the M6000.
> 
> Also, while attempting to get this information for you, I came across something interesting. It was on a forum where a lot of different people were talking about this (take it with a grain of salt); anyway, someone said they worked for NV and that this is how it goes.
> 
> When Nvidia makes GPUs, they do not create gaming GPUs on purpose. They create Tesla GPUs; if a GPU is flawed in a way that puts it below Tesla standards, it becomes a Quadro, and if it isn't up to Quadro standards (lacks ECC, multi-display support (I assume this means more than 4?), or other features) it becomes a GeForce card.
> 
> This would in fact make sense in the grand scheme, if we recognize this is how Intel does its CPUs: every CPU is designed to be a Xeon, but if it cannot make the cut, features are cut off and it becomes a Core i7; if it cannot use Hyper-Threading it becomes an i5; if it cannot use 4 cores it becomes an i3, and the cycle continues. This is a way to recycle chips instead of throwing them out, and it is why our gaming cards and processors can be sold so cheap.
> 
> If you stop to think about this, we do not matter; gamers are the ones that buy the chips the professional market does not want. We also make up a very small portion of the industry compared to the industrial market, which buys tens of thousands of $10k GPUs every year. The professional market also needs graphics performance; if it didn't, we might not even have GPUs, honestly.
> 
> Found the guy that works for NV (supposedly, though what he says does make sense):
> 
> "From my experience at Nvidia, engineering wise there is no difference. Nvidia makes one chip with all the features. It will then blow out a couple of fuses and sell that as a GeForce.


Most of this is correct. The guy from NV you talk about is 100% correct; that is exactly how they create their different product lines. Except they don't blow fuses: they leak electrical energy past the transistors, which in turn shorts out shader cores or CUDA cores. You can call it blowing fuses.
There is a lot more to Nvidia's marketing than meets the eye, such as Disney and DreamWorks studios and the like. Almost every modern movie that has CGI is made with Nvidia Tesla and Quadro cards. Think about that.
http://www.techradar.com/us/news/world-of-tech/inside-dreamworks-how-animated-movies-are-rendered-1127122
Quote:


> On a system with 32GB of RAM, a 160GB SSD boot drive (and a 500GB data drive), with an Nvidia Quadro 5000 graphics card (which itself has 352 cores), a technician opened an animated sequence for a new character in an upcoming movie.


Never underestimate GeForce; it's a huge market for Nvidia. You can use a GeForce for CAD/CAM, but you can't game on a Tesla or Quadro. All you have to do is look at Nvidia's main website: it's mostly all GeForce or Shield.
I also agree that we will be stuck at 16nm for a long time. What will we have after that? Did you know that your fingernails grow at about 1nm per second?
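That fingernail figure checks out as rough arithmetic, assuming the commonly cited growth rate of about 3 mm per month:

```python
# Convert ~3 mm/month of fingernail growth into nm/s.
mm_per_month = 3.0
nm_per_mm = 1e6                      # 1 mm = 1,000,000 nm
seconds_per_month = 30 * 24 * 3600   # ~2.59 million seconds

rate_nm_per_s = mm_per_month * nm_per_mm / seconds_per_month
print(f"{rate_nm_per_s:.2f} nm/s")   # ~1.16 nm/s
```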


----------



## Cyber Locc

Quote:


> Originally Posted by *DrFPS*
> 
> Never underestimate Geforce, its a huge market for nvidia. You can use a geforce to cadcam, but you can't game on tesla or quadro. All you have to do is look at Nvidia's main website. As it mostly all Geforce or Shield


Well, I don't mean they don't make money from GeForce, of course they do, but it's kind of a side project to recycle chips.

You can game on a Quadro; I have done it. When I worked in IT I was given a Quadro to play with at work, and I gamed on it just fine (that was years ago, but still). I will agree they are not as good for gaming, but they are fairly close.

The reason NV's site is covered in GeForce is that industrial buyers do not use it; the site you refer to is for consumers, and not many consumers buy Quadros etc. Those are bought by IT purchasers at large companies, who deal with NV in a completely different facet. (I have assisted in similar purchasing in my time in IT for a college, though not with NV, but with Dell for servers.) Anyway, to that extent I have some knowledge and experience with such purchases, and they are a lot bigger than consumer ones.

For instance, in my time there we replaced our servers twice, and I assisted with one of those replacements; let's just say that purchase ran to a very, very large number of dollars.

I would say if you took all the money spent on consumer GPUs by every member of OCN in the past year, it would not come close to the amount one college spends to upgrade their servers. And that is one college out of how many? Plus all the big businesses, etc. Of course that is full-out servers, not just GPUs, but still.


----------



## Defoler

Wasn't the 980 only a few % better than the 780 Ti sometimes, and mostly because of the extra memory?

I won't be surprised if the 980's replacement doesn't outright beat the 980 Ti in every situation. It should be a high-end card but not the top card until a new Titan and a new Ti come out later (and they always come out later, in order to increase sales of the 980/970 replacements).


----------



## iLeakStuff




----------



## Mhill2029

I could make that image in 5mins in any Paint application


----------



## Noufel

Quote:


> Originally Posted by *iLeakStuff*


too much grey on the Geforce E


----------



## iLeakStuff

Could be fake. Found it today


----------



## KeepWalkinG

I found the real picture:


----------



## Hattifnatten

It's fake


----------



## ebduncan

LOL at the people thinking you are going to get 980 Ti performance at the 970 price point. I suppose it's possible with competitive pricing from AMD.

Also, making comparisons between the 580 and the 680 to try to extrapolate how much faster the new cards will be is hogwash. The 680 wasn't a flagship GPU; it was GK104, a small-die part. The 780/780 Ti/Titan were the big-die GPUs. The 580 was a flagship.

Likely it will be something like this: 970 < 1060 < 980 < 1070 < 980 Ti < 1080

Hard to say how much faster these parts will actually be, as they could always just reduce power consumption and keep current levels of performance to push efficiency, or push performance and keep the same TDPs; those decisions are made with every new process node. What we hear from the AMD camp at this time is more of a push for efficiency.

Come June, new GPU battles will be taking place.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *iLeakStuff*


Close but no cigar. Wrong green is used:



Source: http://international.download.nvidia.com/partnerforce-us/Brand-Guidelines/NVIDIA_Corporate_Guidelines_2012_1.2.pdf
http://www.nvidia.com/object/partner-sales-marketing-tools.html

Edit:



Edit 2:

Typeface looks off as well:



Edit 3:



"BAND COLOR
The only color for the secondary band is NViDiA Green." See "COLOR PALETTE" above.


----------



## criminal

Quote:


> Originally Posted by *ebduncan*
> 
> *LOL at the people thinking you are going to get 980 Ti performance at the 970 price point*; I suppose it's possible with competitive pricing from AMD.
> 
> *Also, making comparisons between the 580 and the 680 to try to extrapolate how much faster the new cards will be is hogwash. The 680 wasn't a flagship GPU; it was GK104, a small-die part. The 780/780 Ti/Titan were the big-die GPUs.* The 580 was a flagship.
> 
> Likely be something like this 970<1060<980<1070<980ti<1080
> 
> Hard to say how much faster these parts will actually be, as they could always just reduce power consumption and keep current levels of performance to push efficiency, or they could push performance and keep the same TDP's, Decisions that are made with every new process node. What we hear from the AMD camp at this time is more of a push for efficiency.
> 
> Come June and new gpu battles will be taking place.


No it is not guaranteed, but it is indeed quite possible. We have seen it in the past and we could see it again. That's what many of us are saying.

Also, how is comparing a 580 to a 680 hogwash? 580 (big die) to 680 (small die) was a die shrink like we are going to see from the 980Ti (big die) to the 1080 (small die). The only difference is they are adding back some compute with Pascal. Still in the realm of possibility for us to see the same performance jump or more.


----------



## ebduncan

Quote:


> Originally Posted by *criminal*
> 
> Also, how is comparing a 580 to a 680 hogwash? 580 (big die) to 680 (small die) was a die shrink like we are going to see from the 980Ti (big die) to the 1080 (small die). The only difference is they are adding back some compute with Pascal. Still in the realm of possibility for us to see the same performance jump or more.


I did say it was possible to get 980 Ti performance for the price of a 970 with competition from AMD. Without competition they can charge whatever they want, i.e. if it's faster than the 970, it will cost more than a 970, etc.

Comparing the gains from one older GPU to a newer GPU is never solid ground for comparing current GPUs to future GPUs, which is why I called it hogwash; it's not based on any factual evidence. There are far too many variables for that to ever represent any sort of factual information. One example of this is how the 780 Ti gets its butt stomped by the 980 nowadays because of the extra feature sets.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *criminal*
> 
> No it is not guaranteed, but it is indeed quite possible. We have seen it in the past and we could see it again. That's what many of us are saying.
> 
> Also, how is comparing a 580 to a 680 hogwash? 580 (big die) to 680 (small die) was a die shrink like we are going to see from the 980Ti (big die) to the 1080 (small die). The only difference is they are adding back some compute with Pascal. Still in the realm of possibility for us to see the same performance jump or more.


I'm not sure I see them releasing the GTX 970 replacement at $330 if it matches a 980 Ti, tbh. Taking a look at the recent performance numbers at a middle-of-the-range resolution (lower than 4K but higher than 1080p): http://tpucdn.com/reviews/ASUS/GTX_980_Ti_Matrix/images/perfrel_2560_1440.png that's a 42% performance leap. I can see them pricing a card that matches the 980 Ti at $400-450. For a $330 card, I can see them releasing one that's 10-15% faster than a 980 (or about 10% away from matching a 980 Ti).
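A quick price-per-performance comparison makes that argument concrete. The prices and relative-performance numbers here are the thread's rough figures plus a hypothetical $400 1070, not official data:

```python
# Rough $-per-performance-point comparison (illustrative numbers only).
# Performance is normalized so GTX 980 Ti = 100; the 42% gap over the
# 970 comes from the TechPowerUp chart cited above.
cards = {
    "GTX 980 Ti":               (650, 100),
    "GTX 970":                  (330, 100 / 1.42),  # ~70 points
    "hypothetical 1070 @ $400": (400, 100),         # if it matched a 980 Ti
}

for name, (price_usd, perf) in cards.items():
    print(f"{name:>26}: ${price_usd / perf:.2f} per perf point")
```

Even at $400 rather than $330, a 980 Ti-matching 1070 would still undercut the 970's dollars-per-performance-point, which is why the $400-450 bracket looks plausible.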


----------



## criminal

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I'm not sure if I see them releasing the GTX 970 replacement at $330 if it matches a 980 Ti tbh. Taking a look at the recent performance numbers for a middle of the range resolution (lower than 4K but higher than 1080): http://tpucdn.com/reviews/ASUS/GTX_980_Ti_Matrix/images/perfrel_2560_1440.png that's a 42% performance leap. I can see them pricing it at $400-450 for a card that matches the 980 Ti. For a $330 card, I can see them releasing one that's 10%-15% faster than a 980 (or about 10% away from matching a 980 Ti).


Even I agree that $330 for 980Ti performance is a stretch. But I could realistically see $400. If Nvidia has a target price of say $350 for the 1070, then yeah I could see performance being 10% away from the 980Ti.


----------



## iLeakStuff

GTX 1070 is around $300-400 somewhere.
I hope they do $350 again.

GTX 980 Ti SLI performance for almost the same price as one GTX 980 Ti.
That will guarantee that, like the GTX 970, it will be their cash cow and they will sell a ton of them. Probably two for me.


----------



## Woundingchaney

Seems to me that we could expect 980 Ti performance out of the x80, and from the x70 about 15-20% less than that. I expect the x70 to perform better than the current 980 but a bit worse than the 980 Ti; that should adequately represent the initial launches of their new lineup. This would coincide rather well with their history of launches, and I personally can't see Nvidia breaking this trend even on a die shrink (there is simply not enough market pressure).


----------



## EinZerstorer

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Only way to get back on topic is to start talking on-topic. So with that said I would like to reiterate that I really do not expect that much from these new Pascal cards honestly. Looks like they will be very small chips, will not have HBM, will be saddled with compute (or so I have read) and on and on. If AMD's aim is to also release the mid-range stuff this year then 2016 is shaping up to be pretty lame indeed on the video card front. GP100 will undoubtedly be incredible but don't look for that beast anytime over the next 365 days...


Every single thread here goes off topic, always some unfounded personal opinion to lead it there too.

Hilarious.


----------



## EinZerstorer

Quote:


> Originally Posted by *iLeakStuff*
> 
> GTX 1070 is around $300-400 somewhere.
> I hope they do $350 again.
> 
> GTX 980Ti SLI for almost the same price as one GTX 980Ti.
> That will guarantee that like GTX 970, it will be their cashcow snd they will sell a ton of them. Probably two to me


I love how you're guaranteeing and quoting prices on pure speculation.


----------



## iLeakStuff

Quote:


> Originally Posted by *EinZerstorer*
> 
> I love how you're guaranteeing and quoting prices on pure speculation.


I'm quoting a price range. No disabled Gx104 part has ever cost more than $400.


----------



## Cyber Locc

Quote:


> Originally Posted by *EinZerstorer*
> 
> I love how you're guaranteeing and quoting prices on pure speculation.


I normally don't agree with iLeakStuff, but in this case he is correct; pricing for cut-down x04 chips, or x70s in general, has always been ~$350-400 and never over $400.

The pricing of the lines has remained more or less the same over the years, with x70s and x80s remaining in their price brackets. They just added more brackets to raise costs for flagships.


----------



## iLeakStuff

GTX 560Ti - GF114 - $249
GTX 670 - GK104 - $400
GTX 970 - GM204 - $350
GTX 1070 - GP104 -


----------



## Bogga

You guys are so lucky with your low prices... $650 for an EVGA SC 980 Ti on the US site and €715 for the same card on the EU site, which is $805. Ordering it from a vendor here in Sweden is $850...

I'm currently selling my 970s while they're still worth something; I guess they'll be worth a lot less come June. So I have high hopes for Pascal (or Polaris) and hope I can get a single card which performs ~50% better than my 970s. Wanna try a single card again after 5850s, 6870s, 680s and now 970s. And a water block will cost less as well.


----------



## ZealotKi11er

Quote:


> Originally Posted by *iLeakStuff*
> 
> GTX 560Ti - GF114 - $249
> GTX 670 - GK104 - $400
> GTX 970 - GM204 - $350
> GTX 1070 - GP104 -


Forget pre-Kepler cards. The reason the 670 was $400 compared to $330 for the GTX 970 was that the 670 was a lot closer to the 680, unlike the 970 to the 980.


----------



## Cyber Locc

Quote:


> Originally Posted by *EightDee8D*
> 
> behold the new gtx70ti only @ 449.99 dorra, 120w and 5% betta den 980ti
> 
> 
> 
> 
> 
> 
> 
> 
> 
> with 5.5+.5gb vram and a new feature called nosync


Correction: it's 7.5GB of VRAM; it's right here on this bench, the top result, hehehe. They will call it 8GB though.

I foresee the same though; it won't improve performance, just P/PW, and that will be the new metric all the fanboys use. "It's all about that P/PW bro, plus with one 8-pin I save money on my custom cables." You can laugh, but I guarantee someone will say exactly that, just wait!

As to the async thing, I am not entirely convinced that matters just yet. It seemed to when all we had was Ashes, but looking at the DX12 lineup now shows a different story.
Quote:


> Originally Posted by *Bogga*
> 
> You guys are so lucky with your low prices... 650$ for an EVGA SC 980Ti on the us site and 715€ for the same card on the eu site which is 805$. Ordering it from a vendor here in Sweden is 850$...
> 
> I'm currently selling my 970's while they're still worth something, guess they'll be worth a lot less come june. So I have high hopes on Pascal (or polaris) and hope I can get a single card which performs ~50% better than my 970's. Wanna try single card again after 5850's, 6870's, 680's and now 970's. And water block will cost less as well


You will definitely see one; however, it may not be until the Tis, though most likely the x80.


----------



## EightDee8D

Quote:


> Originally Posted by *Cyber Locc*
> 
> Correction its 7.5gbs of Vram its right here on this bench the top result hehehe. They will call it 8gb though.
> 
> I foresee the same though, it wont improve performance just P/PW and that will be the new metric all the fanboys use. "Its all about that P/PW bro, plus 1 8 pin I save money for my custom cables" you can laugh but I guarantee someone will say that exactly just wait!
> 
> To the Async thing, I am not entirely convinced on that mattering just yet. It seemed to when all we had was ashes, but looking at the DX12 lineup now shows a different story.
> You will definitely see one, however it may not be till TIs, most likely x80 though.


Yep, whatever Nvidia happens to be good at, only that will matter the most. And it's not like AMD will beat Nvidia in every single department, so there's that. Heck, even now there are things in which AMD is better than Nvidia, but those things don't matter and never will unless Nvidia beats AMD in those departments.

IMO this is the best chance AMD will ever have to get back a good 10-20% of market share. All they need to do is deliver 5-10% better performance than the Titan X with good OC ability at 100-120W and $300-350. They already have the better arch; they just need better P/W and P/$, and also better DX11 performance. All that + launch before Nvidia. GG.


----------



## Bogga

Quote:


> Originally Posted by *Cyber Locc*
> 
> You will definitely see one, however it may not be till TIs, most likely x80 though.


Yeah, I'm afraid of that... afraid that the Ti version will be a Christmas card or some other crappy launch.









I wanna expand my water cooling and get that new card in the loop. I just can't sit here with integrated graphics for 8 months.

If I check sweclockers.com and the dates of the reviews, the 980 Ti was 2015-06-01 and the 980 was 2014-09-19, which is almost 9 months apart.


----------



## iLeakStuff

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Forget pre-Kepler cards. The reason the 670 was $400 compared to $330 for the GTX 970 was that the 670 was a lot closer to the 680, unlike the 970 to the 980.


Well, all the more reason to expect good prices if 1070 = 980 Ti and 1080 = +25%, then?


----------



## mouacyk

Quote:


> Originally Posted by *Cyber Locc*
> 
> Correction its 7.5gbs of Vram its right here on this bench the top result hehehe. They will call it 8gb though.
> 
> I foresee the same though, it wont improve performance just P/PW and that will be the new metric all the fanboys use. "Its all about that P/PW bro, plus 1 8 pin I save money for my custom cables" you can laugh but I guarantee someone will say that exactly just wait!
> 
> To the Async thing, I am not entirely convinced on that mattering just yet. It seemed to when all we had was ashes, but looking at the DX12 lineup now shows a different story.
> You will definitely see one, however it may not be till TIs, most likely x80 though.


Fanboys elsewhere may be all about P/PW, but here on OCN, the only metric that matters is P/in. I didn't want to go there, but the 2nd biggest GPU maker did.


----------



## criminal

Quote:


> Originally Posted by *iLeakStuff*
> 
> Well more reasons to expect good prices if 1070 = 980ti and 1080 = +25% then?


Yep.

If Nvidia/AMD gives 980 Ti performance for $400 or less, like I said weeks ago in another thread, I will be happy. I have no desire to pay more than $450 for a GPU any longer; it just isn't worth it (IMHO) in the long run.


----------



## orlfman

Quote:


> Originally Posted by *headd*
> 
> You can't compare GPUs at the same node vs. a die shrink. Last time this die shrink happened, the *28nm* GTX 670 wrecked the *40nm* GTX 580 by 20%.
> The *16nm* 1070 *MUST* be faster than the *28nm* GTX 980 Ti by 10-15% or it's a fail.


The 680 was roughly 30-40% faster than the 580 (40% in only a few titles).

We are comparing successors here, i.e. the 680 was the 580's successor.
Remember the 1080 is the successor to the 980, NOT the 980 Ti; that's the 1080 Ti's job.

The 1080 being roughly 30-40% faster than the 980 would put it equal to or up to ~10% ahead of the 980 Ti.

The "x70" has traditionally been 10-20% slower than its "x80" counterpart; this goes all the way back to the 6xxx series, i.e. the 6800 GT was about 15% slower than the 6800 Ultra, just like the 970 is roughly 20% slower than the 980 (but the 980 carries a 30% higher price for that extra 20%, which has made the 980 a ****ty value).

If we take the 1080 being a full 40% faster than the 980 and apply the max 20% reduction for the 1070, that would put the 1070 around 20% faster than the 980 and 10% slower than the 980 Ti.

That's practically the 670 over the 580.
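The chain above can be checked with quick arithmetic; a sketch, assuming the thread's rough figures (980 Ti ~25% faster than a 980) and treating the gains multiplicatively:

```python
# Relative-performance chain, normalized to GTX 980 = 1.00.
gtx_980 = 1.00
gtx_980ti = 1.25              # assume the 980 Ti is ~25% faster than the 980
gtx_1080 = gtx_980 * 1.40     # assume the 1080 is 40% faster than the 980
gtx_1070 = gtx_1080 * 0.80    # assume the 1070 is 20% slower than the 1080

print(f"1070 vs 980:    {gtx_1070 / gtx_980 - 1:+.0%}")    # +12%
print(f"1070 vs 980 Ti: {gtx_1070 / gtx_980ti - 1:+.0%}")  # -10%
```

Note that multiplying the gains rather than subtracting percentage points lands the 1070 at ~12% over the 980 rather than 20%, though the ~10% gap to the 980 Ti comes out about the same either way.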


----------



## Cyber Locc

Quote:


> Originally Posted by *mouacyk*
> 
> Fanboys elsewhere may be all about P/PW, but here on OCN, the only metric that matters is P/in. I didn't want to go there, but the 2nd biggest GPU maker did.


Lol, I think you are missing what I was saying.

The P/increase (I assume that's what you mean) is going to suck. However, the P/PW will be greatly increased, so people here and everyone else will make that the new best metric to rationalize the bad P/increase and buy it anyway.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Lol, I think you are missing what I was saying.
> 
> The P/Increase (i assume that's what you mean) is going to suck. However the P/PW will be greatly increased so people here and everyone else will change that to the best metric to rationalize the bad p/in and buy it anyway.


Performance per inch is what he is referring to...lol


----------



## mouacyk

Quote:


> Originally Posted by *criminal*
> 
> Performance per inch is what he is referring to...lol










if the Lisas of the world had their way, we would all be measured by P/in.


----------



## iLeakStuff

Quote:


> Originally Posted by *criminal*
> 
> Yep.
> 
> If Nvidia/AMD gives 980Ti performance for $400 or less like I said weeks ago in another thread, I will be happy. I have no desire to pay more than $450 for a gpu any longer. Just isn't worth it (IMHO) in the long run.


I'm not paying more than $400 tops for the GTX 1070, and I think most people, including Nvidia, agree.

There have been times you could get a 980 Ti for $600. A $150 price reduction for the same performance makes very little sense. A $200-250 reduction, well, that should be enough to create sales imho


----------



## DETERMINOLOGY

Benchmarks go up and everyone goes nuts. As for me, I'm waiting for official reviews and leaks, which we should get in 2-3 months.


----------



## KeepWalkinG

When do we expect more info on the new Pascal architecture?
Maybe Computex?


----------



## iLeakStuff

In less than 2 weeks..


----------



## criminal

Quote:


> Originally Posted by *iLeakStuff*
> 
> Im not paying more than $400 tops for GTX 1070 and I think most including Nvidia agree.
> 
> There have been times you could get 980Ti for $600. $150 reduction in price for the same performance makes very little sense. $200-250 reduction, well that should be enough to create sales imho


Agree.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Performance per inch is what he is referring to...lol


Okay, yeah, that was my second guess lol. I didn't go with that because, god, I hope not. I like my big beefy water-cooled GPUs the same size as or bigger than my E-ATX boards. I do not want the world to go to Fury Nanos; they just look awful on a real motherboard.

Quote:


> Originally Posted by *orlfman*
> 
> the 680 was roughly 30-40% (40% on only a few) faster than the 580.)
> 
> we are comparing successors here. I.E. the 680 was the 580 successor
> remember the 1080 is the successor to the 980. NOT the 980 ti. that's the 1080 ti's job
> 
> 1080 being roughly 30 - 40% faster than the 980 (which would put it equal to or 10% greater than the 980 ti.
> 
> the "x70" has traditionally been 10 - 20% lower than the "x80" counterpart. this goes all the way back to the 6xxx series. I.E. the 6800gt was about 15% slower than the 6800ultra. just like the 970 is roughly 20% slower than the 980 (but the 980 has 30% increased price for that extra 20%! which has made the 980 a ****ty value)
> 
> if we take the 1080 being a full 40% faster than the 980
> take the max 20% reduction for the 1070
> that would put the 1070 around 20% faster than the 980 and 10% slower than the 980 ti
> 
> that's practically the 670 over the 580.


I agree 110%. I believe that is what we will see.


----------



## magnek

Quote:


> Originally Posted by *mouacyk*
> 
> 
> 
> 
> 
> 
> 
> 
> if the Lisas of the world had their way, we would all be measured by P/in.


It's not about the length, but how you use it right, _riiiiiiight_?


----------



## Shatun-Bear

Quote:


> Originally Posted by *orlfman*
> 
> the 680 was roughly 30-40% (40% on only a few) faster than the 580.)
> 
> we are comparing successors here. I.E. the 680 was the 580 successor
> remember the 1080 is the successor to the 980. NOT the 980 ti. that's the 1080 ti's job
> 
> 1080 being roughly 30 - 40% faster than the 980 (which would put it equal to or 10% greater than the 980 ti.
> 
> the "x70" has traditionally been 10 - 20% lower than the "x80" counterpart. this goes all the way back to the 6xxx series. I.E. the 6800gt was about 15% slower than the 6800ultra. just like the 970 is roughly 20% slower than the 980 (but the 980 has 30% increased price for that extra 20%! which has made the 980 a ****ty value)
> 
> if we take the 1080 being a full 40% faster than the 980
> take the max 20% reduction for the 1070
> that would put the 1070 around 20% faster than the 980 and 10% slower than the 980 ti
> 
> that's practically the 670 over the 580.


Good explanation. +REP


----------



## iLeakStuff

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Close but no cigar. Wrong green is used:
> 
> 
> 
> Source: http://international.download.nvidia.com/partnerforce-us/Brand-Guidelines/NVIDIA_Corporate_Guidelines_2012_1.2.pdf
> http://www.nvidia.com/object/partner-sales-marketing-tools.html
> 
> Edit:
> 
> 
> 
> Edit 2:
> 
> Typeface looks off as well:
> 
> 
> 
> Edit 3:
> 
> 
> 
> "BAND COLOR
> The only color for the secondary band is NViDiA Green." See "COLOR PALETTE" above.


Quote:


> Originally Posted by *Hattifnatten*
> 
> 
> 
> 
> It's fake


Holy crap, way to take investigation to the next level.
Well done guys









You guys got a degree in photoshop or something?


----------



## prava

Quote:


> Originally Posted by *Cyber Locc*
> 
> Yep about as nuts as you think a 1070 will beat a 980ti.
> 
> I agree some of these could be mobile cards, not a 980 mobile but a pascal mobile.
> 
> Only time will tell.


It has always been like this:

GTX 260 beat the 8800 Ultra
470 beat the 280
670 beat the 580

and so on.

The mid-range card has ALWAYS beaten the top-tier card of the last generation. Now, if you consider that we have never seen such a jump in the fabrication process as we will see very soon (we have never shrunk so much in a single step; keep in mind we are moving from 28nm to 1x nm), the gains next gen will be massive. For starters, 980 Ti performance should be doable at sub-125W. But I guess we will see the results very soon.


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *prava*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Cyber Locc*
> 
> Yep about as nuts as you think a 1070 will beat a 980ti.
> 
> I agree some of these could be mobile cards, not a 980 mobile but a pascal mobile.
> 
> Only time will tell.
> 
> 
> 
> It has always been like this?
> 
> GTX260 beat 8800 Ultra
> 470 beat 280
> 670 beat 580
> 
> and so on.
> 
> The mid-range card has ALWAYS beat the top-tier card of the last generation. Now, if you consider that we have never seen such a jump in the integration process as we will see very soon (since we have never shrinked so much in a single step; keep in mind we are moving from 28nm to 1x nm) that gains next gen will be massive. For starters a 980TI should be doable a sub 125W. But I guess we will see the results very soon.
Click to expand...

Five years from now, when they are done milking the process node, both AMD and nVidia will do this, as the silicon wall is nearing.


----------



## prava

Quote:


> Originally Posted by *F3ERS 2 ASH3S*
> 
> 5 years from now when they are done milking the process node.. both AMD and nVidia will do this as the wall of silicon is only nearing.


Do what? Release a 980 Ti @ sub-125W? This should be doable day one, and there is no reason not to do it straight away.

For starters, a lower-consumption chip is smaller than a bigger one; it also requires less cooling. All of this combines into a cheaper-to-fab chip. My guess is that first they will release the 980 Ti performance bracket, i.e. the mid-range cards, and then, once the node is more mature, they will go bigger in size. It also makes sense.


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *prava*
> 
> Quote:
> 
> 
> 
> Originally Posted by *F3ERS 2 ASH3S*
> 
> 5 years from now when they are done milking the process node.. both AMD and nVidia will do this as the wall of silicon is only nearing.
> 
> 
> 
> Do what? Release a 980 TI @ sub-125W? This should be doable day one. And there is no reason not to do it straight-away.
> 
> For starters, a lower consumption chip is smaller than a bigger one; it also requires less cooling. All of this combines into a cheaper-to-fab chip. My idea is that first they will release the 980TI performance bracket... ie, the mid-range cards. And then, once the node is more mature, they will go bigger in size. It also makes sense.
Click to expand...

Not for that wattage range. Speed yes, wattage no, although they could. As a company they make more money by stretching it out further and giving consumers something to upgrade to, basically controlling the market for profit. Thought that was Business 101.


----------



## iLeakStuff

Quote:


> Originally Posted by *prava*
> 
> It has always been like this?
> 
> GTX260 beat 8800 Ultra
> 470 beat 280
> 670 beat 580
> 
> and so on.
> 
> The mid-range card has ALWAYS beat the top-tier card of the last generation. Now, if you consider that we have never seen such a jump in the integration process as we will see very soon (since we have never shrinked so much in a single step; keep in mind we are moving from 28nm to 1x nm) that gains next gen will be massive. For starters a 980TI should be doable a sub 125W. But I guess we will see the results very soon.


That 980 Ti-tier card will probably be like the GTX 970, around 145W, so you are not far off.


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *iLeakStuff*
> 
> Quote:
> 
> 
> 
> Originally Posted by *prava*
> 
> It has always been like this?
> 
> GTX260 beat 8800 Ultra
> 470 beat 280
> 670 beat 580
> 
> and so on.
> 
> The mid-range card has ALWAYS beat the top-tier card of the last generation. Now, if you consider that we have never seen such a jump in the integration process as we will see very soon (since we have never shrinked so much in a single step; keep in mind we are moving from 28nm to 1x nm) that gains next gen will be massive. For starters a 980TI should be doable a sub 125W. But I guess we will see the results very soon.
> 
> 
> 
> That 980Ti tier card will probably be like GTX 970. Around 145W so you are not far off
Click to expand...

It's next year's iteration that will be the 125W one, but yeah.


----------



## Cyber Locc

Quote:


> Originally Posted by *prava*
> 
> It has always been like this?
> 
> GTX260 beat 8800 Ultra
> 470 beat 280
> 670 beat 580
> 
> and so on.
> 
> The mid-range card has ALWAYS beat the top-tier card of the last generation. Now, if you consider that we have never seen such a jump in the integration process as we will see very soon (since we have never shrinked so much in a single step; keep in mind we are moving from 28nm to 1x nm) that gains next gen will be massive. For starters a 980TI should be doable a sub 125W. But I guess we will see the results very soon.


OK, the fact that you are missing is that the 770, 970, and 1070 are not midrange cards. They have added tiers; how people are not understanding this, I have no clue at all. The new 970s are the old 650s.

Look, let's take the most recent series from your list.

600 series: GTX 650 < GTX 650 Ti < GTX 660 < GTX 660 Ti < GTX 670 < GTX 680.

700 series: GTX 750 < GTX 750 Ti < GTX 760 < GTX 760 Ti < GTX 770 < GTX 780 < GTX Titan < GTX Titan Black.
The Titan was a gap bridge, but it is still a 700 series card, which puts it in an odd place.

900 series: GTX 950 < GTX 960 < GTX 970 < GTX 980 < GTX 980 Ti < GTX Titan X.

Now do you see what was done here? Tiers were removed from the lower end of the lines and added to the higher end. That means for a GTX 1070 to beat a GTX 980 Ti, it would require a drop of 3 tiers vs. 1 tier. When x70s were beating the previous year's flagship, they were the second card in the line, not the fourth.

I seriously do not get why people cannot understand this.

If we follow the past logic, the only card that has to beat the Ti is the x80, with next year's Ti beating the Titan X. That is what we see from the past you linked.
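The tier-position argument above can be sketched in a few lines (the lineups are copied from this post, listed top card first, single-GPU cards only):

```python
# Lineups as given in the post, ordered from the top card down.
lineups = {
    "600 series": ["680", "670", "660 Ti", "660", "650 Ti", "650"],
    "900 series": ["Titan X", "980 Ti", "980", "970", "960", "950"],
}

for series, cards in lineups.items():
    # Find the x70 card and report how far from the top of the stack it sits.
    x70 = next(c for c in cards if c.endswith("70"))
    print(f"{series}: the x70 is card #{cards.index(x70) + 1} from the top")
```

This prints position #2 for the 670 and #4 for the 970, which is exactly the "second card vs. fourth card" point being made.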

Quote:


> Originally Posted by *iLeakStuff*
> 
> Holy crap, way to take investigation to the next level.
> Well done guys
> 
> 
> 
> 
> 
> 
> 
> 
> 
> You guys got a degree in photoshop or something?


Well that was a fake, but I found a real one for you







.


----------



## iLeakStuff

Quote:


> Originally Posted by *F3ERS 2 ASH3S*
> 
> Its next years iteration that will be the 125w but yeah


Yup, 2017 will bring cards with better silicon and lower voltage, which will decrease power, I guess. But I'm not waiting!


----------



## Cyber Locc

Quote:


> Originally Posted by *iLeakStuff*
> 
> Yup, 2017 will bring cards with better silicon, lower voltage which will decrease power I guess. But I`m not waiting!


Well, Pascal will have lower wattage; it has to, it has one 8-pin.


----------



## iLeakStuff

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well pascal will have lower wattage, it has to it has 1 8 pin.


Same as the GTX 680/980 really. 2x 6-pin. Both have a 150W limit on the auxiliary connectors.
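As a side note on those connector limits: per the PCIe spec the slot itself supplies up to 75W, each 6-pin connector 75W, and each 8-pin connector 150W, so both configurations come out to the same 225W board limit. A tiny sketch (the function name is just for illustration):

```python
# Spec power budgets: PCIe slot 75W, 6-pin aux 75W, 8-pin aux 150W.
CONNECTOR_WATTS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def board_power_limit(aux_connectors):
    """Spec board power limit: slot power plus the card's aux connectors."""
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c] for c in aux_connectors)

print(board_power_limit(["6-pin", "6-pin"]))  # GTX 680/980 style
print(board_power_limit(["8-pin"]))           # single 8-pin, same total
```

Both calls print 225, which is why a single 8-pin really is "the same as GTX 680/980" in spec terms.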


----------



## Cyber Locc

Quote:


> Originally Posted by *iLeakStuff*
> 
> Same as GTX 680/980 really. 2x6pin. Both have a 150W limit


Yeah, I know, but less than a Ti. A lot less than a non-reference Ti.


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *iLeakStuff*
> 
> Quote:
> 
> 
> 
> Originally Posted by *F3ERS 2 ASH3S*
> 
> Its next years iteration that will be the 125w but yeah
> 
> 
> 
> Yup, 2017 will bring cards with better silicon, lower voltage which will decrease power I guess. But I`m not waiting!
Click to expand...

That was my point, though. We know both AMD and nVidia are going to milk this node as much as possible, especially after what they learned from the 28nm node. 20nm was a bust, but it created better business options for same-node revisions that reduce power and increase performance.

However, by the time the 125W variant of 980 Ti performance comes out, people will be looking more at the top tier, and the 125W variant will be overshadowed (not saying it won't sell; however, TDP/performance won't be what sells the cards themselves).


----------



## looniam

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well that was a fake, but I found a real one for you
> 
> 
> 
> 
> 
> 
> 
> .


that red is obviously misinformation disseminated from AMD.


----------



## Cyber Locc

Quote:


> Originally Posted by *looniam*
> 
> that red is obviously misinformation disseminated from AMD.


No, that's from EVGA; that's why they are already marketing that Classy, baby







.

Check the colors and the typeface; it's all legit







.


----------



## NFL

If the x70 does fall between the 980 and the 980 Ti, I believe I've found my next GPU. As long as they don't price it at $400 like the 670, that is.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Mk the fact that you are missing is a 770, 970 and 1070 are not midrange cards. They have added tiers how people are not understanding this I have no clue at all, the new 970s are the old 650s.
> 
> Look, Lets take the most recent of your series list.
> 
> 600 series, GTX 650 < GTX 650TI < GTX 660 < GTX 660TI < GTX 670 < GTX 680.
> 
> 700 Series, GTX 750 < GTX 750TI < GTX 760 < GTX 760TI < GTX 770 < GTX 780 < GTX Titan < GTX Titan Black.
> The Titan was a gap bridge but it is still a 700 series card, which puts it in an odd place.
> 
> 900 series, GTX 950 < GTX 960 < GTX 970 < GTX 980 < GTX 980TI < GTX Titan X.
> 
> Now do you see what was done here? Tiers were removed from the lower lines and moved to the higher lines. That would mean for a GTX 1070 to Beat a GTX 980TI it would require a drop of 3 tiers vs 1 tier. When 70s were beating the last years flagship, they were the second card in the line not the 4th. \
> 
> I seriously do not get why people cannot understand this.
> 
> If we follow the past logic, the only card that has to beat the TI is the 80, with next years TI beating the Titan X. that is what we see from the past you linked.
> 
> Well that was a fake, but I found a real one for you
> 
> 
> 
> 
> 
> 
> 
> .


Actually, the 670 and 970 are the same tier. Just look at the codename for each. (GK104/GM204)


----------



## looniam

Quote:


> Originally Posted by *Cyber Locc*
> 
> No, thats from EVGA thats why they are already marketing that classy baby
> 
> 
> 
> 
> 
> 
> 
> .
> 
> Check the colors and type font its all legit
> 
> 
> 
> 
> 
> 
> 
> .


nope. the eVGA classy uses "stencil" fonts.




stop with the clandestine AMD marketing.


----------



## Clocknut

The GTX 460 equaled the top chip, the GTX 280.
The GTX 660 equaled the GTX 580.
The GTX 760 did not equal the GTX 680 -.-
The GTX 960 did not equal the GTX 780 Ti -.-

I am disappointed in Nvidia. I hope the GTX 1060 equals the 980 Ti.


----------



## magnek

Quote:


> Originally Posted by *looniam*
> 
> nope. the eVGA classy uses "stencil" fonts.
> 
> 
> 
> 
> stop with the clandestine AMD marketing.


Goddamn know-it-all EVGA fanboys


----------



## looniam




----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Actually, the 670 and 970 are the same tier. Just look at the codename for each. (GK104/GM204)


Okay, I understand that; however, bear with me now, as this is obviously hard to grasp.

THERE WAS NO GK100 GEFORCE CARD!

The 104 was the flagship; now it is not! x80s used to be flagships; now they are not.
Quote:


> Originally Posted by *looniam*
> 
> nope. the eVGA classy uses "stencil" fonts.
> 
> 
> 
> 
> stop with the clandestine AMD marketing.


Nah, they are updating and retiring the old Classy logo; they said they want to move into the future







. That's the new "K|ng-Classified" logo. See, you got two leeks in one day, man.

(thinks to self darn I knew I should have used the EVGA classy logo)


----------



## looniam

Quote:


> Originally Posted by *Cyber Locc*
> 
> Na they are updating and retiring the old Classy logo, they said they want to move into the future
> 
> 
> 
> 
> 
> 
> 
> . thats the New "K|ng-Classified" Logo, see you got 2 leeks in one day man.
> 
> (thinks to self darn I knew I should have used the EVGA classy logo)


k|ngp|n uses house A rama font (slightly modified) so no go there redteam boy. or should i say roy . . .

btw, KP and classy are really two different "brands" so don't expect either to change.


----------



## Cyber Locc

Quote:


> Originally Posted by *looniam*
> 
> k|ngp|n uses house A rama font (slightly modified) so no go there redteam boy. or should i say roy . . .
> 
> btw, KP and classy are really two different "brands" so don't expect either to change.


Are you calling me a LIAR?! Bwahhhhh, I am telling you, what I gave you was a Leek. I have sources at EVGA, and I know what the old Classy logo looks like; I have owned quite a few Classys in my day, boards and GPUs. My first-ever 2011 board, and one of my all-time favorite motherboards anyway, had it plastered right on the heat sink.



Also, who is Roy? And I am totally not a red-team boy. I have owned a total of like 5 AMD GPUs in my life, 3 of which were 290s lol. I am mostly an NV fan, but I like them both. I do, however, wish that EVGA made AMD cards, as I use EVGA exclusively when on NV.







.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> Okay, I understand that, however bear with me now as this is obviosuoly hard to grasp.
> 
> THERE WAS NO GK100 GEFORCE CARD!
> 
> 104 was the flagship, Now it is not! X80s used to be Flagships now they are not.


The GK110 chip served that role. The GK104 was not the flagship Kepler chip, the Titan Black/780 Ti using GK110 were.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> The GK110 chip served that role. The GK104 was not the flagship Kepler chip, the Titan Black/780 Ti using GK110 were.


Right, yeah, only that was the 700 series, not the 600 series; that is kind of the point being made here. The 700 series added more tiers.

Before the 700 series, x80 cards were flagships; starting with the 700 series, that changed.

I would sooner say the OG Titan was the 600 series flagship, but there again we just added a tier, and technically the Titan was a 700 series card.


----------



## looniam

Quote:


> Originally Posted by *Cyber Locc*
> 
> also who is Roy, and I am totally not a redteam boy


yep you're totally *roy taylor*. nice try, did you have huddy help you with this? because it's almost beyond your intelligence.


----------



## Cyber Locc

Quote:


> Originally Posted by *looniam*
> 
> yep you're totally *roy taylor*. nice try, did you have huddy help you with this? because it's almost beyond your intelligence.


Pffft I wish.


----------



## looniam




----------



## Cyber Locc

Quote:


> Originally Posted by *looniam*


Thanks for that, though. I had totally forgotten the X58 heat sink had a metal badge that said Classified. I need that for a project I am doing, and that was a godsend, so thanks for making that happen.

Now I just have to find a dead Classy.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Forceman*
> 
> The GK110 chip served that role. The GK104 was not the flagship Kepler chip, the Titan Black/780 Ti using GK110 were.


Not in 2012 they weren't. The 680 WAS the flagship Kepler card until the Titan launched nearly a full year later. Then, when mainstream GK110 released, it was as the 780, so the 600 series flagship indeed was GK104...


----------



## Forceman

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Not in 2012 they weren't. The 680 WAS the flagship Kepler card until Titan launched nearly a full year later. Then when mainstream GK110 released it was as the 780 so the 600 series flagship indeed was GK104...


You can play numbering semantics all you want, but GK104 wasn't the big Kepler chip, and there was a lot of discussion about it when the GTX 680 launched. A 300mm^2 die is not a flagship die.

Gx104 has been the second tier chip for a long time, no matter what number they give it.


----------



## zealord

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Not in 2012 they weren't. The 680 WAS the flagship Kepler card until Titan launched nearly a full year later. Then when mainstream GK110 released it was as the 780 so the 600 series flagship indeed was GK104...


Well, going by that, can the 750 Ti (GM107) also be considered the Maxwell flagship card for a couple of months, at least until the 980 came out?

I understand that approach, and it has some logic to it which I won't deny, but I personally will never feel comfortable calling cards like those flagship cards.

Nvidia can name the cards however they like. By that logic, whichever card releases first within a generation is the flagship. I can consider them the best, but calling them a flagship won't do it for me personally. I will probably call the GP104 the "currently-best-until-the-big-boys-come-out-to-play" card.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Forceman*
> 
> You can play numbering semantics all you want, but GK104 wasn't the big Kepler chip, and everyone knew that when the GTX 680 launched. A 300mm^2 die is not a flagship die.


No, it's not, but in 2012 it was. Remember also that Nvidia was working on GK100 at the time and couldn't get it to work for whatever reason. That was a node shrink as well, and I think they realized then that they would no longer be releasing the full-sized flagship GPUs at the launch of a new architecture (and especially a new process) anymore. That's why it was bonkers to see people here actually expecting GP100 to launch first this spring!


----------



## prava

Quote:


> Originally Posted by *iLeakStuff*
> 
> That 980Ti tier card will probably be like GTX 970. Around 145W so you are not far off


We will see, though if the AMD photo leak is any indication... they will have very low power consumption.
Quote:


> Originally Posted by *Cyber Locc*
> 
> Mk the fact that you are missing is a 770, 970 and 1070 are not midrange cards. They have added tiers how people are not understanding this I have no clue at all, the new 970s are the old 650s.
> 
> Look, Lets take the most recent of your series list.
> 
> 600 series, GTX 650 < GTX 650TI < GTX 660 < GTX 660TI < GTX 670 < GTX 680.
> 
> 700 Series, GTX 750 < GTX 750TI < GTX 760 < GTX 760TI < GTX 770 < GTX 780 < GTX Titan < GTX Titan Black.
> The Titan was a gap bridge but it is still a 700 series card, which puts it in an odd place.
> 
> 900 series, GTX 950 < GTX 960 < GTX 970 < GTX 980 < GTX 980TI < GTX Titan X.
> 
> Now do you see what was done here? Tiers were removed from the lower lines and moved to the higher lines. That would mean for a GTX 1070 to Beat a GTX 980TI it would require a drop of 3 tiers vs 1 tier. When 70s were beating the last years flagship, they were the second card in the line not the 4th. \
> 
> I seriously do not get why people cannot understand this.
> 
> If we follow the past logic, the only card that has to beat the TI is the 80, with next years TI beating the Titan X. that is what we see from the past you linked.


Logic? What logic? You seem to be missing the point.

The x80 and the x70/x60 (back when we didn't have an x70) have always had the same price, more or less. And the series have never been smaller, since we have always had proper cards at all the tiers. Even more so because you misplace some products: the GTX 750 Ti is a modern card compared to the GTX 760; there are only two years between their respective launches.









What has happened is that they added a new product above the x80, with an increased price. Just look at the 8800 Ultra, the 780 Ti, and the 980 Ti, but forget mixed-use cards such as the Titan. Those three cards had huge launch prices.

So no, tiers weren't removed at all; tiers were added at the top of the scale. But that still means nothing regarding positioning. The future x70 will trade blows with the 980 Ti, or surpass it. But that won't be the big chip... for starters, because they need to recoup their investment, and secondly, because the process probably isn't ready for a really big die.

Quote:


> Originally Posted by *Cyber Locc*
> 
> Are you calling me a LIAR!, Bwahhhhh I am telling you what I gave you was a Leek. I have sources at EVGA, and I know what the old classy logo looks like I have owned quite a few classys in my days, boards and GPUs. Also my first ever 2011 board and one of my all time favorite motherboards anyway plastered right on the heat sink.
> 
> 
> 
> also who is Roy, and I am totally not a redteam boy. I have owned a total of like 5 AMD gpus in my life, 3 of which were 290s lol. I am mostly a Nv fan, but I like them both. I do however wish that Evga made AMD cards, as I use Evga Exclusively when on NV
> 
> 
> 
> 
> 
> 
> 
> .












It's LEAK, not LEEK.


----------



## Majin SSJ Eric

To me one of the most interesting things about the upcoming Pascal cards is finding out what Nvidia's new naming scheme is going to be. I think most people are assuming they will not name it the GTX 1080 so what are they going to do? Are they going to skip to GTX 1180 or will they come up with an entirely new naming scheme like AMD did with the move to the R9 nomenclature a couple of years ago? We should find out pretty soon!


----------



## headd

Quote:


> Originally Posted by *orlfman*
> 
> the 680 was roughly 30-40% (40% on only a few) faster than the 580.)
> 
> we are comparing successors here. I.E. the 680 was the 580 successor
> remember the 1080 is the successor to the 980. NOT the 980 ti. that's the 1080 ti's job
> 
> 1080 being roughly 30 - 40% faster than the 980 (which would put it equal to or 10% greater than the 980 ti.
> 
> the "x70" has traditionally been 10 - 20% lower than the "x80" counterpart. this goes all the way back to the 6xxx series. I.E. the 6800gt was about 15% slower than the 6800ultra. just like the 970 is roughly 20% slower than the 980 (but the 980 has 30% increased price for that extra 20%! which has made the 980 a ****ty value)
> 
> if we take the 1080 being a full 40% faster than the 980
> take the max 20% reduction for the 1070
> that would put the 1070 around 20% faster than the 980 and 10% slower than the 980 ti
> 
> that's practically the 670 over the 580.


Wrong.
The GTX 680, aka GK104, was a midrange SKU priced as an x80 card, beating the last-gen flagship GTX 580 (big Fermi), which was the Titan X of its day. The GTX 680's predecessor was the GTX 560 Ti, not the GTX 580.
The GTX 1070/1080 are again GP104 midrange SKUs. The Titan X is the flagship GPU (big Maxwell).


----------



## Cyber Locc

Quote:


> Originally Posted by *prava*
> 
> X80 and X70 /X60 (when we didn't have X70) have always had the same price, more or less. And the series have never been smaller since we have always had proper cards at all the tiers. Even more so because you missplace some products. The GTX 750 Ti is a modern card, compared to the GTX 760. There is only 2 years between their respective launches
> 
> 
> 
> 
> 
> 
> 
> 
> 
> What has happened is that they added a new product above the X80, with an increased price. Just look at the 8800 Ultra or 780TI and 980TI, but forget mixed-use cards such as Titan. Those 3 cards had huge prices on launch.
> 
> So no, tiers weren't removed at all, tiers have been added at the top of the scale. But that still means nothing regarding positioning. The future x70 will trade blows with the 980 Ti, or surpass it. But that won't be the big chip... for starters, because they need to recoup their investment and, secondly, because the process isn't probably ready to develop a really big die.


Okay, the thing is that the 580, 480, 285, etc. were big-chip cards. You are correct, they did add tiers, as I said. You are correct they didn't take away tiers; they moved other cards down, such as the x70.

As to the flagship thing, the 600 series flagship is the 680, period. It is irrelevant that there was no big chip for the 600 series; the fact remains that the highest GPU in the 600 series was the 680, and both Titans were 700 series.
Quote:


> Originally Posted by *prava*
> 
> Its LEAK, not LEEK.


Wow, swoosh! You must have completely missed my LEEK, huh?



Quote:


> Originally Posted by *headd*
> 
> Wrong.
> GTX680 aka GK104 was midrange SKU priced as x80card.Beating last gen flagship GTX580(Big fermi) aka TITANX now.GTX680 predecessor was GTX560TI not GTX580.
> GTX1070/1080 are again GP104 midrange SKU.TITANX is Flagship GPU(Big maxwell)


You guys just don't get it, do you? The 680 was the FLAGSHIP! There was no card above the 680 in the 600 series ever made (dual-GPU aside); that means it was the FLAGSHIP!

Flagship - the best or most important one of a group or system: "This store is the flagship of our retail chain."
For the 600 series, that was the 680.


----------



## zealord

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> To me one of the most interesting things about the upcoming Pascal cards is finding out what Nvidia's new naming scheme is going to be. I think most people are assuming they will not name it the GTX 1080 so what are they going to do? Are they going to skip to GTX 1180 or will they come up with an entirely new naming scheme like AMD did with the move to the R9 nomenclature a couple of years ago? We should find out pretty soon!


Haha, yeah, agreed. I am really curious what they come up with.

GTX 1080 does work, but I feel like they are not going to use it because of confusion with the 1080p resolution among less tech-interested people.

X80 and then X180 doesn't sound too bad.

NV80 or N80 or NX80 or NP80 would work.


----------



## Majin SSJ Eric

What about ditching alphanumericals for names like in the olden days?

GeForce Valkyrie?

GeForce Ascender?

GeForce Butterfly X?


----------



## zealord

There are too many GPUs within a generation nowadays.

If it were only the X60, X70, and X80 for low, mid, and high end, then full names could work, but Nvidia releases too many GPUs for that.

What I would like is something like GPX80 for GeForce Pascal and GVX80 for Volta.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> You can play numbering semantics all you want, but GK104 wasn't the big Kepler chip, and there was a lot of discussion about it when the GTX 680 launched. A 300mm^2 die is not a flagship die.


Sorry, I missed this before in my replies. You keep using that word; I don't think it means what you think it means.

A Titan was a big chip, that is correct. It also was not a 600 series card; according to NV it was a 700 series card (that's dumb, I agree, but I don't make the rules). Therefore the GTX 680 was the single-GPU flagship, by the very definition of the word; just because you don't feel it was flagship-worthy doesn't change the fact that it was the flagship.
Quote:


> Originally Posted by *Forceman*
> 
> Gx104 has been the second tier chip for a long time, no matter what number they give it.


I agree with this partly. In a world with a better chip, GK104 is a mid-range chip. However, in the world of the 600 series there was no better chip. Therefore that series' flagship was the GK104; you can try to twist that any way you want, it doesn't change the facts.

Again
*"Flagship - The best or most important one of a group or system: This store is the flagship of our retail chain."*

As far as the 600 series is concerned, that is the GTX 680. Period. What chip it uses is irrelevant; it was the top card in the series and still is, thus it was a flagship (again, of single GPUs, as there was the 690).


----------



## Clocknut

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> To me one of the most interesting things about the upcoming Pascal cards is finding out what Nvidia's new naming scheme is going to be. I think most people are assuming they will not name it the GTX 1080 so what are they going to do? Are they going to skip to GTX 1180 or will they come up with an entirely new naming scheme like AMD did with the move to the R9 nomenclature a couple of years ago? We should find out pretty soon!


I haven't seen Nvidia use a 10xx name. Remember the GTX 280? They skipped the entire 100 series for retail units. AMD did the same thing.

I feel it will be either X80 or GTX 1800/GTX 2800.


----------



## Cyber Locc

Quote:


> Originally Posted by *Clocknut*
> 
> I havent see Nvidia use 10 name. remember? GTX280, they skip the entire 100 series for retail unit. AMD did the same thing.
> 
> I feels that it is either X80 or GTX1800/GTX2800.


How about P80? I guess that would get confusing after a while as well.

Unless they are using X as the Roman numeral X and not the letter X; then what would the next cards be, the XI80, then the XII80?

Well, let me tell you, my time machine took me pretty far ahead, and I am currently using a XVII80 Ti atm.


----------



## Serandur

Why not skip the GTX 1080 name and go for a GTX 2080, GTX 3080, GTX 4080, etc. scheme?


----------



## Cyber Locc

Quote:


> Originally Posted by *Serandur*
> 
> Why not skip the GTX 1080 name and go for a GTX 2080, GTX 3080, GTX 4080, etc. scheme?


We would fairly quickly hit the 5XXX line, which would put us full circle back to long, long ago. I.e., they have already done that.

I mean, as you have just illustrated, that buys them three series; then what? The 5xxx, 6xxx, 7xxx, 8xxx, and 9xxx lines already exist.


----------



## Serandur

Quote:


> Originally Posted by *Cyber Locc*
> 
> We would fairly quickly hit the 5XXX line which would put us full circle back to long long ago. IE they have already done that.
> 
> I mean like you have just illustrated that buys them 3 series sets, then what the 5xxx, 6xxx, 7xxx, 8xxx, and 9xxx lines already exist.


They've never had a GTX 5080, 6080, 7080, 8080, etc. The distinction is that the 8 moves to the third numbering position.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> Sorry I missed this before in my replys. You keep using that word I dont think it means what you think it means.
> 
> A Titan was a big chip that is correct, it also was not a 600 series card according to NV it was a 700 series card (thats dumb I agree, but I dont make the rules). Therefore the GTX 680 was the Single GPU flagship, by the very definition of the word, just because you dont feel it was flagship worthy does change the fact that it was the flagship.
> I agree with this partly. In a world with a better chip the GK104 is a mid range chip. However in the world of the 600 series there was no better chip. Therefore that series Flagship was the GK104, you can try to twist that any way you want it doesn't change the facts.
> 
> Again
> *"Flagship - The best or most important one of a group or system: This store is the flagship of our retail chain."*
> 
> As far as the 600 series is concerned that is the GTX 680, Period, what chip it uses is irrelevant that was the top card in the series and still is thus it was a Flagship. (again of Single GPUs, as there was the 690)


600 or 700 doesn't matter, and I don't know why you are so hung up on the number. The group in question is Kepler cards, and they were both Kepler cards.

The fact remains that the second-tier GK104 *Kepler* card was faster than the previous generation (Fermi) large-die card. That's the trend, and I expect it to continue with Pascal.

And yes, I'm quite familiar with what semantics means. Are we taking a test now? Because I also know how percentages work.


----------



## Cyber Locc

Quote:


> Originally Posted by *Serandur*
> 
> They've never had a GTX 5080, 6080, 7080, 8080, etc. The distinction being the 8 moved to the third numbering position.


That's true; however, it's still too similar. We will have people buying 8800s thinking they are getting a killer deal. I don't think most of the OCN community would do such a thing.

However, in the immortal words of George Carlin: "Some people are really, Edited google it"
Quote:


> Originally Posted by *Forceman*
> 
> 600 or 700 doesn't matter, and I don't know why you are so hung up on the number. The group in question is Kepler cards, and they were both Kepler cards.


You are right, it doesn't matter, so okay, we will go with that then. Then when the X80 or whatever series comes out, it doesn't have to beat the 980 Ti at all. As a matter of fact it can be slower; as long as the GP200 series is faster, all is well.

Also, no, the group in question was not and is not Kepler cards; it was and is the 600 series. The 700 series was 2 years later. So we had no new flagship for 3 years? Ya, get out of here with that freaky logic; that is simply not going to fly.
Quote:


> Originally Posted by *Forceman*
> 
> The fact remains that the second tier GK104 *Kepler* card was faster than than the previous generation (Fermi) large die card. That's the trend, and I expect it to continue with Pascal.


I can agree with this 100%, it certainly was; no one has said it was not. However, as has also been stated before, the 600 series was an odd one, the exception to the rule, not the rule.

Furthermore, as I pointed out and so have a few other people, a large portion of compute performance was removed to get those gaming gains. That is not the case this time; it's the opposite. So take what we saw with the 600 series (major increases in gaming, and a reduction in compute) and reverse it, and that is what we will see.

Kepler was meant to be a gaming architecture; Pascal is a compute architecture. As I also pointed out earlier, GeForce cards are the recycling program: Tesla comes first, and they need some serious catching up on compute.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> You are right it doesn't matter, So okay we will go with that then. then the X80 or whatever series comes out it doesn't have toe beat the 980ti at all. As a matter of fact it can be slower, as long as the GP 200 series is faster all is well.


In the spirit of compromise, I will now only refer to GK104 cards by their 7 series numbers. So the GTX 770 is the card that beat the GTX 580.

And I think everyone expects GP100 to beat GM200. There probably won't even be a GP200 with Volta so (relatively) close.

Edit: at least I hope they don't need to go to a GP200 to get it right, since that would imply they screwed up GP100 somehow, and probably mean a significant delay.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> 600 or 700 doesn't matter, and I don't know why you are so hung up on the number. The group in question is Kepler cards, and they were both Kepler cards.
> 
> The fact remains that the second tier GK104 *Kepler* card was faster than than the previous generation (Fermi) large die card. That's the trend, and I expect it to continue with Pascal.
> 
> And yes, I'm quite familiar with what semantics means. Are we taking a test now, because I also know how percentages work.


Quote:


> Originally Posted by *Forceman*
> 
> In the spirit of compromise, I will now only refer to Gk104 cards by their 7 series numbers. So the GTX 770 is the card that beat the GTX 580.
> 
> And I think everyone expects GP100 to beat GM200. There probably won't even be a GP200 with Volta so (relatively) close.
> 
> Edit: at least I hope they don't need to go to a GP200 to get it right, since that would imply they screwed up GP100 somehow, and probably mean a significant delay.


Well, see, if this was just another gaming architecture I would most likely agree with you; however, it isn't. It is in fact a compute architecture, with serious gains wanted in P/PW. We are at the point where a 980 Ti can play everything maxed at 1440p.

The only thing it can't max is 4K, and that's still a ways out, so a 20% performance increase won't sell many cards. Sure, we enthusiasts will buy them because that is what we do and we want big numbers. However, from a logical standpoint, if you are upgrading from the 900 series to Pascal you are throwing money away; there is simply no need.

The casual users will not upgrade for that very reason: they have no need. Tesla users and Quadro users have been neglected over the last few years; they have a need. So NV will fix that market and their concerns, and throw us P/PW gains that will entice the penny pinchers and average users, who are the majority of this market, not us.

And to think that maybe NV hasn't caught on to Intel over the last few years is a stretch. We have shown them they can give us small gains and we will still buy it. Just like we do when Intel releases a new proc that's 5% faster and we all run out and spend $400-1000 to have the latest CPU despite the fact that it is barely any gain at all.

You know who won't do that? Big businesses that are spending millions at a time; they want that money to be well spent. They want large gains, so if NV doesn't focus on them and let it trickle down like they used to, then they are fools.


----------



## Forceman

If you don't think GP100 is going to beat GM200, you are seriously underestimating the power of a die shrink. There is a 0% chance that 1080 Ti (or whatever the GP100 card is called) ends up slower than a 980 Ti.
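As a rough back-of-envelope for why the die shrink matters (this is idealized planar scaling, not Nvidia's actual 28nm-to-16nm FinFET figures, which scale less cleanly):

```python
# Idealized scaling: area per transistor ~ (feature size)^2, so the same
# transistor budget fits in roughly a third of the area, or ~3x the
# transistors fit in the same die area. Real 16nm FinFET gains are smaller.
old_node_nm = 28.0
new_node_nm = 16.0

area_ratio = (new_node_nm / old_node_nm) ** 2   # area per transistor vs 28nm
density_gain = 1 / area_ratio                   # transistors per unit area

print(f"Ideal area per transistor: {area_ratio:.2f}x")    # ~0.33x
print(f"Ideal density gain:        {density_gain:.1f}x")  # ~3.1x
```

Even after discounting for FinFET realities and power limits, that headroom is why nobody expects a full-node shrink to land slower than the previous big die.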


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> If you don't think GP100 is going to beat GM200, you are seriously underestimating the power of a die shrink. There is a 0% chance that 1080 Ti (or whatever the GP100 card is called) ends up slower than a 980 Ti.


Oh, I didn't say that. I said a cut GP104 chip will not beat a 980 Ti. I also said that the gains we see from GM200 to GP100 will be small, 20-30% tops.

We will see gains: gains in compute, gains in P/PW, gains in P/IN, and some small gains in P/G.

That is the way the industry has been heading for a while; people don't want faster, they want smaller and less wattage, until they can carry around a flagship PC the size of a USB stick.


----------



## i7monkey

Flagship is all about specs and not what you call it.

You can call it a GTX 680Ti Super Ultra Classified Ghz HIGH END Edition. It's still mid-sized and we're still paying twice the price for it.


----------



## Cyber Locc

Quote:


> Originally Posted by *i7monkey*
> 
> Flagship is all about specs and not what you call it.
> 
> You can call it a GTX 680Ti Super Ultra Classified Ghz HIGH END Edition. It's still mid-sized and we're still paying twice the price for it.


Okay, it doesn't matter what size it is; in the 600 series line the top-performing card was the 680, and that makes it the flagship. Do I need to break out the definition of that word again? The GPU industry doesn't own it, and it doesn't say "has to be a big chip".

The Kepler architecture's flagship, as stated before, was the Titan Black; that is correct.

However, the 600 series flagship was the GTX 680. You're right, it doesn't matter what they call the card, nor does it matter what they call the chip; it doesn't even matter what size it is. The flagship for a series is the top-performing card of that series, period.

I've got an idea: how about we make it through the rest of this thread without anyone talking about the 600 series, as obviously that entire debacle is just beyond the grasp of many.

Also, the GTX 680 was a 550 dollar card, so I am not sure how that was a flagship price on a midrange card. *Looks at the 780 and 980, sees the same prices...*

Anyway, I've got Lucifer and Better Call Saul on the DVR, so I will be back in a few hours; let's see if everyone can avoid mentioning the 600 series before I come back.

Forceman, you should change your sig to mention the 600 series as well.

EDIT: Dang, I forgot that Limitless and Gotham are on Mondays as well; why they had to move all my shows to Monday I do not know. Hate them for that. So I'll be back in 4 hours, sorry.


----------



## i7monkey

Quote:


> Originally Posted by *Cyber Locc*
> 
> Okay it doesn't matter what size it is, in the line of 600 series the top performing card was the 680 that makes it the flagship. Do I need to break out the definition of that word again? The GPU industry doesn't own it and it doesn't say "Has to be a big chip".
> 
> The Kepler Architectures Flagship as stated before was the Titan Black that is correct.
> 
> However the 600 series flagship was a GTX 680. Your right it doesn't matter what they call they card, nor does it matter what they call the chip, it doesn't even matter what size it is, the flagship for a series is the top performing card of that series period.


There are different chips based on different specs: small chips, mid-sized chips, and the biggest and fastest chips.

Nvidia failed to deliver us their fastest chip, so they took their second fastest (GK104) and charged us full price.

Sorry, but that's unacceptable no matter what business you're in.

I don't care if GK104 was the fastest they had. It was mid-sized and they charged us twice the amount for it, and then they went and doubled the price of their high-end chips as well.


----------



## magnek

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> What about ditching alphanumericals for names like in the olden days?
> 
> GeForce Valkyrie?
> 
> GeForce Ascender?
> 
> GeForce Butterfly X?


GeForce GT 720
GeForce GTX 1080
GeForce GTX 1440
GeForce GTX 2160
GeForce GTX Titan 2880

Only 5 cards are needed to cover the entire spectrum


----------



## Clocknut

Quote:


> Originally Posted by *Serandur*
> 
> Why not skip the GTX 1080 name and go for a GTX 2080, GTX 3080, GTX 4080, etc. scheme?


They are not going to put a zero as the second digit. Intel doesn't do it, Nvidia doesn't, AMD doesn't.


----------



## xTesla1856

They should resurrect the ghost of 3dfx and call the cards Voodoo, along with the awesome box art and ad campaign. Now THAT would be something!


----------



## i7monkey

The good old days of cool video card names.


----------



## SuperZan

Quote:


> Originally Posted by *xTesla1856*
> 
> They should resurrect the ghost of 3dfx and call the cards Voodoo. Along with the awesome box art and ad campaign. Now THAT would be something !


I was so proud the day I installed my very own Voodoo2. Riva TNT too. XD


----------



## Serandur

Quote:


> Originally Posted by *Clocknut*
> 
> they are not going to put a zero as the second digit.


Not that it matters whether anyone already did it, but:

Quote:


> Intel doesnt do it,


Yes, they do:

Intel Xeon 5080
Intel Xeon 5070
Intel Xeon 5060

Quote:


> Nvidia doesnt,


Yes, they do:

NVIDIA Quadro M6000
NVIDIA Quadro M5000
NVIDIA Quadro M4000

Quote:


> AMD doesnt


And yes, they totally do:

AMD Phenom II X6 1090T
AMD Phenom II X6 1075T
AMD Phenom II X6 1055T

Oh, and the Intel 8086 immediately sprang to mind. You know, the ancestor of every modern PC's CPU...


----------



## rudyae86

So is this going to be like that one time when people said you are better off buying a GTX 970 instead of a 780 Ti? hmmm

So should I just wait for a 1080 instead of buying a 980 Ti? lol


----------



## STEvil

Depends how long you typically keep your video cards for.


----------



## Avant Garde

I have sold my GTX 980 and I'm waiting for that X80 or GTX 1080 or whatever. I think that will be at least 30% faster than GTX 980Ti.


----------



## ZealotKi11er

OK, the GTX 680 was midrange, but what about the GTX 980? Its die size is bigger than the HD 7970's. And if you go by die size, then the 7970 was also the midrange of GCN.


----------



## KeepWalkinG

Quote:


> Originally Posted by *Avant Garde*
> 
> I have sold my GTX 980 and I'm waiting for that X80 or GTX 1080 or whatever. I think that will be at least 30% faster than GTX 980Ti.


And if it is only 10% better than the 980 Ti, what will you do?


----------



## Woundingchaney

Quote:


> Originally Posted by *KeepWalkinG*
> 
> And if is only 10% better than 980 Ti what you will do?


Oddly enough, approx. 10% is a reasonable estimate (depending on scenarios such as async compute and whatnot).


----------



## Avant Garde

Quote:


> Originally Posted by *KeepWalkinG*
> 
> And if is only 10% better than 980 Ti what you will do?


I will buy it, of course. I will get a GPU that is faster than the GTX 980 Ti, more power efficient, and with better future driver support, for the price of a GTX 980 Ti.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> To me one of the most interesting things about the upcoming Pascal cards is finding out what Nvidia's new naming scheme is going to be. I think most people are assuming they will not name it the GTX 1080 so what are they going to do? Are they going to skip to GTX 1180 or will they come up with an entirely new naming scheme like AMD did with the move to the R9 nomenclature a couple of years ago? We should find out pretty soon!


I just hope we get Maxwell-type scaling for overclocks with respect to voltage. I'd rather not have to pay an extra $50-150 for a custom-PCB card and wait additional weeks/months, plus additional weeks/months for custom blocks (that are more expensive) for said cards to come out (as with Kepler).


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Okay, I understand that, however bear with me now as this is obviosuoly hard to grasp.
> 
> THERE WAS NO GK100 GEFORCE CARD!
> 
> 104 was the flagship, Now it is not! X80s used to be Flagships now they are not.
> Na they are updating and retiring the old Classy logo, they said they want to move into the future
> 
> [logo image] That's the new "K|ng-Classified" logo; see, you got 2 leaks in one day, man.
> 
> (thinks to self darn I knew I should have used the EVGA classy logo)


You have been proven wrong so many times with your math and now with this, but you still can't admit when you are wrong!

Yes, the GK104 chip used for the GTX 680 was the flagship, but it is still a mid-range chip, like the 980. The 980/680 are the same tier and the 670/970 are the same tier. Just because Nvidia milked Kepler and was able to sell the GTX 680 (a midrange die) as the 6 series flagship does not make it any different from the 980. The 7 series was an extension of the 6 series, the first time Nvidia has basically ever been able to do that. The reasons being that GK100 chips had yield issues and GK104 (a mid-range chip) was able to beat the 7970. When the second revision of big-chip Kepler (GK110) did so well at gaming, and with how well the GTX 680 (GK104) was still competing with AMD's top card (7970), Nvidia was able to introduce a new tier of GPU (Titan). We would probably never have seen the 780 Ti had AMD not dropped a Titan killer (290X) at $550. Nvidia instantly created tiers in the 7 series that we have not seen since and may never see again:

Tier 1: Titan X (Big Maxwell) = Titan Black(Big Kepler)
Tier 2: 980Ti(Big Maxwell) = 780Ti(Big Kepler)
(780 (Big Kepler) not represented in the 9 series)
(Titan (Big Kepler) not represented in the 9 series)
Tier 3: 980(small Maxwell) = 680(small Kepler)
Tier 4: 970(small Maxwell)= 670(Small Kepler).

Yeah, the X80 chip is no longer the flagship chip, but in past generations the tier 3 and 4 GPUs (which are represented by the X80 and X70 cards now) have both beaten the flagship of the previous generation.

Do you follow me now? You can't honestly be one of those that fall for Nvidia's marketing and not know that Kepler (the same generation of chip) was extended across 2 series (6 and 7). They kinda extended Fermi, but at least the GTX 470, 480, 570 and 580 were all large-die chips and were tiers 1 and 2 of their respective generations.

Quote:


> Originally Posted by *Forceman*
> 
> You can play numbering semantics all you want, but GK104 wasn't the big Kepler chip, and there was a lot of discussion about it when the GTX 680 launched. A 300mm^2 die is not a flagship die.
> 
> Gx104 has been the second tier chip for a long time, no matter what number they give it.


Exactly. Still don't know how people miss this!


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *criminal*
> 
> You have been proven wrong so many times with your math and now with this, but you still can't admit when you are wrong!
> 
> Yes, the GK104 chip used for the GTX 680 was the flagship, but it is still a mid-range chip like the 980. The 980/680 are the same tier and the 670/970 are the same tier. Just because Nvidia milked Kepler and was able to sale the GTX680 (a midrange die) as the 6 series flagship does not make it any different than the 980. The 7 series was an extension of the 6 series.... the first time Nvidia has basically ever been able to do that. Reasons being that GK100 chips had yield issues and GK104 (mid-range chip) was able to beat the 7970. When the second revision of big chip Kepler (GK110) did so well at gaming and with how good the GTX680 (GK104) was still competing with AMD's top card (7970), Nvidia was able to introduce the new tier of GPU (Titan). We would have probably never seen the 780Ti had AMD not dropped a Titan killer(290x) at $550. Nvidia instantly created tiers in the 7 series that we have not seen since and may never see again:
> 
> Tier 1: Titan X (Big Maxwell) = Titan Black(Big Kepler)
> Tier 2: 980Ti(Big Maxwell) = 780Ti(Big Kepler)
> (780 (Big Kepler) not represented in the 9 series)
> (Titan (Big Kepler) not represented in the 9 series)
> Tier 3: 980(small Maxwell) = 680(small Kepler)
> Tier 4: 970(small Maxwell)= 670(Small Kepler).
> 
> Yeah, X80 chip is no longer the flagship chip, but in past generations the 3 and 4 tier gpus (which are represented by the X80 and the X70 cards now) have both beaten the flagship of the previous generation.
> 
> Do you follow me now? You can't honestly be one of those that fall for Nvidia's marketing and not know that Kepler (same generation chip) was extended across 2 series (6 and 7). They kinda extended Fermi, but at least the GTX 470, 480, 570 and 580 were all large die chips and were the tier 1 and 2 of their respective generations.
> Exactly. Still don't know how people miss this!


Eh, IIRC the 780 Ti was basically a 3 GB Titan Black (or the Titan Black was a 6 GB 780 Ti), albeit with gimped compute. I guess you could say the 980 Ti is equivalent to a Titan, though, since the 980 Ti has a slightly cut Titan X die and the Titan is a slightly cut Titan Black die (GK110). At least according to Wikipedia:



But you're right. There's no further cut-down GM200 like there was with Kepler and having #1 780 Ti/Titan Black, #2 Titan, #3 780

And yeah, 980 does seem like 680/770.


----------



## jezzer

Quote:


> Originally Posted by *Avant Garde*
> 
> I will buy it of course. I will get GPU that is faster than GTX 980Ti, more power efficient and with better future driver support for the price of GTX 980Ti.


If it is only 10% faster, there is a chance it would actually be slower in practice, at least up to the point where Maxwell loses driver love.

Smaller dies, in most first-gen cases, mostly give improved temps and power, but also less headroom for overclocking.

It can be 10% stock vs stock, but who does not have +30% performance on their 980 Ti from running a 1400-1500 core?

So if it OCs worse and only gains an extra 10-15% from overclocking, it would still be slower.

There is also a big chance it does not have full hardware DX12 support and still lacks proper async compute, but that's another thing; since you talk about better future driver support, you seem to want to be future-ready.

I would not blindly buy it thinking it will perform better than a 980 Ti pre-driver-nerf.
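To put some illustrative numbers on that overclocking argument (a minimal sketch: the clocks are typical figures from the thread, not benchmarks, and it assumes performance scales linearly with core clock, which is optimistic):

```python
def scaled_perf(stock_perf, stock_clock_mhz, oc_clock_mhz):
    """Naive model: performance scales linearly with core clock."""
    return stock_perf * (oc_clock_mhz / stock_clock_mhz)

# 980 Ti reference boost is ~1075 MHz; "1400-1500 core" is the OC cited above.
ti_overclocked = scaled_perf(1.00, 1075, 1450)   # ~1.35x reference

# Hypothetical successor: +10% at stock, but only +5% OC headroom.
new_overclocked = 1.10 * 1.05                    # ~1.16x reference

print(f"OC'd 980 Ti:   {ti_overclocked:.2f}x reference")
print(f"OC'd new card: {new_overclocked:.2f}x reference")
```

Under these assumptions the well-overclocked 980 Ti still comes out ahead; a card that is only 10% faster at stock needs comparable OC headroom to actually win.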


----------



## Avant Garde

I think that none of what you've just mentioned will happen.


----------



## EightDee8D

Quote:


> Originally Posted by *Avant Garde*
> 
> I will buy it of course. I will get GPU that is faster than GTX 980Ti, more power efficient and with *better future driver support* for the price of GTX 980Ti.


That's not how you're supposed to handle an Nvidia GPU; the latest is a must.


----------



## criminal

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Eh, IIRC 780 Ti was basically a 3 GB Titan Black (or Titan Black was a 6 GB 780 Ti) albeit gimped compute. I guess you could say the 980 Ti is equivalent to a Titan though, since the 980 Ti has a slightly cut Titan X die and the Titan is a slightly cut Titan Black die (GK110). At least according to Wikipedia:
> 
> 
> 
> But you're right. There's no further cut-down GM200 like there was with Kepler and having #1 780 Ti/Titan Black, #2 Titan, #3 780
> 
> And yeah, 980 does seem like 680/770.


Yeah, the 980 Ti is a cut chip (not as much as the 780, though), but the 780 Ti may never have seen the light of day had AMD not dropped the 290X at $550. Had that not happened, then I think the 780/Titan would have remained alone on the market until the 9 series arrived. Or they had it planned all along because they knew they would sell.
Quote:


> How much does any of that matter if R9 290X is still a stellar performer? I guess that depends on how much it costs, right? As it happens, AMD says you'll find its flagship Hawaii-based board for $550. That's $100 less than GeForce GTX 780 and $450 less than a Titan. And better performance, in many of the cases we tested, than both. Wowsa.


Source


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *criminal*
> 
> Yeah, 980Ti is cut chip (not as much as the 780 though), but the 780Ti may have never seen the light of day had AMD not dropped the 290x at $550. Had that not happened, then I think the 780/Titan would have remained alone on the market until the 9 series arrived. Or they had it planned all along because they knew they would sell.
> Source


Yeah, that's why I said the 980 Ti is like a Titan: the Titan is 2688 cores vs the 780 Ti/Titan Black at 2880 cores, compared to 2816 cores for the 980 Ti and 3072 for the Titan X. And you were right, the 780 isn't really represented in the 900 series, since there's just 1 cut-down die vs 2. I'm sure they could have had a further cut-down ~2400-2500 core GPU, which SHOULD have been the 980 at $550 instead of the piece of crap 980 that we got (which really should have been a 970, or at least called a "970 Ti").
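For reference, here is the cut-down arithmetic implied by those shader counts (a quick sketch; the counts are the ones quoted above plus the 780's 2304 from public spec sheets):

```python
# Full-die shader counts: GK110 (780 Ti / Titan Black) and GM200 (Titan X).
full_gk110 = 2880
full_gm200 = 3072

def kept(cores, full_cores):
    """Fraction of the full die's shaders a cut SKU keeps enabled."""
    return cores / full_cores

print(f"GTX Titan  (2688/GK110): {kept(2688, full_gk110):.1%}")  # ~93.3%
print(f"GTX 980 Ti (2816/GM200): {kept(2816, full_gm200):.1%}")  # ~91.7%
print(f"GTX 780    (2304/GK110): {kept(2304, full_gk110):.1%}")  # 80.0%
```

So in cut-down terms the 980 Ti sits roughly where the original Titan did relative to its full die, which is the comparison being drawn; there is no Maxwell SKU in the ~80% slot the 780 occupied.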


----------



## Clocknut

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yeah that's why I said the 980 Ti is like a Titan which is 2688 cores vs 780 Ti/Titan Black at 2880 cores. Compared to 2816 cores for the 980 Ti and 3072 for the Titan X. And that you were right, the 780 isn't really represented in the 900 series since there's just 1 cut-down die vs 2. I'm sure they could have had a further cut-down ~2400-2500 core GPU which SHOULD have been the 980 at $550 instead of the piece of crap 980 that we got (that really should have been a 970 or at least called "970 Ti").


I kinda wonder where those GM200 chips that didn't make it to 2816 cores for the 980 Ti went. Kinda wasted; they could have been sold as a 980.


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *Clocknut*
> 
> Quote:
> 
> 
> 
> Originally Posted by *xxdarkreap3rxx*
> 
> Yeah that's why I said the 980 Ti is like a Titan which is 2688 cores vs 780 Ti/Titan Black at 2880 cores. Compared to 2816 cores for the 980 Ti and 3072 for the Titan X. And that you were right, the 780 isn't really represented in the 900 series since there's just 1 cut-down die vs 2. I'm sure they could have had a further cut-down ~2400-2500 core GPU which SHOULD have been the 980 at $550 instead of the piece of crap 980 that we got (that really should have been a 970 or at least called "970 Ti").
> 
> 
> 
> I am kinda wondered where are those GM200 chips that didnt make it to 2816 cores for 980Ti. Kinda wasted, could have sold as 980

Maybe they will go back to the Fermi days and do a 980 SE and SC and all the other suffixes that the 460 ended up with.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *F3ERS 2 ASH3S*
> 
> Maybe they will go back to the fermi days and do the 980 SE and SC and all the other acronyms that the 460 ended up with


Who knows the percentage of GPUs that failed to meet 980 Ti specs though (unless they've released yield info).


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Quote:
> 
> 
> 
> Originally Posted by *F3ERS 2 ASH3S*
> 
> Maybe they will go back to the fermi days and do the 980 SE and SC and all the other acronyms that the 460 ended up with
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Who knows the percentage of GPUs that failed to meet 980 Ti specs though (unless they've released yield info).

True. But this is what I was referring to: http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-460/specifications


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *F3ERS 2 ASH3S*
> 
> True. But this is what I was referring to http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-460/specifications


Yeah, I know what you mean. Although that was Fermi, which allegedly had poor yields (Nvidia denied it), and it was a new node, so I wouldn't expect Nvidia to have that many 980 Ti failures considering how long the 28nm node has been in production.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> You have been proven wrong so many times with your math and now with this, but you still can't admit when you are wrong!
> 
> Yes, the GK104 chip used for the GTX 680 was the flagship, but it is still a mid-range chip


I will stop there. It is a mid-range chip by past standards, and from a performance standpoint relative to the Titan it is mid-range.

Actually, I'm pretty sure I agreed to that a few times (further on than what you quoted, though).

It is still, however, the 6 series flagship. Did the 6 series have a bad flagship? Yes. However, that isn't the point; the point is that in the 600 series, the 680 is the flagship.

I agree that was some milk work on their part. I think the Titan should have been in the 600 series (why it isn't is, well, dumb imo), and that it should have been the flagship. No one is saying that the 680 was a good flagship, just that by the very definition of the word it was the 600 series flagship. Pay close attention to what I am stating there and have been from the very beginning: *the 600 series FLAGSHIP*, not the Kepler flagship.
Quote:


> Originally Posted by *criminal*
> 
> The 980/680 are the same tier and the 670/970 are the same tier. Just because Nvidia milked Kepler and was able to sale the GTX680 (a midrange die) as the 6 series flagship does not make it any different than the 980. The 7 series was an extension of the 6 series.... the first time Nvidia has basically ever been able to do that. Reasons being that GK100 chips had yield issues and GK104 (mid-range chip) was able to beat the 7970. When the second revision of big chip Kepler (GK110) did so well at gaming and with how good the GTX680 (GK104) was still competing with AMD's top card (7970), Nvidia was able to introduce the new tier of GPU (Titan).


100% agree with all of this, and that the Titan should have been the flagship.
Quote:


> Originally Posted by *criminal*
> 
> We would have probably never seen the 780Ti had AMD not dropped a Titan killer(290x) at $550. *Nvidia instantly created tiers in the 7 series that we have not seen since and may never see again:*


Okay, for this section I will assume you are talking about the 780 Ti, as that's the way I am interpreting it. There were no instantly created tiers; they announced the 780 Ti 4 days after the 290X was announced. The cards were released on November 7th, a little over a week later. They did not say "hey, let's make a Ti", then announce the card, then start sales, all in an 11-day span.

We can say they knew about the 290X before that (I forget when the rumors started), but how far in advance did they know? A month, two months, even three? That is not enough time to design a GPU, cut down or not, box it, and ship it to suppliers in time to launch on the date it did.

People think the 780 Ti was not planned and was a reaction to the 290X; this is just false. That GPU was planned long before the 290X was even rumored. Honestly, as you have stated yourself, I think it was planned during the 6 series. There should have been a Ti in the 6 series; it just never happened.

As to "we won't see it again": well, that's also false. We saw it again in Maxwell, and even now NV is saying the top-end cards will have HBM2; seeing how that isn't even in production until this summer, that's a ways out. So you think the only card with HBM2 or GDDR5X will be the Titan P? There will be a Ti from here on out, and you're lying to yourself if you think otherwise. The milking age has begun and there is no going back.

Quote:


> Originally Posted by *criminal*
> 
> Tier 1: Titan X (Big Maxwell) = Titan Black(Big Kepler)
> Tier 2: 980Ti(Big Maxwell) = 780Ti(Big Kepler)
> (780 (Big Kepler) not represented in the 9 series)
> (Titan (Big Kepler) not represented in the 9 series)
> Tier 3: 980(small Maxwell) = 680(small Kepler)
> Tier 4: 970(small Maxwell)= 670(Small Kepler).
> 
> Yeah, X80 chip is no longer the flagship chip, but in past generations the 3 and 4 tier gpus (which are represented by the X80 and the X70 cards now) have both beaten the flagship of the previous generation.
> 
> Do you follow me now? You can't honestly be one of those that fall for Nvidia's marketing and not know that Kepler (same generation chip) was extended across 2 series (6 and 7). They kinda extended Fermi, but at least the GTX 470, 480, 570 and 580 were all large die chips and were the tier 1 and 2 of their respective generations.
> Exactly. Still don't know how people miss this!


I agree that in past generations the tier-3 card has beaten the flagship; the x80 is now the tier-3 card. The x50s (the old tier 4) were slower than the flagship, and that is exactly what I have been saying this entire time: faster than the x70 (tier 4), slower than the x80 (tier 3).

As to the rest, as I have said, the 600 series was a strange exception to the norm and we should just throw it out the window for this entire conversation. It has created a lot of problems in this area, due to the many discrepancies in how it was handled versus other generations.

E.g. Forceman comparing the 580 to the 780 Ti: I get what he is saying, they are equal chip sizes. However, the 1080 Ti will not have a chip the same size as the 980 Ti. They have done that before and it didn't work; the node shrink has to be associated with a die shrink, and the following architecture can restore the die size on the same node.

Intel follows the same rules, so why you think NV is immune to them is beyond me. Node shrink = die shrink, then a die increase on the same node the following year.

Let's take Skylake as an example. Skylake's die size was 122.4 mm² vs a DC die size of 177 mm². The dies are shrinking; the -E market gives the larger die, and a year later, and there is a reason for that. http://www.anandtech.com/show/9505/skylake-cpu-package-analysis

However, we can also look at the CPU side for another story. There was a big process jump that landed Sandy Bridge at the top of the map as one of the last good CPU ticks. Since Sandy we have seen barely any gains; even with the jump to 14nm Skylake, the gains are still small.

So Skylake is what, a 20% improvement, maybe 25%, i7 to i7 versus Ivy Bridge? And you expect Nvidia to pull a better increase than Intel? Ha, like that is going to happen. The issues Intel has been feeling with shrinks are about to hit NV; get ready for it.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> the node shrink has to be associated with a die shrink


It doesn't "have to be". It's just often done to widen margins between production costs:retail pricing (smaller die = more per wafer) and less spent on R&D since they don't have to engineer a new architecture.
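The margin effect is easy to ballpark. A rough sketch (purely illustrative die sizes, using the standard area-based dies-per-wafer approximation; it ignores defect density and scribe lines, so real yields differ):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Approximate usable dies on a circular wafer, with a correction
    term for partial dies lost along the curved edge."""
    r = wafer_diameter_mm / 2.0
    wafer_area = math.pi * r ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Illustrative sizes: a ~600 mm^2 big die vs a ~300 mm^2 mid-range die
big = dies_per_wafer(600)
small = dies_per_wafer(300)
print(big, small)  # 90 197 -- halving the die more than doubles candidates
```

So at the same wafer cost, the smaller die gives well over twice the chips per wafer, which is exactly the production-cost:retail-price margin being described.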


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> It doesn't "have to be". It's just often done to widen margins between production costs:retail pricing (smaller die = more per wafer) and less spent on R&D since they don't have to engineer a new architecture.


You are right, I should not have said "has to be". I should have said "should be".

It also goes deeper than that; they have done it once before, with the GTX 480, and that was a very big mistake that caused lots of issues for them due to heat and other factors. I would hope they will not make that mistake again, but you never know with them.

All that on top of what you said. They need to get familiar with the node before making a big chip. Also, seeing how we will most likely be stuck on 14nm for a while, I feel they will milk it.


----------



## magnek

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I just hope we get Maxwell type scaling for overclocks with respect to voltage. I'd rather not have to pay an extra $50-150 for a custom PCB card and wait additional weeks/months + additional weeks/months for custom blocks (that are more expensive) for said cards to come out (i.e. Kepler).


Errr wut? Maxwell doesn't voltage scale worth a crap unless you go subzero cooling, so I'd rather have a return to Kepler/Tahiti style voltage scaling, where with enough voltage (and balls) you could achieve up to almost a 50% overclock, which is simply not possible for Maxwell for 24/7 use.

Also the reason OG Titan had 1 SMX disabled was because 28nm yields were still too poor at the time the Tesla K20X was released (Nov 2012) to make full GK110 chips sustainable. The OG Titan was simply a K20X stripped of ECC and other Tesla specific features (and probably worse build quality).


----------



## Cyber Locc

Quote:


> Originally Posted by *magnek*
> 
> Also the reason OG Titan had 1 SMX disabled was because 28nm yields were still too poor at the time the Tesla K20X was released (Nov 2012) to make full GK110 chips sustainable. The OG Titan was simply a K20X stripped of ECC and other Tesla specific features (and probably worse build quality).


We should use that as the new slogan for Titans: every time a Tesla dies, a Titan is born!


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> You are right I should not have said have to be. I should have said should be.
> 
> It also goes deeper than that, they have done it once before. With the gtx480 and that was a very big mistake that ensued lots of issues for them, due to heat and other factors. They will not make that mistake again I would hope, but you never know with them.
> 
> All that on top of what you said. They need to get familiar with the node before making a big chip. Also seeing how we will be stuck on 14nm for awhile most likely I feel they will milk it.


Idk what they think sometimes. Intel does it right: production on the tiny dies first before getting into bigger and bigger dies. The only annoyance is that the big-die CPUs come so late. Like come on now, Skylake was released 6 months ago and we're still waiting on Broadwell-E -.- We'll probably see Kaby Lake before BW-E (only since I'm a pessimist).


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> As to the we wont see it again, Well thats alos false. We seen it again in Maxwell, and even now NV is saying the top end cards will have HBM2, well seeeing how that isnt even in production until this summer, thats a ways out. So you think the only card with HBM2 or GDDR5x will be the Titan P? There will be a TI from here on out and your lying to yourself if you think otherwise, the Milking age has begun and there is no going back.
> I agree in past generations the 3 tier card has beaten the flagship, the x80 is now the 3rd tier card. The x50s (old 4th tier) was slower than the flagship, and that is exactly what I have been saying this entire time. Faster than the X70(4th tier) slower than the X80(3rd tier).


I am not saying we won't see a Ti again, I am saying we probably won't see Kepler-type tiering again. You know, FOUR big-die chips in the same generation. Nvidia was able to produce the Titan (a little cut down), the 780 (even more cut down), the 780 Ti (full chip, half the VRAM) and the Titan Black (full chip, full VRAM). That's what I was referring to.

Like someone else said, if you really want to get technical: 570 = 980Ti = Titan, 780 = maybe a GTX 560 Ti 448 Core Limited Edition (lol at the name), 580 = Titan Black/780Ti = Titan X. Nvidia cut Kepler so many ways I think it confused everyone.

And just to clarify, I think Nvidia had already produced the 780Ti/Titan Black and sat on them waiting to see what AMD was bringing to the table. The Titan Black was probably being released anyway, because once it released, you couldn't find a regular Titan anymore. Heck, some people bought both the Titan and then the Titan Black. Can't hate on Nvidia for cashing in where they can!









Okay. I just got confused because I thought you did say at one point the 970 was a 5th tier gpu at some point. That's what I was just trying to correct. Anyway, I think we do agree on some things, so I will leave it at that.

Sorry if this is beating a dead horse, but this is all for fun for me. I like talking about old gpu's as well as new ones.


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Idk what they think sometimes. Intel does it right. Production on the tiny dies first before getting into bigger and bigger dies. Only annoyance is you end up with the big-die CPUs coming so late. Like come on now, Skylake was released 6 months ago and we're still waiting on Broadwell-E -.- Prob see Kaby Lake before BW-E (only since I'm a pessimist).


It does suck for us big-chip users, I agree with that 100%, but at least we don't have GTX 480 CPUs.








Quote:


> Originally Posted by *criminal*
> 
> I am not saying we won't see a Ti again, I am saying we probably won't see a Kepler type tier again. You know the FOUR big die chips in the same generation. Nvidia was able to produce the Titan (a little cut down), 780 (even more cut down), a 780Ti (full chip, half vram) and the Titan Black (full chip, full vram). That's what I was referring to. Like someone else says, if you really want to get technical, the 470 = 980Ti = Titan, 780 = maybe a GTX 560 Ti 448 Core Limited Edition (lol at the name), 580 = Titan Black/780Ti = Titan X. Nvidia cut Kepler so many ways I think it confused everyone. And just to clarify, I think Nvidia had already produced 780Ti/Titan Black and sat on it waiting to see what AMD was bringing to the table. Titan Black was probably being released anyway, because once it released, you couldn't find a regular Titan anymore. Heck, some people bought both the Titan and then the Titan Black. Can't hate on Nvidia for cash in where they can!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Okay. I just got confused because I thought you did say at one point the 970 was a 5th tier gpu at some point. That's what I was just trying to correct. Anyway, I think we do agree on some things, so I will leave it at that.
> 
> Sorry if this is beating a dead horse, but this is all for fun for me. I like talking about old gpu's as well as new ones.


Yep, I think we are pretty much on the same page, just miscommunication issues.

And I agree with everything you said. Kepler is the poster child of mistakes; that's why I said we should not mention it. It should just be a dark day in GPU history that no one talks about. Kind of like the GTX 480.


----------



## magnek

Quote:


> Originally Posted by *Cyber Locc*
> 
> It does suck for us big chip users I agree with that 100% but at least we dont have GTX 480 cpus


So what do you call Prescott?









Ok fine, Nehalem then?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *magnek*
> 
> Errr wut? Maxwell doesn't voltage scale worth a crap unless you go subzero cooling, so I'd rather have a return to Kepler/Tahiti style voltage scaling, where with enough voltage (and balls) you could achieve up to almost a 50% overclock, which is simply not possible for Maxwell for 24/7 use.
> 
> Also the reason OG Titan had 1 SMX disabled was because 28nm yields were still too poor at the time the Tesla K20X was released (Nov 2012) to make full GK110 chips sustainable. The OG Titan was simply a K20X stripped of ECC and other Tesla specific features (and probably worse build quality).


That's exactly what I said. Why would you want to return to Kepler/Tahiti style? You want to wait an extra 2-3 months for the Strix/Matrix/Classy/KPE/Lightning/HOF to come out and then 2-3 more months for the blocks to come out AND have to pay more money for the card AND more money for the block? Not me. I'd rather have the reference card from day 1 with the reference block coming out shortly after so I can laugh at people who wait for the more expensive custom versions and 6 months later, have the same or worse OCs than me.


----------



## Cyber Locc

Quote:


> Originally Posted by *magnek*
> 
> So what do you call Prescott?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Ok fine, Nehalem then?


Same as the 480: dark days in CPU history that no one talks about, that just get swept under the rug. Along with Bulldozer, the 3.5+0.5GB of VRAM, the GTX 480, and all the other stupid things these companies have done.

Okay, I will reiterate: that is a mistake that Intel learned from and hasn't made since, and I would hope that NV has learned from their run of it as well.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Same as the 480, Dark Days in CPU history that no one talks about that just gets swept under the rug. With Bulldozer, 3.5+5gb of Vram, GTX 480s, and all the other stupid things these companies have done.


Hey, I loved my 470. It was a beast of a card once overclocked and with mature drivers. It did heat up a room though!


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Hey, I loved my 470. It was a beast of a card once overclocked and mature drivers. It did heat up a room though!


Yep, you and AMD both loved the 400 series.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *criminal*
> 
> I am not saying we won't see a Ti again, I am saying we probably won't see a Kepler type tier again. You know the FOUR big die chips in the same generation. Nvidia was able to produce the Titan (a little cut down), 780 (even more cut down), a 780Ti (full chip, half vram) and the Titan Black (full chip, full vram). That's what I was referring to. Like someone else says, if you really want to get technical, the 570 = 980Ti = Titan, 780 = maybe a GTX 560 Ti 448 Core Limited Edition (lol at the name), 580 = Titan Black/780Ti = Titan X. Nvidia cut Kepler so many ways I think it confused everyone. And just to clarify, I think Nvidia had already produced 780Ti/Titan Black and sat on it waiting to see what AMD was bringing to the table. Titan Black was probably being released anyway, because once it released, you couldn't find a regular Titan anymore. Heck, some people bought both the Titan and then the Titan Black. Can't hate on Nvidia for cash in where they can!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Okay. I just got confused because I thought you did say at one point the 970 was a 5th tier gpu at some point. That's what I was just trying to correct. Anyway, I think we do agree on some things, so I will leave it at that.
> 
> Sorry if this is beating a dead horse, but this is all for fun for me. I like talking about old gpu's as well as new ones.


I don't believe having half the VRAM has anything to do with the GPU at all, just the fact that only half the VRAM is soldered to the board. If you look at the 980 Ti reference cards, since it's the same GPU as the Titan X, just cut down, they reused the board design and you end up with the back side lacking VRAM chips.



I think a better way to word it would be: 780 Ti, full chip, gimped FP64 performance and Titan Black, full chip, full FP64 performance.

Edit: Although I guess there's a possibility of a memory controller defect that doesn't allow a portion of cache to communicate with a portion of DRAM.


----------



## magnek

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> That's exactly what I said. Why would you want to return to Kepler/Tahiti style? You want to wait an extra 2-3 months for the Strix/Matrix/Classy/KPE/Lightning/HOF to come out and then 2-3 more months for the blocks to come out AND have to pay more money for the card AND more money for the block? Not me. I'd rather have the reference card from day 1 with the reference block coming out shortly after so I can laugh at people who wait for the more expensive custom versions and 6 months later, have the same or worse OCs than me.


I'm saying I don't want my card's OC potential to be largely determined by the ASIC lottery is all. With Kepler/Tahiti the silicon lottery still existed but could largely be mitigated by applying enough voltage since they scaled well. With Maxwell, you're kinda outta luck if you're dealt a bad hand.

Plus I always avoid nVidia ref PCBs since most of the time they're "just acceptable", so I wait for custom cards anyway. But to each his own.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *magnek*
> 
> I'm saying I don't want my card's OC potential to be largely determined by the ASIC lottery is all. With Kepler/Tahiti the silicon lottery still existed but could largely be mitigated by applying enough voltage since they scaled well. With Maxwell, you're kinda outta luck if you're dealt a bad hand.
> 
> Plus I always avoid nVidia ref PCBs since most of the time they're "just acceptable", so I wait for custom cards anyway. But to each his own.


Well, I do







I agree with reference PCBs being "just acceptable" when voltage scaling exists but, for example, with Maxwell, it doesn't scale at ambient temps so for air or water (un-chilled), a custom PCB won't give an advantage over a reference PCB. I got duped with the 980 Kingpin (two of them!) which is why I just want OC potential to be based on ASIC quality and have nothing to do with the components on the PCB. Plus, after the 980 Ti launched, custom cards barely sold for anything more than reference. Forget about blocks too, custom ones are much more difficult to sell and will net you probably the same as if you were selling a reference block. I had to sell both of my KPEs with blocks for like $475 IIRC since nobody wanted to pay extra for a KPE now that the 980 Ti was out.


----------



## magnek

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Well, I do
> 
> 
> 
> 
> 
> 
> 
> I agree with reference PCBs being "just acceptable" when voltage scaling exists but, for example, with Maxwell, it doesn't scale at ambient temps so for air or water (un-chilled), a custom PCB won't give an advantage over a reference PCB. I got duped with the 980 Kingpin (two of them!) which is why I just want OC potential to be based on ASIC quality and have nothing to do with the components on the PCB. Plus, after the 980 Ti launched, custom cards barely sold for anything more than reference. Forget about blocks too, custom ones are much more difficult to sell and will net you probably the same as if you were selling a reference block. I had to sell both of my KPEs with blocks for like $475 IIRC since nobody wanted to pay extra for a KPE now that the 980 Ti was out.


I always just assume the waterblock is a sunk cost and thus write it off the moment I break the box seal. Plus in the grand scheme of things the cost of the waterblock is minuscule.

But yeah apart from Kingpin/Lightning/Matrix cards, most other custom cards come out pretty fast, and for the most part will do just fine even if you pushed 1.4V through them*, which is why I don't mind as much. But I understand where you're coming from, especially if you take waiting+resale value into account.

*my 980 Ti 6G Gaming uses 8 phases of Sinopower SM7320 mosfets, and each phase is capable of handling at least 60A, so I'd sooner run out of cooling than amps lol
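That "run out of cooling before amps" claim is easy to sanity-check with napkin math. A sketch using the numbers from this post plus an assumed 1.4V core voltage mentioned earlier (illustrative only; real VRM limits depend on temperature derating, phase balance, and transients):

```python
# Back-of-envelope VRM headroom check for an 8-phase board.
phases = 8
amps_per_phase = 60      # per-phase rating cited for the SM7320 mosfets
vcore = 1.4              # assumed aggressive overvolt from the discussion

max_core_watts = phases * amps_per_phase * vcore  # theoretical delivery capacity
observed_card_watts = 400                         # worst case (BF4) reported for OC'd GM200

print(max_core_watts)  # 672.0 -- far beyond any sustained draw ambient cooling can handle
```

Even at 1.4V, the VRM could in theory deliver ~670W, while the whole card peaks around 400W, so cooling gives out first.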


----------



## headd

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yeah that's why I said the 980 Ti is like a Titan which is 2688 cores vs 780 Ti/Titan Black at 2880 cores. Compared to 2816 cores for the 980 Ti and 3072 for the Titan X. And that you were right, the 780 isn't really represented in the 900 series since there's just 1 cut-down die vs 2. I'm sure they could have had a further cut-down ~2400-2500 core GPU which SHOULD have been the 980 at $550 instead of the piece of crap 980 that we got (that really should have been a 970 or at least called "970 Ti").


I think NV makes more money selling the overpriced GTX 980 for $550 than it would selling a cut-down GM200 with 2400-2500 SPs for $550... That's why we didn't get a GTX 780 Maxwell edition.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *magnek*
> 
> I always just assume the waterblock is a sunk cost and thus write it off the moment I break the box seal. Plus in the grand scheme of things the cost of the waterblock is minuscule.
> 
> But yeah apart from Kingpin/Lightning/Matrix cards, most other custom cards come out pretty fast, and for the most part will do just fine even if you pushed 1.4V through them*, which is why I don't mind as much. But I understand where you're coming from, especially if you take waiting+resale value into account.
> 
> *my 980 Ti 6G Gaming uses 8 phases of Sinopower SM7320 mosfets, and each phase is capable of handling at least 60A, so I'd sooner run out of cooling than amps lol


I always see it as "loses 50% value instantly" with watercooling parts. Wish it weren't so.

Waiting just sucks. If you figure a card gets replaced every year, which is how I see it with Nvidia, then waiting 2 or 3 months out of the year is pretty big. Especially since the resale value of custom cards + custom blocks gets nuked when new cards are out. I had the KPEs with blocks installed for maybe a month before the 980 Ti dropped, and the value just plummeted. I should have just stuck with reference cards instead of eating $600 in the month between getting the cards and selling them.
Quote:


> Originally Posted by *headd*
> 
> I think Nv make more money selling overpriced crap GTX980 for 550USD than selling cutdown gm200 with 2400-2500SP for 550USD...That why we didnt get GTX780 maxwell edition.


Yeah but like Magnek said:
Quote:


> Originally Posted by *Magnek*
> Also the reason OG Titan had 1 SMX disabled was because 28nm yields were still too poor at the time the Tesla K20X was released (Nov 2012) to make full GK110 chips sustainable. The OG Titan was simply a K20X stripped of ECC and other Tesla specific features (and probably worse build quality).


So who knows if there were really enough dies that couldn't make the cut (ha ha) for a 980 Ti.


----------



## marik123

I wonder how well it overclocks, and whether it would be enough to push 4K @ 60fps?


----------



## ozlay

Quote:


> Originally Posted by *criminal*
> 
> Hey, I loved my 470. It was a beast of a card once overclocked and mature drivers. It did heat up a room though!


I still have a 480 to heat up my house during them cold winter nights


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *ozlay*
> 
> I still have a 480 to heat up my house during them cold winter nights


Waste of potential. Should watercool it and use the steam to power some turbines that power your entire house + electric heaters.


----------



## magnek

Quote:


> Originally Posted by *ozlay*
> 
> I still have a 480 to heat up my house during them cold winter nights


You know we make fun of Fermi Thermi and all, but in terms of raw heat output, GM200 easily matches and even exceeds Fermi when overclocked. For example, with the overclock I'm currently running, I'm seeing about 330-350W in most games, with BF4 in particular going up to 400W. Of course perf/watt is exceptional with Maxwell due to the excellent performance, but it still outputs a HUGE amount of heat.


----------



## F3ERS 2 ASH3S

Quote:


> Originally Posted by *marik123*
> 
> I wonder how well does it overclock and would it be enough to push 4k @ 60fps?


Single card.. doubt it


----------



## bigjdubb

Quote:


> Originally Posted by *ozlay*
> 
> I still have a 480 to heat up my house during them cold winter nights


Quote:


> Originally Posted by *magnek*
> 
> You know we make fun of Fermi Thermi and all, but in terms of raw heat output, GM200 is easily comparable to and even exceeds that of Fermi when overclocked. For example my current overclock I'm running, I'm seeing about 330-350W in most games, with BF4 in particular going up to 400W. Of course perf/watt is exceptional with Maxwell due to the excellent performance, but it still outputs a HUGE amount of heat.


Yup. My 970s forced me to ventilate the underside of my desk. They put out so much heat that I have to have fans under my desk to circulate out the hot air that gets trapped; it's like 20 degrees cooler above my desk than below it. Once you push Maxwell hard, it turns into a mini bake oven.


----------



## Alwrath

Man, I love the fact that I only spent $260 on a Radeon 290 for the 28nm generation. It replaced the GTX 580 I'd had for 5 years lol. Paid $500 for that GTX 580, but it served me well; now my bro has it and he's playing Fallout 4 great on it @ 1080p.

I feel sorry for everyone who spends so much money constantly upgrading; just be happy with what you have and make your hardware last. People should get what they need, not always what they want. Too much $$$$$ wasted.


----------



## ZealotKi11er

Quote:


> Originally Posted by *bigjdubb*
> 
> Yup. My 970's forced me to ventilate the underside of my desk. They put out so much heat that I have to have fans under my desk to circulate out the hot air that gets trapped. It's like 20 degrees cooler above my desk than below it. Once you push Maxwell hard it turns into a mini bake oven.


It's like that with all cards; CFX and SLI just make things much worse. I don't have much of a problem since I water cool, and for 9 out of 12 months of the year I don't need cooling for the room. If I leave my 2 x 290X + 7970 at full load, my room will get super hot if there is no open window. I like the heat though, since it's 0C outside.


----------



## rluker5

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Its with all cards like that. CFX and SLI just makes things much worse. I do not have much problem since I water cool and for 9 out of 12 months of the year I do not need cooling for the room. If I leave my 2 x 290X + 7970 full load my Room will get super hot if there is no open Window. I like the heat though since its 0C outside.


More than a few times I've sat down to game, lost track of time, and suddenly realized "Geez, it's roasting in here!" and opened my door. In the summer I put a tower fan there to blow in the AC (radiator heat, no circulation).


----------



## bigjdubb

Quote:


> Originally Posted by *Alwrath*
> 
> I feel sorry for everyone who spends so much money constantly upgrading, just be happy with what you have and make your hardware last. People should get what they need, not always what they want, too much $$$$$ wasted.


No need to feel sorry for anyone. Many people have what they need and are able to get what they want. As long as you aren't financing your hardware on credit then you aren't really doing any harm to yourself by upgrading whenever you feel like it. Also, frequent upgrades keep the used market full of recent hardware for all of those who can't afford full price.
Quote:


> Originally Posted by *ZealotKi11er*
> 
> Its with all cards like that. CFX and SLI just makes things much worse. I do not have much problem since I water cool and for 9 out of 12 months of the year I do not need cooling for the room. If I leave my 2 x 290X + 7970 full load my Room will get super hot if there is no open Window. I like the heat though since its 0C outside.


Well, I live in a warm place where temps below freezing are rare; the average winter low is in the 40s (F) and the other 10 months of the year are hot and humid. I thought about getting a little portable A/C for under my desk so my legs aren't at 80-plus degrees while my head is sitting in 60-degree air. It is compounded by SLI for sure, but my cards are pumping out a good amount of heat at 1500 MHz and up. At my max overclock (1570 or so), the 3-fan cooler on my G1s can barely keep the cards at 85 degrees, SLI or not.


----------



## magnek

Not surprised, since when I had my 970 G1s in SLI clocked to 1570/8000, power draw was about 200W PER CARD.


----------



## kuruptx

With us still holding onto our GTX 680s, would you guys recommend a 980 now, or waiting?


----------



## SuperZan

Quote:


> Originally Posted by *kuruptx*
> 
> With us still holding onto to our gtx 680, Would you guys recommend a 980 now or wait?


If you're at 1920x1080, you could wait for Volta/Vega if you want big-die high-end perf. You could also get Pascal's early offering, which will likely be a 980 Ti-ish card performance-wise but with increased efficiency. Kind of depends on your setup, what you play, and what you are going to spend.


----------



## ZealotKi11er

Quote:


> Originally Posted by *bigjdubb*
> 
> No need to feel sorry for anyone. Many people have what they need and are able to get what they want. As long as you aren't financing your hardware on credit then you aren't really doing any harm to yourself by upgrading whenever you feel like it. Also, frequent upgrades keep the used market full of recent hardware for all of those who can't afford full price.
> Well I live in a warm place where temps below freezing are rare, average winter low temp is in the 40's (F) and the other 10 months of the year are hot and humid. I thought about getting a little portable a/c for under my desk so my legs aren't 80 plus degrees while my head is sitting in 60 degree temps. It is compounded by SLI for sure but my cards are pumping out a good amount of heat at 1500 mhz and up. At my max overclock (1570 or so) the 3 fan cooler on my G1's can barely keep the cards at 85 degrees in SLi or not.


You have to run reference cards if you SLI. Not sure why people got 2 x GTX 970s; I would have sold one and bought a GTX 980 Ti. For the extra $100 over the 970s, accounting for resale cost, it runs much cooler and works in every game, with no 3.5GB issue.


----------



## cjc75

Quote:


> Originally Posted by *iLeakStuff*
> 
> "GTX 1070 slightly faster than GTX 970" LOL.
> Not a chance in hell. That 20k GPU score is from a GTX 1070 if its even real. There is no way that is a GTX 1080.
> 
> You are absolutely nuts if you think Nvidia launch a GTX 1070 that beats 970 by 10% and a GTX 1080 that beats a GTX 980Ti by 10%.


Ditto...

These benchmarks are absolutely pathetic...

The 970 still barely beats my 770....

With all the hype around Pascal, I'm expecting these things to absolutely trounce their predecessors... I'm sick of this 5%-10% improvement with each new generation.

I want to see 25%-30% improvement gains; otherwise it's just not worth investing $500-$600 in a video card that only gives me 10% improvement over my $300 video card.

There is just no justification in the price-vs-improvement ratio...


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Well, I do
> 
> 
> 
> 
> 
> 
> 
> I agree with reference PCBs being "just acceptable" when voltage scaling exists but, for example, with Maxwell, it doesn't scale at ambient temps so for air or water (un-chilled), a custom PCB won't give an advantage over a reference PCB. I got duped with the 980 Kingpin (two of them!) which is why I just want OC potential to be based on ASIC quality and have nothing to do with the components on the PCB. Plus, after the 980 Ti launched, custom cards barely sold for anything more than reference. Forget about blocks too, custom ones are much more difficult to sell and will net you probably the same as if you were selling a reference block. I had to sell both of my KPEs with blocks for like $475 IIRC since nobody wanted to pay extra for a KPE now that the 980 Ti was out.


I think the best option is reference-based with modification, like EVGA's SCs etc. I know they don't change much, but they change enough to make a small difference and still use reference blocks. In the case of the 980 Ti, I know the SC upgrades the caps and resistors and makes a few other small changes. So it's enough to make the card slightly better, but not so much as to hamper blocks.







Quote:


> Originally Posted by *cjc75*
> 
> Ditto...
> 
> These benchmarks are absolutely pathetic...
> 
> The 970 still barely beats my 770....
> 
> All the hype around Pascal, I'm expecting these things to absolutely trounce all their predecessors... I'm sick of this 5% - 10% improvement gain with each new generation.
> 
> I want to be seeing 25% - 30% in improvement gains; otherwise its just not worth investing $500 - $600 in a video card that only gives me 10% improvement over my $300 video card.
> 
> There is just no comparison/justification with the price vs improvement ratio....


There is, if you look at the improvement ratio that is being worked on. The price-to-P/PW and price-to-P/PIN ratios will go way up. Enthusiasts (us, the minority) want moarrr power; everyone else in the world realizes they don't need more power. With the games that are out right now, and the ones coming out in the next year or two, we don't need more power.

What would a 30% gain over a 980 Ti get you? It will get you some big e-peen numbers in benches, and if you play over 60Hz it will net you some help there. That is literally it: a 970 will max anything at 1080p, a 980 Ti will max anything at 1440p, both at 60fps.

So why do you "need" more performance? You don't; you want it. However, only us enthusiasts want it; the rest of the world wants 980 Tis that use half the wattage and are half the size. That is what they will get.

Also, since you quoted iLeakStuff with this, look at the date of that reply, then scroll on through the thread as he changes his tune.








Also, I think we will see a 30% improvement; the problem is a lot of people seem to have unrealistic views of what a 30% improvement is.

A 30% improvement is:

1070 is 30% faster than a 970
1080 is 30% faster than a 980
1080ti is 30% faster than a 980ti
Titan P is 30% faster than a Titan X

I definitely think that will happen; however, that does not mean that a 1080 will be 30% faster than a 980 Ti. Why? Because a 980 Ti is already 25% faster than a 980. That means if we see 30% gains, as we likely will, the 1080 will be ~5% faster than a 980 Ti. That is realistic and that is what we will likely see.

Yet we see claims of the 1070 beating the 980 Ti. For that to happen, the 1070 would have to be 50% faster than the 970, and that is if we use reference cards on both counts. To beat a Strix 980 Ti, the 1070 would have to be 75% faster than a 970; that simply is not going to happen.
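The percentage arithmetic above can be sanity-checked in a few lines. To be clear, the relative-performance figures here are this post's assumptions (980 Ti ≈ +25% over a 980, a 970 at ~0.83x a 980), not benchmarks:

```python
# Quick sanity check of the generational-uplift arithmetic in the post.
# All relative-performance figures are the thread's assumptions, not benchmarks.

def uplift(new, old):
    """Percent improvement of `new` over `old`."""
    return (new / old - 1) * 100

# Normalize to a stock GTX 980 = 1.00
perf_980   = 1.00
perf_980ti = 1.25              # assumed: 980 Ti is ~25% faster than a 980
perf_1080  = perf_980 * 1.30   # assumed: 30% generational gain for the 1080

print(f"1080 vs 980 Ti: {uplift(perf_1080, perf_980ti):+.1f}%")  # ~ +4%

# For the 1070-vs-980 Ti claim: assume a 970 is ~0.83x a 980
perf_970 = 0.83
print(f"Uplift a 1070 would need over a 970 to match a 980 Ti: "
      f"{uplift(perf_980ti, perf_970):.0f}%")  # ~ 50%
```

So a uniform 30% per-tier gain leaves the 1080 only a few percent ahead of a stock 980 Ti, exactly as argued above.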


----------



## ZealotKi11er

Quote:


> Originally Posted by *Cyber Locc*
> 
> I think the best option is reference-based with modification, like EVGA's SCs etc. I know they don't change much, but they change enough to make a small difference and still use reference blocks. In the case of the 980 Ti, I know the SC upgrades the caps and resistors and makes a few other small changes. So it's enough to make the card slightly better, but not so much as to hamper blocks.
> 
> 
> 
> 
> 
> 
> 
> .
> There is if you look at the improvement ratio that is being worked on. The Price to P/PW, and Price to P/PIN ratios will go way up. Enthusiasts (Us, The minority) we want moarrr power, everyone else in the world realizes they dont need more power. With the games that are out right now and the ones coming out in the next year or 2 we dont need more power.
> 
> What would 30% gain over a 980ti get you? It will get you some Big Epeen numbers in benches, and if you play over 60hz it will net you some help there. That is literally It, a 970 will max anything in 1080p, a 980ti will max anything in 1440p both of those at 60fps.
> 
> So why do you "Need" More performance? You dont, you want it, however only Us (enthusiasts) want it, the rest of the world wants 980tis that use half the wattage and are half the size. That is what they will get.
> 
> Also as you quoted Ileakstuff with this, look at the date of that reply then scroll on through the thread as he changes his tune
> 
> 
> 
> 
> 
> 
> 
> .
> 
> also I think we will see a 30% improvement, the problem is a lot of people seem to have unrealistic views on what a 30% improvement is.
> 
> A 30% improvement is,
> 
> 1070 is 30% faster than a 970
> 1080 is 30% faster than a 980
> 1080ti is 30% faster than a 980ti
> Titan P is 30% faster than a Titan X
> 
> I definitely think that will happen; however, that does not mean that a 1080 will be 30% faster than a 980 Ti. Why? Because a 980 Ti is already 25% faster than a 980. That means if we see 30% gains, as we likely will, the 1080 will be ~5% faster than a 980 Ti. That is realistic and that is what we will likely see.
> 
> Yet we see claims of the 1070 beating the 980 Ti. For that to happen, the 1070 would have to be 50% faster than the 970, and that is if we use reference cards on both counts. To beat a Strix 980 Ti, the 1070 would have to be 75% faster than a 970; that simply is not going to happen.


30%? Way too low.


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> 30%? Way too low.


It really isn't too low. We should see ~100% gains from the node shrink; however, that has to be stretched across how many generations? Also, the die size will be smaller and compute is being added back. 30% is, at best, what we will see.

Here is the last comparable die shrink we have seen:

https://tpucdn.com/reviews/Zotac/GeForce_9800_GTX/images/perfrel.gif

The 480 used the same size die; doing that is a horrid idea, and they learned it won't happen again.

The 680 wasn't the same chip.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Cyber Locc*
> 
> It really isn't too low. We should see ~100% gains from the node shrink; however, that has to be stretched across how many generations? Also, the die size will be smaller and compute is being added back. 30% is, at best, what we will see.


Going from GTX Titan OG to Titan X is like 50% in new games.


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Going from GTX Titan OG to Titan X is like 50% in new games.


Ya, I will go with that; however, as we discussed earlier, NV considers the OG Titan a 700-series card. I would consider it the 600-series flagship, as it was the first big chip after the shrink.

Anyway, the Titan came after the 600 series and before the 700 series. The Titan X is three generations later; I fully expect the Titan V to be over 50% faster than the Titan X.

The Titan was just post die shrink; the Titan X is post die shrink plus two refreshes. Big difference from just a die shrink.

I keep seeing this logic thrown around, and it makes sense if we are comparing Volta to Maxwell. Pascal will not see all those gains; it will show half of that.

Also, I just checked: it's ~60% faster than the OG Titan. It is 44% faster than the 780 Ti, and how much faster than a Ti was the Titan Black? If it was around 10%, that would mean OG Titan to Titan Black was ~30%, and Titan Black to Titan X was ~30%.

Also, with the changes you listed, the die sizes are the same, as are the node sizes. Now with Pascal we will have a smaller node size but also a smaller die size.

The nm drop should give us 100% gains, as they have stated themselves; however, that is on a mature architecture at the same die size, which we won't see for 2 or 3 years. People latch onto "will double gaming performance", and it will, across the whole node, not with Pascal alone; there are still 1-2 more generations on that node that also need to show some of those gains.

If they gave us 100% gains right now with Pascal, they would have to drop to 10nm for Volta or give us ~5% gains then. The former is impossible and the latter means less profit, so they will do neither.
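One detail worth keeping straight in all of this: percentage steps across generations multiply rather than add, which is why per-step and end-to-end numbers look inconsistent at first glance. A quick sketch (the figures are this thread's rough estimates, not measured benchmarks):

```python
# Generational gains compound multiplicatively, which is why two ~30% steps
# overshoot a ~60% end-to-end figure. Numbers here are the thread's rough
# estimates, not measured benchmarks.

def chain(*gains_pct):
    """Combine successive percentage uplifts into one end-to-end uplift."""
    total = 1.0
    for g in gains_pct:
        total *= 1 + g / 100
    return round((total - 1) * 100, 1)

print(chain(30, 30))                      # 69.0 -> two 30% steps give ~69% overall
print(round((1.60 ** 0.5 - 1) * 100, 1))  # 26.5 -> the equal per-step gain implied by +60% overall
```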


----------



## ZealotKi11er

Quote:


> Originally Posted by *Cyber Locc*
> 
> Ya, I will go with that; however, as we discussed earlier, NV considers the OG Titan a 700-series card. I would consider it the 600-series flagship, as it was the first big chip after the shrink.
> 
> Anyway, the Titan came after the 600 series and before the 700 series. The Titan X is three generations later; I fully expect the Titan V to be over 50% faster than the Titan X.
> 
> The Titan was just post die shrink; the Titan X is post die shrink plus two refreshes. Big difference from just a die shrink.


The 1080 Ti will be at least as fast, hence at least 50% faster than the 980 Ti. I know you just got a 980 Ti and are trying to feel better, but since I did not get one, I will make anyone that has one not feel special. The power of the Internet. You are never a winner. Go play some games.


----------



## jdstock76

Quote:


> Originally Posted by *iLeakStuff*
> 
> Holy crap, way to take investigation to the next level.
> Well done guys
> 
> 
> 
> 
> 
> 
> 
> 
> 
> You guys got a degree in photoshop or something?


No, all you have to do is go to Nvidia's partners page and download all their graphics.


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> 1080 Ti is be as fast hence at least 50% faster 980 Ti. I know you just got a 980 Ti and trying to feel better but since I did not get it I will make anyone that has one not feel special. The power of Internet. You are never a winner. Go play same games.


LOL, I already thought what I thought before I got a Ti; that is why I did it, it didn't seem worth waiting.

You just mad because you know you want one so baddd. It's killing you, just do it; it will be a year before we see Tis again, so just grab one and enjoy all its glory.

Seriously though, I am sooo glad I did; it's miles better than 290 CF. Just pull the trigger and enjoy the year with the beauty. I was going to get a Titan X, but the DP killed it, so I am going to grab one of the new ones.


----------



## TranquilTempest

Quote:


> Originally Posted by *Cyber Locc*
> 
> It really isn't too low. We should see ~100% gains from the node shrink; however, that has to be stretched across how many generations? Also, the die size will be smaller and compute is being added back. 30% is, at best, what we will see.


Is the high precision being added back to all dies, or just GP100?


----------



## Cyber Locc

Quote:


> Originally Posted by *TranquilTempest*
> 
> Is the high precision being added back to all dies, or just GP100?


Well, to understand that, you have to understand how the system works.

When Nvidia makes a GPU chip, it is not made with the intention of being a GeForce card. No chip is made with the intent of being, for instance, a GTX 1080 (or whatever it's called). The chips are designed to be Tesla GPUs, all of them.

If they can't make the Tesla cut, for missing features or whatever, they have features cut and become Quadros. If they do not make the Quadro cut, they have features cut and become GeForce cards. So when they add or take away DP performance, they do it on all chips, and Tesla has a card for every chip made; even the low-end chips have a Tesla card.

So when you buy a GeForce card, that is because the chip failed two rounds of QA (sometimes only one round, depending on what doesn't work).

For instance, ECC memory support: both Tesla and Quadro need this. So if a chip cannot support ECC, it instantly gets cut into a GeForce card.

Intel does the same thing. They do not make chips that are intended to be i3s; they make Xeons that are 16 cores (or whatever the highest core count is for that platform). If one fails to run on 16 cores, it gets dropped to 12; if it fails again, it gets dropped to 10, then 8, then 6, then 4, all the way down to 2. Every cut is a step down, so if a chip cannot operate at 16 cores but can at 10, they cut cache etc.

This is also true for cache: if it can use 16 cores but can't support the cache a 16-core has, yet can support the cache of an i7, it instantly gets cut down to an i7.

It is just like the expression: you can always take away, you cannot add back. They make the highest chip possible, then reduce from there. Sometimes, as in the case of AMD's 290s, demand peaks higher than supply; in that case, they take slightly better chips and cut them down purposely (even though they were better chips) to meet demand.

The same goes for design. When NV designs an architecture, Tesla is on their mind. However, Tesla GPUs still deliver gaming performance in theory, as they work the way any other GPU would, just more robust and with more features. What we are left with is the trickle-down performance that comes from Tesla's needs; they do not say "we are going to design this architecture for gaming" (or for what helps gamers solely).

In the case of the 600, 700, and 900 series, they gave boosts to their chips that helped gamers a lot. This was just the way the cards fell; however, in doing so they neglected other aspects of Tesla's needs. While Teslas saw gains in some places, others stayed stagnant, or didn't increase as much as they could or should have.

Their industrial clients are not happy about this, and upgraded less due to the lack of gains. This time, NV is solely focused on those people; this is easily seen in all the talks they have given about Pascal: NVLink, deep learning, etc. This is Tesla's time for a serious leap in performance, and the gains we see will be trickle-down like always.

However, this time the trickle isn't very useful for gaming; some of it isn't at all. It is very useful for industry and their new car computers. We got our few years to push our industry; it's industry's turn.

So anyone who throws a fit when we don't see massive gains in gaming needs to realize that we are not the only ones in the world, and industry deserves its upgrades as well, whether that leaves our industry stagnant for a few years or not.
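The cascade described above can be sketched as a toy classifier. The tier names are real, but the pass/fail fields and the order of checks here are purely illustrative; the actual binning criteria are NVIDIA's internal QA, not public:

```python
# Toy sketch of the die-binning cascade described above. The fields
# (ecc_ok, fp64_ok, all_units_ok) are hypothetical stand-ins for real,
# undisclosed QA criteria.

def bin_gpu(die):
    """Classify a tested die by the strictest product tier it passes."""
    if die["ecc_ok"] and die["fp64_ok"] and die["all_units_ok"]:
        return "Tesla"
    if die["ecc_ok"] and die["all_units_ok"]:
        return "Quadro"
    return "GeForce"  # failed the pro-tier QA; faulty units may be fused off

samples = [
    {"ecc_ok": True,  "fp64_ok": True,  "all_units_ok": True},
    {"ecc_ok": True,  "fp64_ok": False, "all_units_ok": True},
    {"ecc_ok": False, "fp64_ok": True,  "all_units_ok": True},
]
print([bin_gpu(d) for d in samples])  # ['Tesla', 'Quadro', 'GeForce']
```

Same shape as the Xeon-to-i3 example: one design, progressively downgraded until the chip passes.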


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> Anyway the titan came after 600 series and before 700 series. the Titan X is 3 generations later, I fully expect for the Titan V to be over 50% faster than the Titan X.


The Titan is Kepler and Titan X is Maxwell. It's one generation apart, not three. You need to focus on the die architecture, and not the card number.


----------



## adamkatt

They should just stick with GTX 1080, 1070, etc. No one who is even remotely going to buy a 600-dollar card is going to confuse it anyway.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> The Titan is Kepler and Titan X is Maxwell. It's one generation apart, not three. You need to focus on the die architecture, and not the card number.


Okay, even still, we should then compare from the Titan Black, not the OG Titan, just as you compared the 780 Ti, not the OG Titan, to the 580. So that is a 30-40% gain; the OG Titan was not the Kepler flagship.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Cyber Locc*
> 
> LOL, I already thought what I thought before I got a Ti; that is why I did it, it didn't seem worth waiting.
> 
> You just mad because you know you want one so baddd. It's killing you, just do it; it will be a year before we see Tis again, so just grab one and enjoy all its glory.
> 
> Seriously though, I am sooo glad I did; it's miles better than 290 CF. Just pull the trigger and enjoy the year with the beauty. I was going to get a Titan X, but the DP killed it, so I am going to grab one of the new ones.


Yes, good for you, but you are trying so hard to convince everyone that the replacement for your card will not be much faster. I am glad I did not sell my 290X; they're already worth more than any Nvidia card right now.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Yeah that's why I said the 980 Ti is like a Titan which is 2688 cores vs 780 Ti/Titan Black at 2880 cores. Compared to 2816 cores for the 980 Ti and 3072 for the Titan X. And that you were right, the 780 isn't really represented in the 900 series since there's just 1 cut-down die vs 2. *I'm sure they could have had a further cut-down ~2400-2500 core GPU which SHOULD have been the 980 at $550 instead of the piece of crap 980 that we got* (that really should have been a 970 or at least called "970 Ti").


100% agree with you there. That is exactly what we SHOULD have gotten as a 980 rather than the 680 redux we ended up with...


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Yes good for you but you are trying so hard to convince everyone that the replacement for your card will not be much faster. I am glad I did not sell my 290X. They already worth more than any Nvidia card right now.


~30% is a lot faster. When a Ti comes out at ~30%, I will buy it. If it has async support, it will be even faster in DX12.

I am trying to convince people of reality, so that when Pascal drops they are not disappointed that it isn't the 100000000 times faster they are expecting.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> Okay, even still, we should then compare from the Titan Black, not the OG Titan, just as you compared the 780 Ti, not the OG Titan, to the 580. So that is a 30-40% gain; the OG Titan was not the Kepler flagship.


Well the 780 Ti and Titan Black are basically the same in games, and the Titan X is more than 30% faster than the 780 Ti, so it works out about the same.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *magnek*
> 
> I'm saying I don't want my card's OC potential to be largely determined by the ASIC lottery is all. With Kepler/Tahiti the silicon lottery still existed but could largely be mitigated by applying enough voltage since they scaled well. With Maxwell, you're kinda outta luck if you're dealt a bad hand.
> 
> Plus I always avoid nVidia ref PCBs since most of the time they're "just acceptable", so I wait for custom cards anyway. But to each his own.


You hit the nail on the head there. Kepler has definitely been my favorite GPU overclocking experience by far. Early on, before we got the custom BIOSes and hacks for more voltage, it looked like the OG Titans were miserable clockers, but once all those tools became available, GK110 really took off. To this day my Titans can easily deal with 1.45V, and that extra voltage yields 1320-1330MHz clocks, which is generally higher than 780 Tis and Titan Blacks were able to do (since Nvidia put an end to easy voltage control on those later cards, or so I remember). Don't forget too that my Titans are over three years old and have not exactly had the easiest of lives! Would love to see big Pascal scale like big Kepler with voltage!


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> Well the 780 Ti and Titan Black are basically the same in games, and the Titan X is ~45% faster than the 780 Ti, so it works out about the same.


Yep, I agree, I will go with that; however, that ~45% is a far cry from the ~60% that the Titan X beats the OG Titan by. Then again, that was a gaming-beneficial arch; Pascal will not be, that is the point here.

If it is ~50%, that is awesome and everyone can be happy and all will be well, but I doubt that very much. Maybe I am just a pessimist, but life has driven me to that, lol.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *kuruptx*
> 
> With us still holding onto to our gtx 680, Would you guys recommend a 980 now or wait?


If you are considering getting a 980 you should absolutely wait until GP204 comes out this summer as current 980 pricing will tank instantly.


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> If you are considering getting a 980 you should absolutely wait until GP204 comes out this summer as current 980 pricing will tank instantly.


GP104? or is it going to be 204?

Has anyone else noticed that we haven't gotten any solid leaks yet? We usually have long since had solid leaks by now.


----------



## Majin SSJ Eric

Hell, I don't even know at this point; I'm probably wrong. Just meant whatever the 980 replacement coming out soon is...


----------



## dVeLoPe

What do you expect the 980 Ti to be worth when these hit? Any confirmation as to whether they will blow these cards out of the water?


----------



## Majin SSJ Eric

I expect the 1080 to be faster than the 980Ti but not by a whole lot. Could be wrong though, maybe the little Pascal flagship will beat expectations? As to how much 980Ti pricing will be affected by the launch, well it might depend on that first question. If the 1080 performs around 980Ti levels then pricing on the 980Ti may remain rather high. If it pulls off some epic beat down of the 980Ti though you can expect 980Ti pricing to plummet.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> GP104? or is it going to be 204?
> 
> Has anyone else noticed that we haven't gotten any solid leaks yet? We usually have long since had solid leaks by now.


Should be GP104. The only reason enthusiast Maxwell was GM204 was because GM107 (the 750 Ti) came out so far ahead of the rest that they had time to make changes.


----------



## Cyber Locc

Thought so, was just checking.


----------



## TranquilTempest

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well, to understand that, you have to understand how the system works.
> 
> When Nvidia makes a GPU chip, it is not made with the intention of being a GeForce card. No chip is made with the intent of being, for instance, a GTX 1080 (or whatever it's called). The chips are designed to be Tesla GPUs, all of them.
> 
> If they can't make the Tesla cut, for missing features or whatever, they have features cut and become Quadros. If they do not make the Quadro cut, they have features cut and become GeForce cards. So when they add or take away DP performance, they do it on all chips, and Tesla has a card for every chip made; even the low-end chips have a Tesla card.
> 
> So when you buy a GeForce card, that is because the chip failed two rounds of QA (sometimes only one round, depending on what doesn't work).
> 
> For instance, ECC memory support: both Tesla and Quadro need this. So if a chip cannot support ECC, it instantly gets cut into a GeForce card.
> 
> Intel does the same thing. They do not make chips that are intended to be i3s; they make Xeons that are 16 cores (or whatever the highest core count is for that platform). If one fails to run on 16 cores, it gets dropped to 12; if it fails again, it gets dropped to 10, then 8, then 6, then 4, all the way down to 2. Every cut is a step down, so if a chip cannot operate at 16 cores but can at 10, they cut cache etc.
> 
> This is also true for cache: if it can use 16 cores but can't support the cache a 16-core has, yet can support the cache of an i7, it instantly gets cut down to an i7.
> 
> It is just like the expression: you can always take away, you cannot add back. They make the highest chip possible, then reduce from there. Sometimes, as in the case of AMD's 290s, demand peaks higher than supply; in that case, they take slightly better chips and cut them down purposely (even though they were better chips) to meet demand.
> 
> The same goes for design. When NV designs an architecture, Tesla is on their mind. However, Tesla GPUs still deliver gaming performance in theory, as they work the way any other GPU would, just more robust and with more features. What we are left with is the trickle-down performance that comes from Tesla's needs; they do not say "we are going to design this architecture for gaming" (or for what helps gamers solely).
> 
> In the case of the 600, 700, and 900 series, they gave boosts to their chips that helped gamers a lot. This was just the way the cards fell; however, in doing so they neglected other aspects of Tesla's needs. While Teslas saw gains in some places, others stayed stagnant, or didn't increase as much as they could or should have.
> 
> Their industrial clients are not happy about this, and upgraded less due to the lack of gains. This time, NV is solely focused on those people; this is easily seen in all the talks they have given about Pascal: NVLink, deep learning, etc. This is Tesla's time for a serious leap in performance, and the gains we see will be trickle-down like always.
> 
> However, this time the trickle isn't very useful for gaming; some of it isn't at all. It is very useful for industry and their new car computers. We got our few years to push our industry; it's industry's turn.
> 
> So anyone who throws a fit when we don't see massive gains in gaming needs to realize that we are not the only ones in the world, and industry deserves its upgrades as well, whether that leaves our industry stagnant for a few years or not.


Back with Kepler, GK110 had disproportionately better DP performance than GK104, because the smaller die was more heavily optimized for gaming (single precision). I'm wondering if they're doing the same thing for Pascal. It's not a question of binning, but of architecture/resources.


----------



## EightDee8D

Maybe they will go back to pre-Kepler naming. A GP104 x60, 5% faster than a stock 980 Ti, with 6-8GB VRAM and new flashy features, at 100-120W and $300-350? It will sell like hot cakes.


----------



## Forceman

Quote:


> Originally Posted by *TranquilTempest*
> 
> Back with Kepler, GK110 had disproportionately better DP performance than GK104, because the smaller die was more heavily optimized for gaming (single precision). I'm wondering if they're doing the same thing for Pascal. It's not a question of binning, but of architecture/resources.


That's what I'm thinking as well.


----------



## SuperZan

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I expect the 1080 to be faster than the 980Ti but not by a whole lot. Could be wrong though, maybe the little Pascal flagship will beat expectations? As to how much 980Ti pricing will be affected by the launch, well it might depend on that first question. If the 1080 performs around 980Ti levels then pricing on the 980Ti may remain rather high. If it pulls off some epic beat down of the 980Ti though you can expect 980Ti pricing to plummet.


I think you've got it right. In any event, it should be a nice card for mainstream gamers and I'm sure many enthusiasts will find interesting uses for a 980ti performer with a smaller electric/thermal footprint. I wouldn't mind a nice Polaris/Pascal SFF design for my portable gamer.


----------



## HarrisLam

GTX 1070 performance slightly better than 970?

Whatever happened to the very handsome and promising graphs and charts from before? (I know those were just projections, but still.)

My 570 is gravely ill and I've put it on life support, trying all I can NOT to buy the 970 I should have bought a year ago, just so I can buy the x70 Pascal when it's out.

If this really turns out to be the case, I will be so extremely disappointed that I will buy a second-hand 970 instead.


----------



## cjc75

Quote:


> Originally Posted by *Cyber Locc*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cjc75*
> 
> Ditto...
> 
> These benchmarks are absolutely pathetic...
> 
> The 970 still barely beats my 770....
> 
> All the hype around Pascal, I'm expecting these things to absolutely trounce all their predecessors... I'm sick of this 5% - 10% improvement gain with each new generation.
> 
> I want to be seeing 25% - 30% in improvement gains; otherwise its just not worth investing $500 - $600 in a video card that only gives me 10% improvement over my $300 video card.
> 
> There is just no comparison/justification with the price vs improvement ratio....
> 
> 
> 
> What would 30% gain over a 980ti get you? It will get you some Big Epeen numbers in benches, and if you play over 60hz it will net you some help there. That is literally It, a 970 will max anything in 1080p, a 980ti will max anything in 1440p both of those at 60fps.

Dude...

I don't _need_ a 980 Ti to max settings at 1440p... I'm already playing max settings @ 1440p just fine with my GTX 770 FTW 4GB card... though mind you, it's not over 60Hz...

Why the heck do I need a $600+ video card when my $300 video card does the same thing just fine? Oh, OK, well gee, I pay an extra $300 to use a little less power. Big deal. Turn off all the lights in my apartment and turn off the A/C for a few hours, and I save more power than a 980 Ti would save me... and I'm still gaming away on max settings @ 1440p, and my FPS are just fine...


----------



## zetoor85

Quote:


> Originally Posted by *cjc75*
> 
> Dude...
> 
> I dont _need_ a 980Ti to max settings at 1440p... I'm already playing max settings @ 1440p just fine, with my GTX 770 FTW 4GB card... though mind its not over 60hz...
> 
> Why the heck do I need a $600+ video card that my $300 video card does the same thing just fine; oh.. ok.. well gee I pay an extra $300 to use a little less power. Big deal. Turn off all the lights in my apartment and turn off the A/C for a few hours, and I save more power then a 980Ti would save me... and I'm still gaming away on max settings @ 1440p, and my FPS are just fine...


Well, you talk about MAX framerates, but what about frametimes? Have you never played with superb frametimes and a butter-smooth picture? Stay with your 680 budget card and let the real men play with the cool toys, then.

Oh, and by the way, once you start rocking serious anti-aliasing like DS4X or DS6X, or MSAA/SMAA, you'd enjoy having a card like a Titan X / 980 Ti... but just because you haven't spent the money? ^^


----------



## cjc75

Quote:


> Originally Posted by *zetoor85*
> 
> well, you talk about MAX framerates, what about frametimes? you never played with superb frametimings? and buttersmooth picture? stay with your 680 budget card, let the real men play with the cool toy then.
> 
> oh by the way, then you start rock serval anti aliasing like, DS4X OR DS6X OR MSAA/SMAA you enjoy have a card like titan x / 980 ti - just because you havent spend the money ? ^^


Yeah, my frames ARE fine and smooth.

...and it's a 770. Not a 680.


----------



## XCalinX

Thank God it isn't much faster than the 980 Ti. That means when the big GP100 cards come out, probably next year, I can upgrade from my FX-8350 to Zen and from my 980 Ti to the 1080 Ti or the Titan version.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *TranquilTempest*
> 
> Back with Kepler, GK 110 had disproportionately better DP performance than GK104, because the smaller die was more heavily optimized for gaming(single precision). I'm wondering if they're doing the same thing for Pascal. It's not a question of binning, but of architecture/resources.


Eh, not quite. All GeForce GPUs are "optimized" for gaming, it's just that, in this example, GK110 (in the Titan Black) had the option for higher double precision performance at the cost of single precision performance:
Quote:


> For the Titan Black, the magic happens in the driver. The Titan Black's driver gives the user an option to choose the double precision performance between 1:3 and 1:24 FP32 (by switching the GPU to TCC mode). When the double precision performance is set to 1:24 FP32, which is the same as the 780 Ti, the single precision performance of the Titan Black and 780 Ti are identical. But when the user sets the double precision performance to 1:3 FP32, the single precision performance is compromised to boost double precision performance and make it equal to the K40c. In other words, you can choose the performance of the Titan Black to match either the 780 Ti or the K40c based on your preference.
> 
> The K40c has a double precision performance of 1:3 without compromising the single precision performance. This is because the K40 is given a special double precision unit for every 3 single precision cores (white paper). It combines the best of both worlds. NVIDIA also states that the Tesla GPUs go through a much more rigorous Q&A process which guarantees lesser failures and also has additional features such as ECC memory. Hence the large price difference.


http://arrayfire.com/explaining-fp64-performance-on-gpus/
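Those ratios map directly onto peak throughput. A minimal sketch of the arithmetic (the ~5.1 TFLOPS FP32 figure for the Titan Black is my approximate illustration, not a number from the linked article):

```python
def dp_tflops(sp_tflops, ratio_denominator):
    # FP64 rate expressed as 1:N of the FP32 rate
    return sp_tflops / ratio_denominator

# Titan Black FP32 peak is roughly 5.1 TFLOPS (2880 cores x ~889MHz x 2 ops)
print(round(dp_tflops(5.1, 3), 2))   # 1:3 mode (TCC): ~1.7 TFLOPS FP64
print(round(dp_tflops(5.1, 24), 2))  # 1:24 default GeForce mode: ~0.21 TFLOPS FP64
```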


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> I think the best option is reference based with modification, like EVGAs SCs ect. I know they dont change much, but they change enough to make a small difference and still use reference blocks. In the case of the 980TI I know the SC upgrades the Caps and resistors and a few other small changes. So its enough to make the card slightly better but not so much to hamper blocks
> 
> There is if you look at the improvement ratio that is being worked on. The Price to P/PW, and Price to P/PIN ratios will go way up. Enthusiasts (Us, The minority) we want moarrr power, everyone else in the world realizes they dont need more power. With the games that are out right now and the ones coming out in the next year or 2 we dont need more power.


I'm 99% certain the only difference between the regular, SC, and SSC cards is the BIOS. But if you have a link that says otherwise, +1 for learning something new today.


----------



## iLeakStuff

Quote:


> Originally Posted by *XCalinX*
> 
> Thank God it isn't much faster than the 980 ti. That means when the big gp100 cards come out, probably next year I can upgrade from my FX 8350 to Zen and from my 980 Ti to the 1080 Ti or the Titan version.


It's gonna be like GTX 680 vs GTX 580.
Expect around 25% faster performance, +/- 5%, imo.

Probably worth it for 980 Ti owners who want bleeding-edge performance, while other 980 Ti owners will think it's too little.
Huge increase for 680/780/290X owners though.


----------



## guttheslayer

So the great ol' Titan for Pascal will only come two years after the Titan X was released?


----------



## ZealotKi11er

Quote:


> Originally Posted by *guttheslayer*
> 
> So the great O Titan for pascal will only come 2 years after Titan X was released?


Same as Titan OG to Titan X. 2 years.


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I'm 99% certain the only difference between the regular, SC, and SSC cards are the BIOS. But if you have a link that says otherwise, +1 for learning something new today.


Going to take me some time; I will get you some links. I found this out myself from small tidbits, as they do not plaster the info everywhere.

Anyway, here are tidbits 1 and 2 that I knew where to get quickly.

"From Jacob on Twitter. @the_Scarlet_one I double checked. Almost the same some small resistors and capacitors change though." - That's from Scarlet, an EVGA forums mod.
http://forums.evga.com/Geforce-GTX-980-Waterblocks-Geforce-GTX-980-ti-Waterblocks-m2343246-p2.aspx Post 36

It came about due to EK not knowing if the blocks would fit.

Here is another way to tell: not what was changed, but that there is a change.



Now this also shows me that the review samples did not have the change, or were just reference cards with the new cooler and BIOS for review.

Anyway, I used that as an example for a reason: it is fairly easy to spot the difference in those pics. If you look at pic 1, it says EVGA over the PCIe connector; if you look at the second one, it says Nvidia. If they use the Nvidia reference PCB it will say Nvidia; if they make their own PCB it will say their name.

If we look at reference 980 Tis, they all say Nvidia.


There is one reference-style card (even with the reference blower) that does not say Nvidia, and that is here.


However, that is because it's not reference, and it actually has a pretty major change that is easy to notice: an HDMI port inside the card, by the power connectors, for the VR bay device.

All that said, I will grab some more info for you as well. The changes are not large by any means, just small things: Nvidia cheaped out on part Y, so EVGA replaced it with a similar part X, etc. How much of a difference this will make in the end? Not much, honestly.


----------



## Cyber Locc

Quote:


> Originally Posted by *looniam*
> 
> k|ngp|n uses house A rama font (slightly modified) so no go there redteam boy. or should i say roy . . .
> 
> btw, KP and classy are really two different "brands" so don't expect either to change.


Oh BTW loon, Told you about the K|ng Classy seeeee



ITS coming...


----------



## guttheslayer

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Same as Titan OG to Titan X. 2 years.


At least there was a black in between.


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I'm 99% certain the only difference between the regular, SC, and SSC cards are the BIOS. But if you have a link that says otherwise, +1 for learning something new today.


Found that Twitter post as well; there is more, I just need to find it. It was a long time ago that I saw all of this lol.

Scarlet - "@EVGA_JacobF for clarity sake, is all 980ti, except classy/KPE, the same pcb as the Titan X without VRAM on the back? http://forums.evga.com/m/tm.aspx?m=2343246&fp=1 &#8230;"

Jacob - "@the_Scarlet_one roughly, yes."

Scarlet - "@EVGA_JacobF will the Titan X waterblocks fit is the main question "

Jacob - "@the_Scarlet_one should but can't guarantee there may be very minor differences."

Scarlet - "@EVGA_JacobF hopefully I didn't mislead them in that thread, lol. Any way to get pictures of the PCB to visual verification by any chance?"

Jacob - "@the_Scarlet_one I double checked. Almost the same some small resistors and capacitors change though."

https://twitter.com/EVGA_JacobF/status/608504812032778242

I am trying to hunt down a PCB shot of the 980 Ti SC; all I can find are the review samples, which are different (they bear the NV logo, not EVGA). Having some trouble though, short of yanking the cooler off my card and taking pics lol.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> Going to take me some time, I will get you some links. I found this out myself from small tidbits they do not like plaster the info everywhere
> 
> 
> Anyway, Tidbits 1 and 2., that I knew where to get quickly
> 
> 
> "From Jacob on Twitter. @the_Scarlet_one I double checked. Almost the same some small resistors and capacitors change though." -Thats from Scarlet, EVGA forums mod.
> http://forums.evga.com/Geforce-GTX-980-Waterblocks-Geforce-GTX-980-ti-Waterblocks-m2343246-p2.aspx Post 36
> 
> It came about due to EK not knowing if the blocks would fit.
> 
> Here is another way to tell, not what was changed but that there is a change.
> 
> Now this also shows me, the review samples did not have the change, or were just References with the new cooler and bios for review.
> 
> Anyway I used that as an example for a reason. It is fairly easy to spot the difference on those pics. If you look at Pic 1, it says EVGA over the PCI, if you look at the second one it says Nvidia. If they use Nvidia Reference PCB it will say Nvidia, if they make there own PCB it will say there name.
> 
> However that is because its not reference, and actually has a pretty major change that is easy to notice. An HDMI port inside the card by the power adapters for the VR bay device.
> 
> All that said, I will grab some more info for you as well, the changes are not large by any means. Just small things, NVidia cheaped out very hard on part Y so Evga replaced it with a similar part X, ECT. How much of a difference this will likely make in the end not much honestly.


I just checked EK's website and they claim the regular, SC, and SC with ACX coolers are all reference boards. The pics look the same for almost all of them though so no way to compare the differences in small components (i.e. the thing that matters for this discussion). Either way, I don't agree it's enough to make a small difference and unless you know what both parts are, there's nothing showing the small changes are actually for the better. It could very well be a cost saving measure. Also, I didn't see anything that states those changes are ONLY for the SC/SSC/ACX/whatever names they're using that aren't custom PCB like KPE/Classified. It could be very likely that their reference cards (4990KR) have these small changes too.


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I just checked EK's website and they claim the regular, SC, and SC with ACX coolers are all reference boards. The pics look the same for almost all of them though so no way to compare the differences in small components (i.e. the thing that matters for this discussion). Either way, I don't agree it's enough to make a small difference and unless you know what both parts are, there's nothing showing the small changes are actually for the better.
> 
> It could very well be a cost saving measure. Also, *I didn't see anything that states those changes are ONLY for the SC/SSC/ACX/whatever names they're using that aren't custom PCB like KPE/Classified.* It could be very likely that their reference cards (4990KR) have these small changes too.


It could very well be a cost-saving measure. However, going out of their way to make boards to save costs seems unlikely to me. They didn't make their own boards for no reason; that would not make any sense. I concede that saving costs could be a reason, but as EVGA is a smaller company, making their own boards would in fact increase costs in theory, right?

As to your comment that I have bolded, read that again. The entire convo is about Titan X blocks fitting SCs. The Titan X is only available as a reference card, period; there are no KPE or Classified Titan Xs.

That and of course the fact that he says that word for word at the start of this convo. Scarlet - "@EVGA_JacobF for clarity sake, is all 980ti, *except classy/KPE*, the same pcb as the Titan X without VRAM on the back? http://forums.evga.com/m/tm.aspx?m=2343246&fp=1 &#8230;"

It is changed, that much is clear; they didn't make their own PCB for no reason. The way this all started is that EK themselves said, at the beginning, that their blocks might not fit and they needed confirmation from EVGA. As to whether the changes were to save costs or an upgrade, I will need to dredge through more info for you.

Also,
Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I'm 99% certain the only difference between the regular, SC, and SSC cards are the BIOS.


Well, you're 99% wrong. The regular card uses the Nvidia PCB; the SCs use an EVGA PCB. They would not make their own PCB for their higher-tier cards just to cut costs; that really doesn't make any sense. Why would you use a lesser PCB on a higher-tier card? Like I said, I concede it is possible, however unlikely, and I will look more into it.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> It could very well be a Cost saving measure, However going out of there way to make boards to save costs seems unlikely to me. They didnt make there own boards for no reason, that would not make any sense, however I concede to save costs could be a reason, however as EVGA is a smaller company making there own boards would in fact increase costs in theory right?
> 
> As to your comment that I have bolded read that again. The entire convo is about Titan X blocks fitting SCs, Titan X is only available in reference cards period, there is no KPE or Classified Titan Xs.
> 
> That and of course the fact that he says that word for word at the start of this convo. Scarlet - "@EVGA_JacobF for clarity sake, is all 980ti, *except classy/KPE*, the same pcb as the Titan X without VRAM on the back? http://forums.evga.com/m/tm.aspx?m=2343246&fp=1 &#8230;"
> 
> It is changed, that much is clear, they didn't make there own PCB for no reason. the way this all started is EK themselves said that in the beginning saying there blocks may not fit they need confirmation from EVGA. As to if the changes was to save costs or an upgrade, I will need to dredge through more info for you.


That bears the assumption that all reference boards are made by Nvidia (rather, a company like Foxconn that does the actual manufacturing) instead of just a design that must be strictly adhered to, with the AIB vendors having to rely on manufacturing. Do you see what I'm saying?

You state "The entire convo is about Titan X blocks fitting SCs". I see no mention of the SC in the thread you've linked: http://forums.evga.com/Geforce-GTX-980-Waterblocks-Geforce-GTX-980-ti-Waterblocks-m2343246-p2.aspx The response by EVGA stating "I double checked. Almost the same some small resistors and capacitors change though." could very well be for all of their 980 Ti cards based on NVIDIA's reference PCB layout.

NVIDIA GTX 980 Ti (via EK):










EVGA GTX 980 Ti SC+ (via TPU):


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> NVIDIA GTX 980 Ti (via EK):
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA GTX 980 Ti
> That bears the assumption that all reference boards are made by Nvidia (rather, a company like Foxconn that does the actual manufacturing) instead of just a design that must be strictly adhered to, with the AIB vendors having to rely on manufacturing. Do you see what I'm saying?
> 
> You state "The entire convo is about Titan X blocks fitting SCs". I see no mention of the SC in the thread you've linked: http://forums.evga.com/Geforce-GTX-980-Waterblocks-Geforce-GTX-980-ti-Waterblocks-m2343246-p2.aspx The response by EVGA stating "I double checked. Almost the same some small resistors and capacitors change though." could very well be for all of their 980 Ti cards based on NVIDIA's reference PCB layout.
> 
> NVIDIA GTX 980 Ti (via EK):
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EVGA GTX 980 Ti SC+ (via TPU):


K, well it isn't.

Look at the thread again, EK Luc stated on the first page.

We didn't need to have the 4990 or the 4992 in our hands to know they were reference ones since if you take a little look at the pictures provided by EVGA for those models, you can clearly see the "Nvidia" branding on the PCB right over the PCIe connector.

There was never any question about the reference cards; the question was about the ACX 2.0 cards.

"That bears the assumption that all reference boards are made by Nvidia (rather, a company like Foxconn that does the actual manufacturing) instead of just a design that must be strictly adhered to, with the AIB vendors having to rely on manufacturing. Do you see what I'm saying?"

That is not an assumption, that is a fact: all Nvidia reference cards say Nvidia. 100% of Titan Xs, which do not allow the AIB to change anything, say Nvidia on them. The only way they get AIB branding on the card is if the boards are changed. This is a pretty well-known fact.

The pictures of the SC from EK's website are not of the SC PCB; this is glaringly obvious because they say Nvidia on them, and SCs do not, they say EVGA.

I also stated that there is a problem with TPU's PCB picture, as there again we are looking at an Nvidia PCB, not an EVGA PCB. None of the retail SCs share that PCB.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> Also,
> Well your 99% wrong. the Regular card uses Nvidia PCB, the SCs use EVGA PCB, they would not make there own PCB on there higher tier cards to cut costs that really doesn't even make any sense honestly. Why would you use a Lesser PCB on a higher tier card? Like I said I do concede it is possible however unlikely, and will look more into it.


They don't use the EVGA PCB, I just linked you a picture of the SC+ from TPU https://www.techpowerup.com/mobile/reviews/EVGA/GTX_980_Ti_SC_Plus/3.html There's some very small differences in components, but not placement. Nothing suggests they're "better", "equal", or "worse". The component changes are 100% drop in as well. That is, the dimensions are exactly the same so blocks would be compatible.


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> They don't use the EVGA PCB, I just linked you a picture of the SC+ from TPU https://www.techpowerup.com/mobile/reviews/EVGA/GTX_980_Ti_SC_Plus/3.html There's some very small differences in components, but not placement. Nothing suggests they're "better", "equal", or "worse". The component changes are 100% drop in as well. That is, the dimensions are exactly the same so blocks would be compatible.


They do use an EVGA PCB; TPU is wrong, my friend. I have an EVGA SC, and that is not the card's PCB.

Go to any thread that depicts a retail SC card: it is not an Nvidia PCB, it is an EVGA PCB.

It is entirely possible that press samples and some stock photos have a Nvidia PCB, however the actual retail cards do not.

What's funny is that TPU uses this image as well,



I've got a pic of mine incoming, just a min.


----------



## TranquilTempest

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Eh, not quite. All GeForce GPUs are "optimized" for gaming, it's just that, in this example, GK110 (in the Titan Black) had the option for higher double precision performance at the cost of single precision performance:
> http://arrayfire.com/explaining-fp64-performance-on-gpus/


No, the 7.1 billion transistor GK110 die in a gtx 780 or 780 Ti is exactly the same design as in the original Titan or Titan Black. The extra double precision capacity is just disabled on the geforce parts. On the smaller dies like GK104 (GTX 680, Quadro K5000, etc.) the extra double precision resources aren't there in the first place, even the professional line was stuck with DP at 1/24 of SP performance. Instead of dedicating die area to double precision, they made the die smaller/added more single precision resources.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> That is not an assumption that is a fact, all Nvidia reference cards say Nvidia. 100% of all titan Xs that do not allow the AIB to change anything say Nvidia on them. The only way they get AIB branding on the card is if the boards are changed. This is a pretty well known fact.


OK, it looks like you can't understand me for whatever reason(s). I'm talking, for the second time, with respect to the actual manufacturing of the reference PCBs being handled solely by Nvidia. I've already provided pictures of the EVGA GTX 980 Ti SC+ PCB, which TPU states "EVGA's GTX 980 Ti SC+ comes with the company's *ACX 2.0* thermal solution." and doesn't have the exact same branded components. They're the same exact components, just from a different manufacturer. Whether the board was manufactured by Nvidia, or EVGA, there was a switch in component vendors.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *TranquilTempest*
> 
> No, the 7.1 billion transistor GK110 die in a gtx 780 or 780 Ti is exactly the same design as in the original Titan or Titan Black. The extra double precision capacity is just disabled on the geforce parts. On the smaller dies like GK104 (GTX 680, Quadro K5000, etc.) the extra double precision resources aren't there in the first place, even the professional line was stuck with DP at 1/24 of SP performance. Instead of dedicating die area to double precision, they made the die smaller/added more single precision resources.


So, like I said, both are optimized for gaming. It's just that the GK110 GeForce parts were Tesla failures, and the DP performance had to be enabled via the driver at the expense of SP performance. I think you're confusing my use of the word "optimized" with "created".


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> They do use EVGA pcb, TPU is wrong my friend I have a EVGA SC and that is not the cards PCB.....
> 
> Go to any thread that depicts a retail SC card it is not a Nvidia PCB it is an EVGA pcb.
> 
> It is entirely possible that press samples and some stock photos have a Nvidia PCB, however the actual retail cards do not.
> 
> Whats funny is then TPU use this image as well,
> 
> 
> 
> I got a pic of mine incoming just a min.


Doubt Nvidia would create such an ugly stock photo lol:



It may not be equivalent to retail cards as you've pointed out.


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> OK, it looks like you can't understand me for whatever reason(s). I'm talking, for the second time, with respect to the actual manufacturing of the reference PCBs being handled solely by Nvidia. I've already provided pictures of the EVGA GTX 980 Ti SC+ PCB, which TPU states "EVGA's GTX 980 Ti SC+ comes with the company's *ACX 2.0* thermal solution." and doesn't have the exact same branded components. They're the same exact components, just from a different manufacturer. Whether the board was manufactured by Nvidia, or EVGA, there was a switch in component vendors.


I am a little confused by what you are saying. Also, TPU's card is not the same as the retail SC, which is the issue with anything they say.

What they have is in fact a reference Nvidia PCB. Here it is as a matter of fact.



Okay and here is a picture of my actual retail SC TI.



and the EVGA in the front,


And then we can go back to EVGA on Twitter: they are not the same exact components; they have been changed, though not in a way that affects cooling solutions. TPU is wrong; their board and the actual board are completely different, so anything they have to say is hogwash.
Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> Doubt Nvidia would create such an ugly stock photo lol:
> 
> 
> 
> It may not be equivalent to retail cards as you've pointed out.


Yes, I think this is the issue: the early SCs were using the reference PCB, then before going to retail that changed to EVGA's own PCB with small changes. TPU's SC is not the same as my SC.

I am also not stating that the differences are huge; the components may very well be cheaper to save costs, but I find that pointless. Why make your own PCB and gimp it?

Okay, later today I will remove my heatsink and take pictures of my PCB front, as that is the only way we are getting to the bottom of this.


----------



## iLeakStuff

Uhm, why the heck has nobody noticed this? They ran 3DMark with the VRAM clocked at 2505MHz!
That gives us a bandwidth of 320GB/s.

Looks like the 2GHz GDDR5 1GB modules can really fly. Holy crap









Pardon my quick Paint editing, but I refuse to give that site any free publicity.
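The 320GB/s figure does check out, assuming the entry reports the GDDR5 command clock (quad-pumped) over the rumored 256-bit GP104 bus; both assumptions are mine, not confirmed by the screenshot:

```python
def gddr5_bandwidth_gbs(reported_clock_mhz, bus_width_bits):
    # GDDR5 transfers 4 bits per pin per reported clock (quad-pumped)
    effective_mts = reported_clock_mhz * 4
    # MT/s * bytes-per-transfer -> bytes/s, then scale to GB/s
    return effective_mts * 1e6 * (bus_width_bits / 8) / 1e9

print(round(gddr5_bandwidth_gbs(2505, 256), 1))  # 320.6
```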


----------



## Cyber Locc

Quote:


> Originally Posted by *iLeakStuff*
> 
> Uhm, why the heck have nobody noticed this? They ran 3DMark with the VRAM clocked at 2505MHZ!
> That gives us a bandwidth of 320GB/s.
> 
> Looks like the 2GHz GDDR5 1GB modules can really fly. Holy crap
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Pardon my quick paint editing but I refuse to give that site any free publicity


Wait, where is that from? Can you PM it to me? Then you don't have to give it publicity.


----------



## magnek

99% certain it's WCCF (you can even faintly make out the "W", as well as the WCCF "mascot")


----------



## Maintenance Bot

Quote:


> Originally Posted by *magnek*
> 
> 99% certain it's WCCF (you can even faintly make out the "W", as well as the WCCF "mascot")


Yep wccftech.


----------



## iLeakStuff

2500MHz on the VRAM. Have we seen that before?


----------



## KeepWalkinG

Quote:


> Originally Posted by *iLeakStuff*
> 
> 2500MHz on the VRAM. Have we seen that before?


I know one guy who hit 9100MHz effective VRAM on a 980 Ti, but 10k (2500MHz) is a new limit.


----------



## iLeakStuff

Quote:


> Originally Posted by *KeepWalkinG*
> 
> I know one guy with 9100vram with 980 ti but 10k(2500) is new limit.


Yeah, most golden GTX 980 Tis fizzle out around 2000-2100MHz, I think.
Almost a little excited for this new VRAM. At least now there is some hope, since we won't get GDDR5X.


----------



## criminal

Quote:


> Originally Posted by *iLeakStuff*
> 
> Yeah, most golden GTX 980Ti`s fizzle out around 2000-2100MHz I think.
> Almost a little excited for this new VRAM. Atleast now there is some hope since we wont get GDDR5X.


Well at least memory bandwidth won't be a concern!


----------



## Majin SSJ Eric

Let's just remember that 3DMark can be and has been fooled before, and it often displays inaccurate info, so I am still taking these screens with a large grain of salt.

Oh and WCCFTech!


----------



## TranquilTempest

I'm not really familiar with 3dmark scoring, but why is there such a wide spread in score for what should be similar systems?

For example: http://www.3dmark.com/compare/3dm11/10338793/3dm11/10009153#


----------



## HZCH

Quote:


> Originally Posted by *TranquilTempest*
> 
> I'm not really familiar with 3dmark scoring, but why is there such a wide spread in score for what should be similar systems?
> 
> For example: http://www.3dmark.com/compare/3dm11/10338793/3dm11/10009153#


Well, if you look at the graphics score, you'll see a 980 Ti able to beat another 980 Ti by a whopping 116,700%, with insane FPS...
That's a corrupt score, or that guy could have cheated. Or maybe I don't remember how all of that works: the other compared score looks feeble to me...


----------



## Cyber Locc

Quote:


> Originally Posted by *TranquilTempest*
> 
> I'm not really familiar with 3dmark scoring, but why is there such a wide spread in score for what should be similar systems?


Yeah, there is something wrong there; that's more like four 980 Tis.


----------



## Buris

Calling it now: the 970-like score will be a 1060.

The 980-like score will be the 1070.

The 1080 is still not known, but considering that both AMD and Nvidia claim to have 4K viable as low as $300, it would make sense for the 1070 to have 8GB of RAM.

The performance increase between the 1000 series and the 900 series is actually a bit underwhelming considering the massive die shrink, but I'm guessing AMD is aiming for the same thing with their claims of 2.5x efficiency.

"Big GPUs" will come later: the R9 Fury 2 (not the real name) and the Titan 3 (not the real name).

This is a way for both Nvidia and AMD to "slow" the generation down and eke out as much profit from the market as possible.


----------



## Clocknut

Quote:


> Originally Posted by *iLeakStuff*
> 
> Yeah, most golden GTX 980Ti`s fizzle out around 2000-2100MHz I think.
> Almost a little excited for this new VRAM. Atleast now there is some hope since we wont get GDDR5X.


Maybe the GP104 Pascal 1080 will use 8GHz GDDR5 on a 384-bit bus with 4MB of L2 cache. That combination should be enough to meet the bandwidth requirement.

And big Pascal will use HBM2 by the time it arrives in 2017; then they will revise GP104 into GP204 using GDDR5X (a.k.a. the GTX 770's successor)?


----------



## Weber

source
http://hardware.hdblog.it/2016/03/17/Nvidia-X80-X80-Ti-specifche/
http://www.crossmap.com/news/leaked-specification-of-nvidia-pascal-geforce-x80-geforce-x80ti-and-geforce-x80-titan-26253

The Pascal GP100-based GTX Titan (or GeForce X80 Titan) "will allegedly feature 6144 CUDA cores, 384 texture mapping units and 192 render output units and feature a base clock of 1025MHz for a total of 12.595 teraflops."

GTX 1080 Ti (or GeForce X80 Ti) "will allegedly feature a cut down GP100 GPU. Packing 5120 CUDA cores, 320 texture mapping units, 160 render output units and feature a base clock of 1025 MHz for a total of 10.496 teraflops."

GTX 1080 (or GeForce X80) "will allegedly feature the most powerful GP104 configuration with all of the chip's resources unlocked. This translates to a 4096 CUDA cores, 256 texture mapping units, 128 render output units and a 1000 MHz base clock for a total of 8.192 teraflops."
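The quoted teraflops numbers are internally consistent with the standard peak-FP32 formula (2 ops per CUDA core per clock via fused multiply-add); this check is mine, not part of the leak:

```python
def fp32_tflops(cuda_cores, base_clock_mhz):
    # Peak FP32 = cores x 2 ops/clock (fused multiply-add) x clock
    return cuda_cores * 2 * base_clock_mhz * 1e6 / 1e12

for cores, mhz in [(6144, 1025), (5120, 1025), (4096, 1000)]:
    print(cores, round(fp32_tflops(cores, mhz), 3))
# 6144 -> 12.595, 5120 -> 10.496, 4096 -> 8.192, matching the rumor
```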


----------



## iLeakStuff

Guys,

Benchlife said 8GB for GP104
The 3DMark11 entries say 8GB

You cannot do 8GB on a 384-bit bus.
It's either 256-bit or 512-bit, and 512-bit is not happening on a small die like GP104.
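The capacity-vs-bus-width point follows from GDDR5's 32-bit per-chip interface. A small sketch, assuming standard 4Gb or 8Gb chips and no clamshell or mixed-density tricks:

```python
def possible_capacities_gb(bus_width_bits, chip_gbit_options=(4, 8)):
    # One GDDR5 chip drives 32 bits of the bus, so chip count = width / 32
    chips = bus_width_bits // 32
    return sorted({chips * gbit // 8 for gbit in chip_gbit_options})

for bus in (256, 384, 512):
    print(bus, possible_capacities_gb(bus))
# 256-bit -> [4, 8], 384-bit -> [6, 12], 512-bit -> [8, 16]
# i.e. 8GB fits a 256-bit or 512-bit bus, but not 384-bit
```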


----------



## guttheslayer

Quote:


> Originally Posted by *iLeakStuff*
> 
> Guys,
> 
> Benchlife said 8GB for GP104
> The 3DMark11 entries say 8GB
> 
> You cannot do 8GB and 384bit.
> Its either 256bit or 512bit. 512bit not happening on a small die like GP104


I don't see why a small die cannot have 512 bits.

If the GTX 1080 is twice as fast as the GTX 980 or even more, you need close to 100% more memory bandwidth to feed that kind of powerhouse.


----------



## zealord

Quote:


> Originally Posted by *guttheslayer*
> 
> I dont see why small die cannot have 512 bits.
> 
> *GTX 1080 is twice as fast as GTX 980 or even more*, you need close to 100% more memory bandwidth to feed that kinda power house.


----------



## guttheslayer

Quote:


> Originally Posted by *zealord*


That is, if the rumored 4096 CUDA cores are true, since the GTX 980 only has 2048 cores.


----------



## zealord

Quote:


> Originally Posted by *guttheslayer*
> 
> that is if the rumored 4096 Cuda cores is true, since 980 GTX only has 2048 cores.


I am no expert, but even if that is true, I don't think you can translate it directly like that. 20000 CUDA cores wouldn't mean 10 times the performance of a GTX 980; I think it has something to do with diminishing returns and such.

The Titan X has 50% more CUDA cores than the GTX 980, if I am not mistaken, but it's 35-40% faster, not exactly 50% faster.


----------



## guttheslayer

Quote:


> Originally Posted by *guttheslayer*
> 
> that is if the rumored 4096 Cuda core is true, since 980 GTX is 2048 cores.


Quote:


> Originally Posted by *zealord*
> 
> I am no expert, but even if that is true I think you can't directly translate it like that. 20000 cuda cores wouldn't mean 10 times the performance of a GTX 980. I think it has something to do with diminishing returns and stuff.
> 
> Titan X has 50% more Cuda cores than the GTX 980 if I am not mistaken, but its 35-40% faster and not exactly 50% faster.


That is because the Titan X is clocked much lower than the GTX 980. The GTX 980 boosts up to 1216MHz, while the Titan X reaches only 1089MHz at best.

On clock alone the GTX 980 is about 12% faster, but the Titan X has 50% more cores:

(1/1.12) x 1.5 ≈ 1.34, i.e. roughly a 34% performance uplift.

Try overclocking a Titan X to a 1126MHz base and 1216MHz boost and compare; you will be surprised.

Not to mention we are talking about Pascal, which has an even better architecture than Maxwell, so the performance per core is very likely to be even higher.

The best example is Kepler cores vs Maxwell cores.
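Plugging the commonly cited boost clocks into a naive cores x clock throughput model (my simplification; real scaling also depends on bandwidth and workload):

```python
def relative_perf(cores_a, clock_mhz_a, cores_b, clock_mhz_b):
    # Naive model: throughput scales linearly with cores x clock
    return (cores_a * clock_mhz_a) / (cores_b * clock_mhz_b)

# Titan X (3072 cores, ~1089MHz boost) vs GTX 980 (2048 cores, ~1216MHz boost)
print(round(relative_perf(3072, 1089, 2048, 1216), 3))  # 1.343
```

That lands right in the observed 35-40% window, which is the point being made above.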


----------



## Dragon 32

Quote:


> Originally Posted by *Weber*
> 
> source
> http://hardware.hdblog.it/2016/03/17/Nvidia-X80-X80-Ti-specifche/
> http://www.crossmap.com/news/leaked-specification-of-nvidia-pascal-geforce-x80-geforce-x80ti-and-geforce-x80-titan-26253
> 
> Pascal GP100 based GTX Titan (of GeForce X80 Titan) "will allegedly feature 6144 CUDA cores, 384 texture mapping units and 192 render output units and feature a base clock of 1025mhz for a total of 12.595 teraflops."
> 
> GTX 1080 Ti (or GeForce X80Ti)"will allegedly feature a cut down GP100 GPU. Packing 5120 CUDA cores, 320 texture mapping units, 160 render output units and feature a base clock of 1025 MHz for a total of 10.496 teraflops."
> 
> GTX 1080 (or GeForce X80) "will allegedly feature the most powerful GP104 configuration with all of the chip's resources unlocked. This translates to a 4096 CUDA cores, 256 texture mapping units, 128 render output units and a 1000 MHz base clock for a total of 8.192 teraflops."


Any ideas how likely this is to be legit?

If true it might explain why AMD aren't trying to hype polaris for raw power in comparison. It would be nice to think the X80 could be worth the wait.


----------



## zealord

Quote:


> Originally Posted by *Dragon 32*
> 
> Any ideas how likely this is to be legit?
> 
> If true it might explain why AMD aren't trying to hype polaris for raw power in comparison. It would be nice to think the X80 could be worth the wait.


0% legit. There are flaws in it that people have already pointed out, like GP100 using both HBM2 and GDDR5. It would be a waste.


----------



## guttheslayer

Quote:


> Originally Posted by *Dragon 32*
> 
> Any ideas how likely this is to be legit?
> 
> If true it might explain why AMD aren't trying to hype polaris for raw power in comparison. It would be nice to think the X80 could be worth the wait.


It would have been more believable if the X80 used 512-bit GDDR5 and the X80 Ti used HBM2, same as the Titan Pascal.


----------



## Dragon 32

Curses. I thought it sounded a bit too good to be true.


----------



## Clocknut

Quote:


> Originally Posted by *iLeakStuff*
> 
> Guys,
> 
> Benchlife said 8GB for GP104
> The 3DMark11 entries say 8GB
> 
> You cannot do 8GB and 384bit.
> Its either 256bit or 512bit. 512bit not happening on a small die like GP104


Check out those 192-bit Kepler and Fermi cards...


----------



## criminal

Quote:


> Originally Posted by *iLeakStuff*
> 
> Guys,
> 
> Benchlife said 8GB for GP104
> The 3DMark11 entries say 8GB
> 
> You cannot do 8GB and 384bit.
> Its either 256bit or 512bit. 512bit not happening on a small die like GP104


Couldn't it technically be 384-bit if it has a weird configuration like the 970's? 384-bit for up to 7680 MB of RAM and then 256-bit for the rest? That would work out to exactly 8192 MB of RAM.
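For those following the bus-width argument, the capacity arithmetic is easy to sketch (a rough Python illustration; the helper name and the uniform 8 Gb chip assumption are mine, not from the thread):

```python
# GDDR5 chips expose a 32-bit interface, so a uniform (non-segmented)
# bus of width W drives W/32 chips. Capacity then follows directly.

CHIP_CAPACITY_MB = 1024  # assuming uniform 8 Gb (1 GB) chips

def uniform_capacity_mb(bus_width_bits: int) -> int:
    chips = bus_width_bits // 32
    return chips * CHIP_CAPACITY_MB

print(uniform_capacity_mb(256))  # 8192 -> 8 GB lines up with a plain 256-bit bus
print(uniform_capacity_mb(384))  # 12288 -> a uniform 384-bit bus gives 12 GB, not 8

# The hypothetical 970-style split above does at least sum correctly:
assert 7680 + 512 == 8192
```

So 8 GB falls out naturally from a plain 256-bit bus, while getting it from 384-bit would indeed require a segmented, 970-style arrangement.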


----------



## Cyber Locc

Quote:


> Originally Posted by *guttheslayer*
> 
> That is because the Titan X is clocked much lower than the GTX 980. The GTX 980 boosts up to 1216 MHz while the Titan X reaches only 1089 MHz at best.
> 
> On clock alone the GTX 980 is 12% faster, but the Titan X has 50% more cores:
> 
> (1/1.12) x 1.5 ≈ 1.34, a roughly 34% performance uplift.
> 
> Try overclocking a Titan X to a 1126 MHz base / 1216 MHz boost and compare; you will be surprised.
> 
> Not to mention we are talking about Pascal, which has an even better architecture than Maxwell, meaning per-core performance is very likely to be even higher than Maxwell's.
> 
> The best example is Kepler cores vs Maxwell cores.


Umm no, it's really not going to surprise us. Clock for clock a Titan X is 4% faster than a 980 Ti. Yet a G1 Gaming 980 Ti with a 1500 MHz+ boost is only 48% faster than a stock 980. He was correct: clock for clock the Titan X is about 30-35% faster than a 980.

A reference 980 Ti and a reference Titan X run the same clocks, so they don't illustrate what you are saying at all. Furthermore, the shader difference between the Titan X and a 980 Ti is 9%, yet the actual performance difference is only 3-4%.

At any rate, those specs and 4k shaders on a 1080 are completely fake.
Quote:


> Originally Posted by *Dragon 32*
> 
> Any ideas how likely this is to be legit?
> 
> *If true it might explain why AMD aren't trying to hype polaris for raw power in comparison.* It would be nice to think the X80 could be worth the wait.


Nvidia isn't hyping Pascal for raw power either, because neither card is going to bring the extreme gains people keep imagining. Both companies are hyping perf-per-watt and compute gains, because that is what we are going to see.


----------



## iLeakStuff

Quote:


> Originally Posted by *criminal*
> 
> Couldn't it technically be 384bit if it has a weird configuration like the 970 has? 384bit up to 7680MB of ram and then 256bit for the rest? That would equal out to exactly 8192MB of ram.


Sounds waaaaay too complicated man


----------



## Woundingchaney

Quote:


> Originally Posted by guttheslayer View Post
> 
> That is because the Titan X is clocked much lower than the GTX 980. The GTX 980 boosts up to 1216 MHz while the Titan X reaches only 1089 MHz at best.


I'm not sure where this notion that the TX only boosts to 1089 comes from, because I can readily confirm it isn't true (I'm thinking that was perhaps an aspect of the first review samples). Both of my cards boost to above 1200 at stock and easily OC to 1350 in SLI on air. At release the Titan X was about 30% faster than the 980. Even the 980 at its release was slightly under a 10% performance increase over the 780 Ti.

Realistically, 25-35% has been the de facto performance metric for high-end releases. I expect the x80 to be comparable to the 980 Ti/Titan and the higher-end card to be approximately a 30% performance increase above the x80. Market dynamics really don't put either AMD or Nvidia in a position where they need to break this cycle, and given some architecture shifts it doesn't seem likely from a profitability standpoint either. Expecting massive performance gains doesn't seem realistic at this point.


----------



## Cyber Locc

Quote:


> Originally Posted by *Woundingchaney*
> 
> Im not sure where this notion that TX only boosts to 1089 because I can readily confirm that isnt true (Im thinking that was perhaps an aspect of the first review samples). Both of my cards boost to above 1200 at stock and easily OC to 1350 in SLI on air.
> 
> Realistically 25-35% has been the defacto performance metric for high end releases. I expect the x80 to be comparable to the 980ti/Titan and the higher end card to be approx a 30% performance increase above the x80. Market dynamics really doesnt put either AMD or Nvidia in a position as to where they need to break this cycle. Suggesting massive performance gains doesnt seem realistic at this point.


Are your TXs reference models? Or are they SC, G1 Gaming, Strix etc.? That changes everything. He is correct: the reference model has a boost clock of 1089, as does the reference 980 Ti.

I realize that all TXs are technically reference models; however, there are still factory-OCed versions that will show a higher clock speed.

The models with an over-1200 boost clock out of the box are the SC, HC and Hybrid, all three by EVGA. https://www.techpowerup.com/gpudb/2632/geforce-gtx-titan-x.html


----------



## Woundingchaney

Quote:


> Originally Posted by *Cyber Locc*
> 
> If your TXs reference models? Or are they SC, G1 Gaming, Strix ect. That changes everything. He is correct reference model has a boost clock of 1089 as does the 980ti reference.
> 
> I realize that all TXs are technically reference models, however there is still the factory OCed versions that will show a higher clock speed.
> 
> the models that have over a 1200 boost clock out of the box are the SC, HC, Hybrid all 3 by EVGA. https://www.techpowerup.com/gpudb/2632/geforce-gtx-titan-x.html


My TXs are stock Nvidia reference models with no BIOS flash.

The 1089 boost was related to the review samples, and I'm thinking they were locked at that boost clock (iirc). 1089 is the advertised "minimum boost." Most people with stock TXs boost to 1200 (or very near it) from what I have seen.

Notice how the reference models in your link don't show a boost clock.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> If your TXs reference models? Or are they SC, G1 Gaming, Strix ect. That changes everything. He is correct reference model has a boost clock of 1089 as does the 980ti reference.
> 
> I realize that all TXs are technically reference models, however there is still the factory OCed versions that will show a higher clock speed.
> 
> the models that have over a 1200 boost clock out of the box are the SC, HC, Hybrid all 3 by EVGA. https://www.techpowerup.com/gpudb/2632/geforce-gtx-titan-x.html


The advertised boost clock is not the actual speed they boost to. Just because it is listed at 1089 doesn't mean it doesn't boost higher. Many (most?) will boost into the 1200s out of the box.


----------



## Cyber Locc

Quote:


> Originally Posted by *Woundingchaney*
> 
> My TXs are stock Nvidia reference models with no bios flash.
> 
> The 1089 boost was related to the review samples and Im thinking they were locked at that boost clock (iirc). Most people with stock TXs boost to 1200 (or very near it) from what I have seen.


Ya well no, EVGA says differently on the Amazon listing: "EVGA's 24/7 Technical Support; Base Clock: 1000 MHz / Boost Clock: 1075 MHz"

That isn't a review sample.

Here are the official specs from Nvidia and the Titan X owners club.

Not to mention, if what you are saying is true, EVGA wouldn't even bother making an SC.

I don't know man, I don't own a Titan X, but all the concrete info says differently. No one is saying it cannot boost higher; however, you have to make changes.
Quote:


> Originally Posted by *Forceman*
> 
> The advertised boost clock is not the actual speed they boost to. Just because it is listed at 1089 doesn't mean it doesn't boost higher. Many (most?) will boost into the 1200 out of the box.


I would love to see a link for that, as that's not how the 980 Ti works: if I set my boost clock to 1300 it will not go over 1300.

Or are you saying the clock isn't actually set to 1089, it just lists that as the boost, while it's really set over 1200?


----------



## Woundingchaney

Quote:


> Originally Posted by *Cyber Locc*
> 
> Ya well no, EVGA is saying differently for sale on amazon. "EVGA's 24/7 Technical Support; Base Clock: 1000 MHz / Boost Clock: 1075 MHz"
> 
> That isn't a review sample.
> 
> Here is the offical specs from Nvidia and the Titan X owners club,
> Not to mention if what you are saying is true Evga wouldn't even bother making a SC.
> 
> I dont know man I dont own a Titan X, however all the concrete info is saying differently. No one is saying it cannot boost higher, however you have to make changes.


The official boost spec represents the minimum boost (that is the only guarantee). Cards inherently vary in their boost capabilities, but the advertised boost is the lowest figure the manufacturer has to supply to the customer. I'm thinking the review samples were locked at 1089 (iirc).

Honestly, there is an entire thread for TX owners; 1200 boost out of the box is very common.


----------



## Cyber Locc

Quote:


> Originally Posted by *Woundingchaney*
> 
> Official specs in regards to boost represents the minimum boost (that is the only guarantee). Cards inherently vary in their boost capabilities, but when boost is advertised that is the lowest metric that the manufacturer has to supply to the customer. Im thinking the review samples locked at 1089 (iirc).
> 
> Honestly there is an entire thread for TX owners, 1200 boost out of the box is very common.


So then what is the boost clock actually set to? I was just in that thread I linked you info from; I didn't see anyone saying what you are. Do you have Precision X, and can you tell us what your boost clock is actually set to? I am curious now.

I found what you are talking about; however, the 980 probably does the same, right? So in that regard what he says still semi-applies. I still think he is wrong on the whole, though, about a 50% speed increase.

Found this too: "so GTX Titan X won't clock quite as high as GTX 980 and the overall performance difference on paper is closer to 33% when comparing boost clocks." http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review

Which is what Zealord and I were trying to tell him.


----------



## SoccerNinja

I want to get the Predator or the FreeSync version ultrawide, but I don't know which one to get and was gonna choose depending on who has the better performing cards:
Pascal or Polaris.
What do you guys think?
I currently have an R9 390.
I'm nowhere near team red or green; I just want whatever will be better long term and for the money.


----------



## Bogga

Forgive me if this has been said over and over... but the size of the PCB is rumoured to be smaller than on the previous versions, right?

It just hit me that all my plans might go straight out the window if the x80 Ti/x80 Titan is as big as the 980 Ti/Titan X.


----------



## EightDee8D

Quote:


> Originally Posted by *SoccerNinja*
> 
> I want to get the predator or freesync version ultrawide but I don't know which one to get and was gonna choose depending on who has better performing cards
> Pascal or Polaris
> What do you guys think?
> I currently have a r9 390
> Im no where near team red or green I just want whatever will be better long term and for the money.


Just wait and watch. Nobody knows who's gonna be faster, and those monitors won't go off sale either.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Ya well no, EVGA is saying differently for sale on amazon. "EVGA's 24/7 Technical Support; Base Clock: 1000 MHz / Boost Clock: 1075 MHz"
> 
> That isn't a review sample.
> 
> Here is the offical specs from Nvidia and the Titan X owners club, .
> 
> Not to mention if what you are saying is true Evga wouldn't even bother making a SC.
> 
> I dont know man I dont own a Titan X, however all the concrete info is saying differently. No one is saying it cannot boost higher, however you have to make changes.
> I would love to see a link for that, as thats not how the 980ti works, if I set my boost clock to 1300 it will not go over 1300.
> 
> Or are you saying the clock isnt set to 1089 it just says that is the boost? but its actually set over 1200?.


http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/17

Look at "Max Boost Clock". 1215 right in that review.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/17
> 
> Look at "Max Boost Clock". 1215 right in that review.


Ya, I found that already, and that is why I struck out my other comment about it. Kind of dumb marketing there lol.


----------



## Woundingchaney

Quote:


> Originally Posted by *Cyber Locc*
> 
> So then what is the boost clock actually set to? I was just in that thread I linked you info from; I didn't see anyone saying what you are. Do you have Precision X, and can you tell us what your boost clock is actually set to? I am curious now.
> 
> I found what you are talking about; however, the 980 probably does the same, right? So in that regard what he says still semi-applies. I still think he is wrong on the whole, though, about a 50% speed increase.
> 
> Found this too: "so GTX Titan X won't clock quite as high as GTX 980 and the overall performance difference on paper is closer to 33% when comparing boost clocks." http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review
> 
> Which is what Zealord and I were trying to tell him.


Yes the 980 operates under the same boost scenario in this regard.

Yeah, the TX was about a 25-30% performance increase over the 980 at launch with everything stock. I personally do not know what the current metric is.


----------



## Cyber Locc

Quote:


> Originally Posted by *Woundingchaney*
> 
> Yeah the TX is about 25-30% performance increase over the 980 at launch with everything stock. I personally do not know what the current metric is.


Pretty much the same, from what I have seen on TPU. They no longer include the Titan X, but a 980 Ti is 3% slower clock for clock, so you can use the Matrix review to get an idea.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Ya, I found that already, and that is why I struck out my other comment about it. Kind of dumb marketing there lol.


Yeah, I agree. That's why some people on this forum were calling it a "cheat" when the GTX 680 launched. I believe it was the first card to have boost, and if you didn't know to monitor clocks, you would think the GPU was much faster at Nvidia's advertised stock speed than it actually was.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> Yeah, I agree. That's why some people on this forum were calling it a "cheat" when the GTX 680 launched. I believe it was the first card to have boost and if you didn't know to monitor clocks, you would think that the gpu was much faster at Nvidia's advertised stock speed than it actual was.


Ya, I personally don't like the boost clock; I bake my clocks in. Did it with the 6 series and will with my Ti soon. Didn't have a 7 series, so there's that lol, my trip to red town.


----------



## criminal

Quote:


> Originally Posted by *Cyber Locc*
> 
> Ya I personally dont like the BC, I bake my clocks in did with the 6 series and will with my TI soon. Didn't have a 7 series so theres that lol, my trip to red town.


Yeah, me too. I hate boost clock and like to know what my overclock is at all times.


----------



## Woundingchaney

Quote:


> Originally Posted by *Cyber Locc*
> 
> Pretty much the same, from what I have seen on TPU. They no longer include the Titan X, but a 980 Ti is 3% slower clock for clock, so you can use the Matrix review to get an idea.


The approx. 25-30% performance-improvement metric has become rather standardized at this point. It has proven time and again to be an adequate performance increase to make the next model relevant for consumers.

last gen x80 Ti -> new gen x80: approx 10% performance difference

new gen x80 -> new gen x80 Ti: approx 25-30% performance difference
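Those two steps compound; a quick Python check of what they imply for the full generation jump (the percentages are the post's rough estimates, not measurements):

```python
# Chain the per-step estimates to get last-gen x80 Ti -> new-gen x80 Ti.

def compound(*gains: float) -> float:
    """Multiply a chain of relative performance steps."""
    total = 1.0
    for g in gains:
        total *= g
    return total

low = compound(1.10, 1.25)   # +10% step, then +25% step
high = compound(1.10, 1.30)  # +10% step, then +30% step
print(f"{low:.3f}x to {high:.2f}x")  # ~1.375x to ~1.43x overall
```

So under this pattern, a new x80 Ti would land roughly 38-43% above the previous x80 Ti.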


----------



## magnek

Quote:


> Originally Posted by *Clocknut*
> 
> check out those 192bit kepler, Fermi......


Segmented memory
needs to die
kthxbai


----------



## Buris

Quote:


> Originally Posted by *SoccerNinja*
> 
> I want to get the predator or freesync version ultrawide but I don't know which one to get and was gonna choose depending on who has better performing cards
> Pascal or Polaris
> What do you guys think?
> I currently have a r9 390
> Im no where near team red or green I just want whatever will be better long term and for the money.


Realistically, you could spend what you spent on an R9 390 ($300-350) and expect a 20-30% performance boost once both Polaris and Pascal are out, roughly around July.

If you want to step up to what will probably be the Fury 2 or Titan x80, you'll likely have to wait another six months. But as someone else said, you'll have to wait until then to find out just who has the better chips.


----------



## SoccerNinja

Quote:


> Originally Posted by *Buris*
> 
> realistically, you could spend what you spent on an r9 390 (300-350$) and expect a 20-30% performance boost when both polaris and pascal are brought out- roughly around July
> 
> if you wanted to step up to what will probably be the Fury 2 or Titan x80, you'll likely have to wait another 6 months. But as someone else said, you'll have to wait until then to find out just who has the better chips


I would go for the high-end card around 2017.
I think I might just wait to see the cards then and get an even better monitor that comes out by then.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *criminal*
> 
> Yeah me too. I hate boost clock and like to be able to know what my overclock is all the time.


Coming from 7970's to my Titans was an absolute struggle with sanity for me. I hated "boost" clocks so much, especially in the early days of Titan because overclocking was so finicky and figuring out actual clock speeds for bench runs was massively annoying (see the 7970 vs Titan thread in my sig where I explain how boost clocks on the Titans made the testing methodology such a pain). Nowadays with my flashed bios I just run the stock clocks of 1006 MHz and set static overclocks between 1200-1330 MHz when needed.


----------



## guttheslayer

Quote:


> Originally Posted by *Cyber Locc*
> 
> Umm no its really not going to surprise us. Clock for Clock a Titan X is 4% faster than a 980ti. Yet a G1 gaming 980ti with 1500mhz+ boost is only 48% faster than a 980 stock. He was correct clock for clock the titan X is about 30-35% faster than a 980.
> 
> A reference 980ti and a reference Titan X are the same clocks, they dont illustrate what you are saying at all. Furthermore, the shader difference is 9% on the Titan X vs a 980ti however the actual difference is only 3-4%.


I am comparing GTX 980 and Titan X clock speeds, not vs the 980 Ti. There is no doubt the GTX 980 is clocked faster.

When the first review came out, the Titan X at reference was compared to a GTX 980 (not a 980 Ti, because the TX came out first) and it showed a clear 33-36% lead.

The rule of diminishing returns definitely applies, but I am pretty sure it is not so badly attenuated. And since these are different architectures, the cores can't be compared directly, except that Pascal is probably better than Maxwell in clock-for-clock per-core performance.

Maxwell performs up to 35% faster per core compared to Kepler (even though Kepler was gimped initially, Maxwell is still faster per core even now). I don't see why this won't happen for Pascal vs Maxwell.

*P.S.: The GM204 was twice as fast as GK104, even on the same 28nm node, so I expect the same for GP104 vs GM204, since it's on a newer node even.*


----------



## Raghar

Quote:


> Originally Posted by *iLeakStuff*
> 
> Guys,
> 
> Benchlife said 8GB for GP104
> The 3DMark11 entries say 8GB
> 
> You cannot do 8GB and 384bit.
> Its either 256bit or 512bit. 512bit not happening on a small die like GP104


My GTX 660 basically has two memory controllers:
-xxx
-xxx
-xx

If you access the first 1500 MB you use the 192-bit one. If you access the last two memory chips, you use the 128-bit one. As long as the upper part is used for data that doesn't require high-speed RAM, it's completely fine. BTW, on 32-bit drivers the segmentation basically didn't matter, because all RAM was treated as segments anyway to allow 4GB+ cards on a 32-bit OS.
64-bit drivers use a fence to prevent problems with different access speeds within one array.

Yes, they should have released it with 3GB; it would have aged much better. But they had faster cards with only 2GB and didn't want to compete with them.
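Taking the 3/3/2 chip layout described above at face value, the segment sizes can be sketched like this (Python, purely illustrative; the uniform 256 MB chip size is an assumption chosen so the arithmetic comes out to a 2 GB card):

```python
# Sketch of the segmented layout described above: chips per 64-bit controller,
# per the post's 3/3/2 description. Up to the shallowest depth, every
# controller can serve an access, so the bus runs at full width; past that
# point only the controllers holding extra chips remain.

CHIP_MB = 256  # hypothetical uniform chip size for illustration

controllers = [3, 3, 2]        # chips on each 64-bit controller
fast_depth = min(controllers)  # every controller is populated this deep

fast_mb = fast_depth * len(controllers) * CHIP_MB             # full-width segment
slow_mb = sum(c - fast_depth for c in controllers) * CHIP_MB  # leftover segment
slow_width = sum(64 for c in controllers if c > fast_depth)   # width of leftover

print(fast_mb, slow_mb, slow_width)  # 1536 512 128
```

That reproduces the split from the post: roughly the first 1.5 GB at the full 192 bits, with the last 512 MB reachable only over 128 bits.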


----------



## sinholueiro

Quote:


> Originally Posted by *Raghar*
> 
> My GTX 660 has basically two memory controllers.
> -xxx
> -xxx
> -xx
> 
> If you want to access first 1500 MB you use 192-bit one. If you want to access the last two memory sticks, you use 128-bit one. As long as the highest part is used for data that doesn't require high speed RAM, it's completely fine. BTW using it on 32 bit drivers basically meant the segmented RAM didn't matter, because all RAM was used as a segments to allow 4GB+ cards in 32 bit OS.
> 64 bit drivers are using fence to prevent problems with different access speeds on one array.
> 
> Yes they should release it with 3GB, it would be much more lasting. But they had faster cards with only 2GB and they didn't want to compete with them.


In fact you have six 32-bit controllers (192 bits total), and on 2 of those 6 controllers there are 2 memory chips (clamshell), with one chip on each of the other 4 controllers. There's no separate 192- and 128-bit bus.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> I am a little confused by what you are saying. Also TPUs card is not the same as the Retail SC, is the issue with anything they say.
> 
> What they have is in fact a reference Nvidia PCB. Here it is as a matter of fact.
> 
> Okay and here is a picture of my actual retail SC TI.
> 
> and the EVGA in the front,
> 
> And then we can go back to EVGA on twitter they are not the same exact components, they have been changed however not in a way that affects cooling solutions. TPU is wrong there board and the actual board are completely different so anything they have to say is hogwash.
> Yes I think this is the issue, the early SCs were using Ref PCB then before going to retail that has changed to EVGAs own PCB with small changes. TPUs SC is not the same as my SC.
> 
> I am also not stating that the differences are huge. the components may very well be cheaper to save costs, but I find that pointless why make your own PCB and gimp it.
> 
> Okay later today I will remove my heatsink and take pictures of my PCB front as that is the only way we are getting to the bottom of this


I believe you.

Doubt the small changes really make any difference though, as they have to fit reference coolers/blocks. +rep though


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I believe you.
> 
> Doubt the small changes really make any difference though, as they have to fit reference coolers/blocks. +rep though


As far as overclocking or performance, I am sure it makes very little to no difference. SC cards do seem to be binned better in terms of ASIC quality from what I have seen, but that is neither here nor there. I am not too familiar with ASIC quality, but from what I have seen my SC isn't great at 74%; it isn't bad either?

Anyway, with the current overclocking methods I do not think even Kingpins make much of a difference outside of LN2. I meant all this as more that the board is better quality for the long term. As was said, reference boards are just subpar.

What could they really change? Like you said: better soldering, better-sourced parts, thicker PCB, very minor changes. To that end it doesn't affect much on a performance scale, but the quality is better. Sorry, things came up yesterday and then I completely spaced it; I will get the cooler off ASAP and take some pics, as I am very curious myself now.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Cyber Locc*
> 
> As far as overclocking or performance, I am sure it makes very little to no difference. SC cards do seem to be binned better in terms of ASIC quality from what I have seen, but that is neither here nor there. I am not too familiar with ASIC quality, but from what I have seen my SC isn't great at 74%; it isn't bad either?
> 
> Anyway, with the current overclocking methods I do not think even Kingpins make much of a difference outside of LN2. I meant all this as more that the board is better quality for the long term. As was said, reference boards are just subpar.
> 
> What could they really change? Like you said: better soldering, better-sourced parts, thicker PCB, very minor changes. To that end it doesn't affect much on a performance scale, but the quality is better. Sorry, things came up yesterday and then I completely spaced it; I will get the cooler off ASAP and take some pics, as I am very curious myself now.


I think you confuse people when you say SC since it can refer to both:

http://www.newegg.com/Product/Product.aspx?Item=N82E16814487139
http://www.newegg.com/Product/Product.aspx?Item=N82E16814487141


----------



## Cyber Locc

Quote:


> Originally Posted by *xxdarkreap3rxx*
> 
> I think you confuse people when you say SC since it can refer to both:
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16814487139
> http://www.newegg.com/Product/Product.aspx?Item=N82E16814487141


That is true; I mean the SC ACX when I say SC. So the second one. I will start saying SC+ from now on, or what should I say to refer to that SC? Why do they have to do that lol.


----------



## JRSauto

Quote:


> Originally Posted by *cjc75*
> 
> Dude...
> 
> I dont _need_ a 980Ti to max settings at 1440p... I'm already playing max settings @ 1440p just fine, with my GTX 770 FTW 4GB card... though mind its not over 60hz...
> 
> Why the heck do I need a $600+ video card that my $300 video card does the same thing just fine; oh.. ok.. well gee I pay an extra $300 to use a little less power. Big deal. Turn off all the lights in my apartment and turn off the A/C for a few hours, and I save more power then a 980Ti would save me... and I'm still gaming away on max settings @ 1440p, and my FPS are just fine...


Completely amazed that no one called you out on this utter and complete bull... As an owner of a 980 Ti clocked to a 1526 MHz core, I know first hand you are a liar. You are absolutely NOT maxing out every game at 1440p unless you enjoy a slide show. My previous GPU, a 780, couldn't even do that, and that GPU is a lot stronger than your glorified GTX 680.


----------



## Bogga

Quote:


> Originally Posted by *JRSauto*
> 
> Completely amazed that no one called you out on this utter and complete bull... As an owner of a 980TI clocked to 1526mhz core I know first hand you are a liar. You are absolutely NOT maxing out every game at 1440P res unless you enjoy a slide show. My previous GPU a 780 couldnt even do that and that GPU is a lot stronger than your glorified gtx 680.


These kinds of comments are common on sites where people try to sell their crappy computers at an absurd price.

"Maxes everything!!!111oneone"

Yeah, if everything is Minesweeper at 800x600.


----------



## JRSauto

Quote:


> Originally Posted by *Bogga*
> 
> These kinds of comments are common when you look at different sites where people try to sell their crappy computers at an obscure price.
> 
> "Maxes everything!!!111oneone"
> 
> Yeah, if everything is Minesweeper in 800x600


lol


----------



## Cyber Locc

Quote:


> Originally Posted by *JRSauto*
> 
> Completely amazed that no one called you out on this utter and complete bull... As an owner of a 980TI clocked to 1526mhz core I know first hand you are a liar. You are absolutely NOT maxing out every game at 1440P res unless you enjoy a slide show. My previous GPU a 780 couldnt even do that and that GPU is a lot stronger than your glorified gtx 680.


But can it play Minesweeper at 4K?


----------



## Raghar

Quote:


> Originally Posted by *sinholueiro*
> 
> In fact you have 6 32 bit controllers (192 bits) and in 2 of that 6 controllers, you have 2 chips of memory (memory clamshell), being one chip in the other 4 controllers. There's no 192 and 128 bus thing.


So let's put it more technically. As you can see from that diagram, there are three 64-bit memory controllers, but 8 memory chips: three chips are attached to each of the first two controllers, and the last one has only two. Logic says they can be operated so that when you write a 1 MB array, at least 128 bits are used simultaneously. Basically, the third controller is not active in some situations.

(Looking at this as two attached controllers that can't work simultaneously may explain it better to some posters.)


----------



## jdstock76

Quote:


> Originally Posted by *JRSauto*
> 
> Completely amazed that no one called you out on this utter and complete bull... As an owner of a 980TI clocked to 1526mhz core I know first hand you are a liar. You are absolutely NOT maxing out every game at 1440P res unless you enjoy a slide show. My previous GPU a 780 couldnt even do that and that GPU is a lot stronger than your glorified gtx 680.


I max everything out @ 1440p with my 980 Ti/980 Tis. It's feasible that in certain games he gets better-than-average performance @ 1440p with a 770, though highly, and I stress highly, unlikely.


----------



## JRSauto

Quote:


> Originally Posted by *jdstock76*
> 
> I max everything out @ 1440p with my 980ti/980ti's. It's feasible that perhaps certain games he has better than average performance @ 1440p with a 770, though highly and I stress highly unlikely.


Yes, 1440P is perfect for my 980 Ti, but this is the card that was really needed for 1440P, and no reworked GTX 680 is going to max all games at 1440P unless, as I stated, you enjoy slide shows.


----------



## Klocek001

Quote:


> Originally Posted by *JRSauto*
> 
> Completely amazed that no one called you out on this utter and complete bull... As an owner of a 980TI clocked to 1526mhz core I know first hand you are a liar. You are absolutely NOT maxing out every game at 1440P res unless you enjoy a slide show. My previous GPU a 780 couldnt even do that and that GPU is a lot stronger than your glorified gtx 680.


It's common knowledge you can max out any game on a 960 4GB @ 1440p; I don't think any game would go over the 4GB limit.
Whether the game is smooth is another thing, very subjective. For me it's 70+ fps, for him it's 20 fps.


----------



## JRSauto

Quote:


> Originally Posted by *Klocek001*
> 
> It's common knowledge you can max out any game on 960 4GB @1440p, I don't think any would game go over the 4GB limit.
> Whether the game is smooth is another thing,very subjective. For me it's +70 fps, for him it's 20 fps.


I didn't say he could not max it out; I said it would be a slide show in many modern games while he pretends he is getting 60fps. Also, there are a handful of games that will crush a 960 at 1440P when maxed out.


----------



## Cyber Locc

Quote:


> Originally Posted by *Klocek001*
> 
> It's common knowledge you can max out any game on 960 4GB @1440p, I don't think any would game go over the 4GB limit.
> Whether the game is smooth is another thing,very subjective. For me it's +70 fps, for him it's 20 fps.


But can it play Minesweeper on Max?
Quote:


> Originally Posted by *JRSauto*
> 
> Lets see you with a 960 play any of the latest Assassins creed games maxed at 1440P while keeping +70fps.. Or how about Rottr where your 4gb's wont cut it for Ultra Textures and even with Textures at high and everything else maxed at 1440P with Rottr I can guarantee you will be in the mid to high 30's. People seriously need to stop lying....


Well, mid-to-high 30s is fine; if you have G-Sync that's as smooth as butter. I get the theory: I can play at 30 FPS smooth on a 960, so buy a $1000 monitor and a $200 GPU.

My Dell GX755 with a Core 2 Duo and integrated graphics can max any game at 1440p as well, want to see? Sure, it may only get 1 fps, but it will launch the game.


----------



## Klocek001

Nah, 30 fps is bad, G-Sync or not. When I bought my G-Sync display, 50 fps was kinda smooth, but not any lower. I quickly gave up my desire to "max" games anyway; I prefer to turn down some things or use the high preset instead of ultra, which usually doubles the fps and still looks nice enough. I would be glad if GP104 brought over a 30% improvement over my 1.5GHz 980 Ti; going from ~75 fps to 100+ fps avg would be a nice (... can't find a word) until GP100 can utilize all them 144 Hertzes in my display.


----------



## cjc75

Quote:


> Originally Posted by *JRSauto*
> 
> Completely amazed that no one called you out on this utter and complete bull... As an owner of a 980TI clocked to 1526mhz core I know first hand you are a liar. You are absolutely NOT maxing out _*every*_ game at 1440P res unless you enjoy a slide show. My previous GPU a 780 couldnt even do that and that GPU is a lot stronger than your glorified gtx 680.


I never said anything about "maxing out *every* game"...

I also clearly stated that I was playing at 60Hz.

I am enjoying 60-75 fps in all the games that I currently have maxed out. I wouldn't call that a 'slide show'... and if your little 780 couldn't do it, then clearly you weren't utilizing it properly. Also, yours probably wasn't backed up by running your games off a Samsung 950 Pro m.2 SSD.

Regardless of what GPU you have, there is an obvious difference in running a game off a HDD compared to running it off an SSD.

I get double the FPS in several of my games when running them off my 950 Pro, as opposed to running them off a Western Digital Black 6.0 Gb/s HDD.

Getting good performance out of your games is a lot more than just having a good GPU.


----------



## zealord

Quote:


> Originally Posted by *cjc75*
> 
> I never said anything about "maxing out *every* game"...
> 
> I also clearly stated that I was playing at 60hz.
> 
> I am enjoying 60 - 75 fps in all the games that I currently have maxed out. I wouldn't call that a 'slide show'.. and if your little 780 couldn't do it, then clearly you weren't utilizing it properly... also yours probably wasn't backed up by running your games off a Samsung 950 Pro m.2 SSD.
> 
> Regardless of what GPU you have, there is an obvious difference in running a game off a HDD compared to running it off an SSD.
> 
> *I get double the FPS on several of my games, when running them off my 950 Pro, as opposed to running them off a Western Digital Black 6.0gbs HDD.*
> 
> Getting good performance out of your games is a lot more then just having a good GPU.


I don't want to call you a liar, but I have never had that big a difference.

I tried several games including The Witcher 3, Rise of the Tomb Raider, Metal Gear Solid GZ and TPP, DotA 2, The Evil Within, Starcraft 2, GTA V and many more.

I tested them on both my SSD and HDD and there was no big difference. In most games not even a small one. I got faster loading screens, but that's it.


----------



## JRSauto

Anyway, to everyone else: my old 780 was a beast, especially with the overclock, but there were certain games maxed out at 1440P where it could not hold a steady 60fps ("memory unrelated"), just not enough raw GPU power.

The 980 Ti (which I currently own) is ideally the perfect card for maxing games out at 1440P with a chance of keeping 60fps in most of them.


----------



## cjc75

Quote:


> Originally Posted by *zealord*
> 
> I don't want to call you a liar, but I have never had that big a difference.
> 
> I tried several games including The Witcher 3, Rise of the Tomb Raider, Metal Gear Solid GZ and TPP, DotA 2, The Evil Within, Starcraft 2, GTA V and many more.
> 
> I tested them on both my SSD and HDD and there was no big difference. In most games not even a small one. I got faster loading screens, but that's it.


Think what ya want.

I play GTA V as well, and it runs slightly choppy off a HDD. In fact there are discussions about it on the GTA Forums, about how the game was in fact NOT designed to run off HDDs but was instead designed specifically for SSDs; only that R* doesn't/won't talk about it.

Running the game off a HDD, I experience excessive lag and choppiness, especially in GTA Online while driving around at high speeds, often falling behind by as much as several street blocks, simply because the HDD just can't keep up with the game's data.

But running it off the SSD, I experience smooth and extremely fast gameplay with never a hiccup.

I won't, however, claim that GTA V is one of the games where I've seen nearly double the FPS on SSD compared to HDD; but I have seen an obvious and remarkable improvement in performance. It is, however, one of the games that I am running at nearly maxed settings... I will not claim it is 100% maxed. I never said _*"every game"*_ as some others here seem to think. But I will say that I probably have GTA V running at about 90-95% maxed settings.


----------



## magnek

An SSD won't improve your FPS, only your loading times. Period. This is non-debatable.
Quote:


> Originally Posted by *Klocek001*
> 
> nah 30 fps is bad, g-sync or not. when I bought my g-sync display 50 fps was kinda smooth, but not any lower. I quickly gave up my desire to "max" games anyway, I prefer to turn down some things or use the high preset instead of ultra, that usually doubles the fps and still looks nice enough. I would be glad if GP104 would bring over 30% improvement over my 1.5GHz 980Ti, going from ~75 fps to +100 fps avg would be a nice (... can't find a word) until GP100 could utilize all them 144 Hertzes in my display


Yeah G-Sync does jack all if you're constantly in the 40 FPS region. I was let down soooooooooo bad because AnandTech made it sound like G-Sync could make 40 FPS feel like 60 FPS. What a big fat lie, 40 FPS still plays like 40 FPS and is still a horrible experience.


----------



## zealord

Quote:


> Originally Posted by *cjc75*
> 
> Think what ya want.
> 
> I play GTA V as well, and it runs slightly choppy off a HDD. In fact there are discussions about it on the GTA Forums; about how the game was infact NOT designed to run off HDD's but was instead designed specifically for SSD's; only that R* doesn't/won't talk about it.
> 
> But running the game off a HDD, I experience excessive lag and choppiness, especially in GTA Online while driving around at high speeds; I experience excessive lag... often falling behind by as much as several street blocks as a result, simply because the HDD just cant keep up with the games data.
> 
> But running it off the SSD, and I experience a smooth and extremely fast game play experience with never a hiccup.
> 
> I won't however, claim that GTA V is one of the games that I've seen nearly double FPS when running on SSD compared to HDD; but I have seen an obvious and remarkable improvement in game performance... It is however one of the games that I am running at nearly maxed Settings... I will not claim it is 100% maxed. I never said _*"every game"*_ as some others here seem to think I said. But I will say that I have GTA V probably running at about 90 - 95% maxed settings.


Maybe your HDD was broken? Have you checked its health status?

Many trustworthy people have done several reviews of HDD vs SSD performance in games and they never could witness any significant improvements like you are claiming.

Source (one of many, but I think one is enough) : http://www.hardocp.com/article/2013/12/10/hdd_vs_ssd_real_world_gaming_performance/5
Quote:


> *In terms of raw video game performance our conclusion is that upgrading to an SSD made absolutely no difference in gameplay performance*. Honestly, we did not expect that it would, hence why we held back for so long on upgrading to SSDs. In every game we tested the performance fell within the margin of error for a realworld gameplay run-through. We tested some very demanding games as well such as Battlefield 4 and ARMA 3. *However, no game showed any performance advantage with the SSD versus the HDD. The framerates were the same, the frame consistency was the same.*
> 
> *Here is what was better with the SSD, as you might guess; load times*. Loading each game was significantly faster on the SSD. Transitioning maps during gameplay was also significantly faster on the SSD. Loading times were improved, and we had a better experience overall simply because game data loaded faster.
> 
> *However, those load times do not translate into frames per second differences while gaming. Upgrading to the SSD has not given us a new performance profile in games, nor has it changed the performance we've shown to you in past reviews using an HDD*.


I hate to call you out like that, but OCN is a site where spreading lies should not be tolerated.

You are clearly lying for whatever reason.
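The distinction in that [H] quote can be reduced to a toy model: drive speed sits on the load path, while frame rate is set by per-frame GPU/CPU work. A minimal sketch with made-up numbers (the 16 ms frame and 2 GB level are assumptions, not measurements):

```python
# Toy model: storage speed affects one-time loads, not per-frame work.
# All numbers are made up for illustration.
FRAME_WORK_MS = 16.0     # assumed GPU+CPU time to render one frame
LEVEL_SIZE_MB = 2000     # assets read once when a level loads

def load_time_s(read_mb_per_s):
    """Seconds to stream the level's assets off the drive."""
    return LEVEL_SIZE_MB / read_mb_per_s

def fps():
    """Frames per second. Rendering touches assets already resident in
    RAM/VRAM, so the drive is not on the per-frame critical path."""
    return 1000.0 / FRAME_WORK_MS

for name, speed in [("HDD", 150), ("SATA SSD", 500), ("NVMe SSD", 2500)]:
    print(f"{name:9s} load: {load_time_s(speed):5.1f}s   fps: {fps():.1f}")
```

The load column shrinks dramatically with a faster drive; the fps column does not move, which is exactly what the [H] run-throughs measured.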


----------



## cjc75

Quote:


> Originally Posted by *JRSauto*
> 
> HAHAHAHAHA! OMG!!...........
> 
> 
> 
> 
> 
> 
> 
> SSD drives do not give "I REPEAT" do not give you more frame rates! They improve loading times and that is about the end of it! I guess you have some magical SSD drive that has built in GPU abilities that no one else has! HAHAHA Now I truly know what kind of a disillusion person I am dealing with


I never said anything about an SSD.

SSD's use the SATA interface.

I said an *m.2* SSD; and not just any m.2 SSD...

A Samsung *950 Pro* m.2 SSD.

There IS a difference, and more so a difference in performance.

So, yeah... I guess I DO have a magical SSD, considering not EVERYONE ELSE can run this type of m.2 SSD, since it cannot run in EVERY m.2 slot. You have to have a special m.2 slot to run it... a special m.2 slot that will not accept any other type of m.2 drive.

I see an improvement in performance in several games running off the 950 Pro, compared to when I move them to my Samsung 850 Evo *m.2* SSD, which runs off a separate and slower interface. I see an even more remarkable improvement running them off the 950 Pro compared to running them off a slower HDD.

Simple fact: many game companies ARE discreetly optimizing their games for SSDs over HDDs.
Quote:


> Originally Posted by *JRSauto*
> 
> Thats great, I still like playing games from 2005 myself.


Gee, thank you so much for your infinite wisdom on the age of games like GTA V, Fallout 4, and AC Black Flag/Rogue/Unity... to name a few.

You're just so smart. I had no idea those specific games were sooooo old.


----------



## cjc75

Quote:


> Originally Posted by *zealord*
> 
> Maybe your HDD was broken? Have you checked it health status?
> 
> Many trustworthy people have done several reviews of HDD vs SSD performance in games and they never could witness any significant improvements like you are claiming.
> 
> Source (one of many, but I think one is enough) : http://www.hardocp.com/article/2013/12/10/hdd_vs_ssd_real_world_gaming_performance/5
> I hate to call you out like that, but OCN is a site where spreading lies should not be tolerated.
> 
> You are clearly lying for whatever reason.


...and again.. those "tests" you quote...

Are based on *SATA* SSD's...

Not m.2 or *m.2 Ultra* SSD's...

Different interface.

Different performance.


----------



## Dargonplay

Quote:


> Originally Posted by *cjc75*
> 
> I get double the FPS on several of my games, when running them off my 950 Pro, as opposed to running them off a Western Digital Black 6.0gbs HDD.


This made my day








Quote:


> Originally Posted by *zealord*
> 
> Maybe your HDD was broken? Have you checked it health status?


Even if his HDD is broken he would only have issues where assets aren't being loaded to the RAM or GPU Memory, assets missing, terrain Missing, small stutters when changing maps or scenes.

There have been some instances where, I kid you not, I have disconnected my HDD while playing Skyrim and the game was still running perfectly fine, until you tried to move somewhere else; then hell would break loose.

I could explain why cjc75 is wrong and why we're laughing, but I'll leave that to someone else.


----------



## Klocek001

Quote:


> Originally Posted by *Dargonplay*
> 
> This made my day


just imagine if he had two in RAID0 ....

dear God....


----------



## EightDee8D

Quote:


> Originally Posted by *cjc75*
> 
> ...and again.. those "tests" you quote...
> 
> Are based on *SATA* SSD's...
> 
> Not m.2 or *m.2 Ultra* SSD's...
> 
> Different interface.
> 
> Different performance.


that means nothing. it will give you better load times and that's it. even if you run it on m.99 ultra uber sata with samsung 99999990 ssd running at 2-3 tbps.

you have no idea what you're talking about or just trolling for luls. period.


----------



## cjc75

Quote:


> Originally Posted by *EightDee8D*
> 
> that means nothing. it will give you better load times and that's it. even if you run it on m.99 ultra uber sata with samsung 99999990 ssd running at 2-3 tbps.
> 
> you have no idea what you're talking about or just trolling for luls. period.


If you say so.

You guys enjoy your $600 video cards and cheap Sata SSD's.

I'll enjoy my 950 Pro bliss.


----------



## Dargonplay

Quote:


> Originally Posted by *Klocek001*
> 
> just imagine if he had two in RAID0 ....
> 
> dear God....




Oh yes, I can imagine as I have two 1TB SSDs on RAID 0, it's like driving a BMW while mounted on top of a Rainbow Defecating Unicorn, my FPS when playing on SSDs RAID 0 have Quadrupled compared to my HDD cus SSD companies obviously optimizing DEM GAMESSSS, but they're doing this quietly because it's Black Magic and they don't want the consumer to know








Quote:


> Originally Posted by *zealord*
> 
> (anyone else weird out by how many "wrong" people have been on OCN lately? Just recently there was a guy doing math wrong and every single person on this forum told him he was wrong, but he was so stubborn that he probably still doesn't get it.)


Spring Break is my only guess.


----------



## zealord

Quote:


> Originally Posted by *cjc75*
> 
> ...and again.. those "tests" you quote...
> 
> Are based on *SATA* SSD's...
> 
> Not m.2 or *m.2 Ultra* SSD's...
> 
> Different interface.
> 
> Different performance.


You are wrong.

That is not how it works. Please educate yourself. You are probably confusing gaming performance with SSD read/write performance.

You are definitely wrong and everyone in this thread is going to tell you that you are wrong.

(anyone else weirded out by how many "wrong" people have been on OCN lately? Just recently there was a guy doing his math wrong, and every single person on this forum told him he was wrong, but he was so stubborn that he probably still doesn't get it.)


----------



## Klocek001

Quote:


> Originally Posted by *Dargonplay*
> 
> 
> 
> Oh yes, I can imagine as I have two 1TB SSDs on RAID 0, it's like driving a BMW while mounted on top of a Rainbow Defecating Unicorn, my FPS when playing on SSDs RAID 0 have Quadrupled compared to my HDD cus SSD companies obviously optimizing DEM GAMESSSS, but they're doing this quietly because it's Black Magic and they don't want the consumer to know


but the point is you don't have the m.2 950 Pro

edit: the m.2 *ultra* 950 pro, sorry. I think it's this ultra making the difference.


----------



## EightDee8D

Quote:


> Originally Posted by *Klocek001*
> 
> but the point is you don't have the m.2 950 Pro


ohhhh.....


----------



## Klocek001

@cjc if anything, those two m.2 SSDs on a Z97/4690K are hogging the PCIe lanes of your card.


----------



## zealord

What we all learned today.

Instead of getting a GTX 980 Ti

it is better to get the GTX 950 + m2 pro Ultra SSD 950.

Pros :

- better performance in games
- If both are called 950 (the GPU and the SSD) then you get special synergy performance
- better than a 980 Ti

Cons :

- none.


----------



## Dargonplay

Quote:


> Originally Posted by *Klocek001*
> 
> @cjc if anything those two m.2 SSDs on a z97/4690k are hogging the pci-e lanes of your card.


Yes yes, because his Toys R Us 770 really needs dem lanes.
Quote:


> Originally Posted by *Klocek001*
> 
> just pointing out it could actually be a minimal (1-2%) loss of performance instead of any,even the slightest gain.


This could actually be true with PCIe 2.0 combined with a GTX 980 Ti, so much so in fact that I'd be worried about a significant drop in performance. His card is a weak and slow Toys R Us 770, so he shouldn't worry about that really; he should worry about dem SSD Black Magic doubling his FPS.
Quote:


> Originally Posted by *zealord*
> 
> What we all learned today.
> 
> - If both are called 950 (the GPU and the SSD) then you get special synergy performance
> - better than a 980 Ti


May this rep be with you.


----------



## Klocek001

Quote:


> Originally Posted by *Dargonplay*
> 
> Yes yes, because his Toys R Us 770 really needs dem lanes.


Just pointing out it could actually be a minimal (1-2%) loss of performance instead of any, even the slightest, gain.


----------



## magnek

Quote:


> Originally Posted by *zealord*
> 
> What we all learned today.
> 
> Instead of getting a GTX 980 Ti
> 
> it is better to get the GTX 950 + m2 pro Ultra SSD 950.
> 
> Pros :
> 
> - better performance in games
> - If both are called 950 (the GPU and the SSD) then you get special synergy performance
> - better than a 980 Ti
> 
> Cons :
> 
> - none.












I literally lol'd


----------



## Cyber Locc

Quote:


> Originally Posted by *zealord*
> 
> What we all learned today.
> 
> Instead of getting a GTX 980 Ti
> 
> it is better to get the GTX 950 + m2 pro Ultra SSD 950.
> 
> Pros :
> 
> - better performance in games
> - If both are called 950 (the GPU and the SSD) then you get special synergy performance
> - better than a 980 Ti
> 
> Cons :
> 
> - none.


But the question of the day, can it play 4k surround Mindsweeper


----------



## SuperZan

Quote:


> Originally Posted by *Cyber Locc*
> 
> But the question of the day, can it play 4k surround Mindsweeper


Is that like Minesweeper with a bong?


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *cjc75*
> 
> ...and again.. those "tests" you quote...
> 
> Are based on *SATA* SSD's...
> 
> Not m.2 or *m.2 Ultra* SSD's...
> 
> Different interface.
> 
> Different performance.


You realize SATA is a bus and M.2 is a connector, right? M.2 can carry SATA, PCIe, or USB, and the host interface can be AHCI or NVMe.
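The connector-vs-bus point is easiest to see in rough sequential-read ceilings. A small sketch with ballpark figures (approximations after protocol overhead, not measurements):

```python
# Ballpark sequential-read ceilings, illustrative figures only.
# The point: M.2 is just the connector; the bus behind it sets the ceiling.
ceilings_mb_s = {
    "SATA III (AHCI)":      550,   # ~6 Gb/s link rate
    "M.2 SATA (AHCI)":      550,   # same bus, different connector
    "M.2 PCIe 3.0 x4 NVMe": 3500,  # 950 Pro-class drives
}

for bus, mb_s in ceilings_mb_s.items():
    print(f"{bus:22s} ~{mb_s} MB/s sequential")
```

An M.2 SATA drive hits the same ~550 MB/s wall as a 2.5" SATA drive; only the PCIe/NVMe path moves the ceiling, and even then only for loads and transfers, not FPS.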


----------



## amlett

My sm951 m.2 fixed my marriage.


----------



## criminal

Quote:


> Originally Posted by *cjc75*
> 
> If you say so.
> 
> You guys enjoy your $600 video cards and cheap Sata SSD's.
> 
> I'll enjoy my 950 Pro bliss.


You have to be trolling. At least I hope... lol

Quote:


> Originally Posted by *amlett*
> 
> My sm951 m.2 fixed my marriage.


I lol'd.


----------



## Cyber Locc

Quote:


> Originally Posted by *criminal*
> 
> You have to be trolling. At least I hope... lol
> I lol'd.


He has to be.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *JRSauto*
> 
> Yes 1440P is perfect for my 980TI but this is the card what was really needed for 1440P and no reworked GTX 680 is going to max all games at 1440P unless as I stated you enjoy slide shows.


I dunno, never had a 680/770. I can say that just one of my OG Titans can definitely max every game I play at 1440p. Some games like Crysis 3 will drop down to 30 FPS at times but its rare. Both Titans in SLI (and especially if I put just an average OC on them) will do 60+ FPS in everything. All that said, I have a hard time believing that a 770 would max Crysis 3 or the like at 1440p considering that is a significant drop in power even from the OG Titan.


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I dunno, never had a 680/770. I can say that just one of my OG Titans can definitely max every game I play at 1440p. Some games like Crysis 3 will drop down to 30 FPS at times but its rare. Both Titans in SLI (and especially if I put just an average OC on them) will do 60+ FPS in everything. All that said, I have a hard time believing that a 770 would max Crysis 3 or the like at 1440p considering that is a significant drop in power even from the OG Titan.


You must not play Witcher.


----------



## Majin SSJ Eric

Only Witcher 2 so far.


----------



## ZealotKi11er

Not sure how different 2 x 290X are from Titans but I can hold 60 fps solid with no GW @ 1440p.


----------



## Majin SSJ Eric

They're faster now.


----------



## Cyber Locc

Quote:


> Originally Posted by *ZealotKi11er*
> 
> Not sure how different 2 x 290X are from Titans but I can hold 60 fps solid with no GW @ 1440p.


I can hold 60fps solid with GW on 1 card.

Telling you, Zealot, come to the green side, mwuhahaha. Oh ya, that's at 1440p. That's only with very high clocks though, and it does rarely drop to 55ish, but it happens. I saw a drop to 50 once in the hour that I was monitoring the fps.


----------



## SuperZan

Quote:


> Originally Posted by *Cyber Locc*
> 
> I can hold 60fps solid with GW one 1 card
> 
> 
> 
> 
> 
> 
> 
> telling you zealot come to the green side mwuhahaha


Belay that order!


----------



## Cyber Locc

Quote:


> Originally Posted by *SuperZan*
> 
> Belay that order!


Delay? For? A midrange card that will likely cost the same or more and maybe match it, hehe.


----------



## SuperZan

Quote:


> Originally Posted by *Cyber Locc*
> 
> Delay? For? A midrange card that will likely cost the same or more and maybe macth it hehe.


Belay != delay.

But ya, Polaris/Pascal midrange should offer 980 Ti / Fury X-ish performance, so that will be a nice upgrade for those who aren't waiting for Vega/Volta.


----------



## FlyingSolo

I need to buy some Samsung 950 PRO


----------



## SuperZan

If you want 60 FPS you will.


----------



## Cyber Locc

Quote:


> Originally Posted by *SuperZan*
> 
> If you want 60 FPS you will.


DUDE IKR, looks like I really need to buy that Intel 750 I was considering; I should be able to get 60 fps maxed at 4K with my 980 Ti and the 750.

I didn't realize my RAIDed Intel 730s were that much of a handicap. Super glad I am in this thread.


----------



## Threx

Quote:


> Originally Posted by *Cyber Locc*
> 
> DUDE IKR, looks like I really need to buy that Intl 750 I was considering, I should be able to get 60 fps in maxed 4k with my 980ti and the 750.


But you wouldn't get the special synergy bonus.


----------



## Cyber Locc

Quote:


> Originally Posted by *Threx*
> 
> But you wouldn't get the special synergy bonus.


IKR, I need to sell my Ti for a 750 that will probably work better, huh?

Anyone down to trade? A Ti for a 750 GPU and an Intel 750?

Kidding of course.


----------



## STEvil

Mod/unlock the firmware to Intel 780! It's no 9 series, but hey, you can run the GTX 780 with it for that synergistic boost!


----------



## romanlegion13th

when can we expect the 1080?


----------



## iLeakStuff

May 27th, according to Benchlife. The reveal will be a little over a week from now.


----------



## TrueForm

Waiting for the 1070. I was about to buy a 970 but realized the new cards are around the corner. My trusty 670 is still holding its own!


----------



## romanlegion13th

Quote:


> Originally Posted by *iLeakStuff*
> 
> May 27th according to Benchlife. Reveal will be a little over a week from now


Really, a week from now... how much faster are we looking at than a 980 Ti?


----------



## lolfail9001

Quote:


> Originally Posted by *romanlegion13th*
> 
> Really a week from now... how much faster are we looking at from a 980ti?


And we're likely looking at the ballpark of 980 Ti performance at lower power consumption. Still an improvement over Kepler, but Maxwell spoiled us like candy.

EDIT: Woops, a derp.


----------



## Forceman

Quote:


> Originally Posted by *romanlegion13th*
> 
> Really a week from now... how much faster are we looking at from a 980ti?


I don't know where the week from now comes from, there haven't been any solid rumors that I've seen. And announcing something now and then not having it for sale until end of May is not Nvidia's style, they tend to announce and release close together. They may be preparing to discuss Pascal's architecture next week, but I highly doubt we are going to see a card launch that soon.

And the likeliest performance range is around 980 Ti level, maybe 10-15% more at most, with a possibility of slightly slower than a 980 Ti.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> I don't know where the week from now comes from, there haven't been any solid rumors that I've seen. And announcing something now and then not having it for sale until end of May is not Nvidia's style, they tend to announce and release close together. They may be preparing to discuss Pascal's architecture next week, but I highly doubt we are going to see a card launch that soon.
> 
> And the likeliest performance range is around 980 Ti level, maybe 10-15% more at most, with a possibility of slightly slower than a 980 Ti.


Well said


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Forceman*
> 
> I don't know where the week from now comes from, there haven't been any solid rumors that I've seen. And announcing something now and then not having it for sale until end of May is not Nvidia's style, they tend to announce and release close together. They may be preparing to discuss Pascal's architecture next week, but I highly doubt we are going to see a card launch that soon.
> 
> And the likeliest performance range is around 980 Ti level, maybe 10-15% more at most, with a possibility of slightly slower than a 980 Ti.


I don't think there is any chance the 1080 will be slower than the 980 Ti, to be honest. It will likely be 10-15% faster, as you said, with the 1070 probably 15-20% behind. I just can't see Nvidia releasing on a new node and having it be slower than the last generation; it hasn't ever happened as far as I know (even the utterly useless 980 was still 5-10% faster than the 780 Ti at launch). I do think both Nvidia and AMD are going to ease into FinFET slowly, as they are both going to be on this node for at least as long as they were on 28nm, so we will not be seeing their best 14/16nm cards for some time...


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I don't think there is any chance that the 1080 will be slower than the 980Ti to be honest. It will likely be 10-15% faster as you said with the 1070 probably 15-20% behind. I just can't see Nvidia releasing on a new node and having it be slower than the last generation. Hasn't ever happened as far as I know (even the utterly useless 980 was still 5-10% faster than the 780Ti at launch). I do think both Nvidia and AMD are going to ease into FINFET slowly as they are both going to be on this node for at least as long as they were on 28nm so we will not be seeing their best 14/16nm cards for some time...


It was 5-7% faster at launch, not 10, and 5% at 1440p; do we really care about 1080p? (They were 10% faster as of the Titan X release, though.)

You have to remember that x80s are the new x60s, and x60s have always matched or been 5% faster than the last flagship.

Here is the last shrink that we can fall back on (the 600 series is a bad example all around lol).

The old x60 is now the x80 and the old x80 is now the Titan.

So if the x80 is 5% faster than the Titan X, it will be about 10% faster than the 980 Ti. If it only matches the Titan X, it will be 5% faster than the Ti.

I would bet it matches the Titan X, adds compute, reduces power consumption, and has some new features. That puts Titans into the hands of people who couldn't afford them previously, and they will get bought up like crazy.

They could then make the x70 5% slower than the x80 so it matches a Ti, price it lower at, say, $450, and call it a day. However, I doubt they will, given what happened last time, when most people bought 670s.

----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Cyber Locc*
> 
> It was 5-7% faster at launch not 10, 5% at 1440p do we really care about 1080? (they were 10% faster as of the Titan X release though)
> 
> You have to remember that x80s are the new x60s and x60s have always been matching or 5% faster than the last flagship.
> 
> Here is the last shrink that we can fall back on (600 series is a bad example all around lol)
> 
> 
> The X60 is now the X80 and the old X80 is now the Titan.


The 680 was ungodly fast compared to the 580 when it launched. I had two 580 Lightnings back in the day and I remember when I replaced them with 7970 Lightnings and just couldn't believe how fast they were! They also OC'd way better than my 580's did...


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> The 680 was ungodly fast compared to the 580 when it launched. I had two 580 Lightnings back in the day and I remember when I replaced them with 7970 Lightnings and just couldn't believe how fast they were! They also OC'd way better than my 580's did...


Right, well, there is a reason: they removed a ton of compute. The 580 also annihilated the 680 in compute, by almost double. They are doing the opposite this time; that's why I said the 600 series should just be forgotten entirely lol.

However, yes, the 680 was 20% faster than the 580. The reasons why, or more importantly what they took away to make that happen, are the real story.

Incoming flood of compute numbers lol.









This last one is most important, as it sums it up: they gave up all other compute for DX11 compute. That will happen in reverse this time.



http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/17


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Cyber Locc*
> 
> Right, well, there is a reason: they removed a ton of compute. The 580 also annihilated the 680 in compute, by almost double. They are doing the opposite this time; that's why I said the 600 series should just be forgotten entirely lol.
>
> However, yes, the 680 was 20% faster than the 580. The reasons why, or more importantly what they took away to make that happen, are the real story.


I think the 680s were more like 30-40% faster when factoring in their much higher OC potential. If I recall correctly, the 680s could do 1300-1400 MHz, while my 580 Lightnings would struggle to hit 1000 MHz...


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think the 680's were more like 30-40% faster when factoring in their way higher OC potential. If I recall correctly the 680's could do 1300-1400 MHz while my 580 Lightning's would struggle to hit 1000 MHz...


Okay, scratch that, I see what you are saying; I am not sure about the difference with max overclocks. However, they still did that by removing compute. Pascal is fixing that mistake, so we will see a reversal. As long as they don't do dumb stuff like that again, it should be back to business as usual afterwards.

As far as that happening again: with the low TDPs of the new cards, plus their new 14nm process, they will in theory not clock nearly as well as Maxwell.

If we compare max overclocks against Maxwell, I think a 980 Ti will beat a 1080 all day, by quite a bit.


----------



## Forceman

The 600 series had less compute blocks, but the 700 series (GK110) had full compute capability and it was still close to double 580 performance (in 780 Ti guise). So I don't think it is unreasonable to assume they may do the same thing this time around, with a compute restricted GP104 and a full compute GP100. Even if they didn't cut compute from GP104 they can still get 980 Ti level performance, it just may take a slightly larger die.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> The 600 series had less compute blocks, but the 700 series (GK110) had full compute capability and it was still close to double 580 performance (in 780 Ti guise). So I don't think it is unreasonable to assume they may do the same thing this time around, with a compute restricted GP104 and a full compute GP100. Even if they didn't cut compute from GP104 they can still get 980 Ti level performance, it just may take a slightly larger die.


That's the thing: they are not going to cut compute, they are going to add compute. Pascal is all about compute. I am sure it will match a Ti or maybe slightly edge it; more than that I find unlikely. I agree with your earlier thoughts, 5-15% tops stock for stock, and that number will dwindle with max OCs.


----------



## Majin SSJ Eric

Both companies will likely get a really big boost just from the node shrink, so I can easily envision these new cards far outperforming our predictions here. But I just don't think that will happen, because they both want to spread the performance out over the life of this process. Totally a gut feeling there, though, based on no facts.


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Both companies will likely get a really big boost just from the node shrink, so I can easily envision these new cards far outperforming our predictions here. But I just don't think that will happen, because they both want to spread the performance out over the life of this process. Totally a gut feeling there, though, based on no facts.


I 100% agree. If Volta were going to be on 10nm, then yeah, we would see 100% gains (assuming the same die sizes), but I think they will be on 14nm for a while, so they will stretch it out.


----------



## guttheslayer

Quote:


> Originally Posted by *Cyber Locc*
> 
> That's the thing: they are not going to cut compute, they are going to add compute. Pascal is all about compute. I am sure it will match a Ti or maybe slightly edge it; more than that I find unlikely. I agree with your earlier thoughts, 5-15% tops stock for stock, and that number will dwindle with max OCs.


I personally feel the X80 will no doubt be >15% faster than the TX, even when factoring in max OC. We shall see.

No point in speculating when the card isn't out.


----------



## Bogga

Which month do you guess the cards will be released? Just shoot from the hip based on the rumours up until today.


----------



## Cyber Locc

Quote:


> Originally Posted by *Bogga*
> 
> Which month do you guess the cards will be released? Just shoot from the hip based on the rumours up until today.


Computex, so June.
Quote:


> Originally Posted by *guttheslayer*
> 
> I personally feel the X80 will no doubt be >15% faster than the TX, even when factoring in max OC. We shall see.
>
> No point in speculating when the card isn't out.


Sure there is, it's fun. We should hopefully know something next month.


----------



## Forceman

Quote:


> Originally Posted by *Cyber Locc*
> 
> That's the thing: they are not going to cut compute, they are going to add compute. Pascal is all about compute. I am sure it will match a Ti or maybe slightly edge it; more than that I find unlikely. I agree with your earlier thoughts, 5-15% tops stock for stock, and that number will dwindle with max OCs.


GP100 is all about compute, but that doesn't mean GP104 has to be. And even if they add compute, it's not like it adds that much die space. Actually, the more I think about it, the more I think we may see another iteration of the 600/700 series. They launch a GDDR5 equipped GP104 this spring/summer as 1080 cards, then next spring/summer they launch HBM2 equipped GP100 and relaunch a GDDR5X equipped GP104 in the same series.


----------



## Cyber Locc

Quote:


> Originally Posted by *Forceman*
> 
> GP100 is all about compute, but that doesn't mean GP104 has to be. And even if they add compute, it's not like it adds that much die space. Actually, the more I think about it, the more I think we may see another iteration of the 600/700 series. They launch a GDDR5 equipped GP104 this spring/summer as 1080 cards, then next spring/summer they launch HBM2 equipped GP100 and relaunch a GDDR5X equipped GP104 in the same series.


Doubt it. People were really peeved about the Quadros and lower-end Teslas last gen. You have to realize that GP104 will make up the majority of the Quadros and will have Teslas made from it as well. They are saying 10x compute across the board, and they are hyping up NVLink, which is only for those guys. This gen will be compute-focused across the board; it needs to be. The car-computer and industry people need a boost; we have gotten plenty while they have been getting shafted.

I could be wrong, only time will tell. If I am, then I will absolutely admit it when the time comes and eat my words 100%. However, if I am right, well, you will never hear the end of it, right lol.

As to your suggestion: god, I hope not. I am not buying a GP104, or any x04 chip, period. If they don't release a Titan and a Ti, then they won't be getting my upgrade for the workstation, nor my three cards for my gaming rig.


----------



## guttheslayer

Quote:


> Originally Posted by *Forceman*
> 
> GP100 is all about compute, but that doesn't mean GP104 has to be. And even if they add compute, it's not like it adds that much die space. Actually, the more I think about it, the more I think we may see another iteration of the 600/700 series. They launch a GDDR5 equipped GP104 this spring/summer as 1080 cards, then next spring/summer they launch HBM2 equipped GP100 and relaunch a GDDR5X equipped GP104 in the same series.


I would agree about a GP104 refresh next year with GDDR5X, though at that point it might have a different codename, either GP114 or GP204.


----------



## Bogga

Quote:


> Originally Posted by *Cyber Locc*
> 
> Computex so june.


Think they will launch the Ti and Titan this summer as well?


----------



## guttheslayer

Quote:


> Originally Posted by *Bogga*
> 
> Think they will launch Ti and Titan this summer as well?


If Nvidia sticks to their usual Titan launch time frame, then no. The Titan will be Q1 next year, and the Ti version will likely come after the Titan release.


----------



## Cyber Locc

Quote:


> Originally Posted by *Bogga*
> 
> Think they will launch Ti and Titan this summer as well?


No, December-ish like always for the Titan most likely, with the Ti in January or February. They physically can't do it sooner; those cards will be using HBM2, which hasn't even started production yet and won't until May-ish, right?


----------



## FlyingSolo

Do you guys think the 1070 will play all games at 60fps at 1440p? Also, when do you think we can play at 4K 60fps with a single card?


----------



## Cyber Locc

Quote:


> Originally Posted by *FlyingSolo*
> 
> Do you guys think the 1070 will play all games at 60fps at 1440p? Also, when do you think we can play at 4K 60fps with a single card?


I think the 970 can play games at 60fps at 1440p, can't it? If you're asking about max settings, I doubt it; a 980 Ti can barely do that. Turn on HairWorks in The Witcher and you still get drops into the 50s, not a constant 60.

As to one card at 4K: a while.


----------



## guttheslayer

Quote:


> Originally Posted by *FlyingSolo*
> 
> Do you guys think the 1070 will play all games at 60fps at 1440p? Also, when do you think we can play at 4K 60fps with a single card?


The GTX 1070 won't be faster than the TX, that is for sure. I'm looking at between 980 and 980 Ti performance, or 980 Ti performance at best.

Unless they decide to pull a GTX 670 this time.


----------



## FlyingSolo

Quote:


> Originally Posted by *Cyber Locc*
> 
> I think the 970 can play games at 60fps at 1440p, can't it? If you're asking about max settings, I doubt it.
>
> As to one card at 4K: a while.


It can play, but as you said, not with max settings. I put that card in my arcade rig for now, to play fighting games and arcade games only. Hopefully it lasts quite a while in that rig before I upgrade.


----------



## FlyingSolo

Quote:


> Originally Posted by *guttheslayer*
> 
> The GTX 1070 won't be faster than the TX, that is for sure. I'm looking at between 980 and 980 Ti performance, or 980 Ti performance at best.
>
> Unless they decide to pull a GTX 670 this time.


I'll be happy if the 1070 can match a 980 Ti at $300. But I doubt it, though.


----------



## Majin SSJ Eric

I'm thinking GP100 won't be out til 1 year after GP104 at the earliest.


----------



## Dargonplay

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I'm thinking GP100 won't be out til 1 year after GP104 at the earliest.


Three years. Three years I've been using this 290, and of those three years I've spent at least one entire year wanting to upgrade to something worthy of my money. The 980 Ti isn't, the Fury X isn't; this 290 is already too close in performance to both companies' flagship offerings, but at a much lower price ($150 is doable on eBay, with $200 being the norm).

If we aren't seeing a Titan Pascal until a whole year after Pascal, nor AMD's equivalent, then this will be by far the longest I've ever stuck with the same card. I hope many people feel this way and overcome the upgrade itch, so Nvidia and AMD can feel the burn in their wallets for trying to milk us like cows.


----------



## Cyber Locc

Quote:


> Originally Posted by *Dargonplay*
> 
> Three years. Three years I've been using this 290, and of those three years I've spent at least one entire year wanting to upgrade to something worthy of my money. The 980 Ti isn't, the Fury X isn't; this 290 is already too close in performance to both companies' flagship offerings, but at a much lower price ($150 is doable on eBay, with $200 being the norm).
>
> If we aren't seeing a Titan Pascal until a whole year after Pascal, nor AMD's equivalent, then this will be by far the longest I've ever stuck with the same card. I hope many people feel this way and overcome the upgrade itch, so Nvidia and AMD can feel the burn in their wallets for trying to milk us like cows.


The 290 is a god-tier card; I actually kinda miss mine. I didn't really have a choice though, as this new rig is going to be a workstation, and a very small loop couldn't cool three 290s. This board would only take two anyway, but I do miss my babies.







I figured that by the time Broadwell-E comes and I am building a new 2-3 card gaming rig, the new flagships will be out.

So what I am saying is I don't blame you; I kinda got forced into doing it and slightly regret it.







I do love this Ti though, I just miss my 290s.


----------



## guttheslayer

Quote:


> Originally Posted by *FlyingSolo*
> 
> I'll be happy if the 1070 can match a 980 Ti at $300. But I doubt it, though.


You've got to get used to the idea that the X70 cards no longer sell at $300...

The GTX 670 was $399. I guess that explains what I am expecting as well.


----------



## mouacyk

Quote:


> Originally Posted by *guttheslayer*
> 
> You've got to get used to the idea that the X70 cards no longer sell at $300...
>
> The GTX 670 was $399. I guess that explains what I am expecting as well.


The problem is that the GTX970 isn't EOL any time soon, so its current pricing of $300 has to be taken into consideration. These guys are a business after all -- no point in pricing your retailers out of business.


----------



## BoredErica

Quote:


> Originally Posted by *zealord*
> 
> You are wrong.
> 
> That is not how it works. Please educate yourself. You are confusing gaming performance and SSD write/read performance probably.
> 
> You are definitely wrong and everyone in this thread is going to tell you that you are wrong.
> 
> (anyone else weird out by how many "wrong" people have been on OCN lately? Just recently there was a guy doing math wrong and every single person on this forum told him he was wrong, but he was so stubborn that he probably still doesn't get it.)


Imagine Skyrim with tons of textures (reads hitting over 500 MB/s), textures which are large and read more like sequential data. Running around in an open world, it seems plausible that an HDD might not be able to keep up and would cause stutter. That stutter means lower fps, if only for a short while. But that means a lower average FPS.

I'm not saying one would double FPS by switching to a 950 Pro, but it doesn't seem over-the-top ridiculous to suggest that an SSD can improve FPS. And tests showing that no FPS gain was noticed in something like Battlefield 4 prove little we didn't already know.
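The averaging effect described above is easy to illustrate. A minimal sketch with made-up frame times (the 100 ms hitches are hypothetical, just standing in for disk-induced stutter):

```python
# Average FPS is total frames / total time, so a handful of long frame
# times drags the average down even when most frames render fast.

smooth = [16.7] * 590    # ~60 fps frame times, in milliseconds
stutters = [100.0] * 10  # ten 100 ms hitches while storage catches up
frame_times = smooth + stutters

total_seconds = sum(frame_times) / 1000.0
avg_fps = len(frame_times) / total_seconds

print(f"{len(frame_times)} frames in {total_seconds:.2f}s -> {avg_fps:.1f} avg fps")
```

Ten bad frames out of six hundred are enough to pull the average well below 60, which is why brief HDD stutter would show up in an average-FPS comparison at all.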


----------



## sl4ppy

The textures are not that big; generally under a few megabytes each at the very high end. They are loaded into video memory at level load; that's why VRAM exists.

So no, your theory isn't plausible.


----------



## BoredErica

Quote:


> Originally Posted by *sl4ppy*
> 
> The textures are not that big; generally under a few megabytes each at the very high end. They are loaded into video memory at level load; that's why it (vram) exists.


I'm talking about a game like Skyrim. The IO peaks when loading a new cell during a loading screen, but running around outside still means loading new cells, because it's an open-world game.

Look at how vram usage can change just from running around outside for 8 minutes:



And that was without AA or Tamriel Reloaded. The situation is even worse on Windows 10 with Tamriel Reloaded due to the DX9 VRAM bug; VRAM gets purged early. Although there, oddly, I don't see the IO spazzing out. Somebody else reported that it did for them, but I could not replicate it.

Nobody is doing these tests. I just need to show one stutter that is there with an HDD and not with an SSD. I think I have a decent chance of showing something like that, and either way I should get to the bottom of this. I'm tired of hearsay; I want data.


----------



## JTHMfreak

Why does everyone buy into all of this hype? Nvidia is out to get your money first and foremost. The 970 was roughly equal to two 670s, the 670 was roughly equal to two 470s, and so on and so forth.
It boggles my mind that people would think a company will release a product at a great price that would make the rest of their products unfavorable to buy.
Nvidia will keep doing what has worked for them in the past, plain and simple. Take a look at benchmarks, and your own experience, to see what the next generation will bring; it doesn't take a genius.
They are a business; the money they bring in trumps whatever expectations you have. All they have to do to sell a product is be better than the last iteration, if only by 10-15%.


----------



## guttheslayer

Quote:


> Originally Posted by *mouacyk*
> 
> The problem is that the GTX970 isn't EOL any time soon, so its current pricing of $300 has to be taken into consideration. These guys are a business after all -- no point in pricing your retailers out of business.


So what makes you think a faster GTX 1070 is going to sell at the same price as the GTX 970, especially if the 970 isn't going to EOL soon?

Obviously they'll price it higher, since it's faster and they need to sell all their 970 inventory.


----------



## Pholostan

Quote:


> Originally Posted by *Dargonplay*
> 
> 3 Years, 3 years I've been using this 290, of those 3 years I've spend at least one entire year wanting to upgrade to something that's worthy of my money, the 980Ti isn't, the Fury X isn't, this 290 is already too close in performance to both companies flagship offerings but with a much lower price (150$ is doable on eBay with 200$ being the norm)


I'm in a similar situation to you: I have a 290X and a 290 that are both coming up on their third year. And I agree, the 980 Ti or Fury X isn't a real upgrade, certainly not when my 290X overclocks to 1200 MHz. Add to that the pretty big improvements we've seen from the drivers. Back when I bought my first card, Borderlands 2 ran a bit iffy at 1080p; in CrossFire it ran okay at 1440p, but with problems of course. Nowadays I can run it at 4K and get a steady 60 FPS, and if I turn down some of the filters I can get 90 FPS with everything else at max. Not a hint of stutter, thanks to working frame pacing etc. (both cards run stable in CFX at 1125/1350).

So yeah, I will probably have to wait another year too. I would love to be wrong though; we will see this summer.

Relatedly, one of my friends bought a 7970 at launch. That was over four years ago now. He's still happy with it; at 1080p it works well in most games.


----------



## sl4ppy

Quote:


> Originally Posted by *Darkwizzie*
> 
> I'm talking about a game like Skyrim. The IO peaks when loading a new cell during loading screen, but running around outside still means loading new cells because it's an open world game.
> 
> Look at how vram usage can change just from running around outside for 8 minutes:
> 
> 
> 
> And that was without AA or Tamriel Reloaded. Situation is even worse on Windows 10 w/ Tamriel Reloaded due to DX9 Vram bug. Vram gets purged early. Although there, oddly I don't see IO spazzing out. Somebody else reported that it did for them, but I could not replicate.
> 
> Nobody are doing these tests. I just need to show one stutter which is there with HDD which is not with SSD. I think I have a decent chance of showing something like that, and either way I should get to the bottom of this. I'm tired of hearsay, I want data.


Skyrim's *total* compressed texture size is 578 MB, iirc. That uncompresses to roughly 3.5 GB, but it is streamed on demand/as needed, not all loaded at once. Even without the streaming, you could comfortably fit 100% of the textures in the entire game of Skyrim into VRAM.

(fwiw, it's trivial to rip the texture data out of the game and double-check that figure, but I'm fairly certain it's accurate)
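For a sense of scale in this VRAM debate, here is a back-of-the-envelope sketch of per-texture memory cost. The 8:1 and 4:1 ratios are the standard DXT1/DXT5 block-compression figures versus uncompressed RGBA8; the resolutions are just examples, not Skyrim's actual assets:

```python
# Uncompressed RGBA8 costs width * height * 4 bytes. DXT1 stores 8 bytes
# per 4x4 pixel block (8:1 vs RGBA8); DXT5 stores 16 bytes per block (4:1).

def rgba8_bytes(w, h):
    return w * h * 4

def dxt_bytes(w, h, bytes_per_block):
    blocks = (w // 4) * (h // 4)  # assumes dimensions divisible by 4
    return blocks * bytes_per_block

for side in (512, 2048):
    raw = rgba8_bytes(side, side)
    dxt1 = dxt_bytes(side, side, 8)
    dxt5 = dxt_bytes(side, side, 16)
    print(f"{side}x{side}: raw {raw / 2**20:.2f} MiB, "
          f"DXT1 {dxt1 / 2**20:.3f} MiB, DXT5 {dxt5 / 2**20:.2f} MiB")
```

Even a 2048x2048 texture is only a few MiB in block-compressed form (and GPUs keep DXT textures compressed in VRAM), which is why a card's worth of VRAM can hold a surprising number of them.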


----------



## BoredErica

Quote:


> Originally Posted by *sl4ppy*
> 
> Skyrim's *total* compressed texture size is 578 MB, iirc. That uncompresses to roughly 3.5 GB, but it is streamed on demand/as needed, not all loaded at once. Even without the streaming, you could comfortably fit 100% of the textures in the entire game of Skyrim into VRAM.
>
> (fwiw, it's trivial to rip the texture data out of the game and double-check that figure, but I'm fairly certain it's accurate)


Vanilla Skyrim, maybe? My own personal texture pack is ~15.5 GB (and it does not cover every texture from the Official High Res DLC, which itself does not cover every texture in the base game). The Official High Res DLC alone is something like 4.5 GB compressed (too lazy to uncompress it, but I remember it is all textures, no meshes).

I don't know why you'd think vanilla textures would hit 5 GB of VRAM usage in the first place. Vanilla textures are abysmal; they're like 512x512 everywhere. If I were talking about vanilla Skyrim there would be no debate.

As I said previously, it's also possible to jack up the VRAM usage even more by adding MSAA, and mods like SFO or TR, which load multiple textures where there used to be only one. That pushed my 980 Ti to the limit, and it increases VRAM usage more than the texture sizes on disk would suggest.

But as I said last time as well, it's just possible that what I'm claiming is true. There is only one way to find out.


----------



## cdoublejj

Will any of these GPUs have async?


----------



## SuperZan

Quote:


> Originally Posted by *cdoublejj*
> 
> will any of these GPUs have A-Sync?


None? All? We've yet to see the results of Nvidia's stated intention to address Async via the software side under Maxwell, we have no concrete knowledge regarding Pascal or Volta and hardware-based Async Compute solutions. Until something happens it's all up in the air.


----------



## sl4ppy

...nevermind..


----------



## SuperZan

Quote:


> Originally Posted by *sl4ppy*
> 
> Yes, Vanilla Skyrim....
> 
> So you want people to benchmark against modded games? Ooookaaaay....


I'd be surprised to see more copies of Vanilla Skyrim in use over modded ones. I think that most people who had intended to play original, unmodded Skyrim would have done so at some point in the past five years. Nearly anybody I see mentioning Skyrim as a current game is speaking of a modded version.


----------



## bigjdubb

Quote:


> Originally Posted by *cdoublejj*
> 
> will any of these GPUs have A-Sync?


Who knows, not likely.

Quote:


> Originally Posted by *SuperZan*
> 
> None? All? We've yet to see the results of Nvidia's stated intention to address Async via the software side under Maxwell, we have no concrete knowledge regarding Pascal or Volta and hardware-based Async Compute solutions. Until something happens it's all up in the air.


Yup

And the even bigger question: does having or not having async even matter?


----------



## Cyber Locc

Quote:


> Originally Posted by *bigjdubb*
> 
> And the even bigger question: does having or not having async even matter?


At this point, with the very limited number of DX12 titles and very few coming in the next year, I am going to go with no. That is, no for the people who upgrade every year or two. If you are buying a card and planning to use it for years (3+), then yes, I wouldn't get a card that doesn't have async.


----------



## bigjdubb

Quote:


> Originally Posted by *Cyber Locc*
> 
> At this point, with the very limited number of DX12 titles and very few coming in the next year, I am going to go with no. That is, no for the people who upgrade every year or two. *If you are buying a card and planning to use it for years (3+), then yes, I wouldn't get a card that doesn't have async.*


I'm still not sold on that part either. I know the capability is being highly touted right now, since one brand has it and the other doesn't, but I am not convinced it will ever matter outside of specific uses (like the current RTS use).


----------



## Cyber Locc

Quote:


> Originally Posted by *bigjdubb*
> 
> I'm still not sold on that part either. I know the capability is being highly touted right now since one brand has it and the other doesn't but I am not convinced it will ever matter outside of specific uses (like the current RTS use).


Right, well, honestly I am not completely sold on it yet either. However, in the case of 3+ years, it's better to have the tech and not need it than to need it and not have it.

Honestly, AMD cards do tend to stay competitive longer anyway, so I would personally recommend that anyone who must use the same GPU for 3-4 years go with AMD. Especially since, if they can't or won't upgrade often, money is likely a concern as well.


----------



## Randomdude

Quote:


> Originally Posted by *Cyber Locc*
> 
> *Right, well, honestly I am not completely sold on it yet either. However, in the case of 3+ years, it's better to have the tech and not need it than to need it and not have it.*
>
> Honestly, AMD cards do tend to stay competitive longer anyway, so I would personally recommend that anyone who must use the same GPU for 3-4 years go with AMD. Especially since, if they can't or won't upgrade often, money is likely a concern as well.


If the next gen nVidia cards lack async, in my opinion anyone buying them is basically subscribing themselves for the next cards that will have it. Not a wise thing, but it's not my money.


----------



## Woundingchaney

Quote:


> Originally Posted by *Randomdude*
> 
> If the next gen nVidia cards lack async, in my opinion anyone buying them is basically subscribing themselves for the next cards that will have it. Not a wise thing, but it's not my money.


If the next generation cards lack async and they perform better than AMD's, I will still buy them. Overall performance is still more important, and at this point the relevance of async is anything but set in stone. Realistically, having hardware support for async is definitely a feather in AMD's cap, but it's not the whole story.


----------



## Cyber Locc

Quote:


> Originally Posted by *Randomdude*
> 
> If the next gen nVidia cards lack async, in my opinion anyone buying them is basically subscribing themselves for the next cards that will have it. Not a wise thing, but it's not my money.


Everything Woundingchaney said, plus I just about covered that, didn't I? I said that for people who buy new cards every gen, there is little point worrying about async; they are already subscribed to upgrade anyway.

For someone who wants to keep their card for a long time, async may need a bit more thought. However, async is far from the spectacular, world-changing thing that some people are hyping it up to be.

It is the same song and dance every time AMD comes out with some new thing; need we be reminded about Mantle? How many "Oh, Mantle is going to destroy Nvidia" and "No one buy Nvidia, they don't support Mantle" posts were there? I could go on. What happened with that? Oh, that's right: it wasn't that great, and no one used it. So AMD pushed it to MS to force-feed it down our throats anyway.

What's funny about this is:

Mantle comes out: "Mantle is the best thing ever, yeah, low-level APIs, rock on dude."

One year later: "Haha, Mantle is a joke that failed miserably; even AMD doesn't talk about it."

Two years later: "MS is making DX12, yeah, it's going to be the best" (it's actually the Mantle everyone hated last year, but shhhh).

So how about next year? Is DX12 going to be this super successful thing that everyone falls over, or is it going to be another massive failure from MS? Seeing how many gamers refuse to use Windows 10, I highly doubt it will be overly successful anytime soon.

Also, inb4 "whatever, fanboy": I am in no way a fanboy, although someone will say I am for what I have just said. That is the logic around here, right? If you don't overhype and overemphasize everything that comes out of AMD's R&D department, then you are an Nvidia fanboy.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Cyber Locc*
> 
> At this point, with the very limited number of DX12 titles and very few coming in the next year, I am going to go with no. That is, no for the people who upgrade every year or two. *If you are buying a card and planning to use it for years (3+), then yes, I wouldn't get a card that doesn't have async.*


I think this is a very good point and something for people to consider. Also consider the recent history of these two companies. If you got a 290X at release, you have enjoyed a very future-proof video card over the past three years compared to, say, a 780 Ti bought at the same time. That is the sort of thing that should absolutely be considered by any prospective Pascal or Polaris buyer, as AMD has shown you can expect a steady progression of performance over the life of the card, while Nvidia pretty much forgets all about you once their next shiny new cards hit the salesroom floor...


----------



## TranquilTempest

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> I think this is a very good point and something for people to consider. Also consider the recent history of these two companies. If you got a 290X at release, you have enjoyed a very future-proof video card over the past three years compared to, say, a 780 Ti bought at the same time. That is the sort of thing that should absolutely be considered by any prospective Pascal or Polaris buyer, as AMD has shown you can expect a steady progression of performance over the life of the card, while Nvidia pretty much forgets all about you once their next shiny new cards hit the salesroom floor...


Can you actually expect that to continue? How are driver optimizations going to make big improvements if the developers are using low-level APIs and getting all the performance up front?


----------



## Cyber Locc

Quote:


> Originally Posted by *TranquilTempest*
> 
> Can you actually expect that to continue? How are driver optimizations going to make big improvements if the developers are using low-level APIs and getting all the performance up front?


I think that even with low-level APIs there are still driver optimizations to be had.

Are there not? Do you have a source on that, or reasoning behind it? I would be very interested in either.

I mean, if we look at Ashes, we see drivers improving things, don't we?


----------



## TranquilTempest

Quote:


> Originally Posted by *Cyber Locc*
> 
> I think that even with low-level APIs there are still driver optimizations to be had.
>
> Are there not? Do you have a source on that, or reasoning behind it? I would be very interested in either.
>
> I mean, if we look at Ashes, we see drivers improving things, don't we?


Well, if a game developer can make optimizations that used to need driver modifications, and target the optimizations to each game, I don't see how generic driver updates are going to help, even game ready driver updates would only be fixing mistakes the developers made. I guess it depends on exactly how "low level" the APIs are, and how much effort goes into optimizing the games.


----------



## Matt26LFC

I think that even if someone buys Pascal with the mindset of wanting to keep it for three or more years and is disappointed with it a couple of years down the road, he/she should just whip it out, sell it, and pick up Volta. I mean, it's really not a big deal changing one GPU for another a couple of years apart.

I'm probably going to change out my whole system this summer, or just swap my two 7970s for a single Pascal (I've been on AMD for ages now and just want to try Nvidia again), and I'm not going to worry about this async issue. If it's not up to snuff 12 months from the summer, I'll pick up Volta if it's about, or look into AMD's Polaris. It's really no big deal.


----------



## lolerk52

Quote:


> Originally Posted by *Cyber Locc*
> 
> I think even with level apis there is still driver optimizations to be had.
> 
> Is there not? Do you have a source on that or a reasoning behind it. I would be very interested in such.
> 
> I mean if we look at ashes we see drivers improving things don't we?


I understand it as APIs having calls to do something; let's say 5 * 5 = x as an example.
The driver's job would then be to translate it to the hardware, since with DX12/Vulkan you aren't coding DIRECTLY to the hardware, just a lot closer.

Depending on how you implement that 5 * 5 = x call, it could be more or less efficient.
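The point above can be sketched with a toy model. This is purely illustrative (the function names and the "hardware op" counter are invented for this post, not real driver code): the API exposes one call, and two hypothetical drivers lower it to the hardware with very different efficiency.

```python
# Toy model: same API-level request ("multiply 5 by 5"), two different
# driver translations. The result is identical; the cost is not.

def naive_driver(a: int, b: int) -> tuple[int, int]:
    # Lowers multiply to repeated addition: one "hardware op" per step.
    ops = 0
    acc = 0
    for _ in range(b):
        acc += a
        ops += 1
    return acc, ops

def optimized_driver(a: int, b: int) -> tuple[int, int]:
    # Lowers multiply to a single native multiply instruction.
    return a * b, 1

# Same answer (25) either way, but 5 ops vs. 1 op:
print(naive_driver(5, 5))      # (25, 5)
print(optimized_driver(5, 5))  # (25, 1)
```

The analogy to the thread: a driver update that swaps the first translation for the second speeds up every game using that call, which is why there can still be some driver-side headroom even under a low-level API.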


----------



## Cyber Locc

Quote:


> Originally Posted by *lolerk52*
> 
> I understand it as API's having calls to do something, let's say 5 * 5 = x as an example.
> The driver's job would then be to translate it to the hardware, since with DX12/Vulkan you aren't coding DIRECTLY to the hardware, just a lot closer.
> 
> Depending on how you implement that 5 * 5 = x call, it could be more or less efficient.


I get that; however, wouldn't a good driver still play a role at all?


----------



## Cyber Locc

Quote:


> Originally Posted by *Matt26LFC*
> 
> I think even if someone buys Pascal with the mindset of wanting to keeping it for 3 or more years and is disappointed with it a couple years down the road he/she should just whip it out sell it and pick up Volta. I mean its really not a big deal changing one GPU for another a couple years apart.
> 
> I'm probably going to change out my whole system this summer or just change out my two 7970s for a single Pascal (been on AMD for ages now and just want to try Nvidia again) and I'm not going to worry about this Async issue, if its not up to snuff 12 months from the summer I'll pick up Volta if its about, or looking into AMDs Polaris. Its really no big deal it is.


I agree completely; I think people are making this async thing into a much bigger deal than it is.

Could it be a big deal? Yes. Will it be a big deal? Maybe. Will it be a big deal in the next 2 years? Unlikely. There simply aren't enough games coming to market in the next few years to make it the game-breaking feature that everyone has to have.

That said, like you said, grab Pascal now; if async is a huge deal when Volta comes out, sell the Pascal and buy a Volta.

Honestly, the fact of the matter is that in this industry, if you keep justifying waiting for the next big thing or innovation, you will always be waiting.


----------



## bigjdubb

Quote:


> Originally Posted by *Cyber Locc*
> 
> *I agree completely I think people are making this Async thing into a much bigger deal than it is.
> 
> Could it be a big deal, Yes. Will it big a big deal, Maybe. Will it big a big deal in the next 2 years, Unlikely. There simply isn't enough games coming to market in the next few years to make it this game breaking feature that every has to have.*
> 
> That said like you said, grab Pascal now, if when Volta comes out a Async is a huge deal sell the Pascal, and buy a Volta.
> 
> Honestly the fact of the matter is in this industry if you wait and justify waiting for the next big thing or innovation you will always be waiting.


I agree, but what I am wondering is whether there is even any benefit to async outside of where it is being used now: RTS games. I don't understand the technical aspects of this stuff, but it seems like its main benefit is massive amounts of AI where typical CPU processing can't keep up. For someone who wouldn't play an RTS even if you paid him, will there ever be a need for async?

It is obviously way too early to know any of these things, but I don't think we will know the answer before Pascal reaches the end of its shelf life either.


----------



## Bogga

Quote:


> Originally Posted by *bigjdubb*
> 
> I agree but what I am wondering is if there is even any benefit to async outside of where it is being used now, RTS games. I don't understand the technical aspects of this stuff but it seems like its main benefit is massive amounts of AI where typical CPU processing can't keep up. For someone who wouldn't play an RTS even if you paid him, will there ever be a need for Async?
> 
> It is obviously way too early to know any of these things but I don't think we will know the answer before Pascal reaches the end of it's shelf life either.


Would be sweet music to my ears if that was the case. Never have and never will play those games... well, I played some single-player AoE and Warcraft way back. But that was when graphics cards were still on AGP.


----------



## TranquilTempest

Quote:


> Originally Posted by *Cyber Locc*
> 
> I get that, however wouldn't the driver being good still play a role at all?


The amount of performance to be gained from driver improvements depends on the quality of the initial drivers. If you put out crap drivers, then yes, there's going to be room to improve on them. The point is that there's going to be less room for AMD/Nvidia to make and fix mistakes, and more room for game developers to make and fix mistakes. If a feature has a slow implementation, it's also easier for a game dev to see exactly what is slowing the game down and avoid that feature.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *Matt26LFC*
> 
> I think even if someone buys Pascal with the mindset of wanting to keeping it for 3 or more years and is disappointed with it a couple years down the road *he/she should just whip it out sell it and pick up Volta.* I mean its really not a big deal changing one GPU for another a couple years apart.
> 
> I'm probably going to change out my whole system this summer or just change out my two 7970s for a single Pascal (been on AMD for ages now and just want to try Nvidia again) and I'm not going to worry about this Async issue, if its not up to snuff 12 months from the summer I'll pick up Volta if its about, or looking into AMDs Polaris. Its really no big deal it is.


Sure, but a lot of times circumstances in life make this impossible. I, for example, used to be a big spender on PC parts. I bought two 580 Lightnings, traded them out for 7970 Lightnings (which I sold for reference 7970s when I went full water cooling), and then jumped on two OG Titans right when they launched. Then life happened: my daughter got older, I got divorced, I bought a motorcycle, etc., and all of a sudden I didn't have the disposable cash to just buy whatever the new "it" card was anymore. Consequently, I have been stuck with my Titans since early 2013 and, tbh, am extremely glad that they were as future-proof as they were at the time. They still perform well enough in all my games that considering new cards today seems unnecessary. They also still have 6GB of VRAM, so I don't have to worry about being limited there either. Anyway, all things being equal, I would absolutely consider how the cards from these two companies might perform 2-3 years down the road if I were in the market for them, and if one looks to be in a better position to do so, that would be a factor I would consider. Certainly not the only factor, but a factor nonetheless.


----------



## Matt26LFC

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Sure but a lot of times circumstances in life make this impossible. I, for example, used to be a big spender on PC parts. I bought two 580 Lightnings, traded them out for 7970 Lightnings (which I sold for reference 7970's when I went full water cooling) and then jumped on two OG Titans right when they launched. Then life happened, my daughter got older, I got divorced, I bought a motorcycle, etc, and all of the sudden I didn't have the disposable cash to just buy whatever the new "it" card was anymore. Consequently I have been stuck with my Titans since early 2013 and tbh am extremely glad that they were as future-proof as they were at the time. They still perform well enough in all my games that considering new cards today seems unnecessary. They also still have 6GB VRAM so I don't have to be worried about being limited there either. Anyway, all things being equal I would absolutely consider how the cards from these two companies might perform 2-3 years down the road if I were in the market for them and if one looks to be in a better position to do so that would be a factor I would consider. Certainly not the only factor but a factor nonetheless.


For sure, life can get in the way of things. I wouldn't totally dismiss it; I just don't think it'll be a major problem with gaming, nor do I think it would be hard to buy a 1080 Pascal for $600, sell it a year or two later for $300, and raise another $300 for Volta if it was murdering my gaming experience enough. Life getting in the way would no doubt mean I couldn't be so cavalier with my spending; however, it shouldn't take too long to find a few hundred bucks (pounds where I'm from) and get my gaming experience fixed.

It may take a little longer, you know, a change in priorities, but I think it's more than doable for most.

I just hope these cards are released sooner rather than later now; I get so bored of speculation. I've gotten to the point where I just want to know their respective performance, lol.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Sure but a lot of times circumstances in life make this impossible. I, for example, used to be a big spender on PC parts. I bought two 580 Lightnings, traded them out for 7970 Lightnings (which I sold for reference 7970's when I went full water cooling) and then jumped on two OG Titans right when they launched. Then life happened, my daughter got older, I got divorced, I bought a motorcycle, etc, and all of the sudden I didn't have the disposable cash to just buy whatever the new "it" card was anymore. Consequently I have been stuck with my Titans since early 2013 and tbh am extremely glad that they were as future-proof as they were at the time. They still perform well enough in all my games that considering new cards today seems unnecessary. They also still have 6GB VRAM so I don't have to be worried about being limited there either. Anyway, all things being equal I would absolutely consider how the cards from these two companies might perform 2-3 years down the road if I were in the market for them and if one looks to be in a better position to do so that would be a factor I would consider. Certainly not the only factor but a factor nonetheless.


Yep, kids are the worst STD money-wise. It doesn't help that these new cards are so expensive either, but hey, people will pay for it and business is booming: http://finance.yahoo.com/echarts?s=NVDA+Interactive#{%22range%22:%225y%22,%22allowChartStacking%22:true}


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> Sure but a lot of times circumstances in life make this impossible. I, for example, used to be a big spender on PC parts. I bought two 580 Lightnings, traded them out for 7970 Lightnings (which I sold for reference 7970's when I went full water cooling) and then jumped on two OG Titans right when they launched. Then life happened, my daughter got older, I got divorced, I bought a motorcycle, etc, and all of the sudden I didn't have the disposable cash to just buy whatever the new "it" card was anymore. Consequently I have been stuck with my Titans since early 2013 and tbh am extremely glad that they were as future-proof as they were at the time. They still perform well enough in all my games that considering new cards today seems unnecessary. They also still have 6GB VRAM so I don't have to be worried about being limited there either. Anyway, all things being equal I would absolutely consider how the cards from these two companies might perform 2-3 years down the road if I were in the market for them and if one looks to be in a better position to do so that would be a factor I would consider. Certainly not the only factor but a factor nonetheless.


That just depends on many things; arguably, we could say my life is in the way right now.

I do not have the huge tech budget that others seem to have; I have a lot of bills, a business, a baby girl, and a wife that loves to spend my money. I have about 400-600 dollars a month (on a good month) I can spend on movies or tech or gaming or whatever. However, I save that until I can replace my GPUs. I don't have 2k the day it launches to go and buy it like some, but I always get it; I just save for what I want and prioritize things better.

I think if you honestly sat down and did your budget, you could do the same. If you didn't drink that coffee at Starbucks every day for a month, you'd be halfway to a Titan X. It is more about the need or want to buy the stuff. Your wants and cares have changed; other things have become more important than updating your rig.

To put that in better perspective: this is my hobby, so I prioritize it. A lot of my friends go to the bar/casino. I do too, but once every few months, whereas they go 2-3 times a week. I prioritize the way I spend my extra money, as this is how I like to spend it.


----------



## y2kcamaross

Quote:


> Originally Posted by *Cyber Locc*
> 
> That just depends on many things, arguably we could say my life is in the way right now.
> 
> I do not have this huge tech budget that others seem to have, I have a lot of bills a business a baby girl and a wife that love to spend my money. I have about 400-600 dollars a month (on a good month) I can spend on movies or tech or gaming or whatever. However I save that until I can replace my GPUs, I dont have 2k the day it launches to go and buy it like some but I always get it, I just save for what I want and prioritize things better.
> 
> *I think if you honestly sat down and did your budget you could do the same. If you didn't drink that coffee at starbucks every day for a month, there is half your way to a Titan X*. It is more so the need or want to buy the stuff. Your wants and cares have changed, other things have became more important, than updating your rig.
> *
> To put that in better perspective, this is my hobby so I prioritize it*, alot of my friends go to the Bar/Casino. I do too but once every few months where they go 2-3 times a week. I prioritize the way I spend my extra money as this is how I like to spend it.


Please don't act like you know what someone could do with their budget/finances, someone else could prioritize it as well and just not have the money no matter what


----------



## Cyber Locc

Quote:


> Originally Posted by *y2kcamaross*
> 
> Please don't act like you know what someone could do with their budget/finances, someone else could prioritize it as well and just not have the money no matter what


Not having the money no matter what would entail not having the money for the motorcycle he just said he bought. Even if you have 50 dollars a month in discretionary funds, that is money that could be set aside and saved for your tech craves.

Aside from that, we are all sitting on a tech forum, blowing time for entertainment. If you have time to blow and not money, well, it sounds like you could get a second job to help with the tech funds, doesn't it? Of course kids would not fall under that; however, this conversation doesn't involve kids, does it?

90% of the people in the US blow money on things they do not need, that they could live without. So what you have said is 100% false; there is always something you could do without if you are honest with yourself.

This is the land of opportunity, and if you truly want something, you can get it. There is always a way: cutting the unneeded, getting side work, etc. There is ALWAYS ANOTHER WAY!

The only way I would buy that statement is if you were working a job that had you struggling to survive, i.e. barely able to afford to eat and covering only your basic needs at best. If that is the case, well, then you should get a second job so that you can better your quality of life.

Any other case falls on deaf ears. Think about it: do you have cable TV? Do you need cable TV? I don't have it; I don't need it. Do you go to Starbucks, do you smoke, do you go to the movies, to the bar? I could go on. Those things can be cut; if you have any money left after you buy food and basic needs, then you have money you could spend on tech.

Now we could go further and define what a basic need is: a basic need is clothing to keep you warm, food and water, and housing. That's it; those are all the basic needs. Anything beyond that is a want, not a need.


----------



## Majin SSJ Eric

For sure priorities change. That's my point. I bought my bike with cash and if I had wanted to upgrade my rig instead I could've had a 5960X, two Titan X's and all new ancillary hardware with thousands left over for what I spent on my bike. But I wanted the bike more so I got it. Doing so meant I've been unable to upgrade my PC but, as I said, luckily I still have pretty high end hardware and it can still run everything I need it to. Had I gotten 780Ti's at the time I could be running into VRAM issues right now, for example.


----------



## Cyber Locc

Quote:


> Originally Posted by *Majin SSJ Eric*
> 
> For sure priorities change. That's my point. I bought my bike with cash and if I had wanted to upgrade my rig instead I could've had a 5960X, two Titan X's and all new ancillary hardware with thousands left over for what I spent on my bike. But I wanted the bike more so I got it. Doing so meant I've been unable to upgrade my PC but, as I said, luckily I still have pretty high end hardware and it can still run everything I need it to. Had I gotten 780Ti's at the time I could be running into VRAM issues right now, for example.


Yep, I agree 100%; priorities can always change. We get bored with one hobby and want to partake in another for a bit. I understand that completely, and like you said, your Titans can game just fine.

Hey, at least you have something to show for the change, lol. I changed to spending all my money on an MMO for a bit. God, was that a bad idea; thousands later, I don't play it and have nothing to show for it, lol.


----------



## y2kcamaross

Quote:


> Originally Posted by *Cyber Locc*
> 
> No having the money no matter what, would entail not having the money for a motorcycle that he just said he bought. Even if you have 50 dollars a month in discretionary funds that is money that could be set aside and saved for your tech craves.
> 
> *Aside from that, we are all sitting on a tech forum, blowing time for entertainment. If you have time to blow and not money well then it sounds like you could get a second job to help with the tech funds doesn't it*? Of course kids would not fall under that, however this conversation doesn't talk about kids does it?
> 
> 90% of the people in the US blow money on things they do not need, that they could live without. So what you have said is 100% false, there is always something you could do with out if you are honest with your self.
> 
> This is the land of oppuritinty and if you truly want something you can get it. There is always a way, cutting unneeded, getting side work, ect. There is ALWAYS ANOTHER WAY!.
> 
> The only way I would buy that statement is if, you were working a job that had you struggling to survive. IE barely afford to eat and only have your basic needs at best. If that is the case well then you should get a second job, so that you can better your quality of life.
> 
> Any other case is falling on deaf ears. Think about it, do you have Cable TV? Do you need cable TV? I dont have it, I dont need it. Do you go to Starbucks, do you smoke, do you go the movies, to the bar, I could go on. Those things can be cut, if you have any money left after you buy food and basic needs then you have money you could spend on tech.
> 
> Now we could go further and define what a basic need is, a basic need is clothing to keep you warm, food and water, housing. thats it that is all the basic needs, anything beyond that is a want not a need.


Or, you know, most people are probably ON here WHILE at their job... so they really couldn't get a second job instead of browsing these forums.


----------



## Gedm5

Quote:


> Originally Posted by *y2kcamaross*
> 
> Or you know, most people are probably ON here WHILE at their job...so they really couldn't get a second job instead of browsing these forums


I know I am!


----------



## y2kcamaross

Quote:


> Originally Posted by *Gedm5*
> 
> I know I am!


Same
90% of the time I'm on this site - it's from work.


----------



## bigjdubb

Quote:


> Originally Posted by *Gedm5*
> 
> I know I am!


Quote:


> Originally Posted by *y2kcamaross*
> 
> Same
> 90% of the time I'm on this site - it's from work.


Same here. I tend to forget about checking forums once I get home.... there are better things to do at home. While I am at work the forums are a great way to pass time while waiting for things to happen.


----------



## criminal

Quote:


> Originally Posted by *bigjdubb*
> 
> Same here. I tend to forget about checking forums once I get home.... there are better things to do at home. While I am at work the forums are a great way to pass time while waiting for things to happen.


Yes sir.


----------



## Bogga

http://www.sweclockers.com/nyhet/21927-nvidia-geforce-gtx-970-och-gtx-980-ersatts-av-pascal-med-gddr5x-till-computex

"Nvidia will first launch the cards to replace 970 and 980, one source also says that the replacement for the 980Ti will also be launched this summer"

"According to sources"...

Sweclockers isn't known to run stuff from unreliable sources. If they do, they're pretty good at saying that the sources are shaky and that we should take it with a pinch of salt... so please be true! I want that X80Ti this summer.


----------



## Cyber Locc

I am usually here while at work as well. However, if you have a job where you can check forums while at work, then odds are you have a pretty good job, right? So you are making decent money, and therefore you have things that you spend money on that you don't NEED.

My opinion is going to differ from a lot of people's on need vs. want. I grew up in a very poor family; half the time we lived on the street, and the other half it would be a studio apartment with 5 people. However, I always had what I NEEDED.

You need some sort of shelter; a 5-bedroom house is that, for sure, but you could get by with much, much less. You don't NEED a car; you don't need a lot of the things that a spoiled society has come to think you do.

I am not saying that those things are not nice to have; I have them as well. However, you do not need them, and there are many, many other things that you spend money on that you do not NEED.

Furthermore, you pick out one piece of my argument to use against it. Just because you have a job doesn't mean you can't work more: do side work of some type you are good at, etc., or get a second job. Anything you want in this world you can have; you may have to make compromises, and you may have to work very, very hard for it, but you can do it. "Can't" is a cop-out, pure and simple; I am living proof that you can do anything despite the odds, no matter how stacked against you.

Now if you were to say "they won't" prioritize it, then I would accept that, but "can't" does not exist, period. If you can't find a way to make it happen, it's because you really do not want to.


----------



## Klocek001

Quote:


> Originally Posted by *Bogga*
> 
> http://www.sweclockers.com/nyhet/21927-nvidia-geforce-gtx-970-och-gtx-980-ersatts-av-pascal-med-gddr5x-till-computex
> 
> "Nvidia will first launch the cards to replace 970 and 980, one source also says that the replacement for the 980Ti will also be launched this summer"
> 
> "According to sources"...
> 
> Sweclockers aren't known to go out with stuff from unreliable sources. If they do, they're pretty good at saying that the sources are crappy and that we should take it with a pinch of salt... so please be true! I want that X80Ti this summer


that'd mean no HBM2 on x80Ti


----------



## lolerk52

Quote:


> Originally Posted by *Klocek001*
> 
> that'd mean no HBM2 on x80Ti


Interesting that they're releasing the Ti fairly soon after the 980.

Perhaps they figure they can't milk the market like they did previously, since Polaris is a decent threat?


----------



## Bogga

Quote:


> Originally Posted by *Klocek001*
> 
> that'd mean no HBM2 on x80Ti


Well... the 740 was launched with both DDR3 and GDDR5:

http://www.geforce.com/hardware/desktop-gpus/geforce-gt-740/specifications

Not that I for one second believe that they'd first launch the X80Ti with GDDR5X and then HBM2.


----------



## Cyber Locc

Quote:


> Originally Posted by *Klocek001*
> 
> that'd mean no HBM2 on x80Ti


Well, no offense to them, but I can already tell you that article is bull.

"Another detail is that the graphics cards use the memory standard GDDR5X, a further development of GDDR5. The first generation will manage rates of 10 and 12 Gbps, up from the 7 Gbps used in the GTX 970 and GTX 980. Together with a 256-bit memory bus, this would provide bandwidth of 320 and 384 GB/s, in line with the GeForce GTX 980 Ti and AMD's Radeon R9 390X."
http://www.sweclockers.com/nyhet/21927-nvidia-geforce-gtx-970-och-gtx-980-ersatts-av-pascal-med-gddr5x-till-computex

That isn't possible, because GDDR5X hasn't even started mass production yet.

"Micron has officially stated that GDDR5X memory will be going under mass production in summer, this year. This suggests that the yields of the new DRAM will be high and early samples are already achieving speeds of 13 GB/s compared to the 14 GB/s or higher that it is expected to hit when the memory launches.

Read more: http://wccftech.com/micron-gddr5x-memory-mass-production/#ixzz44VqlqpI7"

HBM2 however has already started mass production in February.

I would really love to know why they think the cards will have GDDR5X, which isn't even in mass production yet and won't be until after the cards are said to be out.
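For what it's worth, the bandwidth figures in the quoted article do check out arithmetically: peak bandwidth is the bus width in bytes times the per-pin data rate. A quick sketch (the helper name is mine, not from any source):

```python
# Peak memory bandwidth: bus width (bits) / 8 bits-per-byte * data rate (Gbps)
# gives GB/s, since each pin transfers data_rate gigabits per second.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# GDDR5X at 10 and 12 Gbps on a 256-bit bus, as the article claims:
print(peak_bandwidth_gbs(256, 10))  # 320.0 GB/s
print(peak_bandwidth_gbs(256, 12))  # 384.0 GB/s

# For comparison, plain GDDR5 at 7 Gbps on the GTX 980's 256-bit bus:
print(peak_bandwidth_gbs(256, 7))   # 224.0 GB/s
```

So the numbers are internally consistent; the dispute in the thread is only about whether GDDR5X silicon could ship in volume in time, not about the math.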


----------



## y2kcamaross

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well no offense to them but I can already tell you that article is bull.
> 
> "Another detail is that the graphics cards use memory standard GDDR5X , a further development of GDDR5. The first generation will clear rates of 10 and 12 Gbps, from 7 Gbps used in GTX 970 and GTX 980. Together with a 256-bit memory bus, this would provide bandwidth 320 and 384 Gb / s, in line with the GeForce GTX 980 Ti and AMD's Radeon R9 390X."
> http://www.sweclockers.com/nyhet/21927-nvidia-geforce-gtx-970-och-gtx-980-ersatts-av-pascal-med-gddr5x-till-computex
> 
> That isnt possible because GDDR5x hasn't even started mass production yet.
> 
> "Micron has officially stated that GDDR5X memory will be going under mass production in summer, this year. This suggests that the yields of the new DRAM will be high and early samples are already achieving speeds of 13 GB/s compared to the 14 GB/s or higher that it is expected to hit when the memory launches.
> 
> Read more: http://wccftech.com/micron-gddr5x-memory-mass-production/#ixzz44VqlqpI7"
> 
> HBM2 however has already started mass production in February.
> 
> I would really love to know why they think the cards will have GDDr5x that isnt even mass produced yet? and wont even start until after the cards are said to be out.


Way ahead of schedule (the target was late summer), Micron has started shipping GDDR5X memory to its customers, likely Nvidia first. Micron will offer the ICs in 8 Gb (1 GB) and 16 Gb (2 GB) densities, which indeed is indicative of 8GB and 16GB graphics cards. The upcoming GeForce GTX 1070 and 1080 (if they are named that) have already been indicated as 8GB products.

http://www.guru3d.com/news-story/micron-starts-sampling-gddr5x-memory-to-customers.html


----------



## Defoler

Quote:


> Originally Posted by *Klocek001*
> 
> that'd mean no HBM2 on x80Ti


We still don't know that.

Replacing the 980 and 970 first is understandable, and that is how they did it before, twice, with the 700 series and the 900 series.
But if the ti and Titan will share the chip just like before, we could very well see HBM2 version on the ti.

Last time it took about 6 months between the 80/70 release and the ti. We could see less time if AMD push Nvidia to release early, or not if they don't.


----------



## Cyber Locc

Quote:


> Originally Posted by *y2kcamaross*
> 
> Way ahead of schedule (the target was late Summer) Micron has started shipping GDDR5X Memory its customers, likely Nvidia first. Micron will offer the ICs in 8 Gb (1 GB) and 16 Gb (2 GB) densities which indeed is indicative for 8GB adn 16GB graphics cards. The upcoming GeForce GTX 1070 and 1080 (if they are named that) already have been indicated as 8GB products.
> 
> http://www.guru3d.com/news-story/micron-starts-sampling-gddr5x-memory-to-customers.html


Micron has begun sampling; it's still a few months from sampling to the start of mass production.

If they just started sampling, then mass production is still at least 2-3 months off. Which means that if the x70 and x80 do use it, they won't be out until Q3. Micron has not changed their summer date; mass production will start in June or July.

They are not way ahead of schedule; Guru3d obviously doesn't know the difference between sampling and mass production.

I suggest you read this (and hopefully Guru3d does the same): http://www.chinaimportal.com/blog/pre-production-sample-order-terms-a-complete-guide/

Also, they promised sampling in Q1, which they barely met:

"Micron originally promised to start sampling of its GDDR5X with customers in Q1 and the company has formally delivered on its promise. What now remains to be seen is when designers of GPUs plan to roll-out their GDDR5X supporting processors. Micron claims that it is set to start mass production of the new memory this summer, which hopefully means we're going to be seeing graphics cards featuring GDDR5X *before the end of the year*."

http://www.anandtech.com/show/10193/micron-begins-to-sample-gddr5x-memory


----------



## iLeakStuff

Design > Specs approved > Risk production > Samples sent to partners > Partners design cards > Partners say they want to use the product > Going to mass production > Partners start building cards > Partners prepare for shipping and launch > Launch

From samples to launch is a loooong time


----------



## Bogga

Damn you guys... don't kill my hopes before I've had time to build them up


----------



## Cyber Locc

Quote:


> Originally Posted by *iLeakStuff*
> 
> Design > Specs approved > Risk production > Samples sent to partners > Partners design cards > Partners say they want to use the product > Going to mass production > Partners start building cards > Partners prepare for shipping and launch > Launch
> 
> From samples to launch is a loooong time


Thank you; now we just need to explain that to Guru3d, lol. Seriously, even WCCFtech knows that...
Quote:


> Originally Posted by *Bogga*
> 
> Damn you guys... don't kill my hopes before I've had time to build them up


What are your hopes, that what Sweclockers said is true? Ya, sorry, throw those out the window. I agree they are normally pretty accurate; this time, however, someone must have taken a smoke break before writing that.


----------



## magnek

Quote:


> Originally Posted by *Klocek001*
> 
> that'd mean no HBM2 on x80Ti


No HBM2 = 0 interest

I'll be damned if I buy another GDDR5(X) card end of story.


----------



## bigjdubb

I think HBM2 is further along the road to production than GDDR5X is (Samsung's HBM2 at least); it is very possible to have an HBM2 card released this summer. Depending on how the interposers work, it also seems like an x80 Ti with two 4GB stacks of HBM2 for 8GB could easily translate to a Titan with two 8GB stacks of HBM2 for 16GB. I am not sure how interposers work, so I don't know if the same interposer could be used for 4GB stacks and 8GB stacks.
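The stack arithmetic above can be sketched with nominal first-generation HBM2 figures: a 1024-bit interface per stack and up to 2 Gbps per pin, with 4 GB or 8 GB per stack. These are generic spec-level numbers, not confirmed details of any Pascal card:

```python
# Capacity and peak bandwidth for an HBM2 configuration.
# Assumes the nominal HBM2 figures stated above: 1024-bit interface
# per stack, 2 Gbps per pin by default.

def hbm2_config(stacks: int, gb_per_stack: int,
                gbps_per_pin: float = 2.0) -> tuple[int, float]:
    capacity_gb = stacks * gb_per_stack
    # bandwidth in GB/s: stacks * (1024 pins / 8 bits-per-byte) * rate
    bandwidth_gbs = stacks * 1024 / 8 * gbps_per_pin
    return capacity_gb, bandwidth_gbs

# Two 4 GB stacks -> 8 GB; two 8 GB stacks -> 16 GB.
# Bandwidth depends only on the stack count, not the stack density:
print(hbm2_config(2, 4))  # (8, 512.0)
print(hbm2_config(2, 8))  # (16, 512.0)
```

Note the design point this illustrates: swapping denser stacks doubles capacity without changing the interposer's electrical layout, which is why the 8GB-to-16GB jump in the post is plausible on the same package design.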


----------



## Bogga

Quote:


> Originally Posted by *Cyber Locc*
> 
> Thank you, now we just need to explain that to Guru3d lol. Seriously even WCCFtech knows that......
> What are your hopes, that a word Swedclockers said is truth? Ya sorry throw those out the window. I agree they are normally pretty accurate this time however, well some one must have took a smoke break before writing that.


My hopes were that there was a slight possibility of seeing the Ti version this summer, or at least before September.









I know the news wasn't 100% reliable since it cited sources, but as I said, they're not known to post bull**** and if they themselves don't believe in it then they usually write that.


----------



## Kriant

Hasn't HBM2 from Samsung been in mass production since January? That would mean that unless Nvidia pulls out fairly old GDDR5 on their cards again, HBM2 is more of a possibility than GDDR5X, which is still being sampled?

Or, alternatively, we will have to wait till Q3 for cards to arrive, which I hope won't happen.


----------



## Cyber Locc

Quote:


> Originally Posted by *Kriant*
> 
> Isn't HBM2 from Samsung is in mass production since January? So it would mean that unless Nvidia pulls out a fairly old GDDR5 on their cards again, HBM2 is more of a possibility than GDDR5(x) which is still being sampled?
> 
> Or, alternatively we will have to wait till Q3 for cards to arrive, which I hope won't happen.


Yep it has been, it will come way before GDDR5X.
Quote:


> Originally Posted by *magnek*
> 
> No HBM2 = 0 interest
> 
> I'll be damned if I buy another GDDR5(X) card end of story.


Me neither, looks like we are going AMD this year.


----------



## Kriant

I say, 5 more days, and we will see whether we will get Huang jumping around with wooden carving of a Pascal card, or whether we will see a "surprise" showing of a full GP100 chip (or the less surprising GP104)


----------



## Cyber Locc

Quote:


> Originally Posted by *Kriant*
> 
> I say, 5 more days, and we will see whether we will get Huang jumping around with wooden carving of a Pascal card, or whether we will see a "surprise" showing of a full GP100 chip (or the less surprising GP104)


WOOD!?!?! This isn't AMD man, the carving will be made of marble, duh.


----------



## bigjdubb

Quote:


> Originally Posted by *Cyber Locc*
> 
> WOOD!?!?! This isnt AMD man, the carving will be made of Marble duh.


It's a reference to the wood screws debacle with Fermi.

Quote:


> Originally Posted by *Cyber Locc*
> 
> Me neither looks like we are going AMD this year
> 
> 
> 
> 
> 
> 
> 
> .


Not sure AMD will have anything with HBM2 this year either.


----------



## carlhil2

Or, you could just wait and buy Pascal Titan...


----------



## Forceman

Quote:


> Originally Posted by *bigjdubb*
> 
> Not sure AMD will have anything with HBM2 this year either.


Since Vega isn't coming until next year, and they made a point of saying Vega is using HBM2, I'd say almost certainly not.


----------



## Cyber Locc

Quote:


> Originally Posted by *bigjdubb*
> 
> It's a reference to the wood screws debacle with Fermi.
> Not sure AMD will have anything with HBM2 this year either.


They were sheet metal screws anyway.

That is possible, then the Ti and Titan could be using GDDR5X, but the x80 and x70 almost certainly won't be.


----------



## bigjdubb

Quote:


> Originally Posted by *Cyber Locc*
> 
> They were sheet metal screws anyway,
> 
> That is possible, then the TI and Titan could be using GDDR5x, but the x80 and x70 almost certainly wont be.


Fairly certain the Titan (and by association the Ti) will be using HBM2, unless they decide to make very different cards for GeForce and Tesla/Quadro.

As far as the screws go, I don't know if that was ever settled but it all started off with "wood screws".


----------



## Cyber Locc

Quote:


> Originally Posted by *bigjdubb*
> 
> Fairly certain the Titan (and by association the Ti) will be using HBM2 unless they decide to make very different cards for Geforce and Tesla/Quadro.
> 
> As far as the screws go, I don't know if that was ever settled but it all started off with "wood screws".


Ya I know it started as that, and I got the reference after you said that. Just pulling your chain.

They were sheet metal screws though.

I think the Ti and Titan were not going to be HBM2, that is why I said I guess they could be GDDR5X. I think it's possible the Ti will be, just to cheap out, but who knows.


----------



## Majin SSJ Eric

Quote:


> Originally Posted by *y2kcamaross*
> 
> Same
> 90% of the time I'm on this site - it's from work.


I'm the opposite. I don't have time at work to get on OCN usually so this is where I usually come as soon as I finally get home at night. Different strokes I guess...


----------



## Bogga

Anyone know on which of the days of GPU Tech they will be talking about Pascal and the coming cards?


----------



## Kriant

Quote:


> Originally Posted by *Bogga*
> 
> Anyone know which of the days of gpu-tech that they will be talking about pascal and the coming cards?


April 5th around 9AM PTZ (I think it's PTZ)


----------



## Lass3

Yeah, I'll wait for GP100 / Vega with HBM2..


----------



## evoll88

Quote:


> Originally Posted by *Lass3*
> 
> Yeah, I'll wait for GP100 / Vega with HBM2..


I am going to do the same, hope they have a big jump over the titan x.


----------



## Cyber Locc

Guys, I forgot what today is, the Swedclockers thing is probably an April Fools' joke. They posted it late at night on the 31st, and I saw a bunch of April Fools' things popping up last night to be ready for today.


----------



## CluckyTaco

I can't wait for the new Tis. My 780 Tis are not cutting it for ultrawide gaming. Like the others say, Q3 seems more likely for their arrival, but I'm hoping for sooner.


----------



## Cyber Locc

Quote:


> Originally Posted by *chaitu87*
> 
> I can't wait for the new Ti's. My 780 Ti s are not cutting it for Ultrawide gaming. Hopefully like the others say Q3 seems more likely for their arrival but that's me hoping.


I was saying Q3 for the x80 and x70 if they use GDDR5X. The Titan and Ti could come right now; however, following their trend they should be out like now, or next February/March.


----------



## Majin SSJ Eric

Titan Pascal will not be out before 2017 with the Ti likely following by at least a couple months. That is 99.99% guaranteed.


----------



## Bogga

Quote:


> Originally Posted by *Cyber Locc*
> 
> Guys I forgot what today is, the swedclockers thing is probably an april fools joke. They posted it for them late at night on the 31, I seen a bunch of april fools things popping up last night to be ready for today.


Yeah if you're talking about the refrigerator-case...


----------



## duganator

I'm currently rocking a 970 @144hz 1440p and I'm struggling big time. Should I pick up a cheap 980ti or wait on the 1080?


----------



## zealord

Quote:


> Originally Posted by *duganator*
> 
> I'm currently rocking a 970 @144hz 1440p and I'm struggling big time. Should I pick up a cheap 980ti or wait on the 1080?


no single GPU will get 144hz/fps at 1440p. Not a 980 Ti and not a GTX 1080.

To achieve that (IN MODERN GAMES !) you would need something that performs three times as well as a 980 Ti.

My friend has 2x Zotac 980 Ti Extreme AMP! clocked at 1500MHz and he hovers around 60-90 fps in games like The Division, Witcher 3, GTA V etc on his 1440p 144Hz IPS G-Sync Swift monitor.
The games are completely maxed out afaik.

If you lower settings a bit a single 980 Ti will have acceptable framerates at 1440p, but far from 144fps.

If I were you I'd wait. We are pretty close and the 980 Ti will probably be worth less than 450$ in 2 months.
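The "three times as well as a 980 Ti" estimate above is easy to sanity-check: if a rig is fully GPU-bound, fps scales roughly linearly with GPU throughput, so the required multiplier is just target over current. A rough sketch that ignores CPU limits and imperfect SLI scaling:

```python
# How much faster would the GPU setup need to be to hold a fixed fps
# target? Assumes a fully GPU-bound, linearly scaling workload.

def required_speedup(current_fps, target_fps):
    """Throughput multiplier needed to move current_fps up to target_fps."""
    return target_fps / current_fps

# The 980 Ti SLI rig above hovers at 60-90 fps; to hold 144 fps:
print(required_speedup(60, 144))  # 2.4x at the worst-case dips
print(required_speedup(90, 144))  # 1.6x at the best case
```

So even the dual-card rig would need roughly 1.6-2.4x more throughput, i.e. several times a single 980 Ti, which lines up with the estimate in the post.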


----------



## Klocek001

Quote:


> Originally Posted by *zealord*
> 
> no single GPU will get 144hz/fps at 1440p. Not a 980 Ti and not a GTX 1080.
> 
> To achieve that (IN MODERN GAMES !) you would need something that performs three times as well as a 980 Ti.
> 
> My friend has 2x Zotac 980 Ti Extreme AMP! clocked at 1500mhz and he hovers around 60-90 fps in games like The Division, Witcher 3, GTA V etc on his 1440hz 144hz IPS G-Sync Swift monitor.
> The games are completely maxed out afaik.
> 
> If you lower settings a bit a single 980 Ti will have acceptable framerates at 1440p, but far from 144fps.
> 
> If I were you I'd wait. We are pretty close and the 980 Ti will probably be worth less than 450$ in 2 months.


980Ti SLI would be able to pull 144 fps in TW3, but:
-No hairworks
-at least 1.5GHz/8GHz
-a crazy fast CPU. You'd really need a 5GHz 6700K with some 4GHz DDR4 to exclude CPU bottleneck.

However, with G-Sync you actually don't need a steady 144 fps. If it keeps dropping into the 120-130 fps range you won't even notice most of the time.


----------



## zealord

Quote:


> Originally Posted by *Klocek001*
> 
> 980Ti SLI would be able to pull 144 fps in TW3, but:
> -No hairworks
> -at least 1.5GHz/8GHz
> -a crazy fast CPU. You'd really need a 5GHz 6700K with some 4GHz DDR4 to exclude CPU bottleneck.
> 
> However,with g-sync you actually don't need steady 144 fps. If it keeps dropping in the 120-130 fps range you won't even notice most of the time.


Yeah, he has a 4.6GHz 4930K and his GPUs are at full GPU usage, so the CPU isn't bottlenecking him.

I played on his PC and at 70-80 fps with G-Sync enabled it was basically as smooth as it gets. He may have used 4K downsampling in Witcher 3 with hairworks off. I have to ask him again.


----------



## chronicfx

Quote:


> Originally Posted by *Klocek001*
> 
> 980Ti SLI would be able to pull 144 fps in TW3, but:
> -No hairworks
> -at least 1.5GHz/8GHz
> -a crazy fast CPU. You'd really need a 5GHz 6700K with some 4GHz DDR4 to exclude CPU bottleneck.
> 
> However,with g-sync you actually don't need steady 144 fps. If it keeps dropping in the 120-130 fps range you won't even notice most of the time.


I can add that I am playing through Witcher 3 again (I realized I didn't keep my saves and really want to play Hearts of Stone), with 980 Ti SLI at stock clocks (1342MHz), 24GB of Corsair LPX at 3000MHz, a 6700K overclocked to 4.9GHz, and an Acer XB270HU monitor with G-Sync. I am using a DS4 controller connected by Bluetooth and I am getting about 120 FPS indoors and 75-85 FPS outside.

I am having a seamless experience. I cannot recall a single hitch or stutter since the beginning; I am dodging perfectly and feel impossible to even scratch in a fight, whereas my first playthrough was on 3x 290X and was filled with death and wishing for the new OMEGA drivers to release. My settings in this playthrough are all maxed out with HairWorks at 8x, HBAO+, etc. With a G-Sync monitor you really don't need 144Hz for any game, in my opinion; using the best graphics settings you can while staying over 75 FPS does the trick in my book. Don't sacrifice the visuals, this is what your cards were meant to do.

I can't stress enough what a difference G-Sync has made. I was ready to go back to PS4 before I got a new G-Sync monitor; now I feel like I have the peace of mind that I always wanted. Sometimes I forget to enable the Afterburner OSD lol. I wonder if my PS4 will even work when Uncharted 4 comes out.


----------



## Klocek001

Tested TW3 on my 980 Ti at 1.51/7.8, all maxed out except HairWorks off: avg. 74.3 fps, min. 55 fps.
Pretty good results considering most of my 6-minute test was fighting a few Nilfgaardian guards, and some effects during fights are fps killers in TW3.

The question I'm raising is about the point where some visual effects (e.g. shadows) have a minimal effect on the looks of the game, yet they tank the framerate like crazy. I run TW3 with a mixture of mostly ultra but some high settings too, which gives me 90+ fps on avg., and I don't think the visual quality suffers much if at all.


----------



## zealord

Quote:


> Originally Posted by *Klocek001*
> 
> Tested TW3 on my 980Ti 1.51/7.8, all maxed out except Hairworks off: avg. 74.3fps, min. 55 fps
> Pretty good results considering most of my 6 minute test was fighting a few nilfgardian guards and some effects during fight are fps killers in TW3.
> 
> *The question that I'm raising is about the point where some visual effects (e.g. shadows) have a minimal effect on the looks of the game, yet they tank the framerate like crazy. I run TW3 with a mixture of mostly ultra but some high settings too which gives me +90 fps on avg. and I don't think the visual quality suffers that much if anything*.


That is what I do for every game.

Looking what is the best performance/visual quality setting I can get while keeping a framerate of 60.









(It is quite hard on a 2500K/290X)


----------



## Klocek001

Quote:


> Originally Posted by *zealord*
> 
> That is what I do for every game.
> 
> Looking what is the best performance/visual quality setting I can get while keeping a framerate of 60.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> (*It is quite hard on a 2500K/290X*)


Tell me about it, I had a 4.9GHz 2500K + 290 Tri-X myself. AMD driver overhead completely ruining min fps is the reason I moved to the green side. I could not stand fps drops since I was running V-Sync, and the stutter was unbearable.
It was a better idea to trade the 290 for a 980 G1 than to upgrade my CPU/RAM just to eliminate the overhead on the R9.
Now that I'm running G-Sync I frankly don't even know what fps I'm getting unless out of pure curiosity.

Anyway, I got the idea to go for the 1070 instead of the 1080 this time, if it manages to more or less equal the 980 Ti. I'm pretty fine with the 980 Ti for playing at 1440p, and selling my 980 Ti now then getting a 1070 in June might save me as much as 900 PLN that I could spend on a really kick-ass audio setup or 750GB of SSD space to get rid of HDDs once and for all.


----------



## duganator

I'm not looking to totally max 1440p, I'm just looking for medium to high settings at a good frame rate.
Quote:


> Originally Posted by *zealord*
> 
> no single GPU will get 144hz/fps at 1440p. Not a 980 Ti and not a GTX 1080.
> 
> To achieve that (IN MODERN GAMES !) you would need something that performs three times as well as a 980 Ti.
> 
> My friend has 2x Zotac 980 Ti Extreme AMP! clocked at 1500mhz and he hovers around 60-90 fps in games like The Division, Witcher 3, GTA V etc on his 1440hz 144hz IPS G-Sync Swift monitor.
> The games are completely maxed out afaik.
> 
> If you lower settings a bit a single 980 Ti will have acceptable framerates at 1440p, but far from 144fps.
> 
> If I were you I'd wait. We are pretty close and the 980 Ti will probably be worth less than 450$ in 2 months.


----------



## zealord

Quote:


> Originally Posted by *duganator*
> 
> I'm Not looking to totally max 1440p, I'm just looking for medium to high settings at a good frame rate


Well a 980 Ti would do, but no matter what I wouldn't buy a 980 Ti or any high end enthusiast GPU currently.

Normally I tell people to buy a new GPU when they need one and want to play games, but we are so close to new ones.

In 3 days Nvidia is showing _something_. Maybe Pascal that is supposed to be released soon.

It is so hard to predict what will happen.

Worst case scenario both Pascal and Polaris will be a huge disappointment and you can still buy a GTX 980 Ti or whatever.

Best case scenario they are both amazing and offer us much higher performance than what most people hoped for.


----------



## carlhil2

I am just bored with Maxwell, I've had almost all of them. Time for something new to play with... hurry up nVidia...


----------



## Cyber Locc

Quote:


> Originally Posted by *carlhil2*
> 
> I am just bored with maxwell, almost had all of them. time for something new to play with...hurry up nVidia...


Well, it should be coming soon, and here's more evidence that it will suck: the last, what, 2 months of NV drivers have been destroying cards. So are they bricking Maxwells so people buy the crappy Pascal? The world may never know.


----------



## i7monkey

Is it a good time to sell a 980ti now? How much are they worth? Mine's barely used and in great condition.


----------



## carlhil2

Quote:


> Originally Posted by *Cyber Locc*
> 
> Well it should be coming soon and more evidence that it will suck. The last what 2 months of NV drivers have been destroying cards. So are they bricking Maxwells so people buy the crappy pascal? the world may never know.


Lol, I don't believe in conspiracy theories... I just buy what I buy. If I don't like it, I will return/sell it... I just use a little common sense and don't fall for the hype, works for me...


----------



## Cyber Locc

Quote:


> Originally Posted by *carlhil2*
> 
> Lol, I don't believe in conspiracy theories...I just buy what I buy. if I don't like it, I will return/sell it...


There are no conspiracy theories, they are killing cards.

KILLING CARDS MAN, THEY ARE DYING.....

http://www.fudzilla.com/news/graphics/40364-nvidia-geforce-364-72-drivers-come-with-plenty-of-issues


----------



## carlhil2

Quote:


> Originally Posted by *Cyber Locc*
> 
> There is no conspiracy theories they are killing cards.
> 
> KILLING CARDS MAN, THEY ARE DYING.....
> 
> http://www.fudzilla.com/news/graphics/40364-nvidia-geforce-364-72-drivers-come-with-plenty-of-issues


Maybe, mine are good, so I can't say, but I was talking about this: "So are they bricking Maxwells so people buy the crappy pascal? the world may never know...."

To me, stuff like that sounds like crazy talk, no disrespect...


----------



## Cyber Locc

Quote:


> Originally Posted by *carlhil2*
> 
> Maybe, mine are good, so, can't say, but, I was talking about this.. ". So are they bricking Maxwells so people buy the crappy pascal? the world may never know...."
> 
> 
> 
> 
> 
> 
> 
> to me, stuff like that sounds like crazy talk, no disrespect ...


Oh, it was 100% crazy talk, I was just kidding, making fun of their not-very-nice April Fools' joke.

Just trying to, how do you say, lighten the mood.


----------



## Klocek001

Quote:


> Originally Posted by *i7monkey*
> 
> Is it a good time to sell a 980ti now? How much are they worth? Mine's barely used and in great condition.


Well, I'm selling mine and getting the GTX ??70 in June. I've already decided not to go with the GTX ??80 unless the pricing is reasonable, not like the 970 and 980. The ??70 should be as fast as a 980 Ti while consuming half the power and hopefully running much, much cooler. I know I'd have to wait for it until June, but if my plan goes right it will be worth the wait: I'll get a much more efficient card with a free game, and with the money I save on the deal I'd be able to further upgrade my rig with some nice new stuff like two 480GB SSDs.


----------



## i7monkey

How much is a 980Ti worth now?


----------



## Klocek001

Quote:


> Originally Posted by *i7monkey*
> 
> How much is a 980Ti worth now?


IDK about your country.
Bought mine for 2950PLN, new. Got offers to sell for 2500PLN but I'm in no hurry so waiting for 2600-2700PLN.
Hoping for ??70 to launch at 1500PLN just like 970.


----------



## Bogga

Let's hope for some good news the coming days...

A used 980ti here in Sweden goes for around 5000sek which is about 540€/615$


----------



## Lass3

Quote:


> Originally Posted by *Klocek001*
> 
> 980Ti SLI would be able to pull 144 fps in TW3, but:
> -No hairworks
> -at least 1.5GHz/8GHz
> -a crazy fast CPU. You'd really need a 5GHz 6700K with some 4GHz DDR4 to exclude CPU bottleneck.
> 
> However,with g-sync you actually don't need steady 144 fps. If it keeps dropping in the 120-130 fps range you won't even notice most of the time.


Yeah if you lower details and play at default FOV, then maybe.


----------



## Cyber Locc

Quote:


> Originally Posted by *Lass3*
> 
> Yeah if you lower details and play at default FOV, then maybe.


He didn't specify a resolution, and seeing how one Ti can keep a solid 60 FPS without HairWorks at 1440p, SLI should be able to do 144 at 1080p lol.


----------



## BoredErica

Quote:


> Originally Posted by *sl4ppy*
> 
> ...nevermind..


This implied concession is probably the best I'm ever going to get, so I'll take it.


----------



## Xuvial

Quote:


> Originally Posted by *Cyber Locc*
> 
> He didn't specify a resolution and seeing how 1 TI can keep 60 FPS, solid without hair works at 1440. Sli should be able to 144 at 1080p lol.


It looks like I got myself into deep trouble with a 1440p 144Hz monitor.

Hoping a 1080 Ti has the horsepower to pull off most games at 120+ fps without sacrificing settings too much.


----------



## Majin SSJ Eric

144Hz at 1440p is a very intensive resolution/framerate target for any video card setup. I really doubt any single card will be able to saturate that on new AAA titles any time real soon.


----------



## Lass3

Quote:


> Originally Posted by *Cyber Locc*
> 
> He didn't specify a resolution and seeing how 1 TI can keep 60 FPS, solid without hair works at 1440. Sli should be able to 144 at 1080p lol.


If you read the thread, you will see that he talks about 1440p.

Quote:


> Originally Posted by *Xuvial*
> 
> It looks like I got myself into deep trouble with a 1440p 144hz
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Hoping a 1080 Ti has the horsepower to pull off most games at 120+ fps without sacrificing settings too much.


I doubt it. And I don't think GP100 / 1080 Ti will launch before late this year or early next. The upcoming cards are almost guaranteed to be only the small chips, like the 680/670 and 980/970.

GTX 1080 will probably beat the 980 Ti slightly, at a lower price, with 2GB more VRAM. The GTX 1080 Ti won't be 100% faster than the GTX 1080.

1440p/144Hz (and 120-144 fps, obviously) will always be hard to achieve with a single card in AAA games unless you compromise on IQ. Games are becoming more and more demanding all the time.


----------



## flopper

Quote:


> Originally Posted by *Xuvial*
> 
> It looks like I got myself into deep trouble with a 1440p 144hz
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Hoping a 1080 Ti has the horsepower to pull off most games at 120+ fps without sacrificing settings too much.


A 980 Ti can't do 60 fps at 1080p in the Witcher game, so whatever people want to believe, 120 fps at higher resolutions just won't happen for many generations.


----------



## FinalForm7

Sounds like you'd be more CPU-bound in The Witcher 3 than bottlenecked by a 980 Ti.
Quote:


> Originally Posted by *flopper*
> 
> 980ti cant do 60fps on 1080p.
> witcher game, so whatever people wanna believe 120fps on higher resolutions just wont happen in many generations


----------



## Xuvial

Quote:


> Originally Posted by *Lass3*
> 
> 1440p/144Hz (and 120-144 fps obviously) will always be hard to archieve with a single card in AAA games unless you compromise on IQ. Games are becoming more and more demanding every time.


Games are becoming more demanding, but they are getting better optimized as well. Also, I believe we are pretty close to the point where pushing visuals any further just won't be worth the huge performance drops, diminishing gains and whatnot. Things can only get so detailed.

I always like to hold Battlefront as a golden example of how a visually gorgeous game can still run great on all hardware.



Granted, a single 980 Ti is only managing 80fps which is falling short of the 120-140fps target...but maybe an overclocked 1080 Ti could stand a chance *shrug*.


----------



## JTHMfreak

Quote:


> Originally Posted by *flopper*
> 
> 980ti cant do 60fps on 1080p.
> witcher game, so whatever people wanna believe 120fps on higher resolutions just wont happen in many generations


I disagree, a 980 ti should be very capable of 60 fps at 1080p in Witcher 3, even with the settings cranked. Sure, you may get an occasional dip of a few fps here and there, but for the most part you should be able to have a fairly consistent 60 fps.


----------



## Lass3

Quote:


> Originally Posted by *flopper*
> 
> 980ti cant do 60fps on 1080p.
> witcher game, so whatever people wanna believe 120fps on higher resolutions just wont happen in many generations


What? I played Witcher 3 at 1440p on ultra settings with high FOV (no hairworks) just fine on my OC'ed 980 Ti. 60-90 fps range.

Quote:


> Originally Posted by *Xuvial*
> 
> Games are becoming more demanding, but they are getting better optimized as well. Also I believe we are pretty close to the point where pushing visuals any further just won't be worth the huge performance drops, diminishing gains and what not. Things can only get so detailed.
> 
> I always like to hold Battlefront as a golden example of how a visually gorgeous game can still run great on all hardware.
> 
> 
> 
> Granted, a single 980 Ti is only managing 80fps which is falling short of the 120-140fps target...but maybe an overclocked 1080 Ti could stand a chance *shrug*.


Graphics can be much improved, and they will be in the years to come. We are far from the limit here.

FXAA is not good: it's a blurry AA option, albeit with a low performance hit, and I'd choose no AA over FXAA any day, at least it's sharp and crisp. And the average fps might be 83, but the low fps is still in the 60s, *even for 980 Ti SLI*. Raise the FOV and choose a better AA option and the game would probably hit the low 50s in minimum fps. When going for high fps, you also want a high minimum fps, or it's highly noticeable, even with VRR tech. The smoothness of a high-Hz monitor is gone when going below 90 fps; 100+ at all times for optimal gameplay.
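To make the minimum-fps point concrete, the frame-time budget at each rate is just 1000 ms divided by the fps:

```python
# Frame-time budget (ms) at the framerates discussed above. At high
# refresh rates each extra fps buys very little absolute time per frame,
# which is why drops below ~90 fps are the ones you actually feel.

def frame_ms(fps):
    """Milliseconds available per frame at a given framerate."""
    return 1000.0 / fps

for fps in (60, 90, 120, 144):
    print(f"{fps} fps -> {frame_ms(fps):.2f} ms/frame")
```

A drop from 144 to 120 fps costs under 1.4 ms per frame, while a drop from 90 to 60 costs over 5.5 ms, which matches the observation that dips below 90 are where the smoothness goes.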


----------



## BoredErica

Quote:


> Originally Posted by *flopper*
> 
> 980ti cant do 60fps on 1080p.
> witcher game


I'm doing 60 on 1440p. No Hairworks.


----------



## mouacyk

Quote:


> Originally Posted by *flopper*
> 
> 980ti cant do 60fps on 1080p.
> witcher game, so whatever people wanna believe 120fps on higher resolutions just wont happen in many generations


Yes, some (if not most) can. My MSI can do 2560x1080 (33% more pixels than 1080p) at 70 fps with dips to around 65 fps, maxed settings with HairWorks on, but HairWorks AA at 4x. I even have foliage beyond the max setting with a tweak. This is a card running at 1493MHz core and 8GHz memory, only slightly above a middle-of-the-road 980 Ti.
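The "33% more than 1080p" figure is just the pixel-count ratio; for comparison, here is the same arithmetic for the other resolutions in this thread (a quick sketch, nothing card-specific):

```python
# Pixel counts relative to 1920x1080, the metric behind "33% more than 1080p".

def pixels(width, height):
    return width * height

BASE = pixels(1920, 1080)

for name, (w, h) in {
    "2560x1080 (ultrawide)": (2560, 1080),
    "2560x1440 (1440p)": (2560, 1440),
    "3840x2160 (4K)": (3840, 2160),
}.items():
    print(f"{name}: {pixels(w, h) / BASE:.2f}x the pixels of 1080p")
```

GPU load scales roughly with pixel count, so ultrawide 1080p sits between 1080p and 1440p in cost, which is why results on it don't transfer directly to 1440p claims.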


----------



## Bogga

1.5 hours left until it starts... fingers crossed


----------



## outofmyheadyo

Don't see them announcing any new cards today.


----------



## Cyber Locc

Quote:


> Originally Posted by *outofmyheadyo*
> 
> Dont see them announcing any new cards today.


They announced the new Tesla.

Maybe we will still see gaming cards, I am still watching.

Anyway, P100 confirmed at 600mm², get your wallets ready for an extreme price hike.


----------



## rck1984

Quote:


> Originally Posted by *Cyber Locc*
> 
> They announced the new Tesla
> 
> Maybe gaming cards we will see, I am still watching.
> 
> Anyway p100 confirmed 600mm get your wallets ready for an extreme price hike.


Not a word on gaming cards yet...
People waiting a while for Pascal are sweating and getting a little nervous now


----------



## sblantipodi

rumors rumors rumors, when will we see official benchmarks?


----------



## Cyber Locc

Quote:


> Originally Posted by *rck1984*
> 
> Not a word on gaming cards yet...
> People waiting a while for Pascal are sweating and getting a little nervous now


Tesla chips are also used in gaming cards; that P100 is the Titan P's chip. And we can expect a serious price jump, the Titan P is likely going to be around $2k.
Quote:


> Originally Posted by *sblantipodi*
> 
> rumors rumors rumors, when will we see official benchmarks?


September - October.


----------



## sblantipodi

Quote:


> Originally Posted by *Cyber Locc*
> 
> Tesla Chips are also used in gaming cards that p100 is the Titan Ps chip.
> September - October.


Do you think those cards will be out in September?


----------



## rck1984

Glad I did not just wait for Pascal but bought a 980 Ti and am enjoying gaming on it. As far as I understand, it might still take a while before we see new GeForce cards?


----------



## Greenland

Computex will likely be GP104 Launch.


----------



## Cyber Locc

Quote:


> Originally Posted by *sblantipodi*
> 
> do you think that those cards will be out in september?


If we are lucky.
Quote:


> Originally Posted by *rck1984*
> 
> Glad i did not just wait for Pascal but bought a 980Ti and enjoy gaming on it. As far as i understand, it might still take a while before we see new GeForce cards?


Me too. All the people saying sell your Maxwells, wait for Pascal, etc. etc., are going to have a crappy rest of this year.


----------



## rck1984

Quote:


> Originally Posted by *Cyber Locc*
> 
> If we are lucky.
> Me too all the poeple saying sell your Maxwells, wait for Pascal, ect ect. They are going to have a crappy rest of this year.


I have seen so many "I'm waiting for Pascal for over a year now" comments lately. Christ... Why would you wait over a year for a new GPU? If all you do is wait for new tech, you will never get to enjoy anything.


----------



## sblantipodi

Quote:


> Originally Posted by *rck1984*
> 
> I have seen so many "I'm waiting for Pascal for over a year now" comments lately. Christ... Why would you wait for over a year for a new GPU? If all you do is waiting for new tech, you will never get to enjoy..


Not everyone likes the fact that their new GPU drops in price by over 50% after a few months.
Neither do I; this is the reason why I buy new GPUs as soon as they are launched.


----------



## rck1984

Quote:


> Originally Posted by *sblantipodi*
> 
> not every people likes the fact that their new GPU drop in price by over 50% after few months.
> neither do I, this is the reason why I buy new GPUs as soon as they are launched.


Shortly before release, sure. But i'm hearing people waiting for a year.. That is ridiculous if you ask me.


----------



## Cyber Locc

Quote:


> Originally Posted by *sblantipodi*
> 
> not every people likes the fact that their new GPU drop in price by over 50% after few months.
> neither do I, this is the reason why I buy new GPUs as soon as they are launched.


Well, if you don't like the fact that your tech will drop in price by 50% in a few months, then you are in the wrong hobby. That is reality in the tech world, and waiting doesn't help you avoid it.

If you buy an x80, it will be top dog for a few months, then the Titan comes out and boom, your card is mid-range again. That's just the way things go.

If it is a month or a few weeks from a confirmed release, I can understand waiting, but when you're guessing it's a few months away and you have no clue, like right now, it's stupid. Everyone has been waiting saying "oh, I am waiting for the x80 in May"; well, now those cards likely won't be here till near the end of the year, so let's wait another 6 months.

Then when the x80 is released it will be "oh, wait for the Ti, it's so much better, only a few months away", then the Ti comes and it's "wait, Volta is a few months away with a ton of gains". There is always something better a few months out; that argument is foolish, all you do is wait.

All you will ever do is wait; it will never stop. There will never not be something faster, better, and more power-efficient around the corner.


----------



## HAL900




----------



## sblantipodi

Quote:


> Originally Posted by *rck1984*
> 
> Shortly before release, sure. But i'm hearing people waiting for a year.. That is ridiculous if you ask me.


I meant shortly before release; I probably didn't explain myself well.


----------



## Cyber Locc

Quote:


> Originally Posted by *HAL900*


It's sooo bogus.

(It wasn't linked here, sorry, wrong thread lol)


----------



## mouacyk

Quote:


> Originally Posted by *Cyber Locc*
> 
> If we are lucky.
> Me too all the poeple saying sell your Maxwells, wait for Pascal, ect ect. They are going to have a crappy rest of this year.


It's OK. They're performing a selfless act, giving the rest of us cheap SLI GPUs to use in Vulkan and DX12. For that I salute them.


----------



## Stoogie

Quote:


> Originally Posted by *iLeakStuff*
> 
> Both equal in performance


How is the 1070 equal in performance to the 980 Ti when it clearly has a 10% lower score?


----------



## guttheslayer

Quote:


> Originally Posted by *Stoogie*
> 
> How is 1070 equal in performance to 980 ti when it clearly has 10% less score.


Clock speed.


----------



## TheBlindDeafMute

2 pascal titans please


----------



## Stoogie

Quote:


> Originally Posted by *guttheslayer*
> 
> Clock speed.


Clock speed is not performance. It's only a single variable.


----------



## guttheslayer

Quote:


> Originally Posted by *Stoogie*
> 
> clock speed is not performance. It's only a single variable.


What are you talking about? If clock speed contributed nothing to performance, then why would people bother to overclock?

The 1080 comes with a 40% faster clock speed than the Titan X despite having 17% fewer cores; that alone accounts for a 25-30% boost.


----------



## variant

Quote:


> Originally Posted by *guttheslayer*
> 
> What are u talking about? Clockspeed have no performance den why ppl bother to overclock?
> 
> The 1080 comes with 40% faster clock speed than titan x despite having 17% less core, that alone account for 25-30% boost


The 1080 is clocked about 60% higher than the GTX 980 Ti with only about 20-25% more performance. It also has only about 10% fewer cores than the 980 Ti.


----------



## Diogenes5

Quote:


> Originally Posted by *sblantipodi*
> 
> not every people likes the fact that their new GPU drop in price by over 50% after few months.
> neither do I, this is the reason why I buy new GPUs as soon as they are launched.


The sweet spot is to buy during clearance times, which in my experience is about a month before Black Friday. I scored a 570 for under $200 when they were going for $300, a 7970 for $200 when they were going for $300, and an R9 290 for $220. The 570 was an EVGA dud with a crappy cooler, so I had to put it under water with a Kraken-like cooler. The 7970 and 290 were both Vapor-X models, so they were excellent deals. When I sold each of my previous cards, I actually broke even or made money. Hell, I could probably sell my 290 now and break even or do better, because Bitcoin mining seems to be trending again.

Of course, the problem with always buying the mainstream card on sale is that you are 30-40% off the performance of the top cards like the Titan or 980 Ti. But everyone I know who wants the best always takes a bath when they upgrade. Or it makes them way too attached to their cards, and they ignore big leaps in tech because of the sunk cost of the old card.
Quote:


> Originally Posted by *variant*
> 
> The 1080 is clocked about 60% higher than the GTX980Ti with only about a 20-25% more performance.


Did they introduce a new architecture or are these cards mostly gains from a node shrink? I'd like to know since AMD advertised they redesigned their compute units for Polaris. The 1080 and 1070 also seem to be missing a lot of the tech promised for the GP100 flagship card, most notably HBM.


----------



## Stoogie

Performance consists of multiple variables:
architecture, ROPs, TMUs, shaders, memory, bus width, clocks, even drivers, and whatever else is under the hood.

So far it seems it's 9% slower.


----------



## paulerxx

....Seriously, that's it? Haven't we waited long enough? What's a viable upgrade for an HD 7870 (which I bought in 2013) in the $200 range?


----------



## EightDee8D

Quote:


> Originally Posted by *paulerxx*
> 
> ....Seriously that's it? Haven't we waited long enough? What's a viable upgrade for a HD7870 in the $200 range (which I bought in 2013)


For $200 you need to wait a bit more.


----------



## cg4200

Yeah, it sucks to spend $670 on a 980 Ti and sell it at a loss for $400 or so, but if you don't, you are always a step behind, waiting for a better deal or new technology.
My G1 at 1580 core / 8100 memory could max some games, depending on settings and optimization, at 49 fps on a Wasabi Mango 4K.
I was not happy that AMD cards that cost way less could run better in DX12. I know not a lot of DX12 games matter yet, but it shows a pattern, and the 1080 is supposed to run DX12_1 better.
Going forward, I am no fanboy; I buy the best card for performance I can... I would like to see benchmarks of the 1080 vs 980 Ti vs AMD 480X in DX11 and DX12 games, because gaming is what I do.
It worries me that Nvidia rushed up the release to push it out before AMD???


----------



## Buris

Quote:


> Originally Posted by *guttheslayer*
> 
> What are u talking about? Clockspeed have no performance den why ppl bother to overclock?
> 
> The 1080 comes with 40% faster clock speed than titan x despite having 17% less core, that alone account for 25-30% boost


To put this into perspective, I and many other overclock.net denizens could easily engineer a graphics chip ourselves with a 5 GHz clock speed, a near 500% increase in clock speed! It would run like total garbage, though. In short, clock speed is not indicative of performance: similar to a 5 GHz AMD 8350 CPU vs. a 3 GHz Intel Xeon, the Xeon is going to destroy the 8350 clocked at 5 GHz, probably with twice the performance at nearly half the clock speed :thumb:

Architecture is more important than clock speed, and because we don't have a sample of Pascal or Polaris to look at ourselves, we can't make a judgement call.

Even though Pascal has fewer cores, it could actually be twice as fast per core; I can't say for certain until I test one.


----------



## NuclearPeace

Quote:


> Originally Posted by *paulerxx*
> 
> ....Seriously that's it? Haven't we waited long enough? What's a viable upgrade for a HD7870 in the $200 range (which I bought in 2013)


Honestly, nothing. And this is why I am so disappointed in both NVIDIA and AMD.

Right now you could get a 380X that's slightly better for $230, or a 960 that's around the same speed for $200. Yay.

With NVIDIA charging $600 for a new 1080, I severely doubt there will be anything better than the above for you this coming generation.


----------



## Buris

Quote:


> Originally Posted by *NuclearPeace*
> 
> Honestly nothing. And this is why I am so disappointed in both NVIDIA and AMD.
> 
> Right now you could get a 380x thats slightly better for $230 or get a 960 thats around the same speed for $200. Yay.
> 
> With NVIDIA charging $600 for a new 1080 I severely doubt that there will be anything better than the above for you with this coming generation.


Polaris will be for you. Interesting that these Pascal cards on 16nm can reach 1.7 GHz clock speeds... Polaris cards on 14nm, even if they kept the same architecture as the last fab, should be capable of clock speeds around 1.4 GHz.

And yet all leaked benchmarks point to an 800 MHz clock... That's very strange, don't you think?

I might pick up a 1080, depending on performance, when they're sub-$400 in November.


----------



## guttheslayer

Quote:


> Originally Posted by *Stoogie*
> 
> Performance consists of multiple variables.
> Architecture, RoPS, TMuS, Shaders, Memory, Bus width, Clocks and even drivers, whatever else is under the 'hood'.
> 
> So far it seems its 9% slower.


What are you talking about, really? First you ask why it performs faster despite having 10% fewer cores. But the answer is a 40% boost from clock speed, hence an overall 25% boost.

Performance = cores x clock speed x architecture efficiency

As of now, Pascal looks to be at least at Maxwell's efficiency, therefore I take both architectures as the same.

2560 x 1.7 > 3072 x 1.1

You do the maths.
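
Doing the maths: the ratio of those two cores × clock products can be checked directly (a quick Python sanity check using only the figures quoted in this thread; the equal-per-core-efficiency assumption is the poster's, not a measured fact):

```python
# Crude throughput proxy: cores x boost clock (GHz), assuming
# (as the post does) equal per-core efficiency for both chips.
gtx_1080_proxy = 2560 * 1.7   # rumored GP104 cores x clock
titan_x_proxy = 3072 * 1.1    # GM200 cores x clock

ratio = gtx_1080_proxy / titan_x_proxy
print(f"1080 vs Titan X throughput proxy: {ratio:.2f}x")  # ~1.29x
```

A ratio of roughly 1.29x lines up with the 25-30% boost claimed above, but only under that unverified equal-efficiency assumption.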


----------



## Buris

Quote:


> Originally Posted by *guttheslayer*
> 
> What are you talking about really? First you ask why it perform faster despite having 10% less cores. But the answer is a 40% boost from clock speed. Hence a overall 25% boost.
> 
> Performance = Cores x clock speed x Architecture efficiency
> 
> As of now I see, Pascal is at least Maxwell efficiency. Therefore i take both architecture as same.
> 
> 2560 x 1.7 > 3072 x 1.1
> 
> You do the maths.


I can see where your math is coming from, but you can't say for sure it's the same architecture; you could draw a conclusion from transistor count as well as from the math you've used...

The best practice is always to wait for real-world performance to be shown and compared by a third party.


----------



## sblantipodi

I still don't understand why Nvidia says the GTX 1080 has double the performance of GTX 980 SLI.
Is this nearly true?


----------



## Maintenance Bot

Quote:


> Originally Posted by *sblantipodi*
> 
> still don't understand why nvidia says that GTX1080 has double the performance of a GTX980 SLI.
> is this nearly true?


It has more than 2x in VR, but in gaming scenarios, who knows. We have to wait a few weeks for reviews.


----------



## michael-ocn

Nice, with an X80 Ti I'll be able to get a 144Hz 34" curved monitor.


----------



## Dragon 32

Quote:


> Originally Posted by *sblantipodi*
> 
> still don't understand why nvidia says that GTX1080 has double the performance of a GTX980 SLI.
> is this nearly true?


From what I saw, they said the 1080 is only just better than 980 SLI for normal use.

Presumably they class 980 SLI as about 150% of single-980 performance.


----------



## CuriousNapper

Quote:


> Originally Posted by *Dragon 32*
> 
> From what I saw they said the 1080 is only just better than 980 SLI for normal use.
> 
> Presumably they class 980 SLI as about 150% single 980 performance.


The issue I have with that is: is that at stock, or overclocked to 2 GHz with a Founders card?

If it's stock, then once overclocked there's even more performance to be had, especially once the GDDR5X is overclocked.


----------



## SuperZan

Quote:


> Originally Posted by *CuriousNapper*
> 
> Issue I have with that is is that at stock or overclocked to 2ghz with a founder's card.
> 
> If it's stock, then once overclocked there's even more performance to be had. Especially when grrd5x is overclocked.


That's the issue with buying first-class tickets on the hype train. 1080's gonna be a good card, no doubt, but we need a suite of third-party reviews to know where everything really stands. At the moment what we know is what Nvidia wants us to know. That's well and good but it's rarely the whole story from any company putting out a new product.


----------



## guttheslayer

Quote:


> Originally Posted by *CuriousNapper*
> 
> Issue I have with that is is that at stock or overclocked to 2ghz with a founder's card.
> 
> If it's stock, then once overclocked there's even more performance to be had. Especially when grrd5x is overclocked.


If you look at the video again, you will realize it's an OC'ed 1080. You can see they are using EVGA Precision X for the overclocking.

It's not at stock speed.


----------



## guttheslayer

Quote:


> Originally Posted by *Buris*
> 
> I can see where your math is coming from, but you can't say for sure it's the same architecture, you could come up with a conclusion from transistor count as well as the math you've used...
> 
> The best practice is to always wait for real world performance to be shown and compared via a third party


Refer to the block diagrams of GM200 and GP100: they are effectively the same, except for the DP units added in and some configuration changes in cores per SM.

7.2 billion transistors vs 8 billion on GM200, but GP104 runs 40% faster. Again, it's a no-brainer indication that GP104 will lead by some margin.


----------



## pittguy578

GTX 1060? Is that really a card?


----------



## CuriousNapper

Quote:


> Originally Posted by *guttheslayer*
> 
> Refer to the block diagram of GM200 and GP100, they are effectively the same, except having DP unit adding in and some configuration change in cores per SM.
> 
> 7.2 billion transistor vs 8 billion on GM200 but GP104 run 40% faster. Again its a no-brainer indication that GP104 will lead by some margin.


We have to wait and see whether the performance gains are purely from the die shrink.


----------



## airfathaaaaa

but but

but but but but


----------



## TranquilTempest

Quote:


> Originally Posted by *airfathaaaaa*
> 
> but but
> 
> but but but but


The graph with the bigger difference between Maxwell and Pascal is just for VR.


----------



## PlugSeven

Is the 2x perf in VR Nvidia's roundabout way of saying lower latency?


----------



## sblantipodi

Quote:


> Originally Posted by *Maintenance Bot*
> 
> It has more than 2x in VR, but gaming scenario who knows. We have to wait a few weeks for reviews.


OK, really interested in this.
If it's true that it has double the performance of GTX 980 SLI for normal use, I will consider upgrading my GTX 980 Ti SLI.


----------



## Buris

I'm waiting on the 1080 Ti or big Vega; the performance of the 1080 doesn't warrant the price relative to last gen's 500-600mm² dies. But this is a great upgrade for people with 970s/280Xs who are memory- or core-bottlenecked.

Calling it right now: 16GB HBM2 in December on both the 1080 Ti and big Vega, 4-6k cores, and 3x the performance of the 980 Ti/Fury X.


----------



## sblantipodi

Quote:


> Originally Posted by *Buris*
> 
> I'm waiting on 1080 Ti or big vega, performance of 1080 doesn't warrant the price from last-gen's 500-600mm dies- But this is a great upgrade for people with 970's/280x who are memory/core bottlenecked
> 
> calling it right now 16GB HBM2 in december on both the 1080 Ti and big vega and 4-6k cores- 3x performance of 980 Ti/Fury X


It seems that our new cards become old every 6 months.
I spent €1500 on GTX 980 Ti SLI; how much will my cards be worth if I sell them in December to buy GTX 1080 Ti SLI?


----------



## guttheslayer

Quote:


> Originally Posted by *sblantipodi*
> 
> it seems that our new cards became old every 6 months.
> I spent 1500€ for a GTX980 Ti SLI, how much will worth my cards if I will sell it on december to buy a GTX1080 Ti SLI?


Provided the 1080 Ti even drops by December...

Also, it won't be called the 1080 Ti. In any case, if GP100 gets more than one cut-down variant, rest assured the entire lineup will be rebranded as the 1100 series next year.


----------



## iLeakStuff

Quote:


> Originally Posted by *sblantipodi*
> 
> it seems that our new cards became old every 6 months.
> I spent 1500€ for a GTX980 Ti SLI, how much will worth my cards if I will sell it on december to buy a GTX1080 Ti SLI?


No.

The GTX 780 Ti arrived over a year after the GTX 680.
The GTX 980 Ti arrived 9 months after the GTX 980.

You guys are dreaming if you think a GTX 1080 Ti will arrive this year.
Not gonna happen.

And btw, prepare to bleed for the card now that midrange Pascal costs $600.


----------



## sblantipodi

Quote:


> Originally Posted by *iLeakStuff*
> 
> No.
> 
> GTX 780Ti arrived over a year after GTX 680
> GTX 980Ti arrived 9 months after GTX 980
> 
> You guys are dreaming if you think a GTX 1080Ti will arrive this year.
> Not gonna happen.


History says that a new card is always released after 6, max 9, months.
If it doesn't arrive this year, it will arrive in February 2017.


----------



## xxdarkreap3rxx

Quote:


> Originally Posted by *sblantipodi*
> 
> still don't understand why nvidia says that GTX1080 has double the performance of a GTX980 SLI.
> is this nearly true?


SLI isn't always 100% better than a single card. Let's take an OC'd 980 Ti at a slightly higher res than 1080p.

https://www.techpowerup.com/reviews/ASUS/GTX_980_Ti_Matrix/23.html

100/81 gives about 23.4% faster. Look at the 970 SLI on there. The Matrix is 12% better than it, and a 980 is not that much faster than a 970. In fact, if you take 66/57, you get about 16% for the 980 over the 970. So if the 1080 is closer to 30% over a 980 Ti, I think that's not out of the realm of possibility.
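
For anyone following along, the percentages in the post fall out of simple ratios of the chart's relative-performance index values (a quick Python sketch; the variable names and my reading of what the 81% entry refers to are interpretations of the figures quoted above, not data from the chart itself):

```python
# TechPowerUp-style relative performance indices quoted in the post
# (higher = faster; the OC'd 980 Ti Matrix is taken as 100).
matrix_980ti = 100
other_card = 81        # the entry the poster divides by
gtx_980, gtx_970 = 66, 57

print(f"Matrix lead: {(matrix_980ti / other_card - 1) * 100:.1f}%")  # ~23.5%
print(f"980 over 970: {(gtx_980 / gtx_970 - 1) * 100:.1f}%")         # ~15.8%
```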


----------



## iLeakStuff

Quote:


> Originally Posted by *sblantipodi*
> 
> story says that a new card is always released after 6 max 9 months.
> if it will not arrive this year, it will arrive on february 2017.


So now you are moving the goalposts? At least we are getting somewhere.

I doubt the GTX 1080 Ti will even arrive until Q2 2017. The GTX 980 Ti was a weird card, a placeholder while Nvidia worked on Pascal.
The GTX 680 and GTX 780 Ti are what you should look at to gauge when the 1080 Ti will arrive, because the 680 was a brand new architecture on a new node, like Pascal.

GTX 680: March 2012
GTX 780 Ti: October 2013

That's 1.5 years. I personally think it's somewhere in the middle: GTX 1080 Ti near the summer of next year.


----------



## guttheslayer

Quote:


> Originally Posted by *sblantipodi*
> 
> story says that a new card is always released after 6 max 9 months.
> if it will not arrive this year, it will arrive on february 2017.


That February 2017 card is a $999 card. If you can afford it, then you can have it.


----------



## sblantipodi

Quote:


> Originally Posted by *guttheslayer*
> 
> That february 2017 new card is a $999 card. If you can afford it den you can have it


I'm happy with the enthusiast gaming SLI segment, like GTX 980 Ti SLI.
$999 cards are the Titan segment, and I don't like that segment; it is really a waste of money for gaming, with no real benefit over the Ti segment.

When will we see the Ti segment of these cards?


----------



## sinholueiro

Does no one think that Nvidia could leave the 1080 as the flagship and bring GP100 in the next series, like the GTX 600 and GTX 700?


----------



## zealord

Quote:


> Originally Posted by *sinholueiro*
> 
> No one thinks that Nvidia can leave the 1080 as the flagship and the GP100 will come in the next series, like the GTX600 and GTX700?


Well, it's possible, but that's only a name.

It doesn't really matter if it is called 1080 Ti or 1180 or 2080 or whatever.


----------



## Buris

As soon as AMD releases Vega, the 1080 will be a mid-$300 card and Nvidia will release GP100 at $1000 and $600.

This is the exact same strategy Nvidia uses every refresh, and it's hilarious to see the same 5-10 guys on here rush to sell their "new" card before it becomes a paperweight.

I got my R9 290 the day of its release, it's never played any game badly, and I used it to mine Litecoin until the card *literally* paid for itself... And I'm still unimpressed with the Fury X, Titan X, Pro Duo, GTX 1080, etc. I'm not going to pay another $500-600 for a performance increase that's barely even noticeable; I'm waiting for the big chips to offer 2x performance.


----------



## flopper

Quote:


> Originally Posted by *Buris*
> 
> As soon as AMD releases vega, 1080 will be a mid-300$ dollar card and nvidia will release the GP100 at 1000$ and 600$
> 
> This is the exact same strategy nvidia uses every refresh and it's hilarious to see the same 5-10 guys on here rush to sell their "new" card before it becomes a paperweight
> 
> I got my r9 290 day of its release, it's never played any game badly, and I used it to mine lite coin until the card *literally* paid for itself... And I'm still unimpressed with fury X, Titan X, Pro Duo, GTX 1080 etc. I'm not going to pay another 500-600 for a performance increase that's barely even noticeable, I'm waiting for the big chips to offer 2x performance


yea Vega will be the big thing


----------



## kithylin

EDIT: Never mind, it was edited into page 1. And the results are 2 months old and more than likely not even remotely legitimate... here I was hoping for real results on the new cards. I guess not yet.


----------



## sblantipodi

Rumors say that a GTX 1080 could be 50% faster than a GTX 980 Ti.
Does that seem possible to you?


----------



## kithylin

Quote:


> Originally Posted by *sblantipodi*
> 
> Rumors says that a GTX1080 could be 50% faster than a GTX980 Ti.
> Is this possible for you?


Until we have hard data with real benchmarks from reliable reviewers on the internet, it's all just that: rumors and fake leaks. Take the original post in this thread: especially now that we know the new cards run at a 1733 MHz core speed, you can go look at those screenshots and see something like a 500-something MHz core speed on the cards tested. It's pretty obvious it's some 900-series card run through the benchmark and claimed as "a leaked 1080 score".


----------



## michael-ocn

My 980 Ti 3DMark scores that were recently better than 99% of all scores are suddenly losing ground, down to 97%. Lots of new 980 Ti SLI systems after people picked up a second Ti for a good price, maybe?


----------



## carlhil2

The way that I see it, a 980 Ti will have to be clocked close to 1600 to compete with a stock 1080, and then you can OC the 1080 to 2 GHz+. Anyway, nobody I know of is pushing a 980 Ti at 1600 in games, on air to boot, so let's say the average is 1500. That's a decent advantage... we will see.


----------



## kithylin

Quote:


> Originally Posted by *carlhil2*
> 
> The way that I see it, 980Ti will have to be clocked close to 1600 to compete with a stock 1080, then, you can OC the 1080 to 2G+,.........anyways, nobody is pushing a 980Ti at 1600 in games that I know of, on air to boot, so...we will see


None of the top 20 water-cooled scores for the 980 Ti on hwbot.org are above 1500-1525 MHz. No one is getting close to 1600 MHz on the 900-series cards except on LN2, and those cards aren't viable for more than 30-45 minutes or so before they're dead and won't work anymore, so... yeah.

Supposedly a stock 1080 @ 1733 MHz is "faster than" (quoting Nvidia's own words from Twitter) a pair of stock GTX 980s in SLI. I don't know by how much; it could be just +5% or something, but that's what they say.


----------



## carlhil2

Quote:


> Originally Posted by *kithylin*
> 
> None of the top 20 scores for water cooling even for the 980 Ti on hwbot.org are above 1500 - 1525 mhz. No one is getting close to or even making it to 1600 mhz on the 900 series cards except for LN2 and those cards aren't viable for more than 30-45 mins or so then they're dead and won't work anymore so.. yeah.
> 
> Supposedly a stock 1080 @ 1733 mhz is "Faster than" (quoting nvidia's own words here from twitter) than a pair of stock GTX 980's in SLI. I don't know how much.. could be just +5% more or something but that's what they say.


True, and even if it just matches 980 SLI: I had a pair, OC'ed, clocked to 1528 on both, and they were fast enough for 4K all day. I am still calling the 1080 at 25+% over the 980 Ti... better yet, imagine running a 980 Ti at 2 GHz...


----------



## DigitrevX

Wow, the OP posts a terrible rumor, then defends it with obvious math issues, and this deserves 87 pages?


----------



## Raghar

Quote:


> Originally Posted by *DigitrevX*
> 
> Wow, OP posts terrible rumor then defends it with obvious math issues and this deserves 87 pages?


You created the 88th page.


----------

