# 5970 vs GTX 480 vs 5870 - Heaven Benchmark



## Eduardv

Wow, it seems the GTX will be close to the 5970. If it's priced right, then, well, sorry for the 5970.


----------



## D3TH.GRUNT

nice







I think that if Nvidia prices the GTX 480 around the 5870's cost (roughly), this card has serious potential to sell like crazy. However, knowing Nvidia, it will be priced to match the 5970, which, all things considered, is just terrible. I'm pretty sure if that happens, Nvidia had better release the rest of their line-up, or they could be looking at a terrible new year.


----------



## Murlocke

Quote:


Originally Posted by *Eduardv* 
Wow, it seems the GTX will be close to the 5970. If it's priced right, then, well, sorry for the 5970.

Huh? You're reading it wrong. The 5970 destroys the GTX 480. My average FPS beats its maximum FPS.

Are you talking about the GTX 485?









Quote:


Originally Posted by *D3TH.GRUNT* 
nice







I think that if Nvidia prices the GTX 480 around the 5870's cost (roughly), this card has serious potential to sell like crazy. However, knowing Nvidia, it will be priced to match the 5970, which, all things considered, is just terrible. I'm pretty sure if that happens, Nvidia had better release the rest of their line-up, or they could be looking at a terrible new year.

Yeah, I agree.

Nvidia's past pricing, though, makes me believe they'll try to get $550+ out of it.


----------



## muselmane

You forget that with a nice cooler like the Prolimatech MK13, a 5870 can easily be clocked to much higher performance. I don't think the GTX 480/470 will be a good overclocker, even with good cooling.


----------



## ljason8eg

Quote:


Originally Posted by *Eduardv* 
Wow seems the GTx will be close to the 5970,then if priced right,omg sorry for the 5970.

The GTX 480's max FPS didn't even touch the 5970's avg. FPS. I would not call that close.

Quote:


Originally Posted by *muselmane* 
You forget that with a nice cooler like the Prolimatech MK13, a 5870 can easily be clocked to much higher performance. I don't think the GTX 480/470 will be a good overclocker, even with good cooling.

We don't know yet how well Nvidia's offerings will overclock. What's a good reason to think they won't OC well, especially with good cooling?


----------



## DeviousAddict

That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.


----------



## ljason8eg

Quote:


Originally Posted by *DeviousAddict* 
That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.

How is it flawed? You could compare a 5970 to an 8400GS if you wanted to. Sure, we'd all know the outcome, but I don't see what makes a test like this flawed or unfair.


----------



## Murlocke

Quote:


Originally Posted by *DeviousAddict* 
That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.

Congrats on reading the OP. Comparing NVIDIA's highest end to ATI's highest end isn't fair? Seems very fair to me.

I'll grant you that having my card overclocked is a bit unfair, but even at stock... it's still going to be far higher.


----------



## ipar26

Yes, it seems it will beat the 5870 and come very close to the 5970. Just see what happens when it's released; maybe the price will match the 5870's.


----------



## Lord Xeb

:/ The GTX 480 is about 10-15% more powerful than a 5870, or so I hear. No wonder it got spanked, kicked, and beaten.


----------



## PanicProne

Quote:


Originally Posted by *DeviousAddict* 
That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.

Like he said: he's just comparing both companies' *HIGHEST-END offerings at THIS MOMENT.*

Can't you read? The HD 5970 is ATI's flagship; the GTX 480 is nVidia's.

Geeez.


----------



## imadude10

Quote:


Originally Posted by *DeviousAddict* 
That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.

He's just comparing the card he has to the benchmarks Nvidia has released. Like others said, it's very likely this "GTX480" will be priced in the 5970's range, making the 5970 the better value (if you can fit it in your case).


----------



## Lord Xeb

LOL, agreed. I'd have to remove my HDD cage to even fit it in my case, lol.


----------



## Murlocke

Quote:


Originally Posted by *Lord Xeb* 
:/ The GTX 480 is about 10-15% more powerful than a 5870, or so I hear. No wonder it got spanked, kicked, and beaten.

Yes, if you look at the second picture, it's about 10-15% more than the 5870.

It all comes down to price. If they charge the same price as the 5870, these will sell fast. If they charge the same price as the 5970, they aren't going to sell.


----------



## E_man

Quote:


Originally Posted by *DeviousAddict* 
That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.

The only part I'll agree with is the part about the OC.

As for the dual-GPU issue, though, it will depend on the price. If the GTX 480 is priced in line with the 5870, then the 5870 is the card to compare it to. If the GTX 480 is similar to the 5970 price-wise, then the 5970 is the fair comparison. That's all that matters: if I have X dollars, what card should I buy? Not: if I feel like having X GPUs, what card should I get? Who says that?









My personal guess is the GTX 480 will be priced at about $525. ATI will lower prices and release a very similar (though still slightly slower) 5890 at or close to the same time the 480 is released, for about $475.

Of course, how much the market inflates each card's price due to demand may be completely different.


----------



## mdbsat

Quote:


Originally Posted by *Murlocke* 
Congrats on reading the OP. Comparing NVIDIA's highest end to ATI's highest end isn't fair? Seems very fair to me.

I'll grant you that having my card overclocked is a bit unfair, but even at stock... it's still going to be far higher.

This.

It cracks me up when people say a comparison is flawed. We're not stupid; we can draw our own conclusions from posted benchmarks. Thanks, OP, for doing this.







Interesting indeed.


----------



## Skylit

Quote:


Originally Posted by *DeviousAddict* 
That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.

IF they release at the same price, I'd say it's fair.


----------



## Horsemama1956

Quote:


Originally Posted by *DeviousAddict* 
That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.

Yeah, it's unfair... comparing a six-month-old card to one that's not even out yet... Of course the older card is going to lo... oh.


----------



## Evermind

Murlocke, out of curiosity what was your minimum FPS?


----------



## Xombie

Perhaps I misread that chart, but it looks to me like the GTX 480 performs better, and more consistently, than the 5970... What's with this 'average' BS?

The GTX 480 reads: higher same higher higher higher same higher.

From the chart, the 5970 averages like 25 fps while the GTX 480 averages around 55. You can't compare your independent results to this study.

EDIT: OK, I get it. That chart was the 5870. I see now. :O


----------



## E_man

Quote:


Originally Posted by *Horsemama1956* 
Yeah, it's unfair... comparing a six-month-old card to one that's not even out yet... Of course the older card is going to lo... oh.









I'm tempted to sig that


----------



## PanicProne

Quote:


Originally Posted by *Horsemama1956* 
Yeah, it's unfair... comparing a six-month-old card to one that's not even out yet... Of course the older card is going to lo... oh.

It's not about being an older or newer card! Geez, people. It's simply a HIGH-END to HIGH-END comparison. It's not ATI's fault that nVidia doesn't have a dual-GPU card on the market to compete with the HD 5970, is it?


----------



## E_man

Quote:


Originally Posted by *Xombie* 
Perhaps I misread that chart, but it looks to me like the 480 performs better more of the time than the 5970...

That graph was the 5*8*70 vs. the 480. Then compare it to the screenie of the 5970's score.


----------



## Murlocke

Quote:


Originally Posted by *Evermind* 
Murlocke, out of curiosity what was your minimum FPS?

The benchmark doesn't tell me. I wish I knew how to make graphs.









I watched it, and I believe the lowest I saw was around 32-37 FPS. There are some pretty intense scenes in there, but the majority runs at 100+ FPS at 1920x1080.

Quote:


Originally Posted by *Xombie* 
Perhaps I misread that chart, but it looks to me like the GTX 480 performs better, and more consistently, than the 5970... What's with this 'average' BS?

The first picture is the 5970. The second picture is the 5870 vs. the GTX 480. I don't know how to make those graphs; if I did, I could give a much better comparison.

Quote:


Originally Posted by *PanicProne* 
It's not about being an older or newer card! Geez, people. It's simply a HIGH-END to HIGH-END comparison. It's not ATI's fault that nVidia doesn't have a dual-GPU card on the market to compete with the HD 5970, is it?

I almost didn't post it because I knew some people would throw a fit over it.


----------



## Aqualoon

2010 March Madness, people.


----------



## OutlawII

Also, correct me if I'm wrong, but doesn't tessellation come into play here? In other words, turn it off and see what the GTX does. Just asking.


----------



## donutpirate

Nvidia will have to rely on its die-hard fanboys if the card comes out priced anywhere near the 5970. If it's reasonably priced, I'll happily sell my 5850 to get one, as long as it doesn't cost me an arm and a leg on top of it. But if it's over $425, forget it. Considering I can get a 5870 for around $375 at some retailers, paying $50 more for a slight improvement seems like a pretty big waste to me.

We'll just have to wait and see


----------



## Murlocke

Quote:



Originally Posted by *OutlawII*


Also, correct me if I'm wrong, but doesn't tessellation come into play here? In other words, turn it off and see what the GTX does. Just asking.


NVIDIA claims that the GTX 480 handles tessellation better than the 5870/5970. So if anything, disabling it would just widen the gap between the GTX 480 and the 5970.


----------



## sublimejhn

Pricing is going to be everything, and I have little faith the 480 will be priced accordingly.

Luckily, I'll be skipping this generation of cards anyway. My GTX 295 still holds its own just fine, and DX11, while promising, is nothing special as of now. Next gen is when I'll replace my current card.


----------



## tryceo

ATi wins... new Nvidia cards are priced too high and can't beat the cheap ATi cards. Looks like red is taking over.


----------



## Murlocke

Quote:



Originally Posted by *tryceo*


ATi wins... new Nvidia cards are priced too high and can't beat the cheap ATi cards. Looks like red is taking over.


Now if only ATI could get a better driver development team. The 5xxx series still has many issues.

...and we need PhysX support.


----------



## ShortySmalls

ATI may be faster, but I'll always stick with Nvidia: no fail drivers or ugly-looking cards to look at.


----------



## Xzeara

Quote:



Originally Posted by *ipar26*


Yes, it seems it will beat the 5870 and come very close to the 5970. Just see what happens when it's released; maybe the price will match the 5870's.


The GTX 480? Beat a 5870? Yes, if I've been following right, but it will beat it barely... and be close to a 5970? No way, not at this point. Besides, we now have 5970 cards with 4 GB of GDDR5 and significant speed boosts at stock.

My read on the DX11 score: ATI: 2... Nvidia: 0.

Why do I say this? Well, ATI got their DX11 cards out first, so one point for them. And now Nvidia's cards look like really artistic paperweights; I don't see them doing anything right at the moment. Maybe they'll pull another low blow like with the Batman game. If they do that to, say, Battlefield 3, they might even out... maybe.

Anyway, I'm not trying to be a fan of anybody, but ATI is doing it right AND affordably. So yeah...


----------



## Aqualoon

Quote:



Originally Posted by *ShortySmalls*


ATI may be faster, but I'll always stick with Nvidia: no fail drivers or ugly-looking cards to look at.










So you don't care about price or performance, you're just going to stick with Nvidia... You're exactly the market the GTX 480 was made for.


----------



## chatch15117

Having inside information is wonderful. If only you knew what unannounced stuff NVIDIA (and ATI too) has planned for the near future!

The ATI cards look great for gaming. The only reason I would get a GTX 4x0 is for the extras like PhysX, C++, CUDA, etc.


----------



## OutlawII

I think I'll skip this gen too. After all, my two GTXs are only a couple of weeks old; I got tired of waiting, lol. But not tired enough to go to ATI, lol.


----------



## Ocnewb

In another thread, IEATFISH ran the benchmark with his *5850* and his *CPU at a LOW 3.25 GHz*, and it gives almost the *SAME RESULT* as Nvidia's benchmark with the 5870. It's as if Nvidia benched a 5850 against the GTX 480 but said it was a 5870.


----------



## chatch15117

Quote:



Originally Posted by *Ocnewb*


In another thread, IEATFISH ran the benchmark with his *5850* and his *CPU at a LOW 3.25 GHz*, and it gives almost the *SAME RESULT* as Nvidia's benchmark with the 5870. It's as if Nvidia benched a 5850 against the GTX 480 but said it was a 5870.



Or NVIDIA was sneaky and underclocked the 5870.

It never said anything about stock clocks, right? The GTX could have been severely overclocked as well. So at this point there's no telling until the benching sites get these cards.


----------



## E_man

Quote:



Originally Posted by *Ocnewb*


In another thread, IEATFISH ran the benchmark with his *5850* and his *CPU at a LOW 3.25 GHz*, and it gives almost the *SAME RESULT* as Nvidia's benchmark with the 5870. It's as if Nvidia benched a 5850 against the GTX 480 but said it was a 5870.


linky?


----------



## Velcrowchickensoup

I ran the bench at the same settings with my 5850 and beat the 480 in avg frames; something's not right here.


----------



## FlyingNugget

Can't wait for some legit benches. It looks like the GTX 480 is going to be overpriced and an epic fail.


----------



## Ocnewb

Quote:



Originally Posted by *E_man*


linky?


Go to the News thread: "*[Nvida] GTX480 Video/Benchmarking!!! Must see!!*". It's in there.


----------



## Aqualoon

Page 21


----------



## Fear of Oneself

Quote:



Originally Posted by *E_man*


The only part I'll agree with is the part about the OC.

As for the dual-GPU issue, though, it will depend on the price. If the GTX 480 is priced in line with the 5870, then the 5870 is the card to compare it to. If the GTX 480 is similar to the 5970 price-wise, then the 5970 is the fair comparison. That's all that matters: if I have X dollars, what card should I buy? *Not: if I feel like having X GPUs, what card should I get?* Who says that?









My personal guess is the GTX 480 will be priced at about $525. ATI will lower prices and release a very similar (though still slightly slower) 5890 at or close to the same time the 480 is released, for about $475.

Of course, how much the market inflates each card's price due to demand may be completely different.


Not true; I say that. Going from two 9800 GTs to a 5770 is slower in theory, but in practice that's not the case: the 5770 hammers the 9800 GTs and delivers that performance in any application, because the application doesn't need to be optimized for multiple GPUs.

Other than that, knowing nVidia, expect to sacrifice an arm, a leg, and your firstborn for that card.


----------



## ljason8eg

Quote:



Originally Posted by *Velcrowchickensoup*


I ran the bench at the same settings with my 5850 and beat the 480 in avg frames; something's not right here.


Your monitor is not 1920x1200...


----------



## Aqualoon

So the GTX 480 sits between a 5850 and a 5870 in performance, and knowing Nvidia, its price will be comparable to the 5970's price tag.


----------



## Murlocke

Quote:



Originally Posted by *ljason8eg*


Your monitor is not 1920x1200...


These are done at 1920x1080, but his monitor's maximum is 1600x900, apparently. (Weird resolution.)

Quote:



Originally Posted by *Aqualoon*


So the GTX 480 sits between a 5850 and a 5870 in performance, and knowing Nvidia, its price will be comparable to the 5970's price tag.











It really does seem like they downclocked the 5870 to make the GTX 480 look better. Perhaps they just had old drivers?

People are beating their 5870 scores with their 5850s... and their 5850s are matching the performance they claim the GTX 480 gets.

If that's true, then NVIDIA is really in for some trouble.


----------



## ljason8eg

Quote:



Originally Posted by *Murlocke*


These are done at 1920x1080, but his monitor's maximum is 1600x900, apparently. (Weird resolution.)


Ah, my bad. Yeah, I looked his monitor up, and that's why I questioned it. Or can you run unsupported resolutions in the Heaven benchmark and still get results?


----------



## nist7

Murlocke, does the Heaven bench provide a more detailed report than just the avg FPS?

It'd be more interesting to see what the min and max FPS were, but seeing how your avg FPS beat out the GTX 480's max FPS... the results won't look pretty.


----------



## Murlocke

Quote:



Originally Posted by *ljason8eg*


Ah, my bad. Yeah, I looked his monitor up, and that's why I questioned it. Or can you run unsupported resolutions in the Heaven benchmark and still get results?


I've always wondered this myself, but my gut says *no*. I don't think the results would be accurate. However, there are some people in other threads claiming similar things to him.

Quote:



Originally Posted by *nist7*


Murlocke, does the Heaven bench provide a more detailed report than just the avg FPS?

It'd be more interesting to see what the min and max FPS were, but seeing how your avg FPS beat out the GTX 480's max FPS... the results won't look pretty.


Nope. I'm hoping that someone with the knowledge of making graphs and charts will make a better comparison soon.

If I knew how, I would have originally posted a chart instead of just the average FPS.


----------



## Chucklez

Well, I ran mine at stock just for a comparison.









Also, FRAPS recorded:

Min: 43
Max: 105
Avg: 59.7

Also, Murlocke, was yours at stock? If so, that i7 makes a hell of a difference in Heaven.


----------



## nist7

Quote:



Originally Posted by *Murlocke*


Nope. I'm hoping that someone with the knowledge of making graphs and charts will make a better comparison soon.

If I knew how, I would have originally posted a chart instead of just the average FPS.










Didn't some guy in the news thread on this topic make a graph scaled to the Nvidia graph, and claim they were possibly using a 5850? LOL

But with that graph, did they just use FRAPS numbers? If so, you can just plug the numbers into Excel and make a line graph: put the FPS in one column and the seconds (1, 2, etc.) in another, all the way to the end, then plot time on the x-axis and FPS on the y-axis.

Or, if you have FRAPS data, you can upload it to a public Google document and I can try to make a graph out of it.
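For anyone without Excel handy, the same recipe can be sketched in a few lines of Python. This is only an illustration: the FPS numbers below are invented, not from anyone's actual run, and the output is a plain CSV that any spreadsheet can line-plot.

```python
# Sketch: turn a list of per-second FRAPS FPS readings into the
# two-column (second, fps) table described above, plus the min/max/avg
# summary that Heaven's score screen leaves out. Sample data is made up.
import csv

def summarize(fps_samples):
    """Return (min, max, avg rounded to 0.1) of per-second FPS readings."""
    avg = sum(fps_samples) / len(fps_samples)
    return min(fps_samples), max(fps_samples), round(avg, 1)

def write_table(fps_samples, path):
    """Write 'second,fps' rows; plot second on the x-axis, fps on the y-axis."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["second", "fps"])
        for second, fps in enumerate(fps_samples, start=1):
            writer.writerow([second, fps])

fps = [43, 58, 61, 105, 72, 55, 49]   # hypothetical FRAPS log
print(summarize(fps))                  # (43, 105, 63.3)
```

Point `write_table` at a real FRAPS per-second log and the resulting CSV drops straight into Excel or Google Docs for charting.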


----------



## IEATFISH

I'll leave this here as well:










For more stuff, see this post (and my other posts in that thread):

http://www.overclock.net/hardware-ne...ml#post8656709


----------



## nist7

Quote:



Originally Posted by *Chucklez*


Well, I ran mine at stock just for a comparison.

*pic*

Also, FRAPS recorded:

Min: 43
Max: 105
Avg: 59.7

Also, Murlocke, was yours at stock? If so, that i7 makes a hell of a difference in Heaven.


Yeah, you're gonna need at least an i7 920 at 3 GHz to be comparable to what Nvidia did in their video.


----------



## Murlocke

Quote:



Originally Posted by *Chucklez*


Well, I ran mine at stock just for a comparison.









Also, FRAPS recorded:

Min: 43
Max: 105
Avg: 59.7

Also, Murlocke, was yours at stock? If so, that i7 makes a hell of a difference in Heaven.


My GPU was at 935/1235; the i7 920 was at 4 GHz.

I think your score is lower because of your processor. The GPU overclock can't make that big of a difference. I believe they used an i7 at 3.3 GHz in the video.

Quote:



Originally Posted by *CrazyNikel*


This is a fanboy thread if I have ever seen one.


Oh, hi there, and the point of your post is? Oh, right: troll. I have two NVIDIA computers, yet I'm a fanboy. Consider your post reported.


----------



## CapDubOh

I don't understand how these benchmarks have any relevance at all. What test system was used for the 480 vs. 5870 graph? Where did that graph even come from? Comparing your benchmarks to it is like comparing CPU temps at different ambients.


----------



## antuk15

This thread is full of win. Nvidia has been cherry-picking results, by the looks of it.


----------



## Chucklez

Quote:



Originally Posted by *Murlocke*


My GPU was at 935/1235; the i7 920 was at 4 GHz.

I think your score is lower because of your processor. The GPU overclock can't make that big of a difference. I believe they used an i7 at 3.3 GHz in the video.


Please try running at stock and see what you get. An i7 compared to an i7 would be better than a Phenom II compared to an i7.


----------



## Tommie

What scores would leak?
The best, of course.
Fermi's tessellation performance is very good; that's why it runs the Heaven benchmark particularly well. It doesn't mean the 480 will be close to the 5970.
Heaven is basically a tessellation demo.


----------



## Murlocke

Quote:


Originally Posted by *CapDubOh* 
I don't understand how these benchmarks have any relevance at all. What test system was used for the 480 vs. 5870 graph? Where did that graph even come from? Comparing your benchmarks to it is like comparing CPU temps at different ambients.

The 2nd picture was released in a video from NVIDIA themselves. Check the News section.

Quote:


Originally Posted by *Chucklez* 
Please try running at stock and see what you get. An i7 compared to an i7 would be better than a Phenom II compared to an i7.

Stock GPU or CPU?


----------



## IEATFISH

Quote:


Originally Posted by *CapDubOh* 
I don't understand how these benchmarks have any relevance at all. What test system was used for the 480 vs. 5870 graph? Where did that graph even come from? Comparing your benchmarks to it is like comparing CPU temps at different ambients.

Someone obviously didn't take the time to follow the link and read the thread...







I won't deprive you of the experience. Feel free to come back when you've answered your own questions.


----------



## Imglidinhere

It's about time I saw a benchmark. So in at least one game it outplays the 5870... (not surprising really...)


----------



## Tommie

Quote:


Originally Posted by *Imglidinhere* 
It's about time I saw a benchmark. So in at least one game it outplays the 5870... (not surprising really...)

It's not a game; it's a tessellation tech demo.


----------



## IEATFISH

Quote:


Originally Posted by *Imglidinhere* 
It's about time I saw a *benchmark*. So in at least one *game* it outplays the 5870... (not surprising really...)

Wut?


----------



## DannyM

Quote:


Originally Posted by *DeviousAddict* 
That is not a fair assessment at all.
For a start, you've overclocked your card and it's a dual GPU, and you're comparing it to a stock-clocked card that's a single GPU.
It's a completely flawed comparison.

He's just trying to make 5970 owners feel better. Seriously, what cream-of-the-crop latest-generation dual-GPU card wouldn't beat a competitor's latest-generation single GPU? The 5970 will get pummeled once Nvidia puts out a dual-GPU card, and then he'll sell his 5970 and buy the Nvidia, since he says he's not a fanboy and only cares about having the best.


----------



## nist7

Quote:


Originally Posted by *Imglidinhere* 
It's about time I saw a benchmark. So in at least one game it outplays the 5870... (not surprising really...)

This may be a noob question, but... is Heaven a game? I thought it was just a synthetic benchmark for DX11 tessellation.


----------



## E_man

Quote:


Originally Posted by *Ocnewb* 
Go to the News thread: "*[Nvida] GTX480 Video/Benchmarking!!! Must see!!*". It's in there.


Quote:


Originally Posted by *Aqualoon* 
Page 21

Thanks for pointing it out

Quote:


Originally Posted by *IEATFISH* 
I'll leave this here as well:










For more stuff, see this post (and my other posts in that thread):

http://www.overclock.net/hardware-ne...ml#post8656709

Though it's nice when it lands back here too. Interesting results. How'd you do that graph? It looks legit to me; your results are right at or just below a 5870, to the point where a different system (it was said they used an i7 at 3.3 GHz?) could make up for the small differences. Or did something totally go past me? I haven't read the other thread; I'll do that now, it might clear things up for me.

Quote:


Originally Posted by *CrazyNikel* 
This is a fanboy thread if I have ever seen one.









informative post


----------



## IEATFISH

Actually, it is a good comparison. There are two reasons nVidia would show the 5870 vs. their 480: 1) it compares price-wise, or 2) it compares performance-wise. As I highly doubt nVidia will be able to price it at $400, it must be because the 5870 is the closest-performing card to the 480. That is speculation, but if the nVidia card DID beat the 5970 and cost less, they'd have no reason not to use that comparison to crush ATI.

Quote:


Originally Posted by *E_man* 
Though it's nice when it lands back here too. Interesting results. How'd you do that graph? It looks legit to me; your results are right at or just below a 5870, to the point where a different system (it was said they used an i7 at 3.3 GHz?) could make up for the small differences. Or did something totally go past me?








informative post

For the nVidia results, I put my i7 to 3.24 GHz. I just took the screenshot from the video, used Ctrl+Shift+Z in Paint.net to get the perspective right, and then aligned my Excel graphs with theirs (they used FRAPS and Excel to make that graph; see the link to my other post for my testing method).
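For the curious, the overlay step described above can also be sketched in Python instead of Paint.net plus Excel. Everything here is hypothetical: the file names and the pixel-calibration numbers are placeholders, not from IEATFISH's actual workflow, and would have to be measured off the real screenshot.

```python
# Sketch: overlay your own per-second FPS trace on a screenshot of a
# published benchmark graph. "chart.png" and the calibration values
# below are invented placeholders; calibrate them to the real image.
import matplotlib
matplotlib.use("Agg")          # render without a display
import matplotlib.pyplot as plt

def overlay(fps_samples, chart_path, out_path,
            x0, x1, y0, y1, t_max, fps_max):
    """Draw fps_samples on top of the chart image.

    (x0, y0) is the pixel position of the chart origin (t=0, fps=0);
    (x1, y1) is the pixel position of (t_max, fps_max)."""
    img = plt.imread(chart_path)
    fig, ax = plt.subplots()
    ax.imshow(img)
    # Map (seconds, fps) into the image's pixel coordinates.
    xs = [x0 + (t / t_max) * (x1 - x0)
          for t in range(1, len(fps_samples) + 1)]
    ys = [y0 + (f / fps_max) * (y1 - y0) for f in fps_samples]
    ax.plot(xs, ys, linewidth=2)
    ax.axis("off")
    fig.savefig(out_path)
```

The two calibration points (x0, y0) and (x1, y1) can be read off by hovering over two known axis positions of the screenshot in any image editor.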


----------



## r34p3rex

nvm


----------



## CapDubOh

What are the possible explanations for the peaks of that graph being nearly identical for three very different video cards? It looks like, for the rest of the benchmark, the 480 is nearly 15-20 FPS up on the 5870?


----------



## IEATFISH

Quote:


Originally Posted by *CapDubOh* 
What are the possible explanations for the peaks of that graph being nearly identical for three very different video cards? It looks like, for the rest of the benchmark, the 480 is nearly 15-20 FPS up on the 5870?

In those areas, tessellation is not the focus of the scene. So essentially, unless what you're doing involves heavy tessellation (which no games currently do), the 480 is about even with a 5870 (and my 5850).


----------



## Tommie

Quote:


Originally Posted by *IEATFISH* 
In those areas, tessellation is not the focus of the scene. So essentially, unless what you're doing involves heavy tessellation (which no games currently do), the 480 is about even with a 5870 (and my 5850).

This.
People are so stupid. What kind of results do you expect to leak?

If your mom asked you for your grades last month, you're gonna mention the 1 you got in French, but you won't mention the D you got in math.


----------



## Raiden911

interesting info.


----------



## Starbuck5000

Did you run the bench at 3.33 GHz as well, Murlocke?


----------



## E_man

Quote:


Originally Posted by *DannyM* 
He's just trying to make 5970 owners feel better. Seriously, what cream-of-the-crop latest-generation dual-GPU card wouldn't beat a competitor's latest-generation single GPU? The 5970 will get pummeled once Nvidia puts out a dual-GPU card, and then he'll sell his 5970 and buy the Nvidia, since he says he's not a fanboy and only cares about having the best.

Has everyone forgotten that price matters? If the GTX 480 comes in at the 5970's price, then yes, that is its competition, whatever the GPU count.

Quote:


Originally Posted by *IEATFISH* 
Actually, it is a good comparison. There are two reasons nVidia would show the 5870 vs. their 480: 1) it compares price-wise, or 2) it compares performance-wise. As I highly doubt nVidia will be able to price it at $400, it must be because the 5870 is the closest-performing card to the 480. *That is speculation, but if the nVidia card DID beat the 5970 and cost less, they'd have no reason not to use that comparison to crush ATI.*

For the nVidia results, I put my i7 to 3.24 GHz. I just took the screenshot from the video, used Ctrl+Shift+Z in Paint.net to get the perspective right, and then aligned my Excel graphs with theirs (they used FRAPS and Excel to make that graph; see the link to my other post for my testing method).

I agree. Why would nVidia do anything else? And thanks for your results. I'm slowly working through the other thread, but it's quite a read!


----------



## saulin

Quote:


Originally Posted by *Murlocke* 
*Note #1:* I am not a fanboy. I go with what is best, and clearly the 5970 is the better card at the moment. I don't want to hear "the 5970 is dual-GPU". It doesn't matter, this is simply a comparison of NVIDIA's highest end vs ATI's highest end in the most demanding DX11 benchmark available.

*Note #2:* The second picture is a comparison of the 5*8*70 vs. the GTX 480. The first picture is the score of a 5*9*70. You will need to open both pictures to compare the 5970 vs. the GTX 480.

------------------------

I use the same settings as them (as shown in the last picture). I'm using the Catalyst 10.3 Beta drivers on an untweaked Windows 7 64-bit OS. My 5970 is overclocked to 935/1235, which is my 24/7 overclock.

As you can see, my 5970's *average FPS* beats the GTX 480's *maximum FPS* by 4 FPS. That is a rather large difference. Yes, the GTX 480 is single-GPU, but it's the best NVIDIA has at the moment. The GTX 485 should beat the 5970, but there is no release date, and it could be months before it is released. I don't believe it's even been officially announced.

We will have to wait and see how much they want to charge for the GTX 480 to see if it is a failure. It would need to be greatly cheaper than the 5970.

Current 5970 owners can rest easy that they still have the crown. Current 5870 owners might be regretting their purchase if the GTX 480 launches at a lower price or equal price.









*Edit #1:* Yes, I know I'm comparing an overclocked card to a stock card. That gives it a slight advantage, but it's not a 20 FPS gain. Even at stock, the 5970 is going to be quite a bit ahead of the GTX 480. If I have time I'll run a stock comparison, or another 5970 user can. I know we have quite a few on these forums.

*Edit #2:* A user with a 5970 and the knowledge to create graphs would also be nice. So we can see a much more detailed chart.

*Edit #3:* There are some users that are scoring the same as the GTX 480 on their 5850s and 5870s. It seems that NVIDIA might have used old ATI drivers, or downclocked the 5870 to make the GTX 480 look better than it really is. This, of course, is 100% speculation.

Where is the graph for the 5970?


----------



## Velcrowchickensoup

My Bad... Time to drag my rig to the TV.

Be back testing results at 1080p...


----------



## Ocnewb

Quote:


Originally Posted by *saulin* 
Where is the graph for the 5970?

It's right there in the first post ever.
Edit: it isn't really a graph, since he said he doesn't know how to make one, I think. But you can still see the FPS numbers.


----------



## Jackeduphard

Quote:


Originally Posted by *DeviousAddict* 
that is not a fair assement at all
for a start you've overclocked your card and its a dual gpu and your comparing it to a standard clocked card thats a single gpu.
its a completely flawed comparison

Look guys... this is not even a legit release and people are already saying how much better everything is!!!! GAHH!!! Just relax until an accredited place does a review with officially released drivers and everything!!!!

My money is on the GTX 480; honestly, with time and drivers, it will do way better.

(plus we don't really have any good games yet to push a video card!!!) (or so I think)


----------



## IEATFISH

Quote:


Originally Posted by *saulin* 
Where is the graph for the 5970?

It will be up in a while. He's getting me the data and I'll overlay it on nVidia's graph against the 480 and 5870(50).


----------



## CrazyNikel

Quote:


Originally Posted by *Murlocke* 
Oh hi there, and the point of your post is? Oh right. Troll. I have 2 NVIDIA computers, yet i'm a fanboy. Consider your post reported.

Why is it that when someone defends themselves, they always say they own the other brand?

This thread is just another of many meant to reassure the restless ATI owners out there that NV won't make a faster card... heaven forbid. I also like the overclocked card vs. non-overclocked card comparison... very fair.

On a comparative note, we should do a comparison of the GTX 295 (overclocked, of course) vs. the 5870... oh wait..


----------



## saulin

Quote:


Originally Posted by *IEATFISH* 
It will be up in a while. He's getting me the data and I'll overlay it on nVidia's graph against the 480 and 5870(50).

Nice, I want to see the graph. I always care more about minimum fps than average or max fps. Anything over 60 fps is just a waste.


----------



## E_man

Quote:


Originally Posted by *CrazyNikel* 
Why is it when someone defends themselves they always say they own the other brand?

This thread is just another among many threads that is meant to further settle the restless ATI owners out there that NV wont make a faster card....heaven forbid. I also like the overclocked card comparison to a non overclocked card...This is fair.

On a comparative note we should do a comparison of the GTX295(Overclocked of course) vs. 5870....oh wait..

They say that to make people like you not accuse them of fanboyism?

And I don't see it as that kind of thread; so far, there are good numbers up. Dispute them? I'd love to see your proof.

As for 5870 vs. 295, I don't get your point. A 5870 is roughly equal to a 295; they trade blows, usually quite close, and occasionally one card kills the other. So far, it doesn't look like the 480 will kill a 5970. If it could, nvidia would be trying their hardest to tell us.


----------



## DannyM

Quote:


Originally Posted by *E_man* 
Has everyone forgotten that price matters?

It depends on the person whether or not price matters. It may matter to you, but who's to say it matters to the next guy?


----------



## Romir




----------



## Imglidinhere

Quote:


Originally Posted by *Tommie* 
Its not a game its a tesselation tech demo.

Same difference.







You know what I mean and that's what matters.


----------



## E_man

Quote:


Originally Posted by *DannyM* 
Depends on the person wether or not price matters. It may matter to you, but who is to say that it matters to the next guy.

It always does. I guarantee you. The limit might be higher, but it always does. People who are looking in the 5970 price range and see the GTX 480 there as well (or vice versa, assuming that's the pricing) will compare the two. Plain and simple, unless you're a fanboy of either side. If a GTX 495 comes out, costs $1,000, and destroys everything in its path, and someone is willing to pay for that, then they are. BUT if there are two cards at the same price point, and that's the price you're willing to pay, only a fanboy would pick the lower-performing card.

That said, this is all speculation, something that a lot of people lose track of. We know very little about these cards so far.


----------



## Skylit

Quote:


Originally Posted by *Romir* 
Benchs

Hmm, so the Phenom II isn't as bad as I thought. Thanks.


----------



## sub50hz

Quote:



Originally Posted by *CrazyNikel*


This thread is just another among many threads that is meant to further settle the restless ATI owners out there that NV wont make a faster card....heaven forbid. I also like the overclocked card comparison to a non overclocked card...This is fair.


How is this fanboyism? Every ATI owner should _want_ Fermi cards to be released at a competitive price-to-performance ratio, so that it drives card prices down.

Imagine, if you will, that GTX480 comes out at 5970 price and delivers 5970 performance. Then consider another scenario where a 5*8*70 with a modest overclock will perform on par with GTX480 -- and if it's at 5970 prices, _nobody_ wins. ATI can deservedly mark up prices a small amount, and people will still buy them.

If Nvidia delivers a great card at a good price, ATI prices come down, and Nvidia will follow suit when production increases. This is what we, as a community, should hope for, not the inverse. Fermi cards should, by all rights, outperform the current ATI offerings -- larger die, and over 6 months more development time.

Note: If Fermi numbers crush the 5800/5900 series, prices will remain high as well.


----------



## sub50hz

Quote:



Originally Posted by *DannyM*


Guess you are happy that your dual gpu beats a single GPU ???

LOL!


Take a look at 5870 vs. GTX295. Think about it.


----------



## E_man

Quote:



Originally Posted by *DannyM*


Guess you are happy that your dual gpu beats a single GPU ???

LOL!


If it's at the same price, then yeah, chalk one up for ATI. If they aren't and nvidia has better price/performance, then chalk one up for nvidia.


----------



## Velcrowchickensoup

Quote:



Originally Posted by *sub50hz*


How is this fanboyism? Every ATI owner should _want_ Fermi cards to be released at a competitive price-to-performance ratio, so that it drives card prices down.

Imagine, if you will, that GTX480 comes out at 5970 price and delivers 5970 performance. Then consider another scenario where a 5*8*70 with a modest overclock will perform on par with GTX480 -- and if it's at 5970 prices, _nobody_ wins. ATI can deservedly mark up prices a small amount, and people will still buy them.

If Nvidia delivers a great card at a good price, ATI prices come down, and Nvidia will follow suit when production increases. This is what we, as a community, should hope for, not the inverse. Fermi cards should, by all rights, outperform the current ATI offerings -- larger die, and over 6 months more development time.

Note: If Fermi numbers crush the 5800/5900 series, prices will remain high as well.


This.

I agree wholeheartedly.


----------



## That_guy3

Regardless, with 3 (or 4, if they allow it) GTX 480s in SLI, there's no question it would perform better than 5970 crossfire. IMO both cards cost too much.


----------



## DannyM

Quote:



Originally Posted by *E_man*


It always does. I guarantee you. The limit might be higher, but it always does.


You make B I G assumptions thinking that price matters to everyone. Around here price isn't what matters; benches and e-peen are what everyone cares about. The only people price matters to are those on a budget or those who can't afford the top-of-the-line cards. I wouldn't pay a lot to have the latest and greatest. I see no point in upgrading to ATI 5XXX or GTX 4XX right now. There need to be a half dozen or so must-have DX11 games before I upgrade. My GTX 275s SLI'd are more than enough for any game (for now at least).


----------



## sub50hz

Quote:



Originally Posted by *That_guy3*


Regardless, 3 or if they allow 4x GTX480s in SLI then Theres no question it will perform better than the 5970 crossfire.


This is speculation at its finest. Chances are it would not, as the level of driver maturity will initially be too low, unless a large portion of the delay time has been spent scaling SLI performance in release drivers.


----------



## Riou

I have read that the Fermi architecture handles heavy tessellation scenes very well, but at the expense of rasterization speed, since a lot of shaders get used.

Heaven is very tessellation-heavy, especially the scene with the dragon. If tessellation is used lightly in games, the 5870 looks to be fairly close to the GTX 480.


----------



## Blameless

Quote:



Originally Posted by *Murlocke*


Benchmark doesn't tell me. Wish I knew how to make graphs.


Just watch the FPS counter?

The benchmark is only four minutes long.

Card in my sig dips as low as 18 FPS during the orbit around the dragon at the same settings the NVIDIA test used.

Quote:



Originally Posted by *ljason8eg*


Your monitor is not 1920x1200...


I ran mine in windowed mode at a higher resolution than my monitor supports.

Quote:



Originally Posted by *Murlocke*


Perhaps they just had old drivers?


This is my guess. NVIDIA would not have used beta or hot fix drivers, and the 5870 test could have been run months ago.

Quote:



Originally Posted by *Murlocke*


and their 5850 are matching the performance that they claim the GTX 480 gets.


I haven't seen anything that remotely suggests this. My 5850 at roughly 5870 speed still gets its ass handed to it by the GTX 480 results.


----------



## Lt.JD

My comment on Nvidia's youtube video got deleted... hmm....


----------



## MrDeodorant

Quote:



Originally Posted by *Blameless*


I ran mine in windowed mode at a higher resolution than my monitor supports.


I thought windowed mode could be bad for framerates, but I never use it, so I have no first-hand experience with it.


----------



## E_man

Quote:



Originally Posted by *That_guy3*


Regardless, 3 or if they allow 4x GTX480s in SLI then Theres no question it will perform better than the 5970 crossfire. IMO both cards costy to much



Quote:



Originally Posted by *DannyM*


You make B I G assumptions thinking that price matters to everyone. Around here price isnt what matters, benches and epeen are what everyone cares about. The only people that price matters to are those on a budget or those who cant afford the top of the line cards. I wouldnt pay alot to have the latest and greatest. I see no point in upgrading to ATI 5XXX or GTX 4XX right now. There needs to be a half dozen or so games that are must have type games that are DX11 before I upgrade. My GTX 275's SLI'd are more than enough for any game (for now at least).


If someone released a high-end card at $4K a pop that killed every other card in existence, do you think people would buy it? Keep increasing the price and eventually you will have zero buyers. Yes, budget does eventually matter; if you read my post you would see that is what I said.







An extreme example, but hey, I was correct in my point, and you completely missed it.









People look at a price point. Show me one reason why someone would buy the second-best card when a first-rate card is at the same price, besides being a fanboy.


----------



## nub

Quote:



Originally Posted by *CrazyNikel*


On a comparative note we should do a comparison of the GTX295(Overclocked of course) vs. 5870....oh wait..


And how much does that 295 cost? For about the price of a 295 you can get 2 5850's and crossfire. That would be a fair comparison of value for your money.


----------



## grunion

I posted this in the other thread.

Quote:



Originally Posted by *grunion*


I ran Heaven with different AA/AF settings, notice that the min fps barely change.

What happens when Fermi's cuda cores have to run higher filtering levels?
Remember ATI has a dedicated tessellator, Fermi tessellation runs through the cuda cores.

I'll say it again, the Heaven benchmarks are worthless without AA/AF applied.

Attachment 144535


----------



## Blameless

Quote:



Originally Posted by *MrDeodorant*


I thought windowed mode could be bad for framerates, but I never use it, so I have no first-hand experience with it.


It can be, but the difference is minimal.

I would have averaged less than 2 fps higher (under 5%), if the trend is the same as with other resolutions.


----------



## Murlocke

I will be redoing the thread soon with much better graphs.


----------



## badger6021

Reading through this thread, it seems price/performance is the most important thing, and we all know ATI wins that 99% of the time. I'm sure NV will overcharge for their cards.


----------



## sub50hz

Quote:



Originally Posted by *badger6021*


Im sure NV will overcharge for there cards.


You never know. I wish Fermi cards would come out already so I can get a second 5850 and a third GTX275 at a more reasonable price.


----------



## criminal

Quote:



Originally Posted by *DannyM*


He's just trying to make 5970 owners feel better. Seriously...what cream of the crop latest generation dual gpu wouldnt beat a competitors latest generation single gpu ??? The 5970 will get pummeled once nvidia puts out a dual gpu card and then he will sell his 5970 and buy the NVidia since he's says he's not a fanboy and only cares about having the best.


If the prices are similar, it is a fair comparison. The 5870 and the GTX295 get compared all the time and the GTX295 cost a hundred more.


----------



## badger6021

Quote:



Originally Posted by *criminal*


If the prices are similar, it is a fair comparison. The 5870 and the GTX295 get compared all the time and the GTX295 cost a hundred more.


+1, and I would rather have a 5870. I'm using a 5850 and it performs better in most games for me than my old 295 did.


----------



## mocha989

Are you guys blind? It's not even the HD 5970, it's the HD 5870.


----------



## Chucklez

Quote:



Originally Posted by *Murlocke*


I will be redoing the thread soon with much better graphs.


OK cool, can't wait to see how the GTX 480 compares to the 5970.


----------



## badger6021

Quote:



Originally Posted by *mocha989*


are oyu guys blind. its not even the HD5970 its the hd5870


Yes, we know the 5970 is still the king, so I can't wait for Fermi to hit so I can get another, cheaper 5850; then I will effectively have a 5970.


----------



## MrDeodorant

Quote:



Originally Posted by *criminal*


If the prices are similar, it is a fair comparison. The 5870 and the GTX295 get compared all the time and the GTX295 cost a hundred more.


This is a truth so simple and complete that it should be splashed as a banner across the front page.


----------



## Murlocke

Original Post updated with new chart.


----------



## Riou

3.8 GHz i7 920, 5870 @ stock, Cat 10.3 beta


3.8 GHz i7 920, 5870 CF @ stock, Cat 10.3 beta


----------



## Murlocke

Quote:



Originally Posted by *Riou*


3.8 GHz i7 920, 5870 @ stock, Cat 10.3 beta


3.8 GHz i7 920, 5870 CF @ stock, Cat 10.3 beta



Yea, this benchmark scales REALLY well with crossfire. Might be why the 5970 really shines here.


----------



## grunion

Quote:



Originally Posted by *grunion*


I posted this in the other thread.












Quote:



Originally Posted by *Murlocke*


Original Post updated with new chart.



Quote:



Originally Posted by *grunion*


I ran Heaven with different AA/AF settings, notice that the min fps barely change.

What happens when Fermi's cuda cores have to run higher filtering levels?
Remember ATI has a dedicated tessellator, Fermi tessellation runs through the cuda cores.

I'll say it again, the Heaven benchmarks are worthless without AA/AF applied.

Attachment 144535


You see my runs?


----------



## antuk15

Tried to be as accurate as I could


















0 - 64fps

20 - 42fps vs 49fps - 15% advantage GTX 480

40 - 31fps vs 50fps - 38% advantage GTX 480

60 - 40fps vs 35fps - 12% advantage 5870

80 - 25fps vs 40fps - 38% advantage GTX 480

100 - 20fps vs 38fps - 48% advantage GTX 480

120 - 17fps vs 37fps - 54% advantage GTX 480

140 - 50fps vs 60fps - 17% advantage GTX 480

160 - 26fps vs 36fps - 28% advantage GTX 480

180 - 22fps vs 43fps - 49% advantage GTX 480

200 - 19fps vs 45fps - 58% advantage GTX 480

220 - 32fps vs 45fps - 29% advantage GTX 480

240 - 42fps vs 49fps - 15% advantage GTX 480

260 - 24fps vs 49fps - 51% advantage GTX 480

Overall average GTX 480 advantage - 33%
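For what it's worth, the arithmetic behind the list above can be reproduced with a short script. Each entry is the FPS gap relative to the faster card at that point, and the overall figure is just the mean of those percentages. The sample pairs below are simply the eyeballed graph values from the post; the helper name is my own:

```python
# Eyeballed (5870 fps, GTX 480 fps) pairs at 20-second intervals,
# copied from the list above. Hand-read graph values, not measured data.
samples = [(42, 49), (31, 50), (40, 35), (25, 40), (20, 38), (17, 37),
           (50, 60), (26, 36), (22, 43), (19, 45), (32, 45), (42, 49),
           (24, 49)]

def advantage(ati, nv):
    """Signed percent advantage of the GTX 480 over the 5870,
    expressed relative to whichever card is faster."""
    return (nv - ati) / max(ati, nv) * 100

per_interval = [advantage(a, n) for a, n in samples]
overall = sum(per_interval) / len(per_interval)
print(f"Overall GTX 480 advantage: {overall:.0f}%")  # ~33%
```

Note this is an unweighted mean of spot samples, which is the averaging method some posters take issue with further down the thread.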


----------



## killerhz

Quote:



Originally Posted by *Murlocke*


Congrats on reading the OP. Comparing NVIDIA's Highest End to ATI's Highest End isn't fair? Seems very fair to me.

I will give it to you that having my card overclocked is a bit unfair, but even at stock... it's still going to be far higher.


Well, I read the OP and completely disagree with the claim that "single vs. dual doesn't matter". It completely does matter. The only thing I grabbed from this was exactly the result I expected: a top-of-the-line dual GPU will beat a top-of-the-line single GPU.
I don't know for sure, but does the 5970 thrash the 5870? If so, does that mean ATI is fail?

Most get it. ATI did a great job on their current cards. Big deal. I am sure that NV will do the same. This is what they do. No big deal.


----------



## Evtron

Quote:



Originally Posted by *antuk15*


Overall average GTX 480 advantage - 33%



Not sure you can really use that kind of math for the average. I think you would have to count every second of the demo, since the term is "frames per second".
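To illustrate the objection: averaging per-interval percentage advantages is not the same as comparing the two cards' true average frame rates over the whole run. A toy example with entirely made-up per-second FPS traces shows how far apart the two methods can land:

```python
# Hypothetical per-second FPS traces for two cards over the same
# 3-second run. All numbers are invented for illustration only.
card_a = [20, 30, 60]
card_b = [30, 40, 50]

# Method 1: average the per-second advantage ratios.
avg_of_ratios = sum(b / a for a, b in zip(card_a, card_b)) / len(card_a) - 1

# Method 2: compare the overall average frame rates.
ratio_of_avgs = sum(card_b) / sum(card_a) - 1

print(f"{avg_of_ratios:.0%} vs {ratio_of_avgs:.0%}")  # 22% vs 9%
```

The two disagree because ratios weight the slow moments heavily; to score a whole run you would want per-second (or per-frame) data, which is exactly the point being made here.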


----------



## Murlocke

Quote:



Originally Posted by *grunion*


You see my runs?


Yes I did.









Quote:



Originally Posted by *grunion*


What happens when Fermi's cuda cores have to run higher filtering levels?
Remember ATI has a dedicated tessellator, Fermi tessellation runs through the cuda cores.

I'll say it again, the Heaven benchmarks are worthless without AA/AF applied.


Hmm.. that's really interesting and brings up a good point.

Quote:



Originally Posted by *killerhz*


well i read the OP and completely disagree with the fact that you say "single vs. dual doesn't matter". it completely does matter. the only thing that i grabbed from this was what i thought exactly results would be. top of the line GPU dual will will beat top of the line single GPU. 
I don't know for sure but does the 5970 thrash the 5870? if so does that mean ATI is fail?

most get it. ATI did a great job on their current cards. big deal. i am sure that NV will do the same. this is what they do. no big deal.


Come on, don't be so dense. We are comparing top-of-the-line ATI to top-of-the-line NVIDIA. It's simple; don't put more thought into it. It's no one's fault but NVIDIA's that they don't have a dual-GPU DX11 card out.

I'm sure NVIDIA will have a dual-GPU DX11 card down the road that beats the 5970. These charts are only comparing the best of the best *at this exact time*.

By the way, I reworded the OP and that line was removed in the process. I honestly don't know how I could make the OP any less biased.


----------



## Riou

A better test of the limits of each card would be to run this benchmark with max AF and AA. I want to see how well Fermi copes with high tessellation, high AF, and high AA at the same time.

No AA and no AF won't show whether Fermi has any problems coping with tessellation and rasterization together, given the way they designed their CUDA core shaders.


----------



## antuk15

Quote:



Originally Posted by *Evtron*


Not sure you can really use your kind of math for that average. I think you would have to count every second of the demo as the term is "frames per second".


I would need a more detailed graph so that I could get the frame rate for each card at 1-second intervals, which isn't going to happen.


----------



## antuk15

What's going to happen to GTX 480 if it's playing a game that's both tessellation and pixel/vertex shader heavy?

Does the card use the cores for tessellation and then become pixel/vertex shader bound and thus slow to a crawl, or does it devote more shaders to pixel/vertex work and become tessellation bound and slow to a crawl?


----------



## IEATFISH

Quote:



Originally Posted by *killerhz*


well i read the OP and completely disagree with the fact that you say "single vs. dual doesn't matter". it completely does matter. the only thing that i grabbed from this was what i thought exactly results would be. top of the line GPU dual will will beat top of the line single GPU. 
I don't know for sure but does the 5970 thrash the 5870? if so does that mean ATI is fail?

most get it. ATI did a great job on their current cards. big deal. i am sure that NV will do the same. this is what they do. no big deal.


"I don't know for sure but does [ATI] thrash [ATI]? if so does that mean ATI is fail??" Wut?

It is very relevant. From the results nVidia has released and what our members have tested, the 480 runs Heaven at a 5850 level in areas without heavy tessellation. When tessellation is applied in force, it is closer to a 5970. Of course, this is without AA and AF, which will cause nvidia's score to drop more drastically than ATI's due to the differences in architecture. So on a purely tessellation scale, the 5970 appears to trade blows with the 480.


----------



## WingedCow

Pic might be a bit big, but that's my single 5870 running in windowed mode, and since I'm too lazy to restart my computer for a fresh boot and all that jazz, it's on 2 days of uptime without a restart.


----------



## GekzOverlord

Quote:



Originally Posted by *killerhz*


well i read the OP and completely disagree with the fact that you say "single vs. dual doesn't matter". it completely does matter. the only thing that i grabbed from this was what i thought exactly results would be. top of the line GPU dual will will beat top of the line single GPU. 
I don't know for sure but does the 5970 thrash the 5870? if so does that mean ATI is fail?

most get it. ATI did a great job on their current cards. big deal. i am sure that NV will do the same. this is what they do. no big deal.


What are you smoking?
The OP stated reasons why this doesn't matter; the simple fact is that it's a benchmark comparing cards. That's it.


----------



## killerhz

Quote:



Originally Posted by *Murlocke*


Yes I did.









Hmm.. that's really interesting and brings up a good point.

Come on, don't be so dense. We are comparing the top of the line ATI to the top of the line NVIDIA. It's Simple, don't put more thought into it. It's no ones fault but NVIDIAs that they don't have a Dual-GPU DX11 card out.

I'm sure NVIDIA will have a dual-GPU DX11 card down the road that beats the 5970. These charts are only comparing the best of the best *at this exact time*.

By the way, I reworded the OP and that line was removed in the process. I honestly don't know how I could make the OP any less biased.


Don't be a jerk; I'm not dense. Maybe you did remove that line, but the fact is it was there when I read it, and that's what I based my opinion on.


----------



## mocha989

Quote:



Originally Posted by *badger6021*


yes we know the 5970 is still the king so i can't wait for fermi to hit so i can get anthor cheaper 5850 then i will have a 5970.










uhhh okay.


----------



## cchun39

A 400 MHz OC gave me 1 more point.

AMD 965 BE stock/oc

stock XFX 5870 XXX


----------



## Domino

Quote:



Originally Posted by *ipar26*


yes seems it will beat the 5870 and be very close to the 5970,just see what happens when it is released and the price will match the 5870 maybe.


That is what is up to Nvidia right now. If they want to make it this generation, they will have to price it right. It is far too late in the generation to release a card, especially of only this caliber, and expect AMD/ATI not to lower their prices... especially on a chip that costs so little to manufacture.


----------



## saulin

Quote:



Originally Posted by *cchun39*




















400 mhz OC gave me 1 more pt

AMD 965 BE stock/oc

stock XFX 5870 XXX


It's a video card benchmark, not a CPU benchmark.

So we still have no graph data for the 5970, the 5870, and the 480?


----------



## Imglidinhere

Quote:


Originally Posted by *mocha989* 
i like how you guys compare a dual GPU card to a single GPU card. you can't even do that. theres no logical reason to and its just stupid it just throws everyone off. compare the 480 to the 5870 not the 5970

Yeah... go ahead and use that logic... It's not like the 5870 was put up against the 295 and nearly won. It's about even. It's normal to put a card up against the next-highest card on the market. And apparently it IS fair and logical to put it up against a dual-GPU card IF IT NEARLY EQUALS IT. Why would you put it up against a single-GPU card if it's just going to trash it?


----------



## sti-06

Quote:


Originally Posted by *DeviousAddict* 
that is not a fair assement at all
for a start you've overclocked your card and its a dual gpu and your comparing it to a standard clocked card thats a single gpu.
its a completely flawed comparison

It is not ATI's fault that nvidia's best card is a single GPU!

It is a comparison of both companies' best cards! How is that not fair?


----------



## gtarmanrob

LOL

i am absolutely LOVING how everyone is standing up for Nvidia by saying "not fair guys you cant compare dual-gpu to single-gpu cards"

this is hilarious.

What card do you think Fermi has been compared to in all the months of speculation leading up to its release? Which card were the claims saying Fermi was going to beat by (in some ridiculous cases) 40% or more?

And NOW that it appears not to do that very job (in ONE benchmark, we might add; we haven't seen anything else yet, so it still has a chance), everyone is all like "oh well, yeah, but you can't compare it to the 5970, that's not fair."

some of you make me lol. some of you are just pathetic.

The only thing this says to me is that if ATI does do a price drop, it won't be very substantial now, which sucks.

Cry, sook, whinge all you want. Fact is, the 5970 is a single card with dual GPU cores, the fastest single card on the market, and so far Nvidia's answer is not up to scratch with it. Not in Unigine Heaven, anyway. Get over it.


----------



## Murlocke

Quote:


Originally Posted by *saulin* 
It's a video card benchmark not a CPU benchmark.

So we still no have graph data of the 5970, the 5870 and the 480?

Yes we do, it's on the original post now.


----------



## Jbgough123

The price of a 480 will probably be close to, or maybe even higher than, a 5970 anyway, so..... lol, might as well compare them. Funny how people were comparing the dual-GPU nvidia cards to ATI's single cards back in the day and it was "fair" then, lol. I'm just going to wait for real benchies of the GTX 480, not company benches. Couldn't care less if it was faster anyway; I hope it is, to be honest, so ATI drops prices.


----------



## saulin

Quote:


Originally Posted by *gtarmanrob* 
LOL

i am absolutely LOVING how everyone is standing up for Nvidia by saying "not fair guys you cant compare dual-gpu to single-gpu cards"

this is hilarious.

what card do you think Fermi has been compared to in all the months of speculation leading up to its release? which card do you think the claims were that Fermi was going to beat by (in some ridiculous cases) 40% or more?

and NOW, that it appears to not do that very job (in ONE benchmark we might add, we havnt seen anything else yet, still has a chance) everyone is all like "oh well yeah but you cant compare to the 5970 thats not fair."

some of you make me lol. some of you are just pathetic.

the only thing that this says to me is, if ATI is gonna do a price drop it wont be very substantial now which sucks.

cry, sook, whinge all you want. fact is, the 5970 is a single card with dual-gpu cores, the fastest single card on the market. and so far, Nvidia's answer is not up to scratch with it. not in Unigine Heaven anyway. get over it.

Comparing is fine. Expecting it to beat the 5970 is just dumb. You have to remember that it's basically two 5850s in CF. I would like someone to show me the one time a single-GPU card was able to beat the dual-GPU card.









Let's see: the 4870X2 had the crown. Who took it? The GTX 295. Who took it after that? The 5970. Why should it be different this time?


----------



## killerhz

Quote:


Originally Posted by *gtarmanrob* 
LOL

i am absolutely LOVING how everyone is standing up for Nvidia by saying "not fair guys you cant compare dual-gpu to single-gpu cards"

this is hilarious.

what card do you think Fermi has been compared to in all the months of speculation leading up to its release? which card do you think the claims were that Fermi was going to beat by (in some ridiculous cases) 40% or more?

and NOW, that it appears to not do that very job (in ONE benchmark we might add, we havnt seen anything else yet, still has a chance) everyone is all like "oh well yeah but you cant compare to the 5970 thats not fair."

some of you make me lol. some of you are just pathetic.

the only thing that this says to me is, if ATI is gonna do a price drop it wont be very substantial now which sucks.

cry, sook, whinge all you want. fact is, *the 5970 is a single card with dual-gpu cores, the fastest single card on the market*. and so far, Nvidia's answer is not up to scratch with it. not in Unigine Heaven anyway. get over it.

Agreed, the 5970 is the fastest card with dual GPU cores. Hands down. Nothing will change that fact at all.

I see the light now. Sorry for being pathetic







my bad.


----------



## Murlocke

I find it rather strange that the GTX 480 actually has higher minimum frames than the 5970.

That might be because of the 1x AF and 0x AA. I bet that will change when the CUDA cores have to start rendering AA/AF. I think that is where the 5XXX series will really shine over the GTX 480.

Quote:


Originally Posted by *saulin* 
Comparing is fine. Expecting it to beat the 5970 is just dumb. You have to think that it's basically 2 5850s in CF. I would like someone to show me that one time when the single GPU card was able to beat the dual GPU card









Lets see, the 4870x2 had the crown, who took it after? the GTX 295, who took it after the 5970, why should it be different this time?

I assume you mean in the same generation?

If not:
The 5870 is on par with the GTX 285. There are 1-2 year old dual-GPU cards that are destroyed by single-GPU cards.

Quote:


Originally Posted by *killerhz* 
agreed. that the 5970 the fastest card with dual GPU cores. hands down. nothing with change that fact at all.

i see the light now. sorry for being pathetic







my bad.

I had some lines in the original post that could be taken the wrong way, so no hard feelings. I've greatly changed the original post, added charts, completely reworded it, etc. I think everyone *should* be happy now. There are a bunch of ugly posts in the first few pages.

Comparing NVIDIA and ATI, it's really hard to make everyone happy.


----------



## sepheroth003

Thanks for the work, Murlocke; I was curious about this after watching the nvidia vid.

And to those saying dual GPU vs. single is unfair: it's all about price. If the GTX 480 is as expensive as it's looking, then yes, the 5970 is the direct comparison.


----------



## Raul-7

That's the most illogical rebuttal I've heard. If they cost the same, then the comparison is FAIR, plain and simple! People decide based on price/performance, not on dual GPU vs. single GPU.

Yes, I know the prices are not set in stone on the GTX480, but if it's around the MSRP of the 5970 then the comparison is fair.


----------



## saulin

Quote:


Originally Posted by *Murlocke* 
Yes we do, it's on the original post now.

Sweet. Wow the 5970 has huge max fps and average fps. But the GTX 480 still beats it in minimum fps. I wasn't expecting that. I thought the 5970 would beat it all the way.









Quote:


Originally Posted by *Murlocke* 
I find it rather strange that the GTX 480 actually has higher minimum frames than the 5970.

That might be because of the 1x AF and 0x AA. I bet that will change when the CUDA cores have to start rendering AA/AF. I think that is where the 5XXX series will really shine over the GTX 480.

I assume you mean in the same generation?

If not:
The 5870 is on par with the GTX 285. There are 1-2 year old dual-GPU cards that are destroyed by single-GPU cards.

Exactly, I mean in the same generation.


----------



## Ocnewb

Quote:


Originally Posted by *saulin* 
Sweet. Wow the 5970 has huge max fps and average fps. But the GTX 480 still beats it in minimum fps. I wasn't expecting that. I thought the 5970 would beat it all the way.









The GTX480 only beat the 5970 at minimum FPS in a few scenes, as far as I can see. Most of the 5970's minimum FPS beat the GTX480 though, I'm just saying what I see from the graph.


----------



## sosikwitit

For a $700 card, the 5970 is under 40 frames (and closer to 30) way too often, but it's just a benchmark anyway.


----------



## saulin

Quote:


Originally Posted by *Ocnewb* 
The GTX480 only beat the 5970 at minimum FPS in a few scenes, as far as I can see. Most of the 5970's minimum FPS beat the GTX480 though, I'm just saying what I see from the graph.

I know, but the most important number is the minimum frame rate. Well, at least for me.

Would you rather get:

160 max fps and 23 minimum?

or 70 max fps and 39 minimum?

Which one will give you smoother gameplay? What if the game has lots of areas where the frames drop a lot because of everything that is going on on the screen?

When I try new drivers, the first thing I check is the minimum fps.

However, looking at this chart, yeah, we can see that the 5970 is very fast. It wins in average and max frame rates by a long shot.


----------



## TFL Replica

The 480 looks better than it did in the original graph. Seems the 5970 doesn't really do much for minimum frames.


----------



## Murlocke

Quote:


Originally Posted by *Ocnewb* 
The GTX480 only beat the 5970 at minimum FPS in a few scenes, as far as I can see. Most of the 5970's minimum FPS beat the GTX480 though, I'm just saying what I see from the graph.

I believe it's a driver issue. Chucklez has the same issue on his 5970. Sometimes when the scenes switch in the benchmark, your FPS dips down to around 35 FPS while it loads, then shoots back up once it's done.

The GTX480 doesn't seem to do that. It seems to load the assets much faster, so it suffers less on minimum frames. Once everything is loaded, though, the 5970 starts crunching.









Quote:


Originally Posted by *saulin* 
I know, but the most important number is the minimum frame rate. Well, at least for me.

Would you rather get:

160 max fps and 23 minimum?

or 70 max fps and 39 minimum?

Which one will give you smoother gameplay? What if the game has lots of areas where the frames drop a lot because of everything that is going on on the screen?

When I try new drivers, the first thing I check is the minimum fps.

Definitely 70/39. Check my above quote. I believe it is driver related.

Quote:


Originally Posted by *sosikwitit* 
For a $700 card the 5970 is under 40 frames and closer to 30 way too often but it's just a benchmark anyway.


Quote:


Originally Posted by *TFL Replica* 
The 480 looks better than it did in the original graph. Seems the 5970 doesn't really do much for minimum frames.

See above.


----------



## gtarmanrob

Quote:


Originally Posted by *saulin* 
Comparing is fine. Expecting it to beat the 5970 is just dumb. You have to think that it's basically 2 5850s in CF. I would like someone to show me that one time when the single GPU card was able to beat the dual GPU card









Lets see, the 4870x2 had the crown, who took it after? the GTX 295, who took it after the 5970, why should it be different this time?

Haha, yeah, thing is, your logic is perfect. And really, no one would normally compare the two, coz even though the argument about the dual-GPU ATI cards is never-ending, the fact still stands that it has a huge performance advantage, and most would agree it IS unfair to compare them to a single-GPU solution.

What my argument is about is that all the Nvidia fanboys, and all the speculation threads and rumours etc., claimed that the GTX 480, Fermi, would not only beat the 5970 but smash it. The percentages ranged from 10% to 40%+ in one article I saw.

And now that it hasn't yet, all the excuses come flying and the fans say it's not fair to compare. Yet a month ago they were comparing it themselves, without the results, jumping on the bandwagon and getting their hopes up. And now that their little egos are bruised, rather than suck it up and accept the defeat, for now, they are throwing blame and excuses all around the place.

I've said it before and I'll stand by it every time: when it comes to fanboys, Nvidia's lot are the worst.


----------



## zxo0oxz

Too bad I am colorblind. I would have enjoyed reading this...


----------



## Murlocke

Quote:


Originally Posted by *zxo0oxz* 
Too bad I am colorblind. I would have enjoyed reading this...

The top line is the 5970 @ 935/1235 and i7 at 4GHz.
The line below that is the 5970 @ stock and i7 at 3.3GHz.
The line below that is the GTX 480.
The line below that is the 5870.

Hope that helps.


----------



## Celeras

I love the fact that you show your bias right off the bat by including overclocked i7 numbers on *one* of the GPU tests. You know, just for fun! What a joke, lol.


----------



## Murlocke

Quote:


Originally Posted by *Celeras* 
I love the fact that you show your bias right off the bat and show overclocked i7 numbers on *one* of the GPU tests. You know, just for fun! What a joke lol.

I honestly don't know what you're talking about.

If you took some time and read the original post before replying, you'd see that the GTX 480 and 5870 charts were *not made by me*. They came directly from NVIDIA. So I guess you are saying NVIDIA is biased?

The NVIDIA tests were run at 3.3GHz (as the original post says), and I did a run at 3.3GHz as well.


----------



## Murlocke

Quote:


Originally Posted by *Celeras* 
If you were just trying to "compare" and not be a total tool, you wouldn't show data that used an overclocked processor in only ONE of the trials.

No offense, but your logic really fails. As I said before, I own two NVIDIA computers and two ATI computers.

Also, I totally have a 5870 and a GTX 480 to run an overclocked test with. Considering the GTX 480 isn't released, how exactly am I supposed to post an overclocked chart for it? Think before you post. This test doesn't even benefit much from CPU speed.


----------



## Ocnewb

Quote:


Originally Posted by *antuk15* 
What are you talking about









I know!! Some people don't like to use their brains to think, I guess.


----------



## Chucklez

Quote:


Originally Posted by *Murlocke* 
I believe it's a driver issue. Chucklez has the same issue on his 5970. Sometimes when the scenes switch in the benchmark, your FPS dips down to around 35 FPS while it loads, then shoots back up once it's done.

The GTX480 doesn't seem to do that. It seems to load the assets much faster, so it suffers less on minimum frames. Once everything is loaded, though, the 5970 starts crunching.


Yeah, hopefully the full 10.3s fix that issue. The pre-10.3s don't like my monitor too much. But I definitely agree with Murlocke, it seems to be a driver issue with CF cards, because brett with two 5850s in CF has the exact same issue.

Quote:


Originally Posted by *Celeras* 
If you were just trying to "compare" and not be a total tool, you wouldn't show data that used an overclocked processor in only ONE of the trials.

He is showing you what the 5970 can do overclocked slightly with the CPU at 4.0GHz. Is that a crime now?


----------



## Murlocke

Quote:



Originally Posted by *Celeras*


I'm talking about the fact that all the trials were run with a stock i7, except for the one trial with the 5970 where it was run at 4.0. Did I st-st-st-stutter?


As I just edited above:

"Also, I totally have a 5870 and a GTX 480 to run an overclocked test with. Considering the GTX 480 isn't released, how exactly am I supposed to post an overclocked chart for it? Think before you post. This test doesn't even benefit much from CPU speed."


----------



## Celeras

Quote:


Originally Posted by *Chucklez* 
He is showing you what the 5970 can do overclocked slightly and the CPU at 4.0Ghz. Is that a crime now?

When you don't give the same opportunity to the other cards in the trials, that's called bias, regardless of whether or not you have the capability to do so. You just don't include that data.


----------



## sora1607

Nice benchmark.

What interests me in this comparison is the fact that overclocking the i7 can really boost the 5970's performance. I'm amazed at the performance boost, actually. I never thought a 3.3GHz i7 would be bottlenecking the 5970 that much.


----------



## Murlocke

Quote:



Originally Posted by *Celeras*


When you don't give the same opportunity to the other cards in the trials, that's called Bias.


For the 3rd time: how am I supposed to do an overclocked run of a card that hasn't been released? Those are the ONLY benchmarks released, and they came straight from NVIDIA.

Keep trolling, get infracted. Just because I'm a retired director doesn't mean I can't PM someone to get the job done. Your posts have been rude and haven't helped anyone at all.


----------



## Celeras

For the 4th time now, *you don't include biased data unless you want to come off as biased.*

Or would it be perfectly acceptable for me to do a benchmark with an nVidia card vs. an ATI card... and then include some red lines featuring an UNDERCLOCKED CPU? You know, just because I can.


----------



## Ocnewb

Quote:


Originally Posted by *Celeras* 
For the 4th time now, *you don't include biased data unless you want to come off as biased.*

Or would it be perfectly acceptable for me to do a benchmark with an nVidia card vs. an ATI card... and then include some red lines featuring an UNDERCLOCKED CPU? You know, just because I can.

Jesus dude, there is a line for the i7 @ 3.3GHz and the 5970 @ stock (comparable with Nvidia's system). The OC'd i7 line is just extra. Either you can't read or your brain can't think.


----------



## sora1607

The Nvidia benchmark runs @ 3.3GHz, which is why he did his benchmark @ 3.3GHz. He included the 4.0GHz run to show the difference from going up to 4.0GHz, so that he can later compare it to an actual GTX 295 run at 4.0GHz. What's the big deal with that?


----------



## Murlocke

Quote:


Originally Posted by *Celeras* 
So then it would be perfectly acceptable for me to include some 'extra' red lines on a benchmark showing ATI cards with an underclocked processor. Good to know!

If you noted in the benchmark that it was underclocked, then there would be absolutely nothing wrong with it. I don't see why you would ever underclock for a benchmark, but hey, that's your argument. I clearly showed that I was overclocked on that run, and I even gave a comparison run at the same settings.

I'm not sure what you want.


----------



## saulin

Quote:



Originally Posted by *Celeras*


So then it would be perfectly acceptable for me to include some 'extra' red lines on a benchmark showing ATI cards with an underclocked processor. Good to know!

I'll be done here now since I'm apparently just "trolling the retired director".



Yeah man, there is a benchmark at 3.3GHz too.

It doesn't matter. The GTX 480 still wins with the better minimum frame rates. I would not be surprised if it's a driver bug. Not to trash talk AMD/ATI, but they are famous for those.

The 5970 is a fast card though. It is very weird how it spikes on the max FPS too. It could be a bug, I guess.


----------



## Murlocke

Quote:



Originally Posted by *saulin*


Yeah man, there is a benchmark at 3.3GHz too.

It doesn't matter. The GTX 480 still wins with the better minimum frame rates. I would not be surprised if it's a driver bug. Not to trash talk AMD/ATI, but they are famous for those.

The 5970 is a fast card though. It is very weird how it spikes on the max FPS too. It could be a bug, I guess.


I agree. That's the one reason why I tend to actually prefer NVIDIA over ATI: ATI seems to have a lot more driver issues.

I wouldn't consider these settings a good test for minimum frames. I think if you did something like 1920x1200, 4x AA, 16x AF, the 5970 would pull better minimum frames. I run at 2560x1600, 4x AA, 16x AF in normal games, so I greatly prefer the 5970. I can't imagine the GTX 480 will do that good a job at those settings; we'll have to wait for the GTX 485.


----------



## Yangas

Sorry to say I've been lurking this thread and don't understand what you're trying to prove with that, Celeras. Mind clearing it up?


----------



## Sun

Assuming the nVidia charts to be correct:

The total area under the nVidia curves is 58.37% of an arbitrary area.
The total area under the ATI curves is 42.32% of the same arbitrary area.

This means that, according to nVidia, the 480GTX is 38% faster than the 5870. Conversely, the 5870 is 27% slower than the 480GTX.

Images attached, analyzed with ink usage software.
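Sun's percentages follow from simple ratios of the two areas. A quick sketch to double-check the arithmetic (the 58.37% and 42.32% figures are taken from the post above; everything else is plain division):

```python
# Sanity check of the area-under-curve comparison above.
# The two area fractions come straight from the post; the rest is arithmetic.
nvidia_area = 0.5837  # area under the GTX 480 curves
ati_area = 0.4232     # area under the 5870 curves

pct_faster = (nvidia_area / ati_area - 1) * 100   # 480 relative to 5870
pct_slower = (1 - ati_area / nvidia_area) * 100   # 5870 relative to 480

print(f"GTX 480 is {pct_faster:.0f}% faster than the 5870")  # ~38%
print(f"5870 is {pct_slower:.0f}% slower than the GTX 480")  # ~27%
```

Note the two percentages differ because they use different baselines, which is why "38% faster" and "27% slower" are both correct at once.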


----------



## mth91

Is that what Heaven looks like?!


----------



## Murlocke

Quote:



Originally Posted by *Sun*


Assuming the nVidia charts to be correct:

The total area under the nVidia curves is 58.37% of an arbitrary area.
The total area under the ATI curves is 42.32% of the same arbitrary area.

This means that, according to nVidia, the 480GTX is 38% faster than the 5870. Conversely, the 5870 is 27% slower than the 480GTX.

Images attached, analyzed with ink usage software.


The thing NVIDIA didn't say is that tessellation runs on the same cores that AA/AF does. They purposely ran their test at 1x AF and 0x AA for this reason. You can expect a large drop in minimum FPS on the GTX 480 after applying 2x AA and 16x AF. Whereas on the 5XXX series, tessellation has its own dedicated hardware, so the FPS loss when adding AA/AF is very small.

I'm expecting to see a very large difference once we see some benchmarks that include tessellation along with AA/AF.


----------



## etrigan420

Quote:



Originally Posted by *Murlocke*


I'm not sure what you want.


Check out his sig and what he wants becomes obvious...

Back to the topic...the results worry me a bit. I don't upgrade as often as some (probably most) here, and I try to switch sides each cycle.

The card before my current ones was a 7800GTX, before that a 9800AIW, and before that a GeForce 2 Ultra (told ya). And while I love my 4850s, they *are* starting to show their age... just a little bit.

If the GTX480 doesn't match the 5870 price point (and seriously, how *could* it? Bigger chips, lower yields... I just don't think it's possible) then things might get interesting, as far as my purchasing decisions go, at least.

I agree with saulin that minimum frame rates are more important but, for me at least, it really comes down to price vs. performance.

Personally, I wish they'd just release them already, and let the chips fall where they may!!! (SEE WHAT I DID THERE???)









Thanks for the work Murlocke! Much appreciated!


----------



## Riou

I am not sure why some members here are getting on Murlocke's case. His data is fair to compare. Until we get real GTX 480 cards in our hands, this is the best we can do so far.

I just find it strange that Nvidia only used 0x AA and 0x AF. If the GTX 480 really owned, Nvidia would release benchmarks with high AA and AF settings. It is just really suspicious.


----------



## Sun

Quote:



Originally Posted by *Murlocke*


The thing NVIDIA didn't say is that tessellation runs on the same cores that AA/AF does. They purposely ran their test at 1x AF and 0x AA for this reason. You can expect a large drop in minimum FPS on the GTX 480 after applying 2x AA and 16x AF. Whereas on the 5XXX series, tessellation has its own dedicated hardware, so the FPS loss when adding AA/AF is very small.

I'm expecting to see a very large difference once we see some benchmarks that include tessellation along with AA/AF.


I would tend to agree, although nVidia engineers may have a trick or two. Just providing some analysis, is all. Wouldn't mind being mentioned in the OP.









In any case, I eagerly await the HardOCP, Bit-Tech, and Anand reviews. Only then will we know the truth.


----------



## Meta-Prometheus

Quote:



Originally Posted by *Murlocke*


Yea, I agree.

Nvidia's past prices though make me believe they'll try to get $550+ out of it.


I really hope they don't try that. Anyways, thanks for the comparison. I expect to see an overclocked GTX 480 line on that chart when it comes out!


----------



## amstech

Well, I just can't wait to see some real testing when these babies finally release.


----------



## d0gZpAw

Quote:


Originally Posted by *Murlocke* 
Now if only ATI can get a better driver development team. The 5XXX series still has many issues.

...and we need PhysX support.









Well, I think the drivers are fine, if only because I've had very few issues with new CCC drivers, and that was with 9.12 and 10.1. Does everyone think that nVidia's first driver release for the GTX4xx will be PERFECT?

As for PhysX support... eh, I can't really say I miss PhysX at all when my nVidia card is pulled. There are far too few games that support real GPU PhysX, so I think ATi can turn a blind eye.

For example, the physics in Bad Company 2 are great, and that's a DX11 title with no PhysX support. Wood fences splinter, dirt flies up when bullets/heavy munitions hit the ground, houses get blown apart. It looks a lot better than the PhysX physics in Dark Void, which was heavily polished and very well presented, albeit very short. Why do we even need PhysX anymore with the HD5xx0 series around?


----------



## saulin

Quote:


Originally Posted by *d0gZpAw* 
Well, I think the drivers are fine, if only because I've had very few issues with new CCC drivers, and that was with 9.12 and 10.1. Does everyone think that nVidia's first driver release for the GTX4xx will be PERFECT?

As for PhysX support... eh, I can't really say I miss PhysX at all when my nVidia card is pulled. There are far too few games that support real GPU PhysX, so I think ATi can turn a blind eye.

For example, the physics in Bad Company 2 are great, and that's a DX11 title with no PhysX support. Wood fences splinter, dirt flies up when bullets/heavy munitions hit the ground, houses get blown apart. It looks a lot better than the PhysX physics in Dark Void, which was heavily polished and very well presented, albeit very short. Why do we even need PhysX anymore with the HD5xx0 series around?









Have you seen the latest PhysX demo? Do you think the 5970 can do that without PhysX?


----------



## Dopamin3

Quote:


Originally Posted by *Eduardv* 
Wow seems the GTx will be close to the 5970,then if priced right,omg sorry for the 5970.

Not in real games, though. Remember that Fermi was a scrapped workstation card, so it will excel in stuff that uses heavy tessellation. If you actually _played_ a game, it would only be about 5% faster than a 5870.


----------



## Zerkk

So from the source video we find out that the GTX480 is 6+6 pin power. I thought it was confirmed that the 480 was going to be 6+8... Could it have been a 5850 vs. a 470 comparison? I didn't read all the way through the pages, so if someone already stated this then disregard it. I just think you guys are blowing this way out of proportion before we really know anything.


----------



## Vlasov_581

Am I reading the pics right? Is the GTX480 barely beating the 5870? Overall performance is higher, but only by a little bit. Also, the multi-GPU frame drops are very evident with the 5970.


----------



## NFL

Quote:


Originally Posted by *E_man* 







I'm tempted to sig that

One step ahead of you


----------



## SGT. Peppers

Quote:


Originally Posted by *Horsemama1956* 
Yeah it's unfair.. Comparing a 6 month old card to one not even out yet.. Of course the older card is going to lo.. Oh.

Just like how people compare the 5870 and 5970 to the GTX 285 and GTX 295, when those ATI cards have almost, if not, an entire year on the Nvidia cards :S


----------



## NFL

Quote:


Originally Posted by *SGT. Peppers* 
Just like how people compare the 5870 and 5970 to the GTX 285 and GTX 295 when those ATI cards have almost if not an entire year on the nvidia cards :S

But while the 5970 was out 6 months prior, it and the GTX 480 are the same generation (DX11). Comparing the 285 and 295 to the 5870 isn't fair because they aren't of the same generation (DX10 vs. DX11).


----------



## Celeras

I was trying to emphasize that it's not about the results, the 5970 is obviously the beast of the group and rightfully so with it being a dual GPU. If it doesn't smash a single GPU fermi, that'd be a major problem.

But that doesn't mean that the data shouldn't be represented in a neutral fashion, which this isn't.


----------



## windfire

It is just what I expected: the 480 beats the 5870 and is beaten by the 5970.

I think it would be a more complete test if two 480s (or 470s) in SLI were also tested, as this could act as a foretaste of how a hypothetical 495 could perform.


----------



## Dopamin3

Quote:


Originally Posted by *windfire* 
It is just what I expected: the 480 beats the 5870 and is beaten by the 5970.

I think it would be a more complete test if two 480s (or 470s) in SLI were also tested, as this could act as a foretaste of how a hypothetical 495 could perform.

The 495 isn't going to be released for a long time. If one GPU needs an 8-pin + 6-pin and has such a beastly stock cooler, there's no way Nvidia is going to launch a dual card built on it. People would have to RMA their motherboards because the PCI-E slots would melt.


----------



## NFL

Quote:



Originally Posted by *windfire*


It is just what I expected. 480 beats 5870 and is beaten by 5970.

I think, it would be a more complete test if two 480s (or 470s) in SLI were also tested (as this could act as a foretaste of *how a hypothetical 495 could perform*)



But that kind of power may open a black hole inside a computer...there's a 2012 scenario...GTX 495 ends life as we know it


----------



## Ocnewb

Quote:



Originally Posted by *Eduardv*  
Wow seems the GTx will be close to the 5970,then if priced right,omg sorry for the 597


From what I've seen, the GTX is nowhere close to the 5970, unless Nvidia releases a dual version.


----------



## Ihatethedukes

To be honest, I don't see why anyone really wants to compete with an unreleased GPU in a canned bench...


----------



## etrigan420

Should anything be made of the fact that Mr. Peterson is running v1.1 of the demo?


----------



## saulin

Have you guys seen this yet?

Apparently there are new GTX 470 benchmarks, and they look better than I expected.

http://translate.google.com/translat...lt-946411.html


----------



## d0gZpAw

Quote:


Originally Posted by *saulin* 
Have you seen the latest PhysX demo? Do you think the 5970 can do that without PhysX?

Which is the latest one? If you mean Rocket Sled, then I've only seen videos that don't really show off the physics that well..

Did you know you can have an HD5970 AND PhysX at the same time? I think with the HD5970 and a dedicated PhysX PPU, the GTX480 will get creamed no matter what PhysX tech demos you run.


----------






## antuk15

Quote: 
   Originally Posted by *saulin*   Have you seen the latest PhysX demo? Do you think the 5970 can do that without PhysX?  
This is proof enough that you don't need PhysX

  
 YouTube- Crysis Nuke w/ Peacemaker SFX  



 
The engine is only dual-core enabled as well, so they could have even more physics if they used a quad core. In fact, I believe you might know the game?


----------



## Imglidinhere

Quote:



Originally Posted by *Murlocke*


Congrats on reading the OP. Comparing NVIDIA's Highest End to ATI's Highest End isn't fair? Seems very fair to me.

I will give it to you that having my card overclocked is a bit unfair, but even at stock... it's still going to be far higher.


THANK YOU. Same deal with the 5870 and the GTX295.


----------



## grunion

Quote:



Originally Posted by *etrigan420*


Should anything be made of the fact that Mr. Peterson is running v1.1 of the demo?


Only if the 5870 results are derived using version 1.0.

From the 1.1 work log.

Quote:



Tessellation optimization gives up to 30% performance boost.



21 posts deleted, probably more needed.

Stay clean, disagree all you want..
Just don't start slinging personal attacks, name calling, etc....


----------



## systemviper

Interesting thread, even though it's all over the place. Glad to see it getting in line.


----------



## wcdolphin

The way I am reading it puts them even...


----------



## steven937595

The Fermi chips are tuned towards benchmarks, not games. Even so, it looks like ATI/AMD will hold the crown for quite some time. Seeing how the 480 is right up against the 300W limit, there will obviously be no X2 card. They should get started on Fermi II or something to replace it soon.


----------



## IEATFISH

Quote:



Originally Posted by *grunion*


Only if the 5870 results are derived using version 1.0.

From the 1.1 work log.

21 posts deleted, probably more needed.

Stay clean, disagree all you want..
Just don't start slinging personal attacks, name calling, etc....


Wait, so version 1.1 gives a performance BOOST in tessellation? That makes my 5850 seem even more powerful if it can keep up with their 5870 AFTER the 5870 has a 30% performance boost. Definitely something fishy there...


----------



## MrDeodorant

'Up to 30% boost'. Did they say what hardware it was optimized for?


----------



## paras

Guys, I have a doubt: doesn't Heaven use both GPUs while benchmarking? My score is way down, and I noticed that my other GPU was not being used at all and my FPS was very low.

Is this normal?


----------



## Blameless

I can't find Heaven 1.1 anywhere.

Quote:


Originally Posted by *MrDeodorant* 
'Up to 30% boost'. Did they say what hardware it was optimized for?

I'd assume they optimized for DX11 as all DX11 cards have tessellation hardware.


----------



## Murlocke

Quote:


Originally Posted by *Blameless* 
I can't find Heaven 1.1 anywhere?

Ditto.

All tests in the OP were run on 1.0.


----------



## paras

Guys, can anyone let me know if this FPS is normal? I mean, I should have close to 5970 performance, or maybe higher (correct me if I'm wrong).


----------



## Murlocke

Quote:


Originally Posted by *paras* 
Guys, can anyone let me know if this FPS is normal? I mean, I should have close to 5970 performance, or maybe higher (correct me if I'm wrong).










The 5970 is pretty much 2x faster than the 5870 when both GPUs are at stock, and the 5970 also easily overclocks 25% higher. You are running stock, so an overclocked 5970 is probably about 2.5x faster than your card. There's no way you will ever match the performance of a 5970 with your 5870; you are comparing a $700 card to a $400 card.

Run at 1920x1080, 1x AF, 0x AA, rest default, then compare your 5870 score to the one in the main post.
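The scaling claim above is just multiplication; a throwaway sketch, taking the post's own 2x stock factor and 25% overclock headroom as given (both are the poster's claims, not measured numbers):

```python
# Back-of-the-envelope check of the 5970 vs. 5870 claim above.
# Both inputs are the post's own assumptions, not measurements.
stock_factor = 2.0   # claimed 5970 advantage over a stock 5870
oc_headroom = 0.25   # claimed easy 5970 overclock

oc_factor = stock_factor * (1 + oc_headroom)
print(oc_factor)  # 2.5, i.e. "about 2.5x faster"
```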


----------



## paras

^^ I have a 5850 CrossFire setup, if you didn't see.

Doesn't CrossFire work with the Heaven bench?

Also, is 5850 CrossFire = 5970?

Also, as my cards are factory OC'd, will that affect the FPS much compared to a stock 5970?


----------



## ryboto

Quote:


Originally Posted by *Murlocke* 
The 5970 is pretty much 2x faster than the 5870 when both GPUs are at stock. The 5970 also easily overclocks 25% higher. You are running stock, so an overclocked 5970 is probably about 2.5x faster than your card. There's no way you will ever match the performance of a 5970 with your 5870. You are comparing a $700 card to a $400 card.

Run at 1920x1080, 1x AF, 0x AA, rest default. Then compare your 5870 score to the one on the main post.

...he's got crossfired 5850's...


----------



## IEATFISH

Quote:


Originally Posted by *paras* 
Guys, I have a doubt: doesn't Heaven use both GPUs while benchmarking? My score is way down, and I noticed that my other GPU was not being used at all and my FPS was very low.

Is this normal?

http://i49.tinypic.com/2z82334.png

Make sure you run it in Fullscreen mode. Windowed only uses 1 GPU.


----------



## Murlocke

Quote:


Originally Posted by *paras* 
^^ I have a 5850 CrossFire setup, if you didn't see.

Doesn't CrossFire work with the Heaven bench?

Also, is 5850 CrossFire = 5970?

Also, as my cards are factory OC'd, will that affect the FPS much compared to a stock 5970?

I honestly don't know what I was looking at to think you had a 5870. I don't think that is a normal score. Try 2x AA; my 5970 sees around a 250% FPS gain when I set it to 4x AA over 2x AA. Driver bug, maybe?

Quote:


Originally Posted by *ryboto* 
...he's got crossfired 5850's...

Yea, I feel stupid; no idea where I saw 5870. I just woke up and came on here first thing. Time to go take a shower, and maybe I won't come back so stupid.









Quote:


Originally Posted by *IEATFISH* 
Make sure you run it in Fullscreen mode. Windowed only uses 1 GPU.

This. You are in windowed mode, according to the screenshot. You should get about 60-62 FPS average at 1920x1080, 1x AF, 0x AA in fullscreen, maybe a bit higher, because that's what my stock 5970 gets.


----------



## paras

^^ Thanks. Also, it happens to many guys, so chill dude.

Also, guys, will I be able to run this game (when it's released) at full 8x AA and 16x AF, all maxed at 1920x1080?

I mean, is my setup capable of this? I won't be upgrading anytime soon, as my setup kicks ass for all games, lol.


----------



## Murlocke

Quote:



Originally Posted by *paras*


^^ thanks also dude happens to many guys so chill dude

also guys will i be able to run this game (when its released) at full 8AA and 16AF and all max at 1920x1080?

i mean is my setup capable of this coz i wont be upgrading anytime soon as my setup kicks ass lols for all games


This isn't a game, it's just a benchmark. There was a rumor that a game was being made that uses this engine though. There's absolutely no telling how the performance will be or if the game will even see the market.


----------



## paras

^^ Oh, I didn't know about that, thanks for the info.


----------



## Imglidinhere

Quote:



Originally Posted by *steven937595*


the fermi chips are tuned towards benchmarks, not games. even so, looks like ATI/AMD will hold the crown for quite some time. seeing how the 480 is right up against the 300w limit, there will obviously be no X2 card. Should get started on Fermi II or something to replace it soon


Um... o.o This makes no sense... Just add another 8-pin connector for a total of 3? It sounds ridiculous but it's not inconcievable at this point. (pretty sure I spelled that word wrong... XD)


----------



## ]\/[EGADET]-[

I'm starting to get real sick of this back-and-forth nerdery. Most of the time I see ATI users just wanting to justify their 5-series purchases. Let's just wait and see, and stop acting like little girls...


----------



## IEATFISH

Quote:



Originally Posted by *]\\/[EGADET]-[*


I'm starting to get real sick of this back-and-forth nerdery. Most of the time I see ATI users just wanting to justify their 5-series purchases. Let's just wait and see, and stop acting like little girls...


I'm not quite sure how this qualifies as 'justifying', 'nerdery', or 'acting like little girls'. It's an unbiased approach: comparing NVidia's released results against our own...


----------



## ]\/[EGADET]-[

Quote:



Originally Posted by *IEATFISH*


I'm not quite sure how this qualifies as 'justifying', 'nerdery', or 'acting like little girls'. It's an unbiased approach: comparing NVidia's released results against our own...


I'm not talking about the graphs in the original post. It's the numerous posts here and in every other thread about this, full of wannabe know-it-alls. It's pathetic. I can't wait for the damn things to be released already, because I don't know about you, but I'm sick of it.


----------



## Murlocke

Quote:



Originally Posted by *Imglidinhere*


Um... o.o This makes no sense... Just add another 8-pin connector for a total of three? It sounds ridiculous, but it's not inconceivable at this point.


The 300W limit comes from the PCI-E spec itself (75W from the slot plus the auxiliary connectors). If they break that barrier and a user with an older PCI-E 1.0 board tries to put the card in their system... it probably won't be pretty. They could add a third 8-pin connector and run it on PCI-E 2.0.

The way I see it is... if a user buys a $700 video card and is still on PCI-E 1.0... he deserves to be shot.
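Murlocke's 300W figure is just the sum of the PCI-E power budget: up to 75W from the x16 slot, 75W per 6-pin auxiliary connector, and 150W per 8-pin. A quick sketch of the arithmetic (the board configurations below are only examples, not real card specs):

```python
# PCI-E board power budget: the x16 slot supplies up to 75 W, a 6-pin
# auxiliary connector adds 75 W, and an 8-pin connector adds 150 W.
SLOT_W = 75
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def board_budget(connectors):
    """Maximum board power for a card with the given auxiliary connectors."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

# GTX 480-style config (one 6-pin + one 8-pin) hits exactly the 300 W ceiling.
print(board_budget(["6-pin", "8-pin"]))            # 300
# The hypothetical triple 8-pin card floated above:
print(board_budget(["8-pin", "8-pin", "8-pin"]))   # 525
```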









Quote:



Originally Posted by *]\\/[EGADET]-[*


I'm not talking about the graphs in the original post. It's the numerous posts here and every other thread about this, full of wanna be know it alls. It's pathetic, I can't wait for the damn things to be released already cause I don't know about you but I'm sick of it.


I see a few posts that just would be better if they weren't there. I hope you are not talking about any of my posts though, I try to be as unbiased as possible but it's hard to do that with text and not get your message misunderstood.

I can't help but laugh when someone calls me an ATI fanboy. If anything I like NVIDIA more because they tend to produce better drivers. The whole fanboy thing has really gotten out of hand.


----------



## ljason8eg

Quote:



Originally Posted by *Murlocke*


The 300W limit comes from the PCI-E spec itself (75W from the slot plus the auxiliary connectors). If they break that barrier and a user with an older PCI-E 1.0 board tries to put the card in their system... it probably won't be pretty. They could add a third 8-pin connector and run it on PCI-E 2.0.

The way I see it is... if a user buys a $700 video card and is still on PCI-E 1.0... he deserves to be shot.










Yeah, I really don't see the problem with breaking through the 300W barrier. It's not like a card of that caliber is going to be popular with OEMs anyway.


----------



## NCspecV81

I'll be honest... as long as it's not melting PCI-E slots and crashing my PSU... bring on the wattage.


----------



## Ocnewb

I just ran the Heaven Benchmark with my 5850 @ 950/1300 (trying to reach 5870 speed). Here's the screenshot, and I'm wondering why it's so different from the 5870 in Nvidia's benchie.
My benchie:








Nvidia's benchies:


----------



## Evtron

Well, the only thing we have is the benchmark they've been using. So the only thing we can conclude from our OCN user's 5970 bench is:

the 5970 will still be much faster than the GTX 480.

Even in a benchmark that favors a GPU like the new GTX series, the 5970 still beats the GTX 480 by a large margin (the 5970's average FPS is higher than the GTX 480's max FPS, as stated before).
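Evtron's "average beats maximum" comparison is easy to express in code. The FPS numbers below are placeholders for illustration only; the real figures are in the thread's screenshots:

```python
# Sanity check for the "avg beats max" claim; numbers are placeholders,
# not the actual results from the screenshots in this thread.
hd5970 = {"min": 30, "avg": 62, "max": 110}
gtx480 = {"min": 20, "avg": 45, "max": 58}

# If one card's average exceeds the other's maximum, the FPS ranges don't even overlap.
decisive = hd5970["avg"] > gtx480["max"]
margin = (hd5970["avg"] - gtx480["avg"]) / gtx480["avg"] * 100
print(decisive, f"{margin:.1f}% faster on average")
```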


----------



## gtarmanrob

Quote:


Originally Posted by *Ocnewb* 
I just ran the Heaven Benchmark with my 5850 @ 950/1300 (trying to reach 5870 speed). Here's the screenshot, and I'm wondering why it's so different from the 5870 in Nvidia's benchie.
My benchie:








Nvidia's benchies:

The other benches used 1x anisotropic filtering, I think. You used 16x.

Also, a 5850 @ 950/1300... those are above 5870 clocks. I had to flash my 5870's BIOS to go past 900/1200; now running 940/1330.


----------



## humbleguy

I'll wait for a good benchmark site to review the card before coming to a decision.
Personally I love the 5970 though. All that POWER!


----------



## Ocnewb

Quote:


Originally Posted by *gtarmanrob* 
The other benches used 1x anisotropic filtering, I think. You used 16x.

Also, a 5850 @ 950/1300... those are above 5870 clocks. I had to flash my 5870's BIOS to go past 900/1200; now running 940/1330.

Nah, if you look at the two pics I linked below my benchie, you can see they ran at 4x AA and 16x AF too. It's just that the GTX 470 is supposed to sit between the 5850 and the 5870, but it's not really any faster than the 5850.


----------



## Riou

Quote:



Originally Posted by *Ocnewb*


Nah, if you look at the two pics I linked below my benchie, you can see they ran at 4x AA and 16x AF too. It's just that the GTX 470 is supposed to sit between the 5850 and the 5870, but it's not really any faster than the 5850.


The Nvidia benches were run with no AA and no AF (1x) for some reason.


----------



## gtarmanrob

Quote:



Originally Posted by *Ocnewb*


Nah, if you look at the two pics I linked below my benchie, you can see they ran at 4x AA and 16x AF too. It's just that the GTX 470 is supposed to sit between the 5850 and the 5870, but it's not really any faster than the 5850.


Nah, the OP ran his own test at those settings with his HD 5970.

The original tests were run with no AA and 1x AF. It's in the first post.


----------



## jspeedracer

Well, what about Nvidia's dual-GPU offering? There has always been one. I haven't read up on the new cards they have lined up, but I'm sure there has to be a dual-GPU version that will be their 'flagship' card.


----------



## CryWin

Quote:



Originally Posted by *jspeedracer*


Well what about Nvidia's dual gpu offering? There has always been one, I haven't read up on any of the new cards they have lined up but I'm sure there has to be a dual gpu version that will be their 'flagship' card.


They can't even get the single cards out, so it will probably be a long while before they get a dual card out.


----------



## saulin

Quote:


Originally Posted by *CryWin* 
They can't even get the single cards out, it will probably be a long while before they get a dual card out.

How long did it take for the GTX 295 to come out after the GTX 280?

I think about six months; I can't remember exactly. But how long did it take for the 5970 to come out after the 5870? Not long at all. How do we know Nvidia isn't already making the dual-GPU card?

But yeah, if we go by the last Nvidia release, it could be about six months, although I doubt they would wait that long unless the GTX 480 really is a lot faster than the 5870.


----------



## NoahDiamond

The GTX 480/470 will need a serious downclock and a die shrink, with reduced ROPs and memory allocation/clocks, to be made into a dual-GPU card. The single-GPU GTX 480 draws as much power as the HD 5970 and already requires the full 300-watt power configuration.

If the standards for PCI-Express power consumption are increased, then maybe we can see a bump in performance, but no manufacturer can release a card that consumes more than 300 watts at peak. ATI designed their cards to run at full clock speeds and to be overclocked as well, but they could not "SELL" them to the mass market with those settings, hence the Afterburner profiles and ATI Overdrive being included in the drivers, whereas nvidia requires an additional set of software that is quite unstable.
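As a rough sanity check on the "serious downclock" claim: dynamic power scales roughly linearly with clock and quadratically with voltage (P ≈ P₀ · (f/f₀) · (V/V₀)²). The single-card wattage and the clock/voltage drops below are assumed figures for illustration, not Fermi specs:

```python
# Back-of-envelope dynamic power scaling: P_new = P_old * (f_new/f_old) * (V_new/V_old)^2.
def scaled_power(p_old, f_old, f_new, v_old, v_new):
    """Estimate power draw after a clock/voltage change (rough first-order model)."""
    return p_old * (f_new / f_old) * (v_new / v_old) ** 2

# Assume (hypothetically) a single chip drawing 250 W at 700 MHz / 1.00 V.
single = scaled_power(250, 700, 600, 1.00, 0.90)  # downclock + undervolt
dual = 2 * single
print(f"{single:.0f} W per chip, {dual:.0f} W for a dual-GPU card")
```

Even with a 100 MHz downclock and a 0.1V undervolt, two such chips would still land around 347W under these assumptions, well past the 300W ceiling, which is exactly the point being made above.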

The Catalyst 10.3 Pre-Release boosts performance on the HD 5970 significantly, and now allows for crossfire/crossfirex profiles to be released independent of the primary driver.

I personally run an HD 5970 with stock cooling at 1.174V core and stock memory voltage, with clocks of 850 core and 4800 memory, identical to having two 5870s on one board, with the PCI-Express bus overclocked to let the card breathe. I run an nvidia GT 240 for Physx and Cuda compatibility in games, letting me play the full suite of gaming software and eliminating the complexity of choosing games. Yes, I use hacked nvidia drivers, but nvidia should never have introduced the lock-out feature in the first place.

If nvidia continues their trend of limiting their hardware and software and being totally proprietary, they are doomed to fail just like 3dfx. You would think they would have learned a lesson from them. At least they still sell their chips to other manufacturers, but they are locking down warranties on non-reference designs, so they might as well be producing their own cards.

I love nvidia, ATI, Intel and AMD, but seeing as AMD/ATI and Intel are teaming up against nvidia, and nvidia has no rights to x86 architecture or newer intel/AMD socket designs, their future as a motherboard and integrated graphics provider is doomed.

nvidia really should wise up on their marketing. Fermi was ready for release last year, but it was not within nvidia spec and failed to compete with the 5870. This forced nvidia to redesign everything, including the BGA packaging, causing huge problems for Fermi board manufacturers who received pre-release package reference layouts.

At this point, nvidia have pissed off ATI/AMD, Intel, and board manufacturers. If you look at nvidia's stock versus ATI's, you will see how each company is doing; from a business standpoint, that is how you gauge a company's financial success. nvidia are losing money and have poor availability for their high-end products.

ATI has problems with availability of the 5970, but not for the same reason as nvidia with the GTX 295. The GTX 295 has low manufacturer availability, whereas HD 5970s are purchased within 15 minutes of coming into stock, making it very challenging to get them to everyone.

Now that we have OpenCL and Direct Compute 5.0, as well as multiple open source platforms, it is highly likely that game developers will make the jump to open source to increase their market. It's all about money.

To me, the funniest, and most intriguing thing about AMD and Intel is that all of the post Directx 10 games released, especially this year and the second half of last year, have started with the following logos in sequence...

"AMD - The Future is Fusion", Followed DIRECTLY by the Intel logo.

I also find it funny that nvidia downplayed Directx 11 and open source and focused on Physx, only to suddenly change their minds and switch to a new platform. This is not to say ATI have not done the same thing: the dual-chip ATI cards, as well as the crossfire/crossfirex designs, were released only a year after ATI downplayed the concept of using multiple GPUs. Both companies have made some strange design decisions over the years, but as it stands, nvidia is on the tail end.

Having coded for Directx 11, I find it a much more enjoyable experience than Directx 10/10.1. If you look at the library implementation of 11 versus 10.x, you will see 11 is much friendlier and more closely resembles OpenGL.

Speaking of OpenGL, we need more OpenGL games these days. It seems id Software is still working with OpenGL, as John Carmack has a raging love affair with open source, and I personally prefer OpenGL over Directx. But since many companies want an easy port between the Xbox 360 and the PC, they are stuck using backwards-compatible Directx code, hence many games supporting Directx 9, 10, and 11. OpenGL is still used on the Playstation 3, as they lack a proper license to run Directx on their "pseudo-unix" platform, and their graphics are quite good looking.

Since the PC is once again becoming the dominant gaming preference for hardcore gamers, and sports the latest in technology, it is safe to say that game developers are free to develop their software in a more open environment, then port them to either console.

I'm sure we all remember when GLide died, as developers favored the original SGI OpenGl open source standards over the proprietary GLide 3dfx standard. Keeping a locked down code that only runs on your product is a ridiculous and careless way of running the gaming industry.

Enough with the eye candy... we want GAMEPLAY AND ENTERTAINMENT! ATI has focused on gameplay and entertainment while throwing in lots of eye candy, whereas nVidia focuses on eye candy. We have movies for that. Leave the eye candy in the demo programs and bring back gameplay in games. As far as I can tell, neither "eye" nor "candy" appears in the word "game".


----------



## thiru

Quote:



Originally Posted by *NoahDiamond*


The GTX 480/470 will need a serious down clock and die shrink with reduced ROPs [...]


Good read, rep+


----------



## NoahDiamond

As a developer, I see a lot come and go. People need to consider the standards imposed on them before jumping into the game. nVidia had a great product with the first version of the GTX 295, since you could software-voltmod it, but in an effort to cut costs (the original 295 was only breaking even against its development cost) they revised the card to version B, a single PCB. With proper thermal control they could have done better, but they missed a great opportunity to die-shrink the GTX 285s and SLI them on one board.

nVidia did a die shrink sales test in the real world market with the GT 240 and found it to be a great success. It does not need any additional power connectors, and it is easy to work with, making it a great mid-range graphics card and a phenomenal dedicated physx card.

When I play Physx-enabled games on my HD 5970 with a GT 240 for Physx, even Cryostasis is perfectly playable at 1920x1200 at maximum settings... I just can't seem to get past the first enemy, since I have no idea how to use the darned ice gun, or whatever it is.

Matrox tried to join and failed, NEC tried and failed, STB tried and failed, Intel tried and failed, Vision tried and failed miserably, yet ATI and nVidia carry on. They got something right in the card market.

By the way, EA is a crappy company to work for. I can only assume it feels like hell, though I can't imagine hell being any worse. Barracuda Soft is the most fun to work for these days, as they actually feed us and stock two ply toilet paper.

Having worked for many game developers over the years as a beta tester, coder, physics technician, graphics engineer, and sound manager, I can safely say that being a simple coder at BCS is awesome. We don't set unreasonable deadlines, and we release our products on schedule. If only nVidia hadn't acquired Ageia, life would be much better.

We are switching to open source, as are many game developers. Anyway, I have to go update the main server to run the new Windows updates overnight. Oh joy. There is nothing more fun than working with streaming update servers on Windows. The job takes 5 minutes... today... and 3 hours tomorrow to iron out everyone's workstations, since they have so many things customized.

By the way, do you guys have Nerf wars or get bombed with M&Ms at random where you work?


----------



## Ocnewb

Quote:


Originally Posted by *gtarmanrob* 
nah the OP ran his own test at those settings with his HD5970.

the original tests were run with no AA and 1x AF. its in the first post.

Ah, OK, I thought you were talking about my benchie versus the GTX 470's bench. The one in the OP was the bench between the GTX 480 and the 5870 (that's what they said). The one I posted (different settings) was between the GTX 470 and the 5870 (that's what they said, too). In that benchmark they said the GTX 470 beat the 5870, so that's why I ran my 5850 to compare.


----------



## sora1607

Question for the OP: was your CPU under high load coping with the HD 5970 while running @ 4.0 GHz?


----------



## gtarmanrob

Quote:



Originally Posted by *Ocnewb*


Ah, OK, I thought you were talking about my benchie versus the GTX 470's bench. The one in the OP was the bench between the GTX 480 and the 5870 (that's what they said). The one I posted (different settings) was between the GTX 470 and the 5870 (that's what they said, too). In that benchmark they said the GTX 470 beat the 5870, so that's why I ran my 5850 to compare.


aaaahhh







I'm with ya now. Thought you were comparing to the original Nvidia GTX 480 benches in the OP.


----------



## Ihatethedukes

I just ran it for fun on my trifire setup, all stock clocks except the CPU at 3.34GHz (as close as I can get), and averaged 95 FPS. The lowest FPS I saw was a momentary 42, followed by a "jittery" 45 (one above, one below, skipping back and forth for a few seconds). Take that for what it's worth.


----------



## gtarmanrob

Quote:



Originally Posted by *Ihatethedukes*


I just ran it for fun on my trifire setup, all stock clocks except the CPU at 3.34GHz (as close as I can get), and averaged 95 FPS. The lowest FPS I saw was a momentary 42, followed by a "jittery" 45 (one above, one below, skipping back and forth for a few seconds). Take that for what it's worth.


Off topic... how the hell do you have your i7 920 @ 4.7GHz on stock cooling? Surely you're taking the piss.


----------



## Quantum Reality

off-topic a bit...

Quote:

Intel tried to join and failed
Because their integrated video was such unutterable crud compared to even low-end dedicated GPU cards. Had Intel gotten their act together and realized how useful motherboards with powerful integrated GPUs would be, they could have competed against the low-end nVidia and ATI setups. As it was, they simply fell further and further behind, to the point where the GMA950 was only playable at 800x600 with 16-bit color even in very old games like Freelancer.

As proof of concept I once tested SimCity Societies on some Intel GMA boards and while it was semiplayable at those low resolutions I induced severe artifacting on one board after 30 minutes of gameplay, indicating that Intel hadn't even bothered specifying minimum thermal dissipation requirements (I think I damaged the integrated GPU; I put a 7600GT on that mobo and it has worked fine since).

In short, Intel was already giving up on trying to be a serious competitor in the gaming-GPU market by 2005/6.

That's bad, since ATI and nVidia have, as the past shows, proven susceptible to the temptation to act as a duopoly and engage in price-fixing.

A serious third-party competitor would probably keep them on their toes a bit more.


----------



## NoahDiamond

Quote:



Originally Posted by *Quantum Reality*


off-topic a bit...

Because their integrated video was such unutterable crud compared to even low-end dedicated GPU cards. Had Intel gotten their act together and realized how useful motherboards with powerful integrated GPUs would be, they could have competed against the low-end nVidia and ATI setups. As it was, they simply fell further and further behind, to the point where the GMA950 was only playable at 800x600 with 16-bit color even in very old games like Freelancer.

As proof of concept I once tested SimCity Societies on some Intel GMA boards and while it was semiplayable at those low resolutions I induced severe artifacting on one board after 30 minutes of gameplay, indicating that Intel hadn't even bothered specifying minimum thermal dissipation requirements (I think I damaged the integrated GPU; I put a 7600GT on that mobo and it has worked fine since).

In short, Intel was already giving up on trying to be a serious competitor in the gaming-GPU market by 2005/6.

That's sort of bad, since ATI and nVidia have, as shown in the past, proven susceptible to temptation to be duopolists and engage in price-fixing behavior.

A serious third-party competitor would probably keep them on their toes a bit more.


Yes, Intel's discrete design was an utter flop, but now that they have teamed up with ATI for integrated graphics, they have a great market with their chipsets, which consume less power and are compatible with the newer CPUs. For an nVidia graphics solution to go into a mobile computer, it either needs to be based on the older Core architecture, use their cell-phone technology (which never took off), or be a mini-PCI-Express card installed as a discrete board, which would kill battery life.

Getting back on topic, the only Fermi boards I have seen are low-clocked beta designs solely for coding. They exist to allow compatibility work for future games. The hardware units were based on the 285 and 295 boards to test what would happen, but the dual boards did little more than cook eggs. The new setup we have uses software emulation, as Fermi has changed so much, so I can't post scores from it that are not "virtual". Our software runs the code in processing time, not real time, then outputs it to a video file for inspection, including artifacts and frame drops. This gives us an idea of how a game will perform on Fermi. We also use the 5870 as a basis for testing our Directx 11 software.

To be quite honest, the Fermi board will be great competition for the 5870, but since, as it presently stands, it lacks a dedicated hardware tessellation unit, it will have to cut into its own processing power to render tessellation. As far as we can tell, the Fermi design is still more software than hardware. From the look of the virtual drivers we are using, I can only assume the new drivers will enable Directx 11 on ALL cards from the 8000 series up, using Cuda cores to emulate the processing, as they have done for all versions of the renderer.

As for speculation, I can only guess what the FINAL product will be capable of, but I know one thing for CERTAIN... it will be much more CPU-dependent than the 5870. Those with Core 2 Duo machines will see a huge performance hit. Overclocked Core 2 Quads with lots of cache and RAM will not see a big hit, provided games are not using all the cores... which they are likely to start doing soon.

nVidia have ALWAYS been software-oriented; ATI has mostly been hardware-oriented. If you put a 5870 on a Core 2 Duo, it doesn't take anywhere near the performance hit of a GTX 295, or a 285 to be fair to the single-GPU setups. nVidia SLI is a software implementation, using large amounts of bandwidth and CPU power to unite the two cards, whereas CrossfireX (the new method of crossfire on modern ATI cards) uses significantly less CPU power. The bus bandwidth is still used, but it depends more on the chipset than the CPU.

IF, and that is a BIG IF, nVidia can get their software straightened out, fix their fan-control issues (one reason Fermi is being held back, due to backwards compatibility of the drivers when nTune is installed), and resolve their disputes with integration partners and board manufacturers over the new GPU BGA package design, they will be able to make a success of it, but they certainly will not see a profit for at least a year.

The fact remains that though nVidia has the market share, ATI has the stock and the profits, and has always made more money than nVidia. The merger with AMD was a smart decision, as it gave them a guaranteed partner for integrated graphics and access to fab facilities all over the world.

nVidia is restricted to TSMC. nVidia's poor availability of Fermi, and of the 295 boards for that matter, is due to integration and reliability issues, whereas ATI learned from the competition, from the computing world, and from their own past.

The name of the game really is MONEY, and ATI is winning.

(Insert rumor going around the office)
There is a rumor going around the office that nVidia may file for bankruptcy within the next two years if Fermi is not a success. That does not mean they would cease to exist, but I am not sure anyone would bail them out. If the rumor turns out to be true, and I have strong doubts about it, I will be puzzled.
(End rumor going around the office)

Since ATI has more fab facilities, more software vendors backing them, more chipsets supporting CrossfireX, more CPU support, more money, a more successful launch strategy, and fewer failing components, nVidia may have a problem.

I personally would like to see nVidia partner with Intel so they can get access to their FABs and end their disputes. If that happens, nVidia would be unstoppable. They would also be in a different price bracket, and would have better software support.

As for the brass tacks of performance: yes, the Fermi GTX 480 will have faster tessellation than the 5870, but it will come at a performance hit when gameplay and Physx are also involved. The increased memory is not there to facilitate newer games; it is there to resolve performance issues, and the increased memory bandwidth is needed too. When it comes down to real gameplay, as it stands, ATI's 5870 will still be head to head with it. nVidia cannot afford to push Fermi back any farther, as it would completely jeopardize their standing as a solid graphics company and would cause stockholders, investors, and ultimately clients and consumers to rethink their strategies.

Take into account that when Fermi is released, the 5870 will probably drop in price, as will the 5970. The only reason you can't find a 5970 in stock is that they are all back-ordered, and the only way to ensure you get one right now is to pay exorbitant prices.

The grand finale is the fact that Fermi MUST be backwards compatible with all previous software. From what we have seen in emulation, Fermi is great in Directx 11, but not so much in 10.1/10 or 9.0c. I am sure it's a software issue, but nVidia have had major problems with their software... so much so that their drivers caused the largest share of Windows Vista crashes during its launch year. Fermi will be released when the software is ready.

Just remember one thing... Fermi started life as a math engine for corporate work, not as a gaming solution. nVidia had neither the time nor the money to develop an entirely new Directx 11 design. They chose their path, insisting Directx 11 would not be a major advantage. Now they are kicking themselves for not acting sooner.

(Insider information)
Fermi is NOT, in any way except for the Cuda cores, based on the previous nVidia architecture, so if they can figure out their backwards-compatibility issues, they will probably be able to make a killing. One thing you can count on is 2D performance: it will be, and presently is, STELLAR! Their hardware video decoding is implemented very well. The software is not.
(End insider information)


----------



## MoBeeJ

WOW^ i have to read all THAT?!?!?! nvm


----------



## Gunslash

Following Nvidia's track record, I think it's reasonable to say the card will be $500+ for the first few months... that's a no-go in my book. I wouldn't bother upgrading my two "old" 280s for such a premium. Most last-generation high-end ATI/Nvidia cards (4870/4890/280/285/260) are still fully capable of running just about anything worth playing, and now they're cheap enough to Crossfire/SLI, never mind the current 5xxx-series ATI cards, which are reasonably priced and also capable...
I honestly don't see this being a good year for Nvidia...


----------



## NCspecV81

Quad 5870s on a Phenom II @ 4.3GHz, 1030/1300 GPU clocks, averaged 135 FPS at those settings.


----------



## PhaedraCorruption

Quote:



PhaedraCorruption says:
When are you doing the fermi runs?
[nick shih] fermi 470 x4 ln2? Says:
Maybe not
wasting time
PhaedraCorruption says:
Then what will you be useing in the next run?
Ati cards again?
With their awesome drivers?
[nick shih] fermi 470 x4 ln2? Says:
Ati driver ****ed
PhaedraCorruption says:
Yes. Yes they are. 
[nick shih] fermi 470 x4 ln2? Says:
05 need rebench
PhaedraCorruption says:
What did you get on the last run?
Oh, you ran out of ln2
[nick shih] fermi 470 x4 ln2? Says:
Driver make my score crap
PhaedraCorruption says:
Why do you think useing fermi will be waste time?
[nick shih] fermi 470 x4 ln2? Says:
470 is junk



olololololol

This sums up my thoughts

Quote:



Originally Posted by *Chucklez*


Haha so we even have benches saying Fermi sucks, I'm not trying to be a fanboy but this is just great!











I believe he already has a 470


----------



## grunion

Wonder why he says the 470 is junk, no oc headroom?


----------



## PhaedraCorruption

Quote:



Originally Posted by *grunion*


Wonder why he says the 470 is junk, no oc headroom?


He actually said quite a bit more, but I had to cut it out due to the cursing.










----------



## philhalo66

lol, looks like ATI totally spanks Nvidia once again


----------



## Imglidinhere

Quote:


Originally Posted by *NoahDiamond* 
The name of the game really is MONEY, and ATI is winning.

Out of all that, this one sentence is the main thing I took away. Yes, they are winning... right as I decide to switch over to Nvidia too. XD

Then again, AMD owns ATI, so they have two sources of income at the moment, whereas Nvidia only has one.


----------



## NoahDiamond

Ok, just to clarify, I am capable of writing short posts.

nVidia needs to get on track.
ATI has the ball.
nVidia has for the time being dropped out of the chipset business.
nVidia SLI is not supported on AMD chipsets, and requires a license to run on Intel's.

Fermi was promised to perform well, but so far it isn't proving to be much for all the time it took, and the price nVidia is asking is too high for what the product offers.

ATI is already available, and most coders, including those at Barracuda Soft, are most comfortable coding for Directx 11 on ATI cards.

ATI is about to overtake the market share. nVidia is losing business too quickly to regain its footing.

There. Is that short enough for some of you to read?

By the way... I spoke to both God and Jesus today, and they both hate everybody, and all of us are destined to go to hell for trying to sort out such an easy concept.

3dfx was, is, and always will be the best graphics card manufacturer, the Sega Dreamcast will be resurrected, and Duke Nukem Forever is being released this week.

YAY! I made a funny.


----------



## E_man

Quote:



Originally Posted by *NoahDiamond*


There. Is that short enough for some of you to read?


Don't let some of them get under your skin. They were really good posts for the people who care enough to read rather than just drop one-line comments.


----------



## NoahDiamond

Quote:



Originally Posted by *E_man*


Don't let some of them get under your skin. They were really good posts for the people who care enough to read and not just pop comments


I appreciate the fact that you took the time to use your hard earned education to read something that is shorter than most comic books or periodicals in a pornography magazine.

For the rest of you, if you really want to spend all your money on something that isn't much more... give it to me. I will gladly take it. I want all your money.

You can spend 1 million dollars on a Ferrari Enzo, which had VERY limited production, or you can spend $50,000 on an Ariel Atom and get the same power to weight ratio with better handling, better reliability, lower maintenance costs, and all around more fun to drive.

You can spend half a million dollars on a Lamborghini Gallardo and only get the bare-minimum package, or you can buy a Mitsubishi Lancer Evolution X FQ-400 for around $60,000 with all the trimmings, sat-nav, traction control, and a head-up display, AND get four doors and a trunk; plus it requires less maintenance and is cheaper to run.

nVidia are marketing a Bugatti Veyron, and have failed to take into account the fact that roads do in fact have curves and the car will indeed be required to go around them without colliding with something on either side. What is more, the performance they are offering is available... in a perfectly optimized and scripted environment, with a straight and level road with no imperfections that is close to 6 miles long to allow you to experience what they claim it can do, and the fuel and tire costs will be more than you pay for rent each month. It will pass everything but a petrol station. The only reason nVidia have included seat belts is so the paramedics can more easily find the body.

ATI are marketing a high performance, reliable, affordable, very easy to work with product that has anticipated that not every program will be written perfectly, and makes up for it with dedicated hardware and a better design interface. They have produced a Gumpert Apollo which, though it cannot reach the same top speed, is highly customizable in its configuration, takes corners the way you want to take them, and will get you home in one piece, no body bag needed. They designed it to perform, and that is what it does. It doesn't just perform well in one or two scenarios, it performs well in any scenario. It is welcome on any road, and will certainly be more than enough to make your neighbors jealous.

You can wait for Fermi and spend a crap ton of money on something that is only slightly faster in a straight line, or you can get a beautifully designed piece of engineering that is enough to get even a woman's rocks off, and it will do more than just a quarter mile drag. It will actually turn around corners, and even stop when you want it to. What's more, you will save money in the end with ATI because your power bill will be lower. Think about it. Unless you have a gun held to your head with the nVidia militia forcing you to buy the Fermi, use some common sense and pick the product that is proven, reliable, and has not pissed off Apple, Microsoft, Intel, AMD and many board manufacturers.

If you plan on having support for your product in the next 2 years, I recommend you buy ATI. You remember what happened to 3dfx, Number Nine, Vision, NEC, SuperFX, Atari, and all the others who tried to dominate with hype. You probably even remember the sickening name S3 ViRGE (vomits up his quesadillas). Oh, what a joy of a product that turned out to be.

Wait. Wait. MATAI!!!! I have an idea. Lets all go outside and see if the sun or moon still exists. I bet there is an atmosphere out there. I bet there is even the opportunity to get laid, listen to or create music, and generally live life. You know what will happen to Directx 11. It will have the same fate as Directx 9 and 10. Illusion software will exploit it to release Sexy Beach 4 in Japanese, only to be pirated and translated for the American men and lesbians to masturbate to. Then they will release a more realistic rapelay and Artificial girl with more detailed nipples. nVidia Physx will be used to make bouncing boobs, and your high end sound cards will be releasing the sweet success of slushing and slurping noises.

THEY ARE JUST VIDEO CARDS! The fact still remains that ATI has a better product. Besides, how many of you held on to your 3dfx cards for their sentimental value? Mine are on a cork board on display for all to see, and yes, they all work... granted they need PCI 32-bit or AGP 2X slots to run, and only support VGA with questionable TV-out quality, but I still love them. I suppose I am in a paradox here. I know it's a moot point to discuss the performance of the two cards. The only useful choice is to either discuss the prices, or just stick with the card you have for another 2 years. Unless it dies due to you upsetting the gods of Integrated Circuits, I am certain the card you have RIGHT NOW is more than sufficient. If it is not, just buy the previous generation of cards or wait for price drops. Only insane people like me go out of their way to own every top-end graphics product ever produced, and I don't regret it. I just wish they didn't come with crack cocaine dust that gets you addicted the moment you open the box.

ADHD Version:
Buy the HD 5870 or lesser, unless you need serious power, in which case go for the HD 5970... but only if you NEED it. If all your favorite games are running just fine, why throw another cog in the machine that only costs more money, for a difference you will be unable to notice unless you insist on living on the cutting edge of technology? Believe me, it's a razor-sharp edge. One slip and you might limp away a eunuch.

------------------------------
Who wants to make a bet with me? It's not like I know anything about these newfangled calculators or these so called graphiks kards, or whatever the kids are calling them toys these days.


----------



## NoahDiamond

Quote:



Originally Posted by *Imglidinhere*


Out of all that, this one sentence is the primary thing I saw. Yes they are winning... Right as I decide to switch over to Nvidia too. XD

Then again, AMD owns ATI so they have two sources of income at the moment whereas Nvidia only has one.


Would you really give your money to a company who will declare their Fermi 480/470 cards as EOL (end of life) in two years? Just stick with the card you have now. By the way, what card do you have now? If it's above a GTX 260 216-core, I would not bother buying a new card unless you want bragging rights. Hell, ATI's 4000 series and nVidia's 8000/9000 series are still holding strong. Why go out and do it? Are you some sort of investigative reporter? nVidia wants your money. They do not care about you.

All this over a bloody free T-shirt. Your new Fermi card will come with a dedicated fuel cell power generator and liquid nitrogen to keep it cool, where the ATI card is compatible with our old school Amish ways of AC power outlets and ATX power supplies.

Yes, I know, I am being a bit mean now. I apologize. I don't mean to upset you... it just so cute seeing you make these statements.

"Yes, I know the company is losing money and shows signs of decline and possible failure in the future, so I am going to invest in them." It's a good thing the Nazis invested in Zeppelin.

Hail nVidia! Hail the Fuhrer.

Are you going to start rounding up ATI fans into internment camps and force them at soldering gun point to produce nVidia products when TSMC drops them from their fab? Are you going to make them chisel them out of silicon by hand? wave solder the boards together using the natural changes of the solstice? Come on.

Oh boy, I sure do hope you all have a sense of humor.

It's all in good fun. I look forward to the insults I receive. I sincerely enjoy receiving them more than giving them. Maybe I will learn some good ones. Show me something that is a slap to my face... or you could just challenge me to a duel in some Quake III Arena. I'm sure your computer can handle that.


----------



## sosikwitit

Can't believe I read these long-ass posts all the way down, and the only thing that caught my attention was Fermi being compared to a Ferrari and Lamborghini, and it's much cheaper and more affordable, so I'll be getting one. Well marketed, Noah. Thanks.


----------



## NoahDiamond

Quote:


Originally Posted by *philhalo66* 
lol looks like the ati totally spanks nvidia once again

Here, have a free Initiation paddle. Everyone has them these days. They are like light sabers, but made of wood. Chicks dig them too. This one time, at band camp...

There is no need to bash nVidia or ATI at this point. This is the pinnacle of a new horizon. We are but a fetus in the birth canal. For us at this moment to comprehend that in 40 years we will be living in downtown Manhattan, working for Sanford Financial, living in a decent apartment flat, collecting minor works of cubist art, is just insane. We don't know for CERTAIN what will happen. That is why we are called the prophets of gaming. Let's just hope we have a good idea as to what we are doing.

Personally, I think the whole gaming community is about to collapse due to the economy, and everyone is trying to get out their latest technology before they begin to focus more on software, laying off employees, and borrowing money from the government.

There really hasn't been a game, in my opinion, as spectacular as the original BioShock in a long time. Before that, what was the best game? Did it require insane amounts of power to run?

Heck, I still play Super Mario Bros. to this day and get enjoyment out of it, and that was a Hanukkah present 23 years ago.

Maybe I'm old fashioned, but the whole battle of ATI over nVidia is unnecessary. I suggest that all the CPU, GPU, APU and PPU companies combine forces and produce the Voltron of all computer gaming systems.

I want sharp edges, glass shards sticking out the side, obsidian as the face plate, the Monolith from 2001: A Space Odyssey as the core, nuclear cooling and self-contained power generation, and wheels constructed of unobtainium. No, screw wheels. It has to hover, and it has to come with a force feedback codpiece, virtual reality body suit, and a lithium drip directly into your brain to ensure your happiness regardless of whether you win or lose.

Darn, I need a drink.


----------



## NoahDiamond

Quote:


Originally Posted by *sosikwitit* 
Can't believe i read these long ass posts all the way down and the only things that caught my intention was Fermi being compared to a Ferrari and Lamborghini and it's much cheaper and affordable so i'll be getting one.

I see the force is not strong with this one. Where did you matriculate?

Your selective reading skills indicate a strong tendency toward ADHD and dyslexia. When you are ready to move onto intellectual readings, we can compare diplomas.


----------



## liberalelephant

Quote:


Originally Posted by *NoahDiamond* 
Personally, I think the whole gaming community is about to collapse *due to the economy*, and everyone is trying to get out their latest technology before they begin to focus more on software, laying off employees, and borrowing money from the government.

I think more than the gaming community is going to collapse when the next wave hits. I'm excited personally. I am making wallpaper out of my USDs.


----------



## Drug

wow, nvidia have failed once again, and all my cards have been nvidia; just sold my 2x 295's!! Same price as the 5970 and just barely beats the 5870?? Might stick with Crossfire 5870s


----------



## NoahDiamond

I hope I don't do something stupid like screw up and die, and end up missing the whole grand finale of the world collapsing into disorder. Seriously... I bought tickets. They are in the nosebleed section, but I wanted to be far enough from the splatter while watching them on the big screen. There is nothing like a good old fashioned new world order to set things straight.


----------



## liberalelephant

Quote:


Originally Posted by *NoahDiamond* 
I hope I don't do something stupid like screw up and die, and end up missing the whole grand finale of the world collapsing into disorder. Seriously... I bought tickets. They are in the nosebleed section, but I wanted to be far enough from the splatter while watching them on the big screen. There is nothing like a good old fashioned new world order to set things straight.

You could always move to Australia. The view from there will be pretty nice and none of the Illuminati really want anything to do with it. "So what's the downside?" you ask... Well, they have about 13 out of the top 10 deadliest animals on the face of the planet lol.


----------



## NoahDiamond

Quote:


Originally Posted by *Drug* 
wow nvidia have once failed again and all my cards have been nvidia, just sold my 2x 295's!! Same price as the 5970 and just barely beats the 5870?? Might stick with crossfire 5870s

If you have a decent power supply, install the Catalyst 10.3 pre-release, go to the ATI Overdrive page in the Catalyst Control Center, and set the core and memory on both chips to 850/4800 respectively, then install the latest MSI Afterburner, I believe it is 1.5. Then, set the voltage to 1.174 V.

Enjoy the same clocks as two 5870s on the same card, with significantly less latency between them, lower power consumption, and less overall heat generated.

I suggest overclocking your PCI Express bus to 115 MHz if your internals can take it. The PLX bridge on the HD 5970 can easily take it. I have found anything higher than this causes the internal RAID to mess up, and generally causes the system to become unstable.

I recommend these settings because ATI shipped the card undervolted and underclocked on both RAM and cores to meet the ATX 300 W standard for consumer products. You are welcome and even encouraged to go above and beyond on your own.

-----------------------------------------

This is not recommended on nVidia cards, unless you like to watch fireballs shoot out the side of your card. It's a very expensive show, and it only happens once.


----------



## NoahDiamond

Quote:


Originally Posted by *liberalelephant* 
You could always move to Australia. The view from there will be pretty nice and none of the Illuminati really want anything to do with it

We want your kangaroos and your sand. We also want your women. Damn, some Aussie chicks are freakin' smokin' hot.

I still love Chris's song "Bloke".

I'm a global man myself.

Did you know that ATI started life as a Canadian company? I guess you nVidia love children really can blame them for something.

I too love nVidia, but I know when it's time to move on. When your mate or partner comes home with boils, sores, and weird smells coming from down there, and she has all kinds of STDs and was last seen covered in paper cuts jerking off in an AIDS clinic, then perhaps it's time to move on.


----------



## sosikwitit

Quote:


Originally Posted by *NoahDiamond* 
I see the force is not strong with this one. Where did you matriculate?

Your selective reading skills indicate a strong tendency toward ADHD and dyslexia. When you are ready to move onto intellectual readings, we can compare diplomas.

The depth of your posts leads me to believe the same. Can I have some of your medication? And then we can go on typing big, carefully placed words that make it sound like video cards are gonna revolutionize the world and humans will start mating in nests and flying to outer space. My GeForce is very strong, by the way; that's what happens when team red fails you after being given enough chances. But I'm just one measly customer that doesn't make a difference anyway, so I'd rather throw what little I do have away to the fools than to the exceptional diploma hierarchy that judges and looks down on me. Intellectual enough for you? O wait... Most likely not.


----------



## liberalelephant

Quote:


Originally Posted by *NoahDiamond* 
Did you know that ATI started life as a Canadian company? I guess you nVidia love children really can blame them for something.

Pfft Canada isn't real. Lol you make funny jokes. Hey everybody lets go to "Canada" and hunt some "unicorns" hahahahaha.


----------



## sosikwitit

Quote:



Originally Posted by *liberalelephant*


Pfft Canada isn't real. Lol you make funny jokes. Hey everybody lets go to "Canada" and hunt some "unicorns" hahahahaha.


Actually we live in igloos and fish for dinner through a hole in the snow, but you already knew that.


----------



## NoahDiamond

Quote:



Originally Posted by *sosikwitit*


The depth of your posts lead me to believe the same, Can i have some of your medication? And then we can go on typing big carefully placed words that make it sound like video cards are gonna revolutionize the world and humans will start mating in nests and fly to outer space. My Geforce is very strong by the way, that's what happens when team red fails you after being given enough chances but i'm just one measly customer that doesn't make a difference anyway so i rather throw what little i do have away to the fools than the exceptional diploma hierarchy that judges and looks down on me. intellectual enough for you? O wait...Most likely not.


Quaint. You could use some work on your grammar, but overall, I'd say an 85 average. You got slightly above the B grades there.

Just because you were burned by a single product one time isn't cause to dump the entire company from your bedside.

I personally had more nVidia graphics cards fail than I have had cars break down on me, and I live in the United States... In FLORIDA! When a company sells 10 million products, some are bound to be either faulty or installed improperly. I'm not calling you a technical fool... maybe a literary anarchist, but you are not an idiot. Step back and look at the big picture.

You remember the Chevrolet Cavalier? You know why they had so many failures? Because they sold more Cavaliers than almost any other vehicle for a long time, due to them being affordable. Their failure-to-sales ratio was actually very low. ATI's failure ratio is exceptionally low. nVidia's failure ratio is approaching 15%.

Now, back to the original question here. What was it? Oh yeah... which of us has bigger man boobs. I'm calling pizza hut right now. The race is on.

God bless the humorous souls that grace this planet. Without them, all hope would be lost. I personally have lost all sense of humor since I sexually molested my inner child.

As far as I am concerned, team Green are the stoners, and Team red are the opium dippers. I'm team multicultural open-minded educated fun. And yes, you can have some of my medication. I get it all the time at the library.


----------



## liberalelephant

Quote:



Originally Posted by *sosikwitit*


Actually we live in igloos and fish for dinner through a whole in the snow, But you already knew that.

Those are Mexicans....

wait...

...my map was upside down.

There we go. Now that's a map any elementary school graduate can understand


----------



## Ulver

It's sad to see that some people can't get past defending companies that they don't own. Fanboys suck!

Anyway, great thread OP!
Seems like ATI still has the upper hand. I will gladly stick with them until nVidia comes up with something significantly better (and then go back to ATI when they do the same, haha).
Nothing like some market competition to keep things moving forward!


----------



## NoahDiamond

Quote:



Originally Posted by *sosikwitit*


Actually we live in igloos and fish for dinner through a whole in the snow, But you already knew that.

You Aussies all have Melanoma, walking around outside with no shirts on, your skin audibly crackling in the sun. You don't even have to light your grills at night. That's why everything you cook is over fire. You don't even have to light the fire. You just toss the stuff on the grill and it bursts into flame... and when you are not doing that, you fling yourselves like rag-dolls directly into the local oceans and seas, which are inhabited exclusively by things designed to kill you.

Sharks, Jellyfish, Swimming Knives, they're all out there.

I love Dylan Moran's comedy.


----------



## NoahDiamond

Quote:



Originally Posted by *Ulver*


Is sad to see that some people can't get past defending companies that they don't own. Fanboys suck!

Anyway, great thread OP!
Seems like ATI still has the upper hand. I will gladly stick with them until nVidia comes up with something significantly better (and then go back to ATI when they do the same, haha).
Nothing like some market competition to keep things moving forward!

When it comes to fanboy suckage, I can suck the chrome off a trailer hitch.

I am a fanboy of nVidia, ATI, AMD and Intel. I love them all, I research them all, and, sadly, I have invested in all of them throughout my lifetime. Now here I sit, a tired and broken man, with fast typing fingers that would get even the most desensitized hooker off in 2 seconds, and I am defending what I own. To clarify, I own a GTX 295, an HD 5970, a GT 240, and an Athlon machine. What the heck is wrong with me?

I think I found the problem. I AM NOT GETTING LAID ENOUGH!


----------



## Voidsplit

Quote:



Originally Posted by *NoahDiamond*


When it comes to fanboy suckage, I can suck the chrome off a trailer hitch.

I am a fanboy of nVidia, ATI, nVidia and Intel. I love them all, I research them all, and, sadly, I have invested in all of them throughout my lifetime. Now here I sit, a tired and broken man, with fast typing fingers that would get even the most desensitized hooker off in 2 seconds, and I am defending what I own. To clarify, I own a GTX 295, a HD 5970, a GT 240, and an Athlon Machine. What the heck is wrong with me?

I think I found the problem. I AM NOT GETTING LAID ENOUGH!



Seriously? what happened to this thread...


----------



## NoahDiamond

We figured out that it doesn't matter who wins or loses. It's who has the biggest breasts.

No, but seriously, getting back on topic. The Fermi really does have potential. Having coded for it in pre-alpha, I can say that the new extensions are a breakthrough for nVidia, but they are not new to ATI.

nVidia wants to do things differently, but this time around, they have to adapt to the world ATI set for them. Since ATI invented this wheel, there is no proper way for nVidia to re-invent it. At this point, all they can do is catch up. Lucky for nVidia, ATI made their source code public, allowing anyone to code for it. This means nVidia was free to look over the optimizations and see what was needed to tweak their cards.

The best part about nVidia is they chose to run the tessellation and Directx 11 features on the CUDA cores instead of a dedicated hardware set, meaning that you can SLI a bunch of older 8000/9000 cards together and probably get somewhat decent Directx 11 support out of them. The CUDA cores have not changed so significantly in their programming that they could not be reverse-programmed to run on the older cards and their existing cores.

nVidia is opting to go the software route, which at first seemed silly to me, but now I know why. They are trying to bring Directx 11 to their existing clientele and consumers. The performance isn't sensational, but when you GTX 295 owners suddenly get Directx 11 support, you will be pleased you stuck with your current card, and wonder why nVidia worked so hard to produce a new card to begin with. The reason they produced a new card is pure processing power. ATI destroyed nVidia in the raw power department. Now nVidia wants to catch up, and what better way than to software-retrofit their existing CUDA/Directx 10 cards to support Directx 11. This is how we were able to do our coding using the 295 and multiple 295s in SLI. The older cards CAN run Directx 11 in software. They already run Directx 10 in software.
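For anyone wondering what "running tessellation in software" actually means: instead of a fixed-function tessellator unit, general-purpose code subdivides the geometry itself. Purely as a toy illustration of the idea, not anything resembling nVidia's actual driver path, here it is in Python:

```python
# Toy software tessellation: uniform subdivision of a triangle.
# Each level splits every triangle into 4 by inserting edge midpoints —
# the kind of work a fixed-function tessellator does in hardware and a
# "software" path does on general-purpose cores instead.

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def tessellate(tri, levels):
    """Return the list of sub-triangles after `levels` subdivisions."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for sub in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(tessellate(sub, levels - 1))
    return out

tris = tessellate(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)), 3)
print(len(tris))  # 4**3 = 64 triangles
```

The point is simply that the operation is ordinary arithmetic, so any programmable core can do it; the hardware/software question is only about how fast.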

ATI came up with such a revolutionary new architecture that nVidia doesn't have anything to compete with it. A single 5870 is about as fast as a GTX 295 dual-GPU solution. This frightened nVidia, and then the dual-GPU HD 5970 came out, and all was lost for nVidia in their eyes as they watched their clients buy ATI products over theirs.

Pile on top of that all the lawsuits, the strained relations with TSMC, product availability and cost, manufacturing flaws, a huge history of epic hardware recalls that never reached the mainstream, and declining finances as investors are ready to pull out if they don't launch Fermi by June, and you have a huge mess that will take many janitors to clean up... and they have to be paid too.

ATI pulled a marketing strategy in performance... wait for nVidia to become arrogant, then release the big shebang while they were completely caught off guard, still climbing onto their pedestal.

Intel doesn't want to license their CPU sockets to nVidia, AMD doesn't want to support SLI, and ATI has the processing power and technology that has been out long enough for people to grow tired of waiting and go out to see what Directx 11 is all about.

Even if customers only bought the base model Directx 11 ATI cards, just to see what the images look like, they still invested.

nVidia needs more finances to continue, and they simply don't have it.

What it all boils down to is that ATI has a solid product already on the market; they not only designed the roads... they paved them, and the way to the future of gaming. nVidia is no longer at the forefront, and has to catch up and get people to believe in them again.

I personally think Fermi has a shot with the people who have not purchased Cypress based boards from ATI, but they will be paying a high premium for not a lot more processing power. If the people want it, they will pay.


----------



## sosikwitit

Notice the heat waves in the air?

YouTube: NVIDIA GeForce GTX 480 running the Rocket Sled demo at CeBIT 2010


----------



## NoahDiamond

Quote:



Originally Posted by *sosikwitit*


Notice the heat waves in the air?

YouTube- NVIDIA GeForce GTX 480 running the Rocket Sled demo at CeBIT 2010


Ok ok, enough about the fact that they made a product that can cook meat. I personally have cooked a hot dog off the heat expelled from my HD 5970 when it was overclocked to 925/5000. I sincerely can cook with it. The difference is the method of cooling. ATI designed a revolutionary way to pull heat away from the board, where nVidia still use fins and heat pipes.

Furthermore, yes, nVidia cards DO run hotter at stock speeds. This is because the previous stock speeds were not fast enough to compete. The final product will use the higher-binned GPUs, which have fewer flaws; those go into the GTX 480, while the chips that don't bin as well get laser-cut and downclocked, then installed in GTX 470s. This is how nVidia has always done things. It's called increasing the yield. It is also why they have so many packaging and die failures in the mainstream, commercial and industrial market.
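Binning, in other words, is just sorting dies by how much of them works. A toy sketch of that harvesting logic in Python; the core counts here are made-up placeholders for illustration, not nVidia's real cut-offs:

```python
# Toy model of die binning / yield harvesting: test each die, sell the
# clean ones as the flagship part, fuse off broken units on flawed-but-
# usable dies and sell them as the cut-down part. Thresholds invented.

def bin_die(working_cores, full=512, cutdown=448):
    if working_cores >= full:
        return "GTX 480"   # fully working die, top bin
    if working_cores >= cutdown:
        return "GTX 470"   # laser-cut, downclocked bin
    return "scrap"         # too flawed to ship

print(bin_die(512))  # GTX 480
print(bin_die(470))  # GTX 470
print(bin_die(300))  # scrap
```

The more flawed dies you can still sell in a lower bin, the higher your effective yield per wafer, which is the whole point of the exercise.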

I say we give nVidia a chance. Let them put the cards on the store shelves with enough stock that we can all go out and buy one. Make them available to us, and make them affordable within their performance brackets. Let's see if nVidia can really do all they claim with their products. The only way we will bring this to a conclusion is to purchase one of the new products and test it out for ourselves in the real world.

nVidia, you say you are ready. We are willing to comply and work with you as consumers. Let's see what you have. I promise not to bash you. I simply want to see for myself. No more videos or speeches. No more concept designs and hopeful launch dates. Give us the dates, give us the specs, let us know what we are buying, and put the products on the darned shelves.

Just to settle this whole thread, and all the threads, myself: when it comes out, I will buy a 480 and compare it on a FRESH WINDOWS INSTALL. I will record it on video so you can see me doing it. Then we can all see how the cards perform in an identical machine. I have the 295 and the 5970. I am ready to get a 480 to see how much of a performance boost can be gained.

I do know how to turn off the second chip on the 5970. Simply disable the secondary chip in the device manager and poof, Catalyst Control Center only sees one chip and one memory bank. Then I clock it to the same clocks as a single 5870, and run the test. After that, I enable the second chip, drop the clocks to stock, and test that configuration. Finally, I boost the PCI Express bus speed to 115 MHz, OC the 5970 to the dual-5870 clocks it was designed around, and test both cards on the 115 MHz bus.
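The proposed comparison really boils down to a small test matrix. Sketched as data in Python (the 850/4800 clocks and the 115 MHz bus figure come from the plan above; the field names and "stock" placeholders are my own shorthand):

```python
# The proposed head-to-head, expressed as a test matrix.
# Clocks are (core MHz, effective memory MHz); "stock" means
# out-of-the-box settings, left symbolic here on purpose.

configs = [
    {"card": "HD 5970, second chip disabled", "clocks": (850, 4800),
     "pcie_mhz": 100, "note": "matched to a single HD 5870"},
    {"card": "HD 5970, both chips",           "clocks": "stock",
     "pcie_mhz": 100, "note": "out-of-the-box configuration"},
    {"card": "HD 5970, both chips",           "clocks": (850, 4800),
     "pcie_mhz": 115, "note": "dual-5870 clocks, overclocked bus"},
    {"card": "GTX 480",                       "clocks": "stock",
     "pcie_mhz": 115, "note": "same overclocked bus"},
]

for c in configs:
    print(f'{c["card"]}: clocks={c["clocks"]}, PCIe={c["pcie_mhz"]} MHz')
```

Laying it out this way makes the controlled variables explicit: same machine, same OS install, and only the card, clocks, and bus speed change between runs.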

I think that sounds fair, doesn't it? Why should you spend your hard-earned unemployment money when I can simply go out and buy one to end the debate? I will be happy to sell the 480 to anyone who wants it for the price I paid, if the buyer pays shipping. Taxes were already paid and I don't intend to make a profit. Either that, or I add it to my stockpile of top-end graphics cards.

What do you all think about that?


----------



## pcNub

Quote:


Originally Posted by *NoahDiamond* 
We figured out that it doesn't matter who wins or looses. It's who has the biggest breasts.

No, but seriously, getting back on topic. The Fermi really does have potential. Having coded for it in pre-alpha, I can say that the new extensions are a breakthrough for nVidia, but they are not new to ATI.

nVidia wants to do things differently, but this time around, they have to adapt to the world ATI set for them. Since ATI invented this wheel, there is no proper way for nVidia to re-invent it. At this point, all they can do is catch up. Lucky for nVidia, ATI made their source code public, allowing anyone to code for it. This means nVidia was free to look over the optimizations and see what was needed to tweak their cards.

The best part about nVidia is they chose to run the Tessellation and Directx 11 features on the cuda cores instead of a dedicated hardware set, meaning that you can SLI a bunch of older 8000/9000 cards together and probably get somewhat decent Directx 11 support out of them. The Cuda Cores have not changed significantly in their programming that they could not be reverse programmed to run on the older cards and their existing cores.

nVidia is opting to go the software route, which at first seemed silly to me, but now I know why. They are trying to bring Directx 11 to their existing clientele and consumers. The performance isn't sensational, but when your GTX 295 owners suddenly get Directx 11 support, you will be pleased you stuck with your current card, and wonder why nVidia worked so hard to produce a new card to begin with. The reason they produced a new card is because of pure processing power. ATI destroyed nVidia in the raw power department. Now nVidia wants to catch up, and what better way than to software retrofit their existing CUDA/Directx 10 cards to support Directx 11. This is how we were able to do our coding using the 295 and multiple 295s in SLI. The older cards CAN run Directx 11 in software. They already run Directx 10 in software.

ATI decided to come up with such a revolutionary new design in architecture that nvdia doesn't have anything to compete with it. One single 5870 is about a fast a GTX 295 dual GPU SLI solution. This frightened nvidia, and then the HD 5970 dual GPU came out, and all was lost for nVidia in their eyes as they watched their clients buy ATI products over theirs.

Pile on top of that all the lawsuits, the problems with relations at TSMC, product availability and cost, manufacturing flaws, a huge history of epic hardware recalls that never reached the mainstream, and declining finances as investors are ready to pull out if they don't launch Fermi by june, and you have a huge mess that will take many janitors to clean up... and they have to be paid too.

ATI pulled a market strategy in performance... Wait for nVidia to become arrogant, then release the big shebang when they were completely caught off guard and climbing to stand on their pedestal.

Intel doesn't want to license their CPU sockets to nVidia, AMD doesn't want to support SLI, and ATI has the processing power and the technology that has been out long enough for people to grow tired of waiting and go out to see what the Directx 11 is all about.

Even if customers only bought the base model Directx 11 ATI cards, just to see what the images look like, they still invested.

nVidia needs more finances to continue, and they simply don't have it.

What it all boils down to is the fact that ATI has a solid product already on the market, and they not only designed the roads... they paved them and the way to the future of gaming. nVidia is no longer the fore front, and has to catch up and get people to believe in them again.

I personally think Fermi has a shot with the people who have not purchased Cypress based boards from ATI, but they will be paying a high premium for not a lot more processing power. If the people want it, they will pay.

----------



## pcNub

I tried reading this wall of text, as it looked intelligent enough and lo' and behold, I understood most of it.








I really do hope that Fermi drops the prices of cards... it's a real pain in the arse using this 9500gt that lags in CoD 4...


----------



## NoahDiamond

Quote:


Originally Posted by *pcNub* 
I tried reading this wall of text, as it looked intelligent enough and lo' and behold, I understood most of it.








I really do hope that Fermi drops the prices of cards...it's a real pain the arse using this 9500gt that lags in CoD 4...









Thank you for taking the time to read my articles. I rarely make posts, but thanks to you, I am now.

Thanks.


----------



## Ulver

Quote:


Originally Posted by *NoahDiamond* 
blahblahblah... when it comes out, I will buy a 480, and compare it on a FRESH WINDOWS INSTALL. I will record it on video you can see me doing it. Then we can all see what happens when, in an identical machine, how the cards perform. I have the 295 and the 5970. I am ready to get a 480 to see how much of a performance boost can be gained. I do know how to turn off the second chip on the 5970. Simply Disable the secondary chip in the device manager and poof, Catalyst Control Center only sees one chip and one memory bank. Then I clock them to identical clocks as the single 5870. Then I do the test. After that, I enable the second card, drop the clocks to stock, and test that configuration. Finally. I boost the PCI Express bus speeds to 115, OC the 5970 to the clocks it was designed from the factory to run at of dual 5870s at full speed, and test both cards on the 115 bus.

I think that sounds fair, doesn't it? ... etc

Nope, it doesn't.
It's not fair comparing a dual-chip card to anything if one of its chips is turned off. It's like comparing a Corvette with a Ferrari and saying you will lock up 4 cylinders of the Ferrari's V12 just to make the comparison fair.


----------



## NoahDiamond

Quote:


Originally Posted by *Ulver* 
Nope, it doesn't.
Its not fair comparing a dual-chip card to anything if one of its chips is turned off. Its like comparing a Corvette with a Ferrari and saying you will lock-up 4 cylinders of the Ferrari's V12 just to make the comparison fair.









You see, when I disable a single chip and configure the board to run as a single board, it is the same as disabling SLI on the 295. However, if you truly insist that I make a brass-tacks assessment, I will have to get a 480 and a 5870. If it means that much to you, I will do it, but I assure you it will perform the same as a 5970 with one chip disabled. The other chip gets all the bandwidth, and the PLX chip switches from being a hub to a channel. Channels have no latency; they just align their pathways to allow the single chip to run solo. Not a problem. I will do the 5870 as a discrete independent test if you REALLY want, but someone is going to have to buy it when I am done. Otherwise, it gets added to the ever-growing pile. There is a 9800GTX+, a GX2, an 8800GTX, even a Voodoo 5 5500!

Speaking of which, who would like a montage of the changes of gaming since the 3dfx add-on card was released, up to today's modern cards? Or even before? How about I go from Pong, all the way up to the 5970, covering all the consoles with my friends and a bunch of TV sets. Something to set the mood for the era we are in now.

Or, someone can donate a card for the comparison. I assure you will get it back, with a nifty surprise included in the box as a gift for your assistance.

Your opinion, please?
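The test plan above (one chip disabled, dual chips at stock, dual chips overclocked on the 115 bus) comes down to averaging several benchmark runs per configuration and comparing the means. A minimal sketch of that bookkeeping, with made-up FPS numbers standing in for real Heaven results:

```python
# Average several benchmark runs per test configuration and rank them.
# The FPS numbers below are placeholders, not real Heaven results.

def summarize(runs):
    """Return {config: mean FPS} for a dict of config -> list of FPS runs."""
    return {config: sum(fps) / len(fps) for config, fps in runs.items()}

runs = {
    "5970, one chip disabled": [38.1, 37.9, 38.4],
    "5970, stock dual":        [61.0, 60.6, 61.3],
    "5970, OC dual, PCIe 115": [66.2, 65.8, 66.5],
}

# Print configurations from slowest to fastest average.
for config, avg in sorted(summarize(runs).items(), key=lambda kv: kv[1]):
    print(f"{config}: {avg:.1f} FPS avg")
```

Averaging three or more runs per configuration is the point: a single Heaven pass can swing by a frame or two, and the per-config mean is what makes the comparisons meaningful.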


----------



## d0gZpAw

Quote:


Originally Posted by *NoahDiamond* 
Ok ok, enough about the fact that they made a product that can cook meat. I personally have cooked a hot dog off the heat expelled from my HD 5970 when it was overclocked to 925/5000. I sincerely can cook with it.

Well, since hotdogs are comprised of cured meat and are "fully cooked" before packaging, technically what you did was -reheat- a hotdog on your video card. Not quite as "cool" as the Thermidor egg cooker (using AthlonXP CPU), because, to be frank (pun), you shouldn't be eating hotdogs! All it's comprised of is fat, lips and bungholes!

Next time, cook a steak! Maybe using the "grille" on top of that nVidia demo box!

Quote:


Originally Posted by *NoahDiamond* 
I will do the 5870 as a discrete independent test if you REALLY want, but someone is going to have to buy it when I am done.

I will take it off your hands, very cheaply of course! XD

Quote:


Originally Posted by *Ulver* 
Nope, it doesn't.
Its not fair comparing a dual-chip card to anything if one of its chips is turned off. Its like comparing a Corvette with a Ferrari and saying you will lock-up 4 cylinders of the Ferrari's V12 just to make the comparison fair.









Dude, that's not fair at all!! The Ferrari 612 has a 5.7L V12, and that's smaller displacement than the base Corvette's V8. If you disabled a third of the Ferrari's cylinders, you would end up with a 3.8L "gimp" V12 vs a 6.0L V8. Again, not fair! You guys should work on your car analogies!


----------



## NoahDiamond

Quote:


Originally Posted by *Ulver* 
Nope, it doesn't.
Its not fair comparing a dual-chip card to anything if one of its chips is turned off. Its like comparing a Corvette with a Ferrari and saying you will lock-up 4 cylinders of the Ferrari's V12 just to make the comparison fair.









Just so you know, it's perfectly fine to compare a Corvette Z06 to a Ferrari 430, or a V12 if you like, no cylinders disabled. In the automotive world, things are done a bit differently. I use metaphors in a linear and logical fashion.

Disabling one chip doesn't have a real comparison in the automotive world unless there is a car with two identical motors, one is for sale with only one motor, and you remove one drivetrain to make them equal and see how they perform on level ground. But locking up cylinders? That is just awful; I would never wish that upon any car. I could see setting the electronic valve timing to alternate cylinder firing, but that would be unfair due to the drag coefficient of the engine's internals, mainly the cylinder pressure and the drag placed on both the crank and the cams.

I ask that you stick with metaphors that fit properly. I am not insulting you, and I am very pleased to see you reading my posts to a degree. At least you are open to outside thoughts and have not closed your mind. Open ears are nothing with a closed mind, and an open mind is nothing with a sealed soul.


----------



## sosikwitit

Quote:


Originally Posted by *NoahDiamond* 
I think that sounds fair, doesn't it? *Why should you spend your hard earned unemployment money* when I can simply go out and buy one to end the debate. I will be happy to sell off the 480 to anyone who wants it for the price I paid, if the buyer pays shipping. Taxes were already paid and I don't intend to make a profit. Either that, or I add it to my stockpile of top end graphics cards.

What do you all think about that?









I'm loling my ass off.


----------



## NoahDiamond

Quote:


Originally Posted by *d0gZpAw* 
I will take it off your hands, very cheaply of course! XD

I will gladly sell it for the price I pay for it, if it comes to that. Right now, there is no nVidia card to compare it to, so it's a moot point. When the time comes, I will do it.


----------



## grunion

Keep this technical and on topic.

No need for innuendo, comments about indigenous people, etc.............
Thanks


----------



## NoahDiamond

Quote:



Originally Posted by *sosikwitit*









I'm loling my ass off.


I didn't intend to make a joke. Ok, I did. If anyone here is living in America, and you are on these forums debating these cards on weekdays, before and after lunchtime in all time zones, I have to assume you have more free time on your hands than most bosses would allocate. Unless, of course, you are your own boss, in which case I say good luck to you.


----------



## NoahDiamond

Quote:



Originally Posted by *grunion*


Keep this technical and on topic.

No need for innuendo, comments about indigenous people, etc............. 
Thanks


Did my lewd comments, followed by huge amounts of technical on topic jargon fall into this category? If so, then I will never mention breasts on this forum again.


----------



## Ulver

Quote:


Originally Posted by *NoahDiamond* 
Just so you know, it's perfectly fine to compare a Corvette ZO6 to a Ferrari 430 or a v12 if you like, no cylinders disabled. In the automotive world, things are done a bit differently. I use metaphors in a linear and logical fashion.

Disabling one chip doesn't have a real comparison in the automotive world unless there is a car with two identical motors, and one is for sale with only one motor, and you remove one drive train to make them equal to see how they perform on level ground, but locking up cylinders? That is just awful. I would never wish that upon any car. I could see setting the electronic valve timing to alternate cylinder timing, but that would be unfair due to drag co-efficient of the engine's internals, mainly the cylinder pressure and drag placed on both the crank and the cams.

I ask that you stick with metaphors that fit properly. I am not insulting you, and I am very pleased to see you reading my posts to a degree. At least you are open to outside thoughts and have not closed your mind. Open ears are nothing with a closed mind, and an open mind is nothing with a sealed soul.









Man, too much coffee is bad for u!

Let me put it simply, nVidia's 480GTX is one product and ATI's 5970 is also just one product, regardless of the number of GPUs inside them. So, I believe that comparing one to the other is valid and disabling features as a means of "making it fair" ends up making it pointless.








I am talking about comparing video cards as a whole, not parts of it, as is the topic of this thread (please read the first few lines of the first post).

If you want to make a more technical comparison of the cards architecture, make your custom circuit board and use statistics, double-blinds and other experimental techniques to prove your point... well, thats all very impressive and we're all going to give you the attention you want.









Just make a new thread cause here, you're off-topic.


----------



## NoahDiamond

All alone. Am I talking to myself? I think I am talking to myself. I AM! I AM talking to myself!

- = (Gnarly surfer accent)
-- = (Me, normal human being)
-Dude, what's up?
--Not much, just trying to compare the 5870 to the GTX 480.
-Dude, uh, what?
--It's computer stuff.
-Wait, let me turn the music down. Ok, say it again?
--I'm comparing the latest in discrete single GPU graphics solutions.
-Um... dude? Turning down the music didn't help. I'ma take a rip off this stuff.
--Come on, grow up. This stuff is both insignificant and critical to me at the same time.
-Ok, ok... try to explain it to me.
--Ok, so there are these two companies who are working on the latest in video game technologies, like that Pac-Man you were playing earlier, only much more advanced. Ever played Grand Theft Auto III, or any of its successors?
-Dude, I totally love cheating in there and chiefing on the tank. That's so rad and out there.
--ye...Yeah. Ok. Now imagine that game, but looking 10 times better, almost realistic.
-Leme clean this bong, man.
--(covers eyes and whispers to himself) God, please just strike him with intelligence.
-I HEARD THAT PUNK!
--Heard what? What did I say?
-I totally agree, Mindless self Indulgence rules.
--Ok, back on track. Now one company is ready and available now, and the other company is not ready, but they insist that they will be ready soon, that their product will cost $600 or more, and yet it won't be much faster than the competitor's stuff.
-Duuuud, Talk about conspiracy, man!
-- No... ok, there is no conspiracy here. It's business against business. Let's say you have been preparing for a bicycle tournament, and 2 months before time trials you have to check in your new bike, and you have unlimited funds to get the best bike you can get, no expense spared.
-Freakin sweet! Can my ladies watch me as I tackle the tournament?
--Um, sure, whatever, but there's a problem... You want to get a high-performance bike from bike maker A, and they have it ready for the track, proven and tested by Lance Armstrong himself. Then suddenly bike maker B comes in and says their product is almost ready, it's going to cost almost twice as much, but it will ride a tad faster in stock condition, and you are not allowed to modify it aside from comforts. But there is a problem: they have no bike to show you. He says it will be awesome, and to trust him, but by no means will he show you the bike, and he especially won't let you take it for a ride. Who do you go with?
- Dude... ow... that's nuts man. I'd pick A because they have something right in front of me and they are about to let me take a test run all I want.
-- Ok, now you go with bike maker A, and suddenly bike maker B comes to you and says they will show you the B bike. You can't see it yourself... you can only watch videos of it. Are you interested in watching them?
-Heck no, let's go party on bike A. It's already proven to be faster, lighter, it conserves my energy, it has more storage for water, the chain is PERFECT, MAN! Just freaking awesome.
--OK. So you take out bike A, called the 5970, and you win the tournament with little effort. You come back, and now everyone wants to buy the bike you rode. Bike maker A steps into gear and starts shipping bikes by the millions. Bike maker B has not even publicly displayed a final bike at all, yet it will be expensive. Nobody listens to them because they have nothing to show the public. Sure, lots of videos and stuff, but nothing the buyers can feel or ride.
- Oh Man, dude, it seems like Bike maker B is in trouble.
--I would have to agree.

Let's look at the numbers.
Shipped last year and this year combined:

Bike maker A (AMD/ATI 5870 & 5970) has sold over 1 million products. They are in such high demand that stores cannot keep them in stock.

Bike maker B (nVidia GTX 480 and 470) has sold, are you ready for this number, it's easy for you stoner boy: 0. Zero. Zilch. Not a single product has shipped, and they are having manufacturing problems.

Got the basics, my man?

-Dude, I got it all, baby. Peace out.


----------



## sosikwitit

Bet someone read this post.


----------



## NoahDiamond

Quote:


Originally Posted by *Ulver* 







Man, too much coffee is bad for u!

Let me put it simply, nVidia's 480GTX is one product and ATI's 5970 is also just one product, regardless of the number of GPUs inside them. So, I believe that comparing one to the other is valid and disabling features as a means of "making it fair" ends up making it pointless.








I am talking about comparing video cards as a whole, not parts of it, as is the topic of this thread (please read the first few lines of the first post).

If you want to make a more technical comparison of the cards architecture, make your custom circuit board and use statistics, double-blinds and other experimental techniques to prove your point... well, thats all very impressive and we're all going to give you the attention you want.









Just make a new thread cause here, you're off-topic.

I will test with all cards independently. Period.


----------



## NoahDiamond

I just had my period! YAY! John didn't get me pregnant...

Wait, I thought I was a guy.

This second life is so weird.


----------



## NoahDiamond

Quote:



Originally Posted by *Ulver*









Man, too much coffee is bad for u!

Let me put it simply, nVidia's 480GTX is one product and ATI's 5970 is also just one product, regardless of the number of GPUs inside them. So, I believe that comparing one to the other is valid and disabling features as a means of "making it fair" ends up making it pointless.








I am talking about comparing video cards as a whole, not parts of it, as is the topic of this thread (please read the first few lines of the first post).

If you want to make a more technical comparison of the cards architecture, make your custom circuit board and use statistics, double-blinds and other experimental techniques to prove your point... well, thats all very impressive and we're all going to give you the attention you want.









Just make a new thread cause here, you're off-topic.


I don't drink coffee. I just naturally think fast.


----------



## philhalo66

Quote:



Originally Posted by *NoahDiamond*


Here, have a free Initiation paddle. Everyone has them these days. They are like light sabers, but made of wood. Chicks dig them too. This one time, at band camp...

There is no need to bash nVidia or ATI at this point. This is the pinnacle of a new horizon. We are but a fetus in the birth canal. For us at this moment to comprehend that in 40 years, we will be living in downtown Manhattan, working for Sanford Financial, living in a decent apartment flat, collecting minor works of cubist art is just insane. We don't know for CERTAIN what will happen. That is why we are called the prophets of gaming. Lets just hope we have a good idea as to what we are doing.

Personally, I think the whole gaming community is about to collapse due to the economy, and everyone is trying to get out their latest technology before they begin to focus more on software, laying off employees, and borrowing money from the government.

There really hasn't been a game as spectacular, in my opinion, as the original BioShock in a long time. Before that, what was the best game? Did it require insane amounts of power to run it?

Heck, I still play Super Mario Bros. to this day and get enjoyment out of it, and that was a Hanukkah present 23 years ago.

Maybe I'm old fashioned, but the whole battle of ATI over nVidia is unnecessary. I suggest that all the CPU, GPU, APU and PPU companies combine forces and produce the Voltron of all computer gaming systems.

I want sharp edges, glass shards sticking out the side, obsidian as the face plate, the Monolith from 2001: A Space Odyssey as the core, nuclear cooling and self-contained power generation, and wheels constructed of unobtainium. No, screw wheels. It has to hover, and it has to come with a force feedback codpiece, virtual reality body suit, and a lithium drip directly into your brain to ensure your happiness regardless of whether you win or lose.

Darn, I need a drink.


actually im not bashing nvidia i actually plan to buy 2 480's maybe 3


----------



## gtarmanrob

Noah dude, we get that you seem to know quite a bit about the subject, and you seem to think you're a pretty funny guy, but do you have to spam the threads with multiple posts? either get it all out in a single post or just wait. or better yet, keep your trap shut. given this is an online forum i realise you are not actually speaking to be able to shut your trap, but im sure a person of your intelligence can read into what i mean


----------



## NoahDiamond

Quote:



Originally Posted by *gtarmanrob*


Noah dude, we get that you seem to know quite a bit about the subject, and you seem to think you're a pretty funny guy, but do you have to spam the threads with multiple posts? either get it all out in a single post or just wait. or better yet, keep your trap shut. given this is an online forum i realise you are not actually speaking to be able to shut your trap, but im sure a person of your intelligence can read into what i mean










Bat country burned that sense of me.


----------



## Dopamin3

Quote:



Originally Posted by *philhalo66*


actually im not bashing nvidia i actually plan to buy 2 480's maybe 3


Biggest CPU bottleneck of the century.


----------



## NoahDiamond

I am looking forward to dual cpu socket boards to alleviate this issue.


----------



## scrotes

Quote:



Originally Posted by *muselmane*


you forget, that with a nice cooler like the prolimatech mk13 a 5870 can be easily clocked to much higher performance. I dont think that the GTX480/470 will be a good OCer even with a good cooling.


Agreed. From the cut-outs it looks like it's going to run way too hot, so that leaves little room for safe overclocking.


----------



## gtarmanrob

Quote:



Originally Posted by *NoahDiamond*


I am looking forward to dual cpu socket boards to alleviate this issue.


yeah im surprised they actually haven't appeared yet. i mean i know there are dual-cpu server boards or whatever they are used for.

but i've yet to see a dual-cpu gaming platform. i guess we dont really need it yet, but then, really, we kinda do if you're into extreme gaming with multiple high-end gpus.


----------



## ericeod

Quote:



Originally Posted by *gtarmanrob*


yeah im surprised they actually havnt appeared yet. i mean i know there are dual-cpu server boards or whatever they are used for.

but i've yet to see a dual-cpu gaming platform. i guess we dont really need it yet, but then, really, we kinda do if you're into extreme gaming with multiple high-end gpus.


eVGA has one:
Take a Sneak Peek at EVGA's Dual Socket LGA 1366 Motherboard

EVGA shows us their Dual Socket 1366 Board


----------



## gtarmanrob

holy crap haha. does anyone on OCN have one of those going?


----------



## ericeod

Quote:



Originally Posted by *gtarmanrob*


holy crap haha. does anyone on OCN have one of those going?


They have not been released yet that I know of.


----------



## Ulver

Quote:


Originally Posted by *ericeod* 
eVGA has one:









Wow!


----------



## grunion

Quote:


Originally Posted by *ericeod* 
eVGA has one:
Take a Sneak Peek at EVGA's Dual Socket LGA 1366 Motherboard

EVGA shows us their Dual Socket 1366 Board










Ugh
I hate the sata port placement.


----------



## NoahDiamond

I've seen dual CPU server boards. The best one I have seen is the Super Micro, but it is not designed for SLI. It's made for Tesla and performance RAID.


----------



## NoahDiamond

Quote:


Originally Posted by *ericeod* 
eVGA has one:
Take a Sneak Peek at EVGA's Dual Socket LGA 1366 Motherboard

EVGA shows us their Dual Socket 1366 Board










The board you see in this picture would be fine with water blocks. Still, the Super Micro extended ATX boards are the bomb. They WILL do Crossfire, but they are not optimized for it. It's not like you will get a real performance hit, but Crossfire is nowhere near as CPU intensive as SLI.

Besides, who wants to pay 53 cents per hour while their machine idles, downloading a torrent?


----------



## NoahDiamond

It got really quiet in here. Let me stir some things up.

ATI, AMD, INTEL and nVidia all SUCK! Only 3dfx rules! WOO SGI.

Now, will someone say something for god sakes?


----------



## Ulver

Quote:


Originally Posted by *NoahDiamond* 
It got really quiet in here. Let me stir some things up.

ATI, AMD, INTEL and nVidia all SUCK! Only 3dfx rules! WOO SGI.

Now, will someone say something for god sakes?

You Sir, are a very peculiar person.









Anyway, continuing on the subject, I think the new 4GB 5970 will, most assuredly, be light-years ahead of any GTX 480, or even GTX 480 SLI for that matter.
Have you guys seen the ASUS version? It's the meanest looking video card ever!


----------



## NoahDiamond

Quote:


Originally Posted by *Ulver* 
You Sir, are a very peculiar person.










Anyway, continuing on the subject, I think the new 4Gb 5970 will, most assuredly, be light-years ahead of any 480GTX or even 480GTX SLI for that matter.
Have you guys seen the ASUS version? Its the meanest looking video card ever!


















It is also a rendered concept drawing in 3d.


----------



## liberalelephant

Quote:


Originally Posted by *NoahDiamond* 
It is also a sexy rendered concept drawing in 3d.

fixed


----------



## Hadenman

Dunno, that Asus 5970 design kinda looks like it's made by Ronco, and should be able to make a mean batch of julienne fries or diced onions.

NVIDIA fans can take solace in the fact that when they release a DX11 COD sequel, it will sink the 5970 battleship. It always amazed me that the COD games make ATI their b**ch, but run like the wind on an inferior NVIDIA card. I mean, they're both DirectX compliant/certified, right? Anyone have a good link that gives some detail into why the COD games perform so badly on ATI cards?


----------



## NoahDiamond

We need more nuclear graphics cards with heat shields and flux capacitors.

And more sexy women. I want more sexy women on my graphics. I don't mean rendered... I mean tangible sexy women who will please me in every way... like programming in C and beating me at games.

Guys... you know you love to get your ass kicked by a hot chick in a shooter. Anything human with proper mammaries and an available, well, you know what is supposed to be down there, that can kick my ass in a game can darned well kick my ass in other things, and I like that.

We need ATI to produce more digital women for us to ogle at, and then have them jump out the screen and get what they want... a mouse and keyboard... for themselves.

Off topic.

Back on topic: we ran into a problem with our Fermi backwards-compatibility coding. Games designed to run on 480s and 470s will run very slowly on older cards unless the final version fixes a memory leak when stepping down to older DirectX versions. Hopefully it's just a software problem.

By the way, how many females are here, or is this a sausage fest?


----------



## NoahDiamond

Quote:



Originally Posted by *Hadenman*


Dunno, that Asus 5970 design kinda looks like it's made by Ronco , and should be able to make a mean batch of julienne fries or diced onions.

NVIDIA fans can take solace in the fact that when they release a DX11 COD sequel, it will sink the 5970 battleship. Always amazed me that the COD games make ATI their b**ch, but would run like the wind on an inferior NVIDIA card. I mean, their both directX compliant/certified right? Anyone have a good link that gives some detail into why the COD games perform so badly on ATI cards?


TWIMTBP games intentionally cripple ATI code paths. Only in later patches are the problems resolved. If you want help with a custom patch for your particular card, contact me with the game, and I will make you an executable that adds or replaces the shader libraries entirely and removes that stupid logo.


----------



## badger6021

Quote:



Originally Posted by *Hadenman*


Dunno, that Asus 5970 design kinda looks like it's made by Ronco , and should be able to make a mean batch of julienne fries or diced onions.

NVIDIA fans can take solace in the fact that when they release a DX11 COD sequel, it will sink the 5970 battleship. Always amazed me that the COD games make ATI their b**ch, but would run like the wind on an inferior NVIDIA card. I mean, their both directX compliant/certified right? Anyone have a good link that gives some detail into why the COD games perform so badly on ATI cards?


They don't perform badly... well, not for me anyways.


----------



## sepheroth003

Quote:



Originally Posted by *Hadenman*


Dunno, that Asus 5970 design kinda looks like it's made by Ronco , and should be able to make a mean batch of julienne fries or diced onions.

NVIDIA fans can take solace in the fact that when they release a DX11 COD sequel, it will sink the 5970 battleship. Always amazed me that the COD games make ATI their b**ch, but would run like the wind on an inferior NVIDIA card. I mean, their both directX compliant/certified right? Anyone have a good link that gives some detail into why the COD games perform so badly on ATI cards?


Ya, sorry, I disagree with that comment. Recently my 4850 died and I've been using my 8800GT backup card. For the most part they both play MW2 at maxed settings at 1680x1050, but the 8800GT does lag every once in a while. It was enough that I turned AA down on the 8800GT.


----------



## Ulver

Quote:


Originally Posted by *liberalelephant* 
fixed

Indeed!

It was the only pic I found at the time, but there's a video of a real working version right here:

YouTube- CeBIT 2010 : Two 5870 GPUs on one board? That's ARES from ASUS


----------



## Nautilus

I think my results are better than GTX 480 @ Stock:
(Used 5870 1000/1300)


----------



## smash_mouth01

I think mine rival yours..... JK










And this is me using their settings


----------



## Nautilus

Quote:


Originally Posted by *smash_mouth01* 
I think mine rival yours..... JK

Well, you passed me by only 0.3 FPS, but I give you the credit anyway. I think my limitation is the CPU. 3330MHz is a rather low frequency; your 3.6GHz triple core is faster in gaming any day. 3.3GHz is the max this chip can do.
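For perspective, a 0.3 FPS gap can be expressed as a percentage of the lower score; at these frame rates that works out to about half a percent, which is within typical run-to-run variance in Heaven. A quick sketch, with hypothetical scores standing in for the actual results:

```python
def percent_gap(a, b):
    """Relative difference between two benchmark scores, in percent of the lower one."""
    return abs(a - b) / min(a, b) * 100

# Hypothetical average FPS scores, 0.3 FPS apart like the posts above.
mine, yours = 60.0, 60.3
print(f"gap: {percent_gap(mine, yours):.2f}%")  # ~0.5%: within normal run-to-run variance
```

Which is why a 0.3 FPS lead is bragging rights at best, and any real difference here comes down to the CPU clocks, as noted.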


----------



## billtsai

Product life cycles become shorter and shorter. The 5970, the best card in the world, was released just a few months ago. However, the GTX 480 release can push 5-series prices down. That's good news, isn't it?


----------



## NoahDiamond

Honestly, you have no reason to upgrade yet unless you want to use all the new features. Sure, the graphics will be limited a bit, but gameplay won't. The graphics chip is still faster than the 360's. I sincerely do update shader code for most game engines in DirectX and OpenGL; I just need to know your OS, card and game. In the end you will get either an MSI or an EXE that does the work for you. I do it all the time nowadays.


----------



## NoahDiamond

Quote:


Originally Posted by *Ulver* 
Indeed!

It was the only pic I found at the time but theres a video of a real-working version right here:
YouTube- CeBIT 2010 : Two 5870 GPUs on one board? That's ARES from ASUS

Please ignore the dumb title from Hexus; it is of course a 5970, and the only things new about it are that it has 4GB and a new cooling design (which looks spectacular, and shorter than the standard 5970 but wider as well).









And Sapphire's one:

YouTube- CeBit 2010 : Hands on with the Sapphire 5970... and more!

Not bad looking. Does it take 3 slots?


----------



## smash_mouth01

Quote:


Originally Posted by *Nautilus* 
Well you passed me only by 0.3 FPS but i give you the credit anyway. I think my limitation is the CPU. 3330Mhz is a rather low frequency. Your 3.6Ghz triple cores are faster in gaming any day. 3.3Ghz is the max. this chip can do.









damn, but good show none the less. I don't think I'll need to upgrade for a while


----------



## NoahDiamond

Like I said... I make my own graphical patches. The only problem is the multiplayer support... but the card is fast enough to keep up with the stock software.


----------



## NoahDiamond

Quote:



Originally Posted by *smash_mouth01*


damn, but good show none the less. I don't think I'll need to upgrade for a while


If you play with your GTLs and invest a week into fine tuning, you can get more out of it, I promise. The largest limiting factor is patience, which the world is rapidly running out of.

If you send me your entire system config, I may be able to help you quickly extract more out of it without much work.


----------



## NoahDiamond

Intel CPUs have the upper hand not only in raw performance... but in overclockability. The only issue is their price. I am sticking with my Q9550 E0 for the time being until high end Intel CPUs come down in price, or AMD comes out with a surprise CPU that suddenly rules the world. As it stands, right now, today, the optimal config is Intel cpu and chipset, and AMD/ATI graphics.

To be fair, Intel has a hugely unfair advantage in technology. I am not saying AMD CPUs are bad. They are excellent cores, but as I said before, they are lacking in development and fab upgrades.

Intel and AMD have an unspoken agreement... Intel stays in the high-performance, high-price CPUs, and AMD stays in the slightly slower, lower-price segments. This is not to belittle AMD at all. I think their CPUs are fantastic, but their finances are mostly tied up in graphics design over CPUs. Intel even has a contract with AMD to integrate ATI Radeons into Intel chipsets. It's a unique day we live in. 10 years ago, this would be unheard of, but today is today, and it is what is happening.

AMD has a set of new CPUs in the works, but Intel does as well.

Intel wants AMD as a competition so they don't have a monopoly, and they want to be able to use their graphics since almost every Intel graphics solution has sucked.

nVidia are in a pinch in this scenario. Clock for clock, AMD/ATI are far superior, but the name for nVidia is so hugely present that they are still considered the top dog, even though last week the marketshare shifted to AMD/ATI for graphics solutions. Slowly, AMD/ATI/Intel are working together to oust nVidia, while at the same time, nVidia is trying to create an entirely new method of computing technology that is not x86.

If nVidia are successful in their extremely expensive endeavor, it will be an amazing prospect, but currently their CPUs and integrated graphics appear to be focused, for today's market at least, on mobile devices that don't use x86 code, where they are free to make anything they want. The only problem is that I have yet to see a single nVidia-based mobile product. By mobile, I mean handheld, not netbooks and laptops.

I do believe nVidia has a shot at regaining the throne... but right this second, they have only a couple shots. Lucky for nVidia, their fan base is so humongous that they could release a set of tinker toys and people would buy them.

As a developer and engineer, I really hope nVidia jumps on the open source bandwagon so we can code games for everyone without having to make so many separate optimizations. Right now, it is a chore to do so. We essentially have to make 3 graphics engines in one... One optimized for Open Source code, one optimized for nVidia code, and one for final compilation to output. It would be so much easier if we could just make a direct to open source engine and code to save on time so we could deliver more games at lower prices and higher volumes.

Right now, it sucks and the money is not there... And on the subject of money, nVidia is running out of money to bribe game developers to make their cards perform superior. Right now, it is just cheaper to use open source and hope nVidia catches up. nVidia cards have enough power on cards 3 years old to run open source code if it is optimized.


----------



## NoahDiamond

I have a great idea... Lets bomb Jack Thompson's house.


----------



## smash_mouth01

Quote:



Originally Posted by *NoahDiamond*


If you play with your GTLS and invest a week into fine tuning, you can get more out of it. I promise. The largest limiting factor is patience, which the world is rapidly running out of.

If you send me your entire system config, I may be able to help you quickly extract more out of it without much work.


You're on, Doctor House. What information do you need?
Also keep in mind my graphics cards are still on stock 5770 V2 cooling (the egg-shaped cooler).
Please pardon my lack of tech talk, but what does GTLS mean?


----------



## chaz

Quote:



Originally Posted by *NoahDiamond*


nVidia is running out of money to bribe game developers to make their cards perform superior.


Seriously guy, instead of countering each of your delusional, malformed-facts paragraphs, I'm just gonna say that with most of the BS you bring around here, not even Hugh Laurie would give one, and that you should really reconsider the amount of off-topic bullery you post on the internet.

Also, look at the complexity and quantity of the SMs on both sides. I know this is bad math, but the error margin is permissible, not taking into consideration ROPs, texture units, etc. Here I'll take some benchmark statistics from another site.

Code:


Crysis:WH   FPS      Cores total   Cores per GPU   Frames/core   Core Clock
5970        37 FPS   3000          1500            ~0.01233      725 MHz
295GTX      26 FPS   480           240             ~0.05417      576 MHz

From what I see, nvidia's SM technology is drastically more efficient per core than ATI's, even with a core clock deficit of around 150MHz.
I haven't gotten any other solid numbers for the 400 series that are comparable in terms of equivalent components and clock settings. This is why I believe Nvidia is, and always will be, the graphics computing leader. And yes, I firmly believe the 5970 is currently the king, but for ATI to hold that accolade more firmly, they'd have to prove their manufacturing process is superior to Nvidia's.
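
For anyone who wants to check the arithmetic behind the table, here is a quick sketch. The FPS and core counts are copied verbatim from the table above (including the 3000-core figure, which is questioned later in the thread); the metric is as crude as the post admits, ignoring clocks, ROPs and texture units.

```python
# Frames-per-core as used in the table above: FPS divided by total
# shader count. Figures are copied from the post, not re-measured.
cards = {
    # name: (avg FPS in Crysis: Warhead, total shader cores)
    "5970":   (37, 3000),
    "295GTX": (26, 480),
}

for name, (fps, cores) in cards.items():
    print(f"{name}: ~{fps / cores:.5f} frames per core")
```

This reproduces the ~0.01233 and ~0.05417 values in the table.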


----------



## Ihatethedukes

Quote:



Originally Posted by *chaz*


Code:


Crysis:WH   FPS      Cores total   Cores per GPU   Frames/core   Core Clock
5970        37 FPS   3200          1500            no            725 MHz
295GTX      26 FPS   480           240             ~0.05417      576 MHz

From what I see, nvidia's SM technology is drastically more efficient per core than ATI's, even with a core clock deficit of around 150MHz.
I haven't gotten any other solid numbers for the 400 series that are comparable in terms of equivalent components and clock settings. This is why I believe Nvidia is, and always will be, the graphics computing leader. And yes, I firmly believe the 5970 is currently the king, but for ATI to hold that accolade more firmly, they'd have to prove their manufacturing process is superior to Nvidia's.


You clearly don't understand the architectures.

First off, ATI shaders are grouped in an arrangement that has 1 'fat' SIMD core and 4 'skinny' cores that are not capable of complex computing. nV's architecture has ONLY 'fat' SIMD cores that are all capable of complex computing.

Secondly, nV's SPs are NOT running at 576MHz. They are running at ~1250MHz (the shader clock). ATI doesn't have heterogeneous clock speeds like nV, so their shaders run at the core speed of 725MHz.

So it's REALLY more like:

Code:


Crysis:WH   FPS      Cores total   Cores per GPU   Frames/(SIMD core*clocks)   Core Clock
5970        37 FPS   640s+2560d    320s+1280d      ~7.97x10^-5                 725 MHz
295GTX      26 FPS   480s          240s            ~4.33x10^-5                 ~1250 MHz

You see, nV solved the problem with a shader that has more frequency and ATI solved it with giving their core little 'helpers'. If you factor in shader speed ATI looks even better (only because their 'helper' cores give the SIMD units more oomph.)
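
The revised figures can be reproduced the same way. A small sketch, using only the "fat" (complex-capable) SIMD counts and the shader clocks from the corrected table above:

```python
# Revised metric from the post above: frames per ('fat' SIMD core x MHz
# of shader clock), counting only the complex-capable units per card.
cards = {
    # name: (avg FPS, 'fat' SIMD cores total, shader clock in MHz)
    "5970":   (37, 640, 725),
    "295GTX": (26, 480, 1250),
}

for name, (fps, simds, mhz) in cards.items():
    print(f"{name}: ~{fps / (simds * mhz):.2e} frames per (SIMD core * MHz)")
```

This reproduces the ~7.97x10^-5 and ~4.33x10^-5 figures in the table.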


----------



## Ovlovian

Quote:


Originally Posted by *Ihatethedukes* 
You clearly don't understand the architectures.

First off, ATI shaders are grouped in an arrangement that has 1 'fat' SIMD core and 4 'skinny' cores that are not capable of complex computing. nV's architecture has ONLY 'fat' SIMD cores that are all capable of complex computing.

Secondly, nV's sp's are NOT running at 576MHz. They are running at ~1250Mhz (the shader clock). ATI doesn't have heterogeneous clock speeds like nV, so their shaders run at the core speed of 725MHz.

So it's REALLY more like:

Code:


Crysis:WH   FPS      Cores total   Cores per GPU   Frames/(SIMD core*clocks)   Core Clock
5970        37 FPS   640s+2560d    320s+1280d      ~7.97x10^-5                 725 MHz
295GTX      26 FPS   480s          240s            ~4.33x10^-5                 ~1250 MHz

You see, nV solved the problem with a shader that has more frequency and ATI solved it with giving their core little 'helpers'. If you factor in shader speed ATI looks even better (only because their 'helper' cores give the SIMD units more oomph.)

+rep


----------



## Monago

Here's a screenshot of my benchmark results. My E8400 still seems to be serving me quite well. I was honestly expecting MUCH more of a bottleneck here. As for my card, unfortunately the ram is a bit crappy, so I'm only able to reach 1100. The cores are solid though, and had no problem running at 900 if I wanted them to.

Anyway, these are *almost* 5870 stock clock results on my 5970 with an overclocked E8400 at 4.05GHz. Oh, and those voltages are at load. I'm running Orthos in the background to show my actual CPU speed.


----------



## NoahDiamond

Quote:


Originally Posted by *Ihatethedukes* 
You clearly don't understand the architectures.

First off, ATI shaders are grouped in an arrangement that has 1 'fat' SIMD core and 4 'skinny' cores that are not capable of complex computing. nV's architecture has ONLY 'fat' SIMD cores that are all capable of complex computing.

Secondly, nV's sp's are NOT running at 576MHz. They are running at ~1250Mhz (the shader clock). ATI doesn't have heterogeneous clock speeds like nV, so their shaders run at the core speed of 725MHz.

So it's REALLY more like:

Code:


Crysis:WH   FPS      Cores total   Cores per GPU   Frames/(SIMD core*clocks)   Core Clock
5970        37 FPS   640s+2560d    320s+1280d      ~7.97x10^-5                 725 MHz
295GTX      26 FPS   480s          240s            ~4.33x10^-5                 ~1250 MHz

You see, nV solved the problem with a shader that has more frequency and ATI solved it with giving their core little 'helpers'. If you factor in shader speed ATI looks even better (only because their 'helper' cores give the SIMD units more oomph.)

You appear to be trying to do some research, but there is a lot of information you are lacking. I don't mean to demean you or make you feel bad, but I do feel the need to straighten out some of the information you posted. It was not entirely wrong, but it was off in places. I feel bad saying that because I know you are trying to help, but here is what I have to offer.

The 5XXX are most certainly capable of complex computing. nVidia didn't create the idea of vertex math boards, but they did certainly widen the roadways and bring it to the forefront of technology for a while.

The limiting factor we have right now is the software. Many developers were stuck in the idea of using CUDA without so much as taking a look at the possibilities of parallel computing on different vertex math boards.

If you are referring to the Tesla boards, they are certainly capable, and have a specialized purpose. ATI has a significant lead in processing power, but it took a major endeavor on their part to get the attention of software developers and hardware integrators. As it stands, ATI just recently overtook the graphics card market share... but they have yet to fully penetrate the vertex math board market, as they have focused their attention on graphics cards. Now that companies are realizing the processing potential of the new ATI boards, and the performance per watt that is available, they are BEGINNING to switch over to open source and ATI boards, but ATI have yet to release a fully dedicated vertex math board like the Tesla.

Calculations per watt, ATI certainly has the lead. What they need now is marketing. nVidia has FANTASTIC marketing and pushes their products so much that they even have TV ads, magazine ads and almost as much internet advertising as Microsoft.


----------



## NoahDiamond

Quote:


Originally Posted by *Ihatethedukes* 
You clearly don't understand the architectures.

First off, ATI shaders are grouped in an arrangement that has 1 'fat' SIMD core and 4 'skinny' cores that are not capable of complex computing. nV's architecture has ONLY 'fat' SIMD cores that are all capable of complex computing.

Secondly, nV's sp's are NOT running at 576MHz. They are running at ~1250Mhz (the shader clock). ATI doesn't have heterogeneous clock speeds like nV, so their shaders run at the core speed of 725MHz.

So it's REALLY more like:

Code:


Crysis:WH   FPS      Cores total   Cores per GPU   Frames/(SIMD core*clocks)   Core Clock
5970        37 FPS   640s+2560d    320s+1280d      ~7.97x10^-5                 725 MHz
295GTX      26 FPS   480s          240s            ~4.33x10^-5                 ~1250 MHz

You see, nV solved the problem with a shader that has more frequency and ATI solved it with giving their core little 'helpers'. If you factor in shader speed ATI looks even better (only because their 'helper' cores give the SIMD units more oomph.)

You do have a proper understanding of the clock architecture, but there is a lot you are leaving out. Your FAT and SKINNY cores are misleading and, more to the point, partly untrue. Though nVidia do have robust shader processors running at an asynchronous clock speed, independent of the graphics core clock, ATI have a much more robust design than you give them credit for. The cores run at a synchronized clock speed, yes, but they are not as limited as you have stated, and they are much more than "helper" cores. The SPs in the ATI 5xxx series are very robust and can handle full shader and vertex math, doing much more than acting as helpers. They are completely capable of performing full operations by themselves, and they do not require a pre-fetch system as the nVidia cores do. They possess a more advanced data execution pathway, with more efficient per-stage pipeline flushing when necessary, and they are configured so that each final SIMD unit can perform multiple tasks. In the end they have more execution resources, branch prediction, bandwidth, and sheer processing power than the nVidia solution.

http://resources.vr-zone.com//upload...4122130798.png

Furthermore, the ATI solution lets the I/O functions reach all the SPs at any point, simultaneously, without needing to address through the other cores over a bus to pass data to the deeper cores. The basic diagram (and this diagram link is VERY basic) shows how the 4 SIMDs you talk about are capable not only of working together, but of functioning as one large SIMD if needed, depending on what the software requires, while allowing processing power to be allocated on the fly so that other functions don't impede the performance of the other cores.

It is more efficient, has a well thought out design, and is very capable of handling the same tasks that nVidia does, except with better execution. The only limiting factor now is the adaptation of software.
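
To put rough numbers on the two design philosophies, here is a back-of-envelope peak-throughput sketch. The FLOPs-per-clock factors (2 for ATI's multiply-add ALUs, 3 for GT200's MAD+MUL shader processors) and the 1242MHz GTX 295 shader clock are my illustrative assumptions, not figures from this thread:

```python
# Theoretical peak shader throughput: ALUs x FLOPs-per-clock x clock.
def peak_tflops(alus, flops_per_clock, clock_mhz):
    return alus * flops_per_clock * clock_mhz * 1e6 / 1e12

# HD 5970: 3200 ALUs at 725 MHz; GTX 295: 480 SPs at ~1242 MHz.
print(f"HD 5970: {peak_tflops(3200, 2, 725):.2f} TFLOPS")
print(f"GTX 295: {peak_tflops(480, 3, 1242):.2f} TFLOPS")
```

On paper that works out to roughly 4.64 vs 1.79 TFLOPS, which is why per-ALU comparisons that ignore clock speed and ALU width are so misleading.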

In the end, the ATI design allows software developers much more freedom to design their code, and gives them dedicated pathways to the protocols in hardware without limiting the coder to specific procedure calls in order to work around the limits of the nVidia design.

The only reason nVidia have had the success they have had is marketing and shoveling money down the throats of every developer they could. I must admit, I would say the Fermi solution was superior too if nVidia paid me such a hefty salary, though I probably wouldn't be able to sleep naturally at night. It wouldn't matter, because I could afford the medication to overcome such deceit. Hell, if they paid me enough, I would say the world is flat, that I was born a little black girl, and that the Chrysler Sebring convertible was and still is the best-designed car ever.


----------



## NoahDiamond

Quote:


Originally Posted by *chaz* 
Seriously guy, instead of countering each of your delusional malformed-facts paragraphs, I'm just gonna say with most of the BS you bring around here, not only Hugh Laurie wouldn't give one but that you should really reconsider the amount of off-topic bullery you present on the internet.

Also, look at the amount of complexity and quantities of the SMs for both sides, I know this is bad math but the error margin is permissible, not taking into considerations ROPS, texture units etc. Here I'll take some benchmark statistics from another site.

Code:


Crysis:WH   FPS      Cores total   Cores per GPU   Frames/core   Core Clock
5970        37 FPS   3000          1500            ~0.01233      725 MHz
295GTX      26 FPS   480           240             ~0.05417      576 MHz

From what I see, nvidia's SM technology is drastically more efficient per core than ATI's, even with a core clock deficit of around 150MHz.
I haven't gotten any other solid numbers for the 400 series that are comparable in terms of equivalent components and clock settings. This is why I believe Nvidia is, and always will be, the graphics computing leader. And yes, I firmly believe the 5970 is currently the king, but for ATI to hold that accolade more firmly, they'd have to prove their manufacturing process is superior to Nvidia's.

I really don't see why you feel the need to post facts when your facts about the nVidia designs are nowhere near on target.

There are 1600 cores per GPU on the 5870/5970 chips. The GTX has a 576MHz core clock and a 1276MHz SP clock, and the nVidia design is based on a dual-GPU array built not from their prime cores but from their GTX 275 GPUs. Even when highly overclocked OCFU BFG and FTW EVGA designs of the 295 are released to compensate, when they are not bursting into flame they are still a mess due to the amount of processing errors they produce.

It doesn't matter how many clocks you have if too many of the clocks have to be flushed.

I don't mean to insult you, and I don't feel insulted by you. Maybe if you had solid facts to back up your statements, I would feel insulted, but it appears to me that you threw together some numbers and posted them.

I beg of you, please go do some research before you make yourself look silly.

As for the off-topic subject... this is page 33 of an internet forum. I don't know how long you have had experience with the internet, but you need to understand that not every syllable typed here will be dead on topic.

Welcome to the forums, by the way. It was a pleasure to meet you.

P.S.: Sorry for the double posting above. I originally replied to a poster, then felt the need to add a bit. If anyone is an idiot, it is me, and yes, I am not worthy of the British actor's persona, but hey, it looks cool, he is a great actor, and House is a great show.

I also thought I should add that I like having intercourse with women. It is fun. They have these awesome soft things on their chests that I find amazingly attractive and somehow awe inspiring... even when they are different sizes... on the same woman.


----------



## NCspecV81

tl;dr. Really man? You can edit, you know!?


----------



## gtarmanrob

i've already had a go at him before about it...drives me nuts. sends the thread way out of control.

Noah, i hate to imagine talking to you in real life dude. if you're one of those people that post like they speak....****


----------



## Ulver

Quote:



Originally Posted by *gtarmanrob*


i've already had a go at him before about it...drives me nuts. Sends the thread way out of control.

Noah, i hate to imagine talking to you in real life dude. If you're one of those people that post like they speak....****


indeed


----------



## 2Luke2

I will say that I enjoy his posts much more than all the complainers', and he seems to know his stuff, or at least has done a bit of research to back up his posts. He hasn't, from what I have read, offended anyone intentionally, and he has sparked discussions from others' posts. It may be a bit too much for most people to read, but this is a forum, and this would be the place to type up your opinions. I will admit that he does seem a bit biased towards AMD/ATI, but maybe it's just him reflecting the current state of things, which is one of the reasons I went back to ATI from Nvidia.


----------



## NoahDiamond

I apologize. I have Asperger's, bipolar disorder, and a few other mental quirks. I do have a problem of not knowing when to stop, and I am working on that.

I love nVidia cards, and I have said this countless times. It pains me in a strange way to praise the 5xxx series designs, as I wish nVidia were the crown holder and I wish they did more engineering and less marketing. If I could get a nVidia card that has the efficiency and power of ATI's present technology, I would do it in a moment's notice.

As it stands, the truth is the truth. Honestly, I want a 3dfx Voodoo Rampage with modern technology. They were my true love.

I do also love boobs though, so yeah.

ATI is the present leader, and from the coding we have received, Fermi won't be spectacular, but it will be necessary. If nVidia released a product that had multiple operating points and better efficiency, I would be praising them.

I may be praising ATI, but I also praise Intel. There is no denying they have the superior central processor design. If AMD made a superior CPU, I would use an AMD CPU. If Sun released a SPARC that was fully compatible and superior, I would praise and use their CPU.

When computers go to 128-bit computing, I am sure the world will change for the better.

I still like boobs more than graphics cards. I don't think this will ever change.


----------



## NoahDiamond

Quote:


Originally Posted by *NCspecV81* 
tl:dr. Really man? You can edit, you know!~?

You are the victim of a high level functioning Autistic person. I am sorry. I will try to keep my posts shorter. I know I type too much, but sometimes, I feel that a short post won't do justice when more information is needed.

Yes I do know how to edit, and no, I did not utilize that. That was a failure on my part.

If I could boil all my posts down to a simple slogan set...

ATI has the better graphics design even compared to Fermi.
Intel has the better CPU design compared to AMD.
Intel has a better chipset design compared to all the other manufacturers (this is due to them having access to huge resources of engineering and manufacturing).
Boobs rule.

There, is that short enough for you? It's not fully descriptive, and it leaves too much room for wonder, but it is simplified for your "ADHD" needs.


----------



## NoahDiamond

We have received updated code for the Fermi and are being asked to implement it ASAP. The code is a bit more refined, but not sensational. There are a few new features we are being asked to utilize. These graphical "shortcuts" won't dramatically improve performance, but they should help.

Expect a new driver any time now, with the fixed fan control issue.

We have a beta of the new drivers, 197. The new driver design is interesting to say the least. The profiles for SLI are improved, and they are experimenting with a "tiling" method to improve graphics scaling and reduce stutter.

Another new hope for nVidia is that the new code resolves a major performance issue that existed in DX11 for the Fermi simulations when Tessellation is used. Sadly, Tessellation still has the flaw of real time updating on the SPs since there is no dedicated tessellation, but nVidia tells us they are working on refining the software.

Lastly, by the looks of things, if all goes well and newer, more efficient software methods continue to evolve, the Fermi chips just may prove able to surpass the 5870 in performance. But it will require a very fast CPU, as nVidia are solving most of the performance issues by using the CPU to calculate and pre-cache instructions for the Fermi and previous nVidia boards.

I will give them 9 months to prove themselves. Right now, things don't look that great, but the future may hold improvements. One thing you will notice with the nVidia boards is that when you go to a faster CPU with more hardware cores, you will see a performance increase. Our i9 simulations show an improvement of almost 5%, but if the coding makes its way down the supply chain and nVidia updates their libraries, things could be sweet and I can go back to using an nVidia board.

On a side note, nVidia is still working on its new 128-bit CPU design. It is still not x86, but the x86 world is already headed towards its demise. When that happens, nVidia have the chance to try to create a new market, and their CPU design has specific optimizations for nVidia graphics cards, though it will improve ATI performance as well.

----------

I really hope all this works out in the end, because I DO want to go back to nVidia. I feel kinda weird running this ATI card, regardless of it presently being superior in almost every way. Right now, marketing is finally taking a back seat to software development. I wish nVidia luck so I can see green again.

Besides, I really am Red/Green color deficient... almost red/green color blind. Kinda ironic, eh?


----------



## NoahDiamond

Quote:



Originally Posted by *gtarmanrob*


i've already had a go at him before about it...drives me nuts. sends the thread way out of control.

Noah, i hate to imagine talking to you in real life dude. if you're one of those people that post like they speak....****


I am sure there are two things we can agree upon... Fear and Loathing in Las Vegas was a fantastic movie, and Johnny Depp rules.


----------



## xira

i have enjoyed these posts thoroughly ^__^;


----------



## gtarmanrob

Quote:


Originally Posted by *NoahDiamond* 
I am sure there are two things we can agree upon... Fear and Loathing in Las Vegas was a fantastic movie, and Johnny Depp rules.

QFT

and i dont disagree with anything you have said i might add, i disagree with the methods which you use when saying them


----------



## NoahDiamond

Quote:



Originally Posted by *gtarmanrob*


QFT

and i dont disagree with anything you have said i might add, i disagree with the methods which you use when saying them

Thank you. Sadly, while I am aware of my flaws and I do try to control them, I am not always able to properly avoid them.

When it comes to coding, hardware design and engineering, I can do some seriously amazing stuff. When interacting with people... I can quickly become very annoying.


----------



## NoahDiamond

I emailed Unigine in regards to the Fermi Heaven benchmarks being run on Heaven V1.1 vs the public v1.0, and asked for the source codes. I also requested information about the Heaven 2.0 benchmark.

http://images.roosterteeth.com/image...259271d974.jpg

Here is an image with the few frames where heaven is shown running V1.1.

The contact I am using is below.

Denis Shergin
CEO / Unigine Corp.
[email protected]
tel.: +73822553458 (office)
tel.: +79138250566 (mobile)

So, I emailed Unigine. I am waiting for a response.

Email is cheap. Phone is not.


----------



## NoahDiamond

This is sad. The benchmarks we have been seeing had v1.1 run on the nVidia card and v1.0 run on the ATI card.

If and when I get hold of the new benchmark tool, I will run it and let you know the results, if I am allowed. If Unigine tells me I cannot release them, then I cannot. If I find it on a torrent or some other site and get it by other happenstance, I can report back.

For now... http://nvidia.fullviewmedia.com/gdc2...aprjagaev.html


----------



## Ihatethedukes

Hope you get it. It'll be interesting.


----------



## dejanh

Looks to me a single HD5850 can beat the GTX480, at least at the settings they used...sure the GTX480 will OC too but I think this speaks volumes in terms of the anticipated performance gap - there won't be much if any.

I did this at 1920x1200 with 24/7 stable clocks and 4xAT, AA off, no Vsync

That's quite a bit higher than their settings of 1920x1080 and 1xAT.


----------



## Ihatethedukes

They used tessellation. You won't get half of that.


----------



## dejanh

Quote:



Originally Posted by *Ihatethedukes*


They used tesselation. You won't get half of that.


I am using tessellation...I'm not sure what made you think I am not...

The drop of FPS during tessellation heavy parts is ~15FPS, so to about 30FPS.

Edit: I just did a quick run and waited till the bit that you can see tessellation in action...


----------



## NoahDiamond

I pull over 80 average FPS with a score of over 2000. I just need to post a pic. That is running the exact same settings the nVidia video showed, using version 1.0. Version 1.1 is optimized, but I am having trouble getting hold of the full installer.

I am using the 10.3a pre-release. They are faster than the initial 10.3 pre release. I gained about 5% minimum in most games.

5970, 850/4800 110 PCI-E bus

I am looking forward to testing out Metro 2033... If only the torrent would hurry up.


----------



## linkin93

FYI: Increasing the PCI-E clock has minimal benefits and it can kill hard drives.

Nice results though!


----------



## NoahDiamond

Quote:


Originally Posted by *linkin93* 
FYI: Increasing the PCI-E clock has minimal benefits and it can kill hard drives.

Nice results though!

The potential for RAID corruption and drive damage is possible, but I keep things cool and backed up.

Getting back on topic, and before I go on a seriously pissed-off rant, let me say that the latest "PRODUCTION" version of the Unigine engine is much improved...

Below is a link to some of the screen shots using CUSTOM LIBRARIES and unique coding.
http://unigine.com/press-releases/080325-playsys/

You can see more of what happens when developers get their hands dirty with the engine and completely recompile it for their own needs.
http://unigine.com/clients/

Now, the royally pissed off rant:
I ran some tests and decided to break down the source code to the Heaven benchmark, and I have found some interesting things. It's about time we stopped using this beta graphics engine as a tool to measure the power of real world gaming. We need a proper test, and the best test I have found so far is the Dirt 2 Benchmark. It uses AI, memory swap, CPU, HD access, and all the features on current cards... But now that we have Metro 2033, perhaps we can get a benchmarking tool out of that to measure the performance of our rigs as a whole.

The problem I found in the Heaven code is that it is purely a graphical test, and since it puts full load on the card in a VERY inefficient manner, we need something better. Heaven is loaded with bugs, and the tessellation system is even worse, with breaks in the real-time coding causing corruption, improper reflection/refraction, and visible gaps between models and textures. I am very disappointed in the code.

Heaven has 4 major flaws (more flaws than that, but these are some of the most significant):

1: Its rendering engine does not occlude objects that are not visible from the view point, forcing the card to render objects even when they are behind something. Occlusion culling is a very old technique and could easily have been used by the engine, but it was not. The engine just renders everything within the camera's view, even if it's behind something else.

2: Its tessellation methods are very poor and take many cycles to complete operations that could easily be coded to generate instructions on the fly and optimize them into single instructions that can then be stored in memory, such as terrain motion or environmental damage.

3: Its level-of-detail engine has a major problem with scaling when tessellation is enabled, and the texture mapping is based on the very old, basic libraries released with the initial DirectX 11 SDK, which were intended to be (and can very easily be) refined into simpler, better-looking, faster subroutines. The engine also lacks proper texture-map-to-model scaling, causing performance issues, artifacts in the real-time render, and gaps in the model/texture output.

4: Worst of all, it lacks the ability to offload operations to different SIMD engines, since it uses a stock library for the in-order/out-of-order processing, which causes slowdowns on all cards, both nVidia and ATI. These flaws force both types of cards, in either method of rendering, to use the default single-pipeline method, which doesn't allow the card or CPU to use its branch prediction properly or flush the pipelines when errors occur, and errors do occur often with engine 1.0.
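
The first flaw is easy to illustrate. Here is a toy sketch (illustrative Python, not Unigine code) of the kind of occlusion test the engine reportedly skips: before submitting an object, check whether its screen-space bounding rectangle is entirely covered by a nearer, already-accepted occluder, and cull it if so.

```python
# Toy front-to-back occlusion cull: drop any object whose screen-space
# bounding rectangle is fully covered by a nearer object already kept.
def visible_objects(objects):
    """objects: iterable of (name, depth, (x0, y0, x1, y1)) rectangles."""
    occluders, drawn = [], []
    for name, depth, (x0, y0, x1, y1) in sorted(objects, key=lambda o: o[1]):
        hidden = any(ox0 <= x0 and oy0 <= y0 and x1 <= ox1 and y1 <= oy1
                     for ox0, oy0, ox1, oy1 in occluders)
        if not hidden:
            drawn.append(name)
            occluders.append((x0, y0, x1, y1))
    return drawn

# A wall at depth 1 fully covers a crate at depth 5; the crate is culled.
print(visible_objects([("crate", 5.0, (2, 2, 4, 4)),
                       ("wall", 1.0, (0, 0, 10, 10))]))
```

A real engine would do this hierarchically (Z-pyramids or hardware occlusion queries), but even this brute-force version avoids shading geometry the camera can never see.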

Perhaps Heaven 2.0 will address some of these issues, but the tessellation gaps will LIKELY, though not certainly, still exist, and the out-of-order processing issue is still not a focus. These NEED to be addressed, and hopefully at least the gap issue will be resolved.

When the MSI for Heaven 2.0 comes out, I will break it down as best I can to see what changes were made and what new additions have been made to the library. Right now, though, it looks like the core of the engine needs a complete redesign and needs to be better compiled.

When I find a way to fully reverse engineer the compiled files I have down to source level, which I am not certain I can do without the raw source code (but I will try), I will upload it. It is sad, to say the least, that we have been using an engine that appears to have been built with the initial intention of having game developers write their own libraries. Any of you who have worked in C and DirectX/OpenGL will be disgusted by the coding. To be fair, the OpenGL setup is nowhere near as bad as the DirectX 11 one. It honestly looks to me like they just took the basic DirectX 11 SDK samples when it was first introduced, threw them together, made some models and textures, programmed some camera motions and lighting paths, and packaged it.

I am aware that it was a "Tech Demo" that happened to include a benchmark, but they could have at least updated it through the course of development. Directx 11 and OpenGL have so much more potential. I will always be partial to OpenGL because it can be run on almost anything, including cell phones, and I always prefer open source to allow developers to write whatever they want without licensing.

This reminds me of when Crysis was first launched. It was rushed out, the engine was almost entirely based on SDK libraries, and it had serious problems. It was so badly written that even though they kept patching it up to 1.21, EA told Crytek to drop the initial engine re-build and start on the next game. In the end, the patches were canceled due to the amount of time it took to rebuild the engine.

I would rather a piece of software be thoroughly tested and refined before it is released, but then again, most of us live in the Microsoft world here, so we have become accustomed to incomplete software, major patches, service packs, "fixes" and even End Of Life timelines.

I will keep you informed on my progress. When I get sick of it, I will give up, but I am very stubborn. If I have to, I will just steal the source code if I can dig it up.

I am sorry if I upset anyone here with my rant, but I felt the need to inform you all of the world we have been stuck in.

Here is the ADHD version of the rant.

Unigine 1.0 is broken software held together by chewing gum and spit.


----------



## NoahDiamond

Update from Unigine. Heaven 1.1 is not publicly available because Unigine 1.1 was never released to the public. nVidia bought a license from Unigine and built their own 1.1 based on the original source code.

nVidia refuses to release it to me, but it may be available as a tech demo for the Fermi boards on the nVidia tech demo site, along with Medusa and such. nVidia did not build the engine for Medusa... they licensed it as well, from Crytek. It uses a simplified CryEngine 2.

Now I'm getting really annoyed.

Unigine were nice enough to give me their SDK demo, but it is so limited, and it does not contain any source code. They want $40,000 for a complete license to use Unigine with all its features, and it's not even a complete engine. It's just a base plate.

The authors have to create their own libraries and build their engine based on source models... almost from scratch.

You want to see what a good looking engine can do when coded properly... and with the older Directx 9 or OpenGL 2.0?

  
YouTube- Id Tech 5 Engine Demo HD
It's all in the software. Sure, we have extremely fast hardware with a tremendous amount of memory, coming from both manufacturers, but when a game engine can be coded properly, you can get a LOT more out of it.

You can also look at the "4A engine". It is a "NEW" engine with amazing potential; it has been in development for a lot longer and has true blood, sweat and tears put into it. This doesn't mean that the new game "Metro 2033" is the perfect game for it, as it was rushed out to meet market demand and the launch of the Fermi boards, but the engine itself is spectacular and has a wealth of custom libraries.


----------



## NoahDiamond

Someone, please... kill me now. I wish we could just go back to the days of Duke Nukem 3D, Doom and Wolfenstein, but we live in an age where game production takes longer than movie production due to interactivity. Please, for the love of humanity, GIVE US MORE TIME TO BUILD OUR GAMES... or just kill me. Otherwise, I will go back to 1985 game consoles and soft-core porn.


----------



## gtarmanrob

Quote:



Originally Posted by *NoahDiamond*


Update from Unigine. Heaven 1.1 is not publicly available because Unigine 1.1 does not exist to the public. nVidia bought a license from Unigine and built their own 1.1 based on the original source code.

nVidia refuses to release it to me, but it will possibly available as a tech demo for the Fermi boards on the nVidia tech demo site, along with Medusa and such. nVidia did not build the engine for Medusa... the licensed it as well... From Crytek. It uses a simplified Cryengine 2.

Now I'm getting really annoyed.

Unigine were nice enough to give me their SDK demo, but it is so limited, and does not contain any source code. They want $40,000 for a complete license to use the Unigine with all it's features, and it's not even a complete engine. It's just a base plate.

The authors have to create their own libraries and build their engine based on source models... almost from scratch.

You want to see what a good looking engine can do when coded properly... and with the older Directx 9 or OpenGL 2.0?

YouTube- Id Tech 5 Engine Demo HD

YouTube- Rage ID Tech 5

It's all in the software. Sure, we have extremely fast hardware with a tremendous amount of memory, coming from both manufacturers, but when a game engine can be coded properly, you can get a LOT more out of it.

You can also look at the "4A engine". It is a "NEW" engine that has amazing potential that has been in development for a lot longer, and has true blood, sweat and tears put into it. This doesn't mean that the new game "Metro 2033" is the perfect game for it, as it was rushed out to meet market demand and the launch of the Fermi boards, but the engine itself is spectacular and has a wealth of custom libraries.


wow...

so basically what you're getting at is, Nvidia are going to be benching their own version of Unigine, seemingly built just for them and their new cards, and then comparing it to a 1.0 build of Unigine for everyone else?

how will that ever make a fair comparison?

if Nvidia release their Unigine tech demo to Nvidia users only, i know you Nvidia guys will do the right thing







unless they code it so specifically that it looks for certain .dll runtimes that ATI does not have.


----------



## cs_maan

Quote:



Originally Posted by *Eduardv*


Wow seems the GTx will be close to the 5970,then if priced right,omg sorry for the 5970.


But not close enough







hehe. Red Tide FTW.


----------



## NoahDiamond

Quote:


Originally Posted by *gtarmanrob* 
wow...

so basically what you're getting at is, Nvidia are going to be benching their own version of Unigine, seemingly built just for them and their new cards, and then comparing it to a 1.0 build of Unigine for everyone else?

how will that ever make a fair comparison?

if Nvidia release their Unigine tech demo to Nvidia users only, i know you Nvidia guys will do the right thing







unless they code it that specifically, that it looks for certain .dll runtimes that ATI does not have.

I say, the heck with it. I'm going to play Tetris on my GameBoy, and download Asian porn.

Quote:


Originally Posted by *cs_maan* 
But not close enough







hehe. Red Tide FTW.

Nowhere near close enough.

Does anyone have a few bottles of scotch? I want to forget about all about this nonsense.

We game coders need time to develop a work of art, and when investors, publishers and hardware manufacturers pressure us on time constraints, swear words come to mind, and I feel like carpet bombing them.

If only I worked for id Software, I would have all the time I needed to write proper game code.

-------------------------------

I was just too lazy last night to edit. Looks like Grunion edited it for me.


----------



## NCspecV81

o.0 there's an edit button hombre.


----------



## NoahDiamond

Yes, there is an edit button. No, I did not use it last night. No, I was not drunk. Yes, I was a bit belligerent.

Darned annoying random driver tests. I get a call this morning to do a driver test for the new nVidia pre-release on the simulator... AT 3 FREAKING AM!!!

Now I have been informed that we will have a huge re-write tomorrow for the code. Guess who invested money for "specific optimizations"? I'll give you a hint... It wasn't ATI.

It's not a little bit of money either. I feel dirty, and even a shower didn't help.

--------------------------------------

One more thing, and yes, I used the Edit button this time. Our higher up development team is working to get the 1.1 version used for the Fermi test.

We have a policy of not placing any special logos on our software... but that might change starting next week.

I think I'm going to apply for a job as a fry cook. At least there I won't feel belittled. Perhaps it is just that last night I was already pissed off, I have had only 3 hours of sleep in the past 3 days, and I have to work... right now.

For life's sake... it's just a driving simulator. All we were originally supposed to do was make a realistic driving simulation, not only for play, but for real-world driving instruction and vehicle testing in a virtual environment, to significantly reduce manufacturing costs. It was INTENDED not to be a resource hog. Now we have to implement a host of new, complex code. The game originally ran fine on a Pentium III 1GHz with 512MB of memory at lower graphics settings, but NOOOOOO... they want dual versions now: one for entertainment and graphics, and one as a dedicated simulator. I'm sorry we were trying to produce a quality piece of efficient software.

Right now, our program installer is under 200MB, and installed with all the custom upgrades is still under 300MB for the full install, granted it is Alpha Z, but it looks like it's going to be re-built solely for eye candy, and will probably be over 6GB just for the graphics libraries.

If you are interested, check out Live For Speed. Not everything is publicly available, and the host site is in GB, but you can at least download the demo, and the full version for support is not expensive.

Look for our release title next year. We will be using a third party to do console porting. Don't know who yet.

Right now, Alpha Z is a prime example of simplified code and high performance with stellar graphics.

  
YouTube- Live for Speed movie "Just_racing"



 
This is an old video, but it shows what the game looked like 2 years ago.


----------



## NoahDiamond

Quote:



Originally Posted by *linkin93*


FYI: Increasing the PCI-E clock has minimal benefits and it can kill hard drives.

Nice results though!


Talk about irony... Only 6 hours after you mentioned this, one of my SATA drives suffered a mechanical failure. Luckily, I was able to record a Windows image and get it repaired promptly. I replaced the C: RAID drives with new drives to be careful. One of the drives had been intermittently dropping off almost from the day it was installed... and it was that drive that failed. I don't think the increased PCI-E bus clock caused the failure, though, as the drive was attached to the RAID controller. The other drives are fine.

It's always a good idea to have backup images of your computer for critical drive failures, and an external drive solely for data backup.

YAY for experience in the tech industry. I learned that lesson years ago when I had drives fail. Everything seems to be fine now, as I am up and running stable... Again.

It was one of those $54.99 WD blue 500GB drives from CompUsa. I just played it safe and replaced both drives.

To be extra safe, I replaced the SATA cables too. They were worn and kinked. Now I have neon green cables instead of those old fashioned red cables. It looks cool.

*The pure definition of ironic coincidence.*


----------



## NoahDiamond

I have been trying to find out why the benchmarks kept crashing, and everything else kept crashing for that matter... I thought maybe my RAID backup was corrupted, but that was not the case. It turns out my Corsair H50 liquid CPU cooler just decided to blow out today. I usually have the best of luck in catching problems before they become serious, but I am having bad luck this week.

In the end, all these problems are annoying me. Someone, please send me a ray of hope that things will stop breaking. I was thinking about putting a water block on the HD 5970, but it keeps itself cool enough. I thought it was overheating, but it turns out the CPU was overheating due to lack of coolant flow. Luckily, I had a spare copper-block heat-pipe cooler and I am using it right now. It's not as quiet, and I don't want to game with it because of the noise, but tomorrow I will get a new block.

I know this post is off topic, but have any of you had one of those weeks?


----------



## Acerimmer1

Taking this at face value, it seems to me that the 480 performs a lot more impressively on this test than many seem to think.

The 480 is the only card that does not drop below 30 fps at any point in the bench. Basically if this were a game I'd want to run it on the 480 rather than either of the ATI cards because that way I'd have a reasonable frame rate all the time. Plus it'd be cheaper than a 5970 by the sounds of it.

Yes, the 5970 has the higher average fps, but that's because when the DX11 features aren't really being pushed it runs at some sick frame rates I probably wouldn't even notice.

And can somebody remove the dark purple line from the graph, because it uses a CPU clock speed that is 800MHz faster.


----------



## Imglidinhere

Why is this thread still being posted in!? Good god... The legit benchmarks haven't even been released yet!


----------



## dejanh

Quote:


Originally Posted by *Imglidinhere* 
Why is this thread still being posted in!? Good god... The legit benchmarks haven't even been released yet!

Thank you for contributing


----------



## xira

Quote:



Originally Posted by *Acerimmer1*


Taking this at face value it seems to me that the 480 performs alot more impressively on this test than many seem to think.

The 480 is the only card that does not drop below 30 fps at any point in the bench. Basically if this were a game I'd want to run it on the 480 rather than either of the ATI cards because that way I'd have a reasonable frame rate all the time. Plus it'd be cheaper than a 5970 by the sounds of it.

Yes the 5970 has the higher average fps but thats because when the DX11 features aren't really being pushed it runs at some sick frame rates I probably wouldn't even notice.

And can somebody remove the dark purple line from the graph because it uses a cpu clock speed which is 800mhz faster.


nVidia is running a custom version of the benchmark on their card that THEY built by purchasing a license. At the very least this shows they're hiding something. I would go as far as to guess their tessellation is even worse than ATI's, as they're probably using CUDA cores to emulate it.


----------



## Nautilus

Quote:



Originally Posted by *xira*


nVidia is running a custom version of the benchmark on their card that THEY built by purchasing a license. At the very least this shows they're hiding something, I would go as far to guess their Tessellation is even worse than ATI's as they're probably using Cuda cores to emulate it.


It's not a bad thing to have CUDA cores do tessellation work. It's ingenious! ATI should have done it as well.


----------



## xira

Quote:



Originally Posted by *Nautilus*


It's not bad thing to have CUDA cores do tesellation work. It's ingenious! ATI should have done it as well.


There are no specialized parts on the nVidia chips. Even OpenCL is emulated. DirectX is emulated. PhysX is emulated. Tessellation is emulated, all by the CUDA cores.

ATI has a part for all functions, leaving their SPs free for other tasks.

nVidia needs a beefed-up system to emulate. This is an ongoing problem they've been having with their architecture, and it is why their power usage went up even though they shrank the process.

Not only that, but AMD and Intel are now working together as part of the Fusion project to wipe nVidia off the map. I am probably more concerned about this than you; no nVidia means no GPU competition, and that would be a horrible thing.

nVidia is in a hole and they need a miracle.


----------



## Nautilus

Quote:



Originally Posted by *xira*


There are no specialized parts of the nVidia chips, Even OpenCL is emulated. DirectX is Emulated. Physx is Emulated. Tessellation is emulated, all by the CUDA cores.

ATI has a part for all functions, leaving their SPs free for other tasks.

nVidia needs a beefed up system to emulate. This is an ongoing problem they've been having with their architecture, and this is why they need more power usage even though they shrunk their architecture.

Not only this but AMD and Intel are now working together as apart of the Fusion project to wipe nVidia off the map. I am probably more concerned about this than you, no nVidia means no GPU competition and that would be a horrible thing.

nVidia is in a hole and they need a miracle.


I generally think like you, but not on this. I don't understand why it is bad to have CUDA cores do the computation tasks. Either way, what matters is whether the job is done efficiently or not. Intel also emulated 64-bit for years, but you can't say that it had disadvantages, because it didn't. AMD fanboys have been saying for years that AMD has the "real" 64-bit, and what happened? Did anyone care about the way 64-bit works in a CPU?

ATI's tessellation engine has a constant clock speed which was set by ATI and can't be changed, but nVidia's CUDA cores can be. That way you can enhance it even more.

To sum up, if nvidia has such a great technology why shouldn't they use it in every possible way?


----------



## xira

These are direct quotes from Noah Diamond, but it looks like you didn't even read my post.


----------



## xira

[11:29] Bubba da Jew: Physics PPUs were designed originally to do PHYSICS.
[11:29] Bubba da Jew: Intel released it public, and ATI uses it in hardware.
[11:30] Bubba da Jew: You see, there are no specialized parts of the nVidia chips.
[11:30] Bubba da Jew: Even OpenCL is emulated.
[11:30] Bubba da Jew: DirectX is Emulated.
[11:30] Bubba da Jew: Physx is Emulated.
[11:30] Bubba da Jew: They Emulate using the CUDA cores.
[11:30] Bubba da Jew: Even Tessellation is handled by the CUDA cores.
[11:31] Bubba da Jew: ATI has a part for all functions, leaving their SPs free for other taska.
[11:31] Bubba da Jew: tasks.
[11:34] Bubba da Jew: Just watch the 5970 eat the 480
[11:34] Bubba da Jew: Just watch the 5970 eat the 480
[11:34] Bubba da Jew: This is using 1.0 on the HD 5970.
[11:31] nicholas: So this is why nVidia is struggling
[11:31] nicholas: with new archs
[11:31] nicholas: it all makes sense now
[11:31] Bubba da Jew: Yes.
[11:31] Bubba da Jew: They need a beefed up system to Emulate.
[11:31] Bubba da Jew: I have been saying this for months.
[11:31] Bubba da Jew: You seem to be the first to be grasping the idea.
[12:59] Bubba da Jew: Both Intel and AMD learned the hard way to not lock themselves into a single chip.
[12:59] Bubba da Jew: nVidia learned the hard way too, but did not recover.


----------



## Hy3RiD

So, GTX 480 ~= a 5870?


----------



## buster2010

Quote:


Originally Posted by *shredzy* 
SIG:

Graphics Card
XFX 5850 Black Edition

Sorry but I'd rather ignore your ignorant ATi fanboy posts.

Have a good day

What's wrong with the XFX BE? It's just a factory-overclocked edition of the card. But an Nvidia fanboy wouldn't know that, huh?


----------



## xira

I think the point is he saw me say that I, and many others think nVidia is in trouble, he looked at my sig and became emotional and defensive.

Both Noah and I have owned both ATi and nVidia cards; I love both companies. I WANT both companies to be successful, because competition is good for the consumer. This doesn't change the fact that I AM WORRIED, for the reasons I have posted. Take it as you will.


----------



## shredzy

Quote:



Originally Posted by *buster2010*


What's wrong with the XFX BE? It's just a factory overclocked edition of the card. But a Nvidia fanboy wouldn't know that huh?


Errr, it has nothing to do with the XFX BE. It's that he has an ATi card and is blatantly talking crap about Nvidia.

Also, I'm not a fanboy. I like both Nvidia and ATi and have previously owned ATi cards. I just hate it when people talk a load of crap.

Good try but


----------



## NoahDiamond

What the hell happened here? Did I become a prophet here or something?

The GTX 480 is more than capable of being a performer. Fanboys are a problem, for sure. Right now, nVidia is doing what they have always been doing... building a fast FPU processor to handle emulation tasks. There are some hardware implementations on their GPUs, but OpenCL, OpenGL, Directx 10 and Physx are all emulated on CUDA. The coding is efficient, but not perfect.

As for the Heaven benchmark... Yes, nVidia used their own optimized version 1.1, which they built by buying a license from Unigine and modifying the Heaven program to cull occluded and overlapping models/textures, as well as adding code that is better optimized for their hardware. The ATI tests were run using v1.0, which is coded with basic libraries straight from the Microsoft SDK. The opportunity to optimize the program is not present in the ATI tests, so the video shown is moot.

The downside to nVidia's "raw power" approach is that all the tasks (render, PhysX, tessellation, HDR, etc.) need to run on the CUDA cores at the same time, cutting into the processing power. Using a custom-built synthetic benchmark to prove tessellation power is ridiculous. In a real-world scenario, that much free power on the cores won't be there, and furthermore, the CUDA engine is highly CPU dependent, whereas the ATI 5XXX series has hardware units for each function.

There is an upside and a downside to having dedicated units. The upside is that you don't cut into the card's power by using a particular feature. The downside is that it does not scale as well with faster processors. Granted, testing a GTX 480 with beta drivers against the 5870 with Catalyst 10.2 pre-release drivers is unfair.
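
To put rough numbers on that trade-off, here is a toy throughput model (every number invented, purely illustrative): a shared pool splits its budget across every active feature, while dedicated units each keep their own fixed capacity.

```python
# Toy model of shared cores vs dedicated units (all numbers invented).
# Shared cores pool their budget across every feature in use;
# dedicated units each handle only their own job.

def shared_core_time(tasks, total_core_power):
    """All tasks compete for one pool: total work over total power."""
    return sum(tasks.values()) / total_core_power

def dedicated_unit_time(tasks, unit_power):
    """Each task has its own unit; the slowest unit sets the pace."""
    return max(work / unit_power[name] for name, work in tasks.items())

tasks = {"render": 100.0, "physics": 30.0, "tessellation": 50.0}

# A big shared pool wins when few features are active...
print(shared_core_time({"render": 100.0}, 60.0))      # ~1.67
# ...but enabling everything at once cuts into that same pool:
print(shared_core_time(tasks, 60.0))                  # 3.0
# Dedicated units don't slow each other down, but idle capacity
# in one unit can't help another:
print(dedicated_unit_time(tasks, {"render": 40.0, "physics": 10.0,
                                  "tessellation": 20.0}))  # 3.0
```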

nVidia is targeting a 6-month-old card while that card is still being tuned. The GTX 480 has proven to be very powerful, but it is not meant to rock the world or break any new barriers. The 480 is essentially the concept of taking two GTX 275 chips, revamping the SPs and memory, and slapping them together on a single die, which is basically what they did to make the Tesla as well. It's not a surprise, really.

It would not be fair to say that one is categorically better than the other. They are both very fast cards. It is unfair that nVidia bribes companies to optimize for their cards and cripple the competition, as they have done so many times. Eventually, patches come out, but by then it's too late.

A prime example is Metro 2033. The game was released with ATI cards crippled, but a patch has already been released. It's not like nVidia can take their money back. The problem with that scenario was that they released the game to launch a particular product, only to find that the product would not be available for some time and would be very scarce, which would limit its sales, so they patched it.

nVidia also did some crude work with their drivers. They overwrite the Physx files in Futuremark 3dMark Vantage, they wrote their drivers to lock out their Physx functions and CUDA operations if an ATI card is the primary display device, and they lied to investors about net losses and omitted recurring losses, and inflated their own stock by buying it back from their own employees and some stockholders for more than it was sold for.

I love nVidia personally, mostly because I feel 3dfx lives on in them, but now it's starting to show even more... in a bad way. nVidia are driving themselves into the ground trying to compete, when they could just release on normal product cycles. Right now, a dual-GPU card is very much out of the question, because the GTX 480 sits right at the 300-watt maximum of the ATX standard. If that standard is revised, which would take a lot, things might change, but their single-GPU card draws more power than the HD 5970, unless you overclock the HD 5970. This is a problem.

nVidia had the power problem with the dual GPU 200 series. Asus released the MARS 295 which was 2 GTX 285 boards with 2GB of memory per board in SLI on a single card, but they only made 1000 and it came with a big warning about being out of ATX spec. The real GTX 295 is just two GTX 275s, clocked down with reduced power, and it is still 2 watts over the max of the ATX spec.

When all these problems are resolved, and nVidia work with proper ethics and engineer a truly NEW product, then things will change. For now, it's just more of the same: shrink down, clock up, add more power.

There is one upshot to the new drivers being released, and that is the potential for all CUDA cards to support DirectX 11 and tessellation, but it will come with a big performance hit, just as it does on the GTX 480. The 480 counters it with raw clock speed, but game developers need to be very careful about how and where they use these features, because they can EASILY overload the card.
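
One "use it carefully" pattern looks like this (my own heuristic sketch, not from any driver or SDK): pick the tessellation factor from a triangle's projected screen size and clamp it, so distant geometry stays cheap and close-ups can't overload the card.

```python
# Adaptive tessellation heuristic (my own sketch, purely illustrative):
# roughly one subdivision per N pixels of projected edge length,
# clamped to a hardware-friendly maximum.

def tess_factor(screen_edge_px, px_per_subdivision=8.0, max_factor=16):
    """Map projected edge length in pixels to a clamped tessellation factor."""
    factor = screen_edge_px / px_per_subdivision
    return max(1, min(max_factor, round(factor)))

for edge_px in (4, 32, 200, 2000):
    print(edge_px, "->", tess_factor(edge_px))
# tiny far-away edges get factor 1; huge close-ups are clamped at 16
```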

In the auto industry, there used to be a saying: there is no replacement for displacement. It is an old saying that has died. nVidia are stuck in it.


----------



## NoahDiamond

Quote:



Originally Posted by *Imglidinhere*


Why is this thread still being posted in!? Good god... The legit benchmarks haven't even been released yet!


I agree with you.







I think I have been saying that for a while now.

Also, why is tessellation so important when the same tessellation processor can process voxels instead?

Maybe this will lighten the mood.








YouTube- Real Life Vs. Internet
http://www.albinoblacksheep.com/flash/posting

It looks like it's time for a FLAME WAR!!!


----------



## NoahDiamond

Quote:



Originally Posted by *xira*


I think the point is he saw me say that I, and many others think nVidia is in trouble, he looked at my sig and became emotional and defensive.

I and both Noah have owned both ATi and nVidia cards, I love both companies. I WANT both companies to be successful, because competition is good for the consumer. This doesn't change the fact that I AM WORRIED for the reasons I have posted. Take it as you will


I know there is an edit button, but I am not using it for a specific reason...

You are welcome to your own opinions, but don't follow me as a prophet of the future. I do perform research, but I would rather you quote the research sources than the person who compiled the finished statement. If I were to quote a song lyric, it would just be plagiarism, whereas if I used sources, it would be more like an essay. It's easy to get shot down when you have no sources to back up your statements. It's pretty hard to get shot down when you have a wall of armor from all the sources your information came from, especially when those sources are nVidia themselves, the stock market, and software developers.

http://www.marketwatch.com/story/nvi...k=MW_news_stmp

http://developer.nvidia.com/page/home.html

http://developer.nvidia.com/object/nvemulate.html

http://xtreview.com/addcomment-id-11...enchmarks.html

http://developer.nvidia.com/object/opencl.html

So you know, when you see a feature that says "RUNS ON THE CUDA ARCHITECTURE", it means CUDA cores are emulating it.


----------



## xira

Quote:



Originally Posted by *NoahDiamond*


I know there is an edit button, but I am not using it for a specific reason...

You are welcome to your own opinions, but don't follow me as a prophet on the future. I do perform research, but I would rather you quote the research sources than the person who compiled the finished statement. If I were to quote a song lyric, it would just be plagiarism, where if I used sources, it would be more like an essay. It's easy to get shot down when you have no sources to back up your statements. It's pretty hard to get shot down if you have a wall of armor from all the sources your information came from, especially when those sources are from nVidia themselves, the stock market, and software developers.

http://www.marketwatch.com/story/nvi...k=MW_news_stmp

http://developer.nvidia.com/page/home.html

http://developer.nvidia.com/object/nvemulate.html

http://xtreview.com/addcomment-id-11...enchmarks.html

http://developer.nvidia.com/object/opencl.html

So you know, when you see a feature that says "RUNS ON THE CUDA ARCHITECTURE", it means CUDA cores are emulating it.


I should've taken my time and done so. I just felt the need to defend myself quickly, because he obviously pulled the fanboy card when he saw me say something negative about Nvidia and saw my system spec sig, despite the fact that in the post I said I _love_ both companies.


----------



## NoahDiamond

Quote:



Originally Posted by *xira*


i should've taken my time and done so i just sort of felt quick to defend myself because he obvious pulled the fanboy card because he saw me say something negative about nvidia and saw my system spec sig despite the fact that in the post i said i _love_ both companies


Welcome to the internet.

When someone jumps in front of you and says...

"Na-Nany-BooBoo! Stick your head in DOO DOO! You have cooties and you're stupid!"

This is your chance to become a teacher to the first grader. When a person can't accept your subjective, albeit quoted, point of view, it's on you to realize that they are just expressing their opinion.

Everyone is entitled to their own opinion... even when they're wrong.









This reminds me of Season 7, episode 11 of Family Guy, the first line said by Brian Griffin, and I can say we could all use a break from the internet. I'm going to post the statement here, but modified to fit my own agenda.

"Ahh yes, real life... Where for one day the internet geeks take their lips off the barrel of a loaded gun and adjust their eyes to sunlight." - Makes me laugh every time I see it. Yes I like Star Trek.

This is getting boring real fast. I wish someone would just slander me so I had some reason to post more. Until then, enjoy, everyone, and remember: Noah's got his eye on you. He's gonna get you. He's gonna get you with the Kodak Disc.

Oh screw it, just watch the episode.


----------



## NoahDiamond

http://www.ngohq.com/home.php?page=F...eme&dwn_id=930

Version 2.0

Download it now.


----------



## Chucklez

Quote:



Originally Posted by *NoahDiamond*


http://www.ngohq.com/news/17525-unig...ark-2-0-a.html

Version 2.0

Download it now.


I tried it; it quit on me for the 4th time. For now I quit, until someone posts this on a different host site, preferably Megaupload.


----------



## NoahDiamond

Quote:



Originally Posted by *Chucklez*


I tried it; it quit on me for the 4th time. For now I quit until someone posts this on a different host site, preferably Megaupload.


I updated the link.

I'm downloading it at just under 50K.

http://www.ngohq.com/home.php?page=F...eme&dwn_id=930


----------



## NoahDiamond

http://unigine.com/download/files/Un...Heaven-2.0.msi

Direct link.

You're welcome.

I am getting close to 300K


----------



## BlakHart

2.8 mb/s thanks!


----------






## gtarmanrob

Quote:


Originally Posted by *NoahDiamond* 
http://unigine.com/download/files/Un...Heaven-2.0.msi

Direct link.

You're welcome.

I am getting close to 300K

lol that link is NOT aussie friendly. pulling a mean 4kb/s off there. spewing.


----------



## codejunki

Damn lol, 480 just got creamed, like total creamed in the faceinator!!!!


----------



## NoahDiamond

YouTube- I Love My Computer


----------



## NoahDiamond

Quote:



Originally Posted by *codejunki*


Damn lol, 480 just got creamed, like total creamed in the faceinator!!!!


Let's not jump to conclusions just yet. When I get my hands on the board, I will give it a fair bench against my card at stock clocks.


----------



## NoahDiamond

http://nexus404.com/Blog/2010/03/22/...-pushed-april/

If you look at the die design, you can clearly see how it is two GPUs mated together on one die. They took the GTX 275, similar to the GTX 295 concept, and made a single IC. This explains the power consumption. There is an increase in the transistor count, and CUDA performance has been ironed out, but it is apparent now that it is little more than 2 GPUs slapped together with a compatibility upgrade and cache added on die.

nVidia have been struggling for years to get on die cache improvements, and with the die shrink, they were able to fit it in, but it is little more than a retrofit of previous designs.

You can find more information here.

http://www.sddt.com/News/article.cfm...de=20100315tbi

The chip design is still very CPU dependent, and running two of these in SLI will have issues with heat and power consumption.

The Floating Point Units are not modified much, but they do have a larger pipeline between the units and the central core. If you look at the image of the die, you can see the SLI bridge in hardware, so output represents a single GPU. This is an improvement, but it has the same problems the CELL processor has... The Element Interconnect Bus design of the new Fermi chip relies on the cache to work properly, since the new "hardware patches" are not integrated into the Stream Processors, but rather into the central hub.

The new chip uses a lot of internal translation hardware to achieve its compatibility, and the clock speeds are boosted to compensate. The GPU runs VERY HOT and has had serious problems with manufacturing, as the design has little room for manufacturing error. TSMC, the manufacturer of the chips, is working hard to improve the yield, but you can count on a future re-design of the GPU that resolves the fabrication and yield problems.

Right now, the only way to keep multiple GPUs cool is to run with the case open and a lot of air flow.

There is also another problem. Each of the two integrated GPUs only has access to half of the PCI Express pipeline. The new design is working to correct this.

Don't expect much overclocking headroom, and be prepared for manufacturing shortcuts such as the lack of software-controlled voltage like the Volterra IC provides. Some companies may use them, but it is looking like the new boards will use the same resistor-array voltage control to lock the voltages.

Oh well. The card is not a total flop... but it is far from perfected. The first boards to be released will be a test of the market.

http://www.semiaccurate.com/2010/03/...tx480-benches/


----------



## NoahDiamond

Dead thread is dead.


----------



## amstech

The last several weeks have been rough enough for Nvidia fanboys... do we really need more charts showing how poorly it performs at $500?


----------



## NoahDiamond

We should send them all a cake. A red, cherry iced cake.


----------



## RODINE

Quote:



Originally Posted by *ljason8eg*


How is it flawed? You could compare a 5970 to a 8400GS if you wanted to. Sure we'd all know the outcome but I don't get what makes a test like this flawed or unfair.



Since your card is a dual GPU, the only fair comparison would be to test it against two GTX 480s. That would be fair... in fact, to make the comparison fair, hardware vs hardware, the highest-end ATI single-GPU card should be compared against NVIDIA's highest-end single-GPU card. If you want to compare dual-GPU cards, they should be compared side by side, two GPUs against two GPUs... not a dual to a single.

ATI fans are only excited because their cards are showing good numbers against this card... but in essence I'd be ticked off that I had to use a card with dual GPUs in it to even come close.

I play World of Warcraft. I have no use for a dual-GPU card; World of Warcraft does not support SLI or Crossfire. A card like the GTX 480 would be just what I am looking for... it can pump out the graphics and do it on a single GPU. Cards like the 5970 and the GTX 295 are a waste of money for World of Warcraft, as they are on-card SLI/Crossfire for my application.


----------



## jj775

Quote:



Originally Posted by *RODINE*


Since your card is a dual GPU, the only fair comparison would be to test it against two GTX 480s. That would be fair... in fact, to make the comparison fair, hardware vs hardware, the highest-end ATI single-GPU card should be compared against NVIDIA's highest-end single-GPU card. If you want to compare dual-GPU cards, they should be compared side by side, two GPUs against two GPUs... not a dual to a single.

ATI fans are only excited because their cards are showing good numbers against this card... but in essence I'd be ticked off that I had to use a card with dual GPUs in it to even come close.

I play World of Warcraft. I have no use for a dual-GPU card; World of Warcraft does not support SLI or Crossfire. A card like the GTX 480 would be just what I am looking for... it can pump out the graphics and do it on a single GPU. Cards like the 5970 and the GTX 295 are a waste of money for World of Warcraft, as they are on-card SLI/Crossfire for my application.



You need a GTX 480 to play WoW?









Anything a GTX 480 can do, the 5870 can do too.

A GTX 295 is a waste of money, not the 5970.

I play WoW; you do not need to change your GPU.

Stop finding excuses to support your fanboy talk.


----------



## ljason8eg

Quote:



Originally Posted by *RODINE*


Since your card is a dual GPU, the only fair comparison would be to test it against two GTX 480s. That would be fair... in fact, to make the comparison fair, hardware vs hardware, the highest-end ATI single-GPU card should be compared against NVIDIA's highest-end single-GPU card. If you want to compare dual-GPU cards, they should be compared side by side, two GPUs against two GPUs... not a dual to a single.

ATI fans are only excited because their cards are showing good numbers against this card... but in essence I'd be ticked off that I had to use a card with dual GPUs in it to even come close.

I play World of Warcraft. I have no use for a dual-GPU card; World of Warcraft does not support SLI or Crossfire. A card like the GTX 480 would be just what I am looking for... it can pump out the graphics and do it on a single GPU. Cards like the 5970 and the GTX 295 are a waste of money for World of Warcraft, as they are on-card SLI/Crossfire for my application.


A GTX 480 is a massive waste for WoW too.

Also, I seriously don't understand why cards have to be of near-equal performance to be compared. A graphics card is a graphics card. A 5870 trades blows with a 295, but since they use a different number of GPUs, it's not fair? Come on.


----------



## doc2142

Quote:



Originally Posted by *RODINE*


Since your card is a dual GPU, the only fair comparison would be to test it against two GTX 480s. That would be fair... in fact, to make the comparison fair, hardware vs hardware, the highest-end ATI single-GPU card should be compared against NVIDIA's highest-end single-GPU card. If you want to compare dual-GPU cards, they should be compared side by side, two GPUs against two GPUs... not a dual to a single.

ATI fans are only excited because their cards are showing good numbers against this card... but in essence I'd be ticked off that I had to use a card with dual GPUs in it to even come close.

I play World of Warcraft. I have no use for a dual-GPU card; World of Warcraft does not support SLI or Crossfire. A card like the GTX 480 would be just what I am looking for... it can pump out the graphics and do it on a single GPU. Cards like the 5970 and the GTX 295 are a waste of money for World of Warcraft, as they are on-card SLI/Crossfire for my application.


By your logic, if the highest-end Nvidia card cost $1000 and is a single GPU, and the highest-end ATI card is $500 and is also a single GPU, that makes it a fair comparison?


----------



## NoahDiamond

I think we should buy whatever we want to buy. Yeah, some of us had DirectX 11 hardware 9 months ago, and nVidia is just catching up, but if you have not gone the way of DirectX 11, it's not a bad idea to get a GTX 480. It couldn't hurt. It's not like it comes with herpes.

Quote:



Originally Posted by *RODINE*


Since your card is a dual GPU, the only fair comparison would be to test it against two GTX 480s. That would be fair... in fact, to make the comparison fair, hardware vs hardware, the highest-end ATI single-GPU card should be compared against NVIDIA's highest-end single-GPU card. If you want to compare dual-GPU cards, they should be compared side by side, two GPUs against two GPUs... not a dual to a single.

ATI fans are only excited because their cards are showing good numbers against this card... but in essence I'd be ticked off that I had to use a card with dual GPUs in it to even come close.

I play World of Warcraft. I have no use for a dual-GPU card; World of Warcraft does not support SLI or Crossfire. A card like the GTX 480 would be just what I am looking for... it can pump out the graphics and do it on a single GPU. Cards like the 5970 and the GTX 295 are a waste of money for World of Warcraft, as they are on-card SLI/Crossfire for my application.


I use my HD 5970 to play solitaire. It makes those cards fly! Mad power! Peggle also rocks, as does pac-man and chess.

You won't believe how good they look.


----------



## Celeras

Quote:


Originally Posted by *amstech* 
The last several weeks have been rough enough for Nvidia fanboys... do we really need more charts showing how poorly it performs at $500?









Charts? We do videos now ;p








YouTube- GTX 480 Vs HD 5870 EXTREME Tessillation Showdown


----------



## sLowEnd

Quote:


Originally Posted by *NoahDiamond* 
We should send them all a cake. A red, cherry iced cake.


----------



## 2Luke2

Quote:



Originally Posted by *RODINE*


Since your card is a dual GPU, the only fair comparison would be to test it against two GTX 480s. That would be fair... in fact, to make the comparison fair, hardware vs hardware, the highest-end ATI single-GPU card should be compared against NVIDIA's highest-end single-GPU card. If you want to compare dual-GPU cards, they should be compared side by side, two GPUs against two GPUs... not a dual to a single.

ATI fans are only excited because their cards are showing good numbers against this card... but in essence I'd be ticked off that I had to use a card with dual GPUs in it to even come close.

I play World of Warcraft. I have no use for a dual-GPU card; World of Warcraft does not support SLI or Crossfire. A card like the GTX 480 would be just what I am looking for... it can pump out the graphics and do it on a single GPU. Cards like the 5970 and the GTX 295 are a waste of money for World of Warcraft, as they are on-card SLI/Crossfire for my application.


No offense, friend, but they slapped two GPUs into one massive die... Would you still believe what you say if they had one block that housed four GPUs smashed together, as long as it's one physical chip? Doesn't make much sense to me... it consumes more power, produces more heat, and performs lower than the dual-GPU card you're referring to.

Now before you take a look at my system and call me a fan boy please do know that I have already purchased two 480s.

Shipment 1 of 1
Not yet shipped 
We'll notify you via e-mail when we have an estimated delivery date for this item. It will ship separately. You can cancel at any time. 
2x EVGA GeForce GTX480 1536 MB DDR5 PCI-Express 2.0 Graphics Card 015-P3-1480-AR 
Sold by: Amazon.com, LLC

I just think Nvidia did a piss-poor job on the engineering side of the house this time around. I think there might have been a reason they made such a huge die. I think they were looking for a way to still call themselves the king of something and to give people a reason to argue that we shouldn't compare single- and dual-GPU cards. I used to think the same way, but after the cards stopped having issues with micro stuttering and drivers, I started to think of them as one card or package. I don't care if the card has 10 GPUs on it. If it consumes a reasonable amount of power, produces a reasonable amount of heat, and the cost to own the card is equal to the performance it brings, then it's a good buy to me. If it fills the same two slots in my computer... then it's 1 card.


----------



## NoahDiamond

I liked the Companion Cube cake. I still say we send them a red Furby cake.

This thread is still alive?


----------



## th3c0ol

The 480 was priced in the middle between the 5870 and 5970... but I still reckon it's not worth the money.


----------



## saulin

Why do people talk about the die size of Fermi?

How about the actual size of a 5970 that doesn't fit a lot of cases?

Just curious


----------



## Suprcynic

Quote:


Originally Posted by *saulin* 
Why do people talk about the die size of Fermi?

How about the actual size of a 5970 that doesn't fit a lot of cases?

Just curious









At least you know that once you get it in there, via hacksaw or whatever, it's not going to shut your mobo down with the heat it creates, like I'm seeing reported around the web with the 480.

Also, I thought it was impressive that the 480 killed the 5970 in the Heaven benchmark. Maybe I'll SLI them and see how that goes, although I know I'll get killed on resale value once they're used if I don't like them.

I've got an Antec 1200 with a lot of fans, so I might be all right with 480 SLI. Maybe I'll get a fire extinguisher just in case.

On the other hand, I'd have to give up Eyefinity, which is the baddest thing around in my opinion. I don't know.


----------



## gtarmanrob

I would love to get a GTX 480 if they didn't cost as much as a 5970 down here.


----------



## NoahDiamond

Quote:


Originally Posted by *Ocnewb* 
Nah, if you look at the two pics that I linked below my benchie, you can see that they ran at 4x AA and 16x too. It's just that the GTX 470 is supposed to be in between the 5850 and 5870, but it's not really, if at all, faster than the 5850.

Dance that bird!


----------



## Dv_Destruction

Well, it was always the case that ATI brought out a magnificent card (the best on the face of the planet), and then nVidia lagged behind. And every nVidia fanboy can't stop talking about nVidia's comeback. When ATI launches its beasts, you'll see nVidia bringing out their pride about six months down the line. I mean SIX MONTHS. DUH! It's common sense that nVidia will bring out a better card after such a delay. Just compare the launch dates of the HD 58xx series and the GTX 470 & 480. So once again, ATI fanboys will have forgotten they even have an HD 5970 in their PC before nVidia brings out their dual-chip card. That is a fact!


----------

